July 2017, 37(7): 4035-4051. doi: 10.3934/dcds.2017171
A diffusion problem of Kirchhoff type involving the nonlocal fractional p-Laplacian
Patrizia Pucci 1, Mingqi Xiang 2 and Binlin Zhang 3
1. Dipartimento di Matematica e Informatica, Università degli Studi di Perugia, 06123 Perugia, Italy
2. College of Science, Civil Aviation University of China, Tianjin 300300, China
3. Department of Mathematics, Heilongjiang Institute of Technology, Harbin 150050, China
Received: March 2016. Revised: February 2017. Published: April 2017.
In this paper, we study an anomalous diffusion model of Kirchhoff type driven by a nonlocal integro-differential operator. As a particular case, we are concerned with the following initial-boundary value problem involving the fractional $p$-Laplacian
$$\left\{ \begin{array}{ll} \partial_t u + M([u]_{s,p}^{p})(-\Delta)_p^s u = f(x,t) & \text{in } \Omega\times\mathbb{R}^+, \quad \partial_t u = \partial u/\partial t,\\ u(x,0) = u_0(x) & \text{in } \Omega,\\ u = 0 & \text{in } \mathbb{R}^N\setminus\Omega, \end{array}\right.$$
where $[u]_{s,p}$ is the Gagliardo $p$-seminorm of $u$, $\Omega\subset\mathbb{R}^N$ is a bounded domain with Lipschitz boundary $\partial\Omega$, $1<p<N/s$ with $0<s<1$, the main Kirchhoff function $M:\mathbb{R}^+_0\to\mathbb{R}^+$ is continuous and nondecreasing, $(-\Delta)_p^s$ is the fractional $p$-Laplacian, $u_0\in L^2(\Omega)$ and $f\in L^2_{\rm loc}(\mathbb{R}^+_0;L^2(\Omega))$. Under appropriate conditions, the well-posedness of solutions of the problem above is established by employing the sub-differential approach. Finally, the large-time behavior and extinction of solutions are also investigated.
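For the reader's convenience, we recall the standard definitions of the Gagliardo seminorm and of the fractional $p$-Laplacian appearing above; these follow the usual conventions of the fractional Sobolev space literature, up to a normalizing constant, and are not restated in the abstract itself:
$$[u]_{s,p}^p = \iint_{\mathbb{R}^{2N}} \frac{|u(x)-u(y)|^p}{|x-y|^{N+sp}}\,dx\,dy, \qquad (-\Delta)_p^s u(x) = 2\lim_{\varepsilon\to 0^+} \int_{\mathbb{R}^N\setminus B_\varepsilon(x)} \frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))}{|x-y|^{N+sp}}\,dy.$$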
Keywords: Integro-differential operators, anomalous diffusion models, sub-differential approach, large-time behavior.
Mathematics Subject Classification: 35R11, 35B40, 35K55, 47G20.
Citation: Patrizia Pucci, Mingqi Xiang, Binlin Zhang. A diffusion problem of Kirchhoff type involving the nonlocal fractional p-Laplacian. Discrete & Continuous Dynamical Systems - A, 2017, 37 (7) : 4035-4051. doi: 10.3934/dcds.2017171
Changes in soil hydraulic properties due to organic amendment
Haimanote K. Bayabil, Fitsum T. Teshome, Niguss Solomon Hailegnaw, Jian Zhang, Yuncong C. Li
Journal: Experimental Results / Volume 3 / 2022
Published online by Cambridge University Press: 28 November 2022, e27
The effect of milorganite, a commercially available organic soil amendment, on soil nutrients, plant growth, and yield has been investigated. However, its effect on soil hydraulic properties remains less understood. Therefore, this study aimed to investigate the effect of milorganite amendment on soil evaporation, moisture retention, hydraulic conductivity, and electrical conductivity of a Krome soil. A column experiment was conducted with two milorganite application rates (15 and 30% v/v) and a non-amended control soil. The results revealed that milorganite reduced evaporation rates and the length of Stage I of the evaporation process compared with the control. Moreover, milorganite increased moisture retention at saturation and permanent wilting point while decreasing soil hydraulic conductivity. In addition, milorganite increased soil electrical conductivity. Overall, milorganite resulted in increased soil moisture retention; however, moisture in the soil may not be readily available for plants due to increased soil salinity.
GaLactic and Extragalactic All-sky Murchison Widefield Array survey eXtended (GLEAM-X) I: Survey description and initial data release
Murchison Widefield Array
N. Hurley-Walker, T. J. Galvin, S. W. Duchesne, X. Zhang, J. Morgan, P. J. Hancock, T. An, T. M. O. Franzen, G. Heald, K. Ross, T. Vernstrom, G. E. Anderson, B. M. Gaensler, M. Johnston-Hollitt, D. L. Kaplan, C. J. Riseley, S. J. Tingay, M. Walker
Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022
Published online by Cambridge University Press: 23 August 2022, e035
We describe a new low-frequency wideband radio survey of the southern sky. Observations covering 72–231 MHz and declinations south of $+30^\circ$ have been performed with the Murchison Widefield Array "extended" Phase II configuration over 2018–2020 and will be processed to form data products including continuum and polarisation images and mosaics, multi-frequency catalogues, transient search data, and ionospheric measurements. From a pilot field described in this work, we publish an initial data release covering 1,447 $\mathrm{deg}^2$ over $4\,\mathrm{h}\leq \mathrm{RA}\leq 13\,\mathrm{h}$, $-32.7^\circ \leq \mathrm{Dec} \leq -20.7^\circ$. We process twenty frequency bands sampling 72–231 MHz, with a resolution of 2′–45′′, and produce a wideband source-finding image across 170–231 MHz with a root mean square noise of $1.27\pm0.15\,\mathrm{mJy\,beam}^{-1}$. Source-finding yields 78,967 components, of which 71,320 are fitted spectrally. The catalogue has a completeness of 98% at $\sim 50\,\mathrm{mJy}$, and a reliability of 98.2% at $5\sigma$ rising to 99.7% at $7\sigma$. A catalogue is available from Vizier; images are made available via the PASA datastore, AAO Data Central, and SkyView. This is the first in a series of data releases from the GLEAM-X survey.
The Changing Profile of Tenure-Track Faculty in Archaeology
Justin Cramb, Brandon T. Ritchison, Carla S. Hadden, Qian Zhang, Edgar Alarcón-Tinajero, Xianyan Chen, K. C. Jones, Travis Jones, Katharine Napora, Matthew Veres, Victor D. Thompson
Journal: Advances in Archaeological Practice / Volume 10 / Issue 4 / November 2022
Published online by Cambridge University Press: 05 May 2022, pp. 371-381
The goal for many PhD students in archaeology is tenure-track employment. Students primarily receive their training by tenure-track or tenured professors, and they are often tacitly expected—or explicitly encouraged—to follow in the footsteps of their advisor. However, the career trajectories that current and recent PhD students follow may hold little resemblance to the ones experienced by their advisors. To understand these different paths and to provide information for current PhD students considering pursuing a career in academia, we surveyed 438 archaeologists holding tenured or tenure-track positions in the United States. The survey, recorded in 2019, posed a variety of questions regarding the personal experiences of individual professors. The results are binned by the decade in which the respondent graduated. Evident patterns are discussed in terms of change over time. The resulting portraits of academic pathways through the past five decades indicate that although broad commonalities exist in the qualifications of early career academics, there is no singular pathway to obtaining tenure-track employment. We highlight the commonalities revealed in our survey to provide a set of general qualifications that might provide a baseline set of skills and experiences for an archaeologist seeking a tenure-track job in the United States.
Spatial epidemiological characteristics and exponential smoothing model application of tuberculosis in Qinghai Plateau, China
Y. Shang, T. T. Zhang, Z. F. Wang, B. Z. Ma, N. Yang, Y. T. Qiu, B. Li, Q. Zhang, Q. L. Huang, K. Y. Liu
Journal: Epidemiology & Infection / Volume 150 / 2022
Published online by Cambridge University Press: 12 January 2022, e37
The epidemic of tuberculosis has posed a serious burden in Qinghai province, so it is necessary to clarify the epidemiological characteristics and spatial-temporal distribution of TB for future prevention and control measures. We used descriptive epidemiological methods and spatial statistical analysis, including spatial correlation and spatial-temporal analysis, in this study. Furthermore, we applied an exponential smoothing model for forecasting the TB epidemiological trend. Of 43 859 TB cases, the sex ratio was 1.27:1 (M:F), and the average annual registered TB incidence was 70.00/100 000 over 2009–2019. More cases were reported in March and April, and the worst TB-stricken regions were the prefectures of Golog and Yushu. High registered TB incidences were seen in males, farmers and herdsmen, Tibetans, and elderly people. 7132 cases were intractable, being recurrent, drug resistant, or co-infected with other infections. Three likely case clusters with significantly high risk were found by spatial-temporal scan of the 2009–2019 data. The Winters' additive exponential smoothing model was selected as the best-fitting model to forecast monthly TB cases in the future. This research indicated that TB in Qinghai is still a serious threat to local residents' health. Multi-departmental collaboration and dedicated funding for TB treatment and control are still needed, and the exponential smoothing model is promising and could be applied to forecast the TB epidemic trend in this high-altitude province.
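To illustrate the type of model named in this abstract, here is a minimal sketch of fitting a Holt-Winters (Winters' additive) exponential smoothing model to a monthly case series in Python. This is not the authors' code; the series `monthly_cases`, its date range, and the 12-month seasonality are illustrative assumptions.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly TB case counts (illustrative values only).
monthly_cases = pd.Series(
    [310, 295, 410, 392, 350, 330, 315, 300, 320, 340, 360, 355] * 3,
    index=pd.date_range("2017-01-01", periods=36, freq="MS"),
)

# Winters' additive model: additive trend plus additive 12-month seasonality.
fit = ExponentialSmoothing(
    monthly_cases, trend="add", seasonal="add", seasonal_periods=12
).fit()

# Forecast the next 12 months of cases.
print(fit.forecast(12).round(1))
```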
Role of saliva use during masturbation in the transmission of Chlamydia trachomatis in men who have sex with men
Xianglong Xu, Eric P.F. Chow, David Regan, Jason J. Ong, Richard T. Gray, Pingyu Zhou, Christopher K. Fairley, Lei Zhang
Published online by Cambridge University Press: 09 September 2021, e216
Masturbation is a common sexual practice in men, and saliva is often used as a lubricant during masturbation by men who have sex with men. However, the role of saliva use during masturbation in the transmission of chlamydia is still unclear. We developed population-level, susceptible-infected-susceptible compartmental models to explore the role of saliva use during masturbation on the transmission of chlamydia at multiple anatomical sites. In this study, we simulated both solo masturbation and mutual masturbation. Our baseline model did not include masturbation but included transmission routes (anal sex, oral-penile sex, rimming, kissing and sequential sexual practices) we have previously validated (model 1). We added masturbation to model 1 to develop the second model (model 2). We calibrated the model to five clinical datasets separately to assess the effects of masturbation on the prevalence of site-specific infection. The inclusion of masturbation (model 2) significantly worsened the ability of the models to replicate the prevalence of C. trachomatis. Using model 2 and the five data sets, we estimated that saliva use during masturbation was responsible for between 3.9% [95% confidence interval (CI) 2.0–6.8] and 6.2% (95% CI 3.8–10.5) of incident chlamydia cases at all sites. Our models suggest that saliva use during masturbation is unlikely to play a major role in chlamydia transmission between men, and even if it does have a role, about one in seven cases of urethral chlamydia might arise from masturbation.
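For orientation, below is a minimal single-population SIS (susceptible-infected-susceptible) model in Python. This is a generic sketch rather than the authors' multi-site, multi-route model, and the population size and rate parameters are illustrative assumptions, not estimates from the study.

```python
import numpy as np
from scipy.integrate import odeint

def sis(y, t, beta, gamma, N):
    """Single-site SIS dynamics: new infections minus recoveries."""
    I = y[0]
    S = N - I
    return [beta * S * I / N - gamma * I]

N = 10_000                 # population size (illustrative)
beta, gamma = 0.3, 0.1     # transmission and recovery rates (illustrative)
t = np.linspace(0, 365, 366)
I = odeint(sis, [10.0], t, args=(beta, gamma, N))[:, 0]
print(f"Prevalence after one year: {I[-1] / N:.1%}")
```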
Mood and suicidality amongst cyberbullied adolescents- a cross-sectional study from youth risk behavior survey
Y.C. Hsieh, P. Jain, N. Veluri, J. Bhela, B. Sheikh, F. Bangash, J. Gude, R. Subhedar, M. Zhang, M. Shah, Z. Mansuri, K. Aedma, T. Parikh
Journal: European Psychiatry / Volume 64 / Issue S1 / April 2021
Published online by Cambridge University Press: 13 August 2021, pp. S85-S86
There is limited literature on the mental health burden among adolescents following cyberbullying.
The aim is to evaluate the association of cyberbullying with low mood and suicidality amongst adolescents.
This study used the CDC National Youth Risk Behavior Surveillance System (YRBS) (1991-2017). Responses from adolescents related to cyberbullying and suicidality were evaluated. Chi-square tests and mixed-effects multivariable logistic regression were performed to assess the association of cyberbullying with sadness/hopelessness, suicide consideration, planning, and attempts.
Of a total of 10,463 adolescents, 14.8% had faced cyberbullying in the past year. The prevalence of cyberbullying was higher in youths aged 15-17 years (25 vs. 26 vs. 23%), with more females than males affected (68 vs. 32%) (p<0.0001). Caucasians (53%) reported being cyberbullied most often, compared with Hispanics (24%) and African Americans (11%) (p<0.0001). Compared with non-cyberbullied youths, cyberbullied youths had a higher prevalence of sadness/hopelessness (59.6 vs. 25.8%), suicide consideration (40.4 vs. 13.2%), suicide planning (33.2 vs. 10.8%), and multiple suicide attempts (p<0.0001). On regression analysis, cyberbullied adolescents had 155% higher odds of feeling sad and hopeless [aOR=2.55; 95% CI=2.39-2.72], and higher odds of considering suicide [1.52 (1.39-1.66)] and of having a suicide plan [1.24 (1.13-1.36)].
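As a point of interpretation (not an additional result from the study), the "155% higher odds" figure follows directly from the adjusted odds ratio: $(\mathrm{aOR}-1)\times 100\% = (2.55-1)\times 100\% = 155\%$.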
In our study, cyberbullying was associated with negative mental health outcomes. Further research is warranted to examine the impact and outcomes of cyberbullying amongst adolescents and to guide policies that mitigate its consequences.
No significant relationships.
Methane emissions from cattle manure during short-term storage with and without a plastic cover in different seasons
H. R. Zhang, K. J. Sun, L. F. Wang, Z. W. Teng, L. Y. Zhang, T. Fu, T. Y. Gao
Journal: The Journal of Agricultural Science / Volume 159 / Issue 3-4 / April 2021
Published online by Cambridge University Press: 10 June 2021, pp. 159-166
Manure is a primary source of methane (CH4) emissions into the atmosphere. A large proportion of CH4 from manure is emitted during storage, but this varies with the storage method. In this research, we tested whether covering a manure heap with plastic reduces CH4 emission during a short-term composting process. A static chamber method was used to measure the CH4 emission rate and the change in the physicochemical properties of cattle manure stored either uncovered (treatment UNCOVERED) or covered with plastic (treatment COVERED) for 30-day periods during the four seasons. The dry matter content of the COVERED treatment was significantly less than that of the UNCOVERED treatment (P < 0.01), and the C/N ratio of the COVERED treatment was significantly greater than that of the UNCOVERED treatment (P > 0.05) under high temperature. In the UNCOVERED treatment, average daily CH4 emissions were in the order summer > spring > autumn > winter. CH4 emissions were positively correlated with temperature (R2 = 0.52, P < 0.01). Compared with the UNCOVERED treatment, the daily average CH4 emission rates from the COVERED treatment were lower in the first 19 days of spring, 13 days of summer, 10 days of autumn and 30 days of winter. In summary, covering the manure pile with plastic reduces the evaporation of water during storage, and in winter, long-term covering with plastic film reduces CH4 emissions during the storage of manure.
Impact of in vitro embryo culture and transfer on blood pressure regulation in the adolescent lamb
Monalisa Padhee, I. Caroline McMillen, Song Zhang, Severence M. MacLaughlin, James A. Armitage, Geoffrey A. Head, Jack R. T. Darby, Jennifer M. Kelly, Skye R. Rudiger, David O. Kleemann, Simon K. Walker, Janna L. Morrison
Journal: Journal of Developmental Origins of Health and Disease / Volume 12 / Issue 5 / October 2021
Published online by Cambridge University Press: 13 November 2020, pp. 731-737
Nutrition during the periconceptional period influences postnatal cardiovascular health. We determined whether in vitro embryo culture and transfer, which are manipulations of the nutritional environment during the periconceptional period, dysregulate postnatal blood pressure and blood pressure regulatory mechanisms. Embryos were either transferred to an intermediate recipient ewe (ET) or cultured in vitro in the absence (IVC) or presence of human serum (IVCHS) and a methyl donor (IVCHS+M) for 6 days. Basal blood pressure was recorded at 19–20 weeks after birth. Mean arterial pressure (MAP) and heart rate (HR) were measured before and after varying doses of phenylephrine (PE). mRNA expression of signaling molecules involved in blood pressure regulation was measured in the renal artery. Basal MAP did not differ between groups. Baroreflex sensitivity, set point, and upper plateau were also maintained in all groups after PE stimulation. Adrenergic receptors alpha-1A (αAR1A), alpha-1B (αAR1B), and angiotensin II receptor type 1 (AT1R) mRNA expression were not different from controls in the renal artery. These results suggest there is no programmed effect of ET or IVC on basal blood pressure or the baroreflex control mechanisms in adolescence, but future studies are required to determine the impact of ET and IVC on these mechanisms later in the life course when developmental programming effects may be unmasked by age.
Neutron Star Extreme Matter Observatory: A kilohertz-band gravitational-wave detector in the global network
K. Ackley, V. B. Adya, P. Agrawal, P. Altin, G. Ashton, M. Bailes, E. Baltinas, A. Barbuio, D. Beniwal, C. Blair, D. Blair, G. N. Bolingbroke, V. Bossilkov, S. Shachar Boublil, D. D. Brown, B. J. Burridge, J. Calderon Bustillo, J. Cameron, H. Tuong Cao, J. B. Carlin, S. Chang, P. Charlton, C. Chatterjee, D. Chattopadhyay, X. Chen, J. Chi, J. Chow, Q. Chu, A. Ciobanu, T. Clarke, P. Clearwater, J. Cooke, D. Coward, H. Crisp, R. J. Dattatri, A. T. Deller, D. A. Dobie, L. Dunn, P. J. Easter, J. Eichholz, R. Evans, C. Flynn, G. Foran, P. Forsyth, Y. Gai, S. Galaudage, D. K. Galloway, B. Gendre, B. Goncharov, S. Goode, D. Gozzard, B. Grace, A. W. Graham, A. Heger, F. Hernandez Vivanco, R. Hirai, N. A. Holland, Z. J. Holmes, E. Howard, E. Howell, G. Howitt, M. T. Hübner, J. Hurley, C. Ingram, V. Jaberian Hamedan, K. Jenner, L. Ju, D. P. Kapasi, T. Kaur, N. Kijbunchoo, M. Kovalam, R. Kumar Choudhary, P. D. Lasky, M. Y. M. Lau, J. Leung, J. Liu, K. Loh, A. Mailvagan, I. Mandel, J. J. McCann, D. E. McClelland, K. McKenzie, D. McManus, T. McRae, A. Melatos, P. Meyers, H. Middleton, M. T. Miles, M. Millhouse, Y. Lun Mong, B. Mueller, J. Munch, J. Musiov, S. Muusse, R. S. Nathan, Y. Naveh, C. Neijssel, B. Neil, S. W. S. Ng, V. Oloworaran, D. J. Ottaway, M. Page, J. Pan, M. Pathak, E. Payne, J. Powell, J. Pritchard, E. Puckridge, A. Raidani, V. Rallabhandi, D. Reardon, J. A. Riley, L. Roberts, I. M. Romero-Shaw, T. J. Roocke, G. Rowell, N. Sahu, N. Sarin, L. Sarre, H. Sattari, M. Schiworski, S. M. Scott, R. Sengar, D. Shaddock, R. Shannon, J. SHI, P. Sibley, B. J. J. Slagmolen, T. Slaven-Blair, R. J. E. Smith, J. Spollard, L. Steed, L. Strang, H. Sun, A. Sunderland, S. Suvorova, C. Talbot, E. Thrane, D. Töyrä, P. Trahanas, A. Vajpeyi, J. V. van Heijningen, A. F. Vargas, P. J. Veitch, A. Vigna-Gomez, A. Wade, K. Walker, Z. Wang, R. L. Ward, K. Ward, S. Webb, L. Wen, K. Wette, R. Wilcox, J. Winterflood, C. Wolf, B. Wu, M. Jet Yap, Z. You, H. Yu, J. Zhang, J. Zhang, C. Zhao, X. Zhu
Published online by Cambridge University Press: 05 November 2020, e047
Gravitational waves from coalescing neutron stars encode information about nuclear matter at extreme densities, inaccessible by laboratory experiments. The late inspiral is influenced by the presence of tides, which depend on the neutron star equation of state. Neutron star mergers are expected to often produce rapidly rotating remnant neutron stars that emit gravitational waves. These will provide clues to the extremely hot post-merger environment. This signature of nuclear matter in gravitational waves contains most information in the 2–4 kHz frequency band, which is outside of the most sensitive band of current detectors. We present the design concept and science case for a Neutron Star Extreme Matter Observatory (NEMO): a gravitational-wave interferometer optimised to study nuclear physics with merging neutron stars. The concept uses high-circulating laser power, quantum squeezing, and a detector topology specifically designed to achieve the high-frequency sensitivity necessary to probe nuclear matter using gravitational waves. Above 1 kHz, the proposed strain sensitivity is comparable to full third-generation detectors at a fraction of the cost. Such sensitivity changes expected event rates for detection of post-merger remnants from approximately one per few decades with two A+ detectors to a few per year and potentially allow for the first gravitational-wave observations of supernovae, isolated neutron stars, and other exotica.
Epidemiological characteristics and spatial−temporal analysis of COVID-19 in Shandong Province, China
C. Qi, Y. C. Zhu, C. Y. Li, Y. C. Hu, L. L. Liu, D. D. Zhang, X. Wang, K. L. She, Y. Jia, T. X. Liu, X. J. Li
Published online by Cambridge University Press: 06 July 2020, e141
The pandemic of coronavirus disease 2019 (COVID-19) has posed serious challenges. It is vitally important to further clarify the epidemiological characteristics of the COVID-19 outbreak for future study and for prevention and control measures. Descriptive epidemiological and spatial−temporal analyses were performed on COVID-19 cases reported in Shandong Province from 21 January 2020 to 1 March 2020, and close contacts were traced to construct transmission chains. A total of 758 laboratory-confirmed cases were reported in Shandong. The sex ratio was 1.27:1 (M:F) and the median age was 42 (interquartile range: 32–55). High-risk clusters were identified in the central, eastern and southern regions of Shandong from 25 January 2020 to 10 February 2020. We reconstructed 54 transmission chains involving 209 cases, of which 52.2% were family clusters, and three widespread infection chains, occurring in Jining, Zaozhuang and Liaocheng, respectively, are described in detail. The geographical and temporal disparities may prompt public health agencies to implement risk-specific measures in different regions, with particular attention to avoiding household and community transmission.
Effects of the acid–base treatment of corn on rumen fermentation and microbiota, inflammatory response and growth performance in beef cattle fed high-concentrate diet – CORRIGENDUM
J. Liu, K. Tian, Y. Sun, Y. Wu, J. Chen, R. Zhang, T. He, G. Dong
Journal: animal / Volume 14 / Issue 11 / November 2020
Published online by Cambridge University Press: 01 July 2020, p. 2442
Print publication: November 2020
Selective amplification of the chirped attosecond pulses produced from relativistic electron mirrors
F. Tan, S. Y. Wang, B. Zhang, Z. M. Zhang, B. Zhu, Y. C. Wu, M. H. Yu, Y. Yang, G. Li, T. K. Zhang, Y. H. Yan, F. Lu, W. Fan, W. M. Zhou, Y. Q. Gu
Journal: Laser and Particle Beams / Volume 38 / Issue 2 / June 2020
Published online by Cambridge University Press: 03 July 2020, pp. 165-168
Print publication: June 2020
In this paper, the generation of relativistic electron mirrors (REMs) and the reflection of an ultra-short laser off the mirrors are discussed using two-dimensional particle-in-cell simulations. REMs with ultra-high acceleration and expansion velocity can be produced from a solid nanofoil illuminated normally by an ultra-intense femtosecond laser pulse with a sharp rising edge. A chirped attosecond pulse can be produced through the reflection of a counter-propagating probe laser off the accelerating REM. In the electron moving frame, the plasma frequency of the REM keeps decreasing due to its rapid expansion. The laser frequency, on the contrary, keeps increasing due to the acceleration of the REM and the relativistic Doppler shift from the lab frame to the electron moving frame. Within an ultra-short time interval, the two frequencies become equal in the electron moving frame, which leads to resonance between the laser and the REM. The reflected radiation near this interval and the corresponding spectra are amplified due to the resonance. By adjusting the arrival time of the probe laser, a certain part of the reflected field can be selectively amplified or suppressed, leading to selective adjustment of the corresponding spectra.
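For context, a standard relativistic-mirror relation (a textbook estimate, not a result quoted from this abstract): for head-on reflection of a probe pulse of frequency $\omega_0$ off a mirror moving towards it with velocity $\beta c$ and Lorentz factor $\gamma\gg1$, the double Doppler shift gives a reflected frequency of approximately
$$\omega_r = \frac{1+\beta}{1-\beta}\,\omega_0 \simeq 4\gamma^2\omega_0,$$
so the continuing acceleration of the REM (growing $\gamma$) is what chirps the reflected attosecond pulse.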
Reducing protein content in the diet of growing goats: implications for nitrogen balance, intestinal nutrient digestion and absorption, and rumen microbiota
X. X. Zhang, Y. X. Li, Z. R. Tang, W. Z. Sun, L. T. Wu, R. An, H. Y. Chen, K. Wan, Z. H. Sun
Journal: animal / Volume 14 / Issue 10 / October 2020
Published online by Cambridge University Press: 08 May 2020, pp. 2063-2073
Print publication: October 2020
Reducing dietary CP content is an effective approach to reduce animal nitrogen excretion and save protein feed resources. However, it is not clear how reducing dietary CP content affects the nutrient digestion and absorption in the gut of ruminants, therefore it is difficult to accurately determine how much reduction in dietary CP content is appropriate. This study was conducted to investigate the effects of reduced dietary CP content on N balance, intestinal nutrient digestion and absorption, and rumen microbiota in growing goats. To determine N balance, 18 growing wether goats (25.0 ± 0.5 kg) were randomly assigned to one of three diets: 13.0% (control), 11.5% and 10.0% CP. Another 18 growing wether goats (25.0 ± 0.5 kg) were surgically fitted with ruminal, proximate duodenal, and terminal ileal fistulae and were randomly assigned to one of the three diets to investigate intestinal amino acid (AA) absorption and rumen microbiota. The results showed that fecal and urinary N excretion of goats fed diets containing 11.5% and 10.0% CP were lower than those of goats fed the control diet (P < 0.05). When compared with goats fed the control diet, N retention was decreased and apparent N digestibility in the entire gastrointestinal tract was increased in goats fed the 10% CP diet (P < 0.05). When compared with goats fed the control diet, the duodenal flow of lysine, tryptophan and phenylalanine was decreased in goats fed the 11.5% CP diet (P < 0.05) and that of lysine, methionine, tryptophan, phenylalanine, leucine, glutamic acid, tyrosine, essential AAs (EAAs) and total AAs (TAAs) was decreased in goats fed the 10.0% CP diet (P < 0.05). When compared with goats fed the control diet, the apparent absorption of TAAs in the small intestine was increased in goats fed the 11.5% CP diet (P < 0.05) and that of isoleucine, serine, cysteine, EAAs, non-essential AAs, and TAAs in the small intestine was increased in goats fed the 10.0% CP diet (P < 0.05). When compared with goats fed the control diet, the relative richness of Bacteroidetes and Fibrobacteres was increased and that of Proteobacteria and Synergistetes was decreased in the rumen of goats fed a diet with 10.0% CP. In conclusion, reducing dietary CP content reduced N excretion and increased nutrient utilization by improving rumen fermentation, enhancing nutrient digestion and absorption, and altering rumen microbiota in growing goats.
Dietary calcium deficiency suppresses follicle selection in laying ducks through mechanism involving cyclic adenosine monophosphate-mediated signaling pathway
W. Chen, W. G. Xia, D. Ruan, S. Wang, K. F. M. Abouelezz, S. L. Wang, Y. N. Zhang, C. T. Zheng
Ovarian follicle selection is a natural biological process in the pre-ovulatory hierarchy in birds that drives growing follicles to be selected within the ovulatory cycle. Follicle selection in birds is strictly regulated, involving signaling pathways mediated by dietary nutrients, gonadotrophic hormones and paracrine factors. This study aimed to test the hypothesis that dietary Ca may participate in regulating follicle selection in laying ducks through activating the signaling pathway of cyclic adenosine monophosphate (cAMP)/protein kinase A (PKA)/extracellular signal-regulated kinase (ERK), possibly mediated by gonadotrophic hormones. Female ducks at 22 weeks of age were initially fed one of two Ca-deficient diets (containing 1.8% or 0.38% Ca) or a Ca-adequate control diet (containing 3.6% Ca) for 67 days (depletion period), then all birds were fed the Ca-adequate diet for an additional 67 days (repletion period). Compared with the Ca-adequate control, ducks fed 0.38% Ca during the depletion period had significantly decreased (P < 0.05) numbers of hierarchical follicles and total ovarian weight, which were accompanied by reduced egg production. Plasma concentration of FSH was decreased by the diet containing 1.8% Ca but not by that containing 0.38%. The ovarian content of cAMP was increased with the two Ca-deficient diets, and phosphorylation of PKA and ERK1/2 was increased with 0.38% dietary Ca. Transcripts of ovarian estradiol receptor 2 and luteinizing hormone receptor (LHR) were reduced in the ducks fed the two Ca-deficient diets (P < 0.05), while those of the ovarian follicle stimulating hormone receptor (FSHR) were decreased in the ducks fed 0.38% Ca. The transcript abundance of ovary gap junction proteins, A1 and A4, was reduced with the Ca-deficient diets (P < 0.05). The down-regulation of gene expression of gap junction proteins and hormone receptors, the increased cAMP content and the suppressed hierarchical follicle numbers were reversed by repletion of dietary Ca. These results indicate that dietary Ca deficiency negatively affects follicle selection of laying ducks, independent of FSH, but probably by activating cAMP/PKA/ERK1/2 signaling pathway.
Effects of the acid–base treatment of corn on rumen fermentation and microbiota, inflammatory response and growth performance in beef cattle fed high-concentrate diet
Journal: animal / Volume 14 / Issue 9 / September 2020
Published online by Cambridge University Press: 16 April 2020, pp. 1876-1884
Print publication: September 2020
Beef cattle are often fed high-concentrate diet (HCD) to achieve high growth rate. However, HCD feeding is strongly associated with metabolic disorders. Mild acid treatment of grains in HCD with 1% hydrochloric acid (HA) followed by neutralization with sodium bicarbonate (SB) might modify rumen fermentation patterns and microbiota, thereby decreasing the negative effects of HCD. This study was thus aimed to investigate the effects of treatment of corn with 1% HA and subsequent neutralization with SB on rumen fermentation and microbiota, inflammatory response and growth performance in beef cattle fed HCD. Eighteen beef cattle were randomly allocated to three groups and each group was fed different diets: low-concentrate diet (LCD) (concentrate : forage = 40 : 60), HCD (concentrate : forage = 60 : 40) or HCD based on treated corn (HCDT) with the same concentrate to forage ratio as the HCD. The corn in the HCDT was steeped in 1% HA (wt/wt) for 48 h and neutralized with SB after HA treatment. The animal trial lasted for 42 days with an adaptation period of 7 days. At the end of the trial, rumen fluid samples were collected for measuring ruminal pH values, short-chain fatty acids, endotoxin (or lipopolysaccharide, LPS) and bacterial microbiota. Plasma samples were collected at the end of the trial to determine the concentrations of plasma LPS, proinflammatory cytokines and acute phase proteins (APPs). The results showed that compared with the LCD, feeding the HCD had better growth performance due to a shift in the ruminal fermentation pattern from acetate towards propionate, butyrate and valerate. However, the HCD decreased ruminal pH and increased ruminal LPS release and the concentrations of plasma proinflammatory cytokines and APPs. Furthermore, feeding the HCD reduced bacterial richness and diversity in the rumen. Treatment of corn increased resistant starch (RS) content. Compared with the HCD, feeding the HCDT reduced ruminal LPS and improved ruminal bacterial microbiota, resulting in decreased inflammation and improved growth performance. In conclusion, although the HCD had better growth performance than the LCD, feeding the HCD promoted the pH reduction and the LPS release in the rumen, disturbed the ruminal bacterial stability and increased inflammatory response. Treatment of corn with HA in combination with subsequent SB neutralization increased the RS content and helped counter the negative effects of feeding HCD to beef steers.
Fundamental physics with the Square Kilometre Array
Square Kilometre Array
A. Weltman, P. Bull, S. Camera, K. Kelley, H. Padmanabhan, J. Pritchard, A. Raccanelli, S. Riemer-Sørensen, L. Shao, S. Andrianomena, E. Athanassoula, D. Bacon, R. Barkana, G. Bertone, C. Bœhm, C. Bonvin, A. Bosma, M. Brüggen, C. Burigana, F. Calore, J. A. R. Cembranos, C. Clarkson, R. M. T. Connors, Á. de la Cruz-Dombriz, P. K. S. Dunsby, J. Fonseca, N. Fornengo, D. Gaggero, I. Harrison, J. Larena, Y.-Z. Ma, R. Maartens, M. Méndez-Isla, S. D. Mohanty, S. Murray, D. Parkinson, A. Pourtsidou, P. J. Quinn, M. Regis, P. Saha, M. Sahlén, M. Sakellariadou, J. Silk, T. Trombetti, F. Vazza, T. Venumadhav, F. Vidotto, F. Villaescusa-Navarro, Y. Wang, C. Weniger, L. Wolz, F. Zhang, B. M. Gaensler
Published online by Cambridge University Press: 27 January 2020, e002
The Square Kilometre Array (SKA) is a planned large radio interferometer designed to operate over a wide range of frequencies, and with an order of magnitude greater sensitivity and survey speed than any current radio telescope. The SKA will address many important topics in astronomy, ranging from planet formation to distant galaxies. However, in this work, we consider the perspective of the SKA as a facility for studying physics. We review four areas in which the SKA is expected to make major contributions to our understanding of fundamental physics: cosmic dawn and reionisation; gravity and gravitational radiation; cosmology and dark energy; and dark matter and astroparticle physics. These discussions demonstrate that the SKA will be a spectacular physics machine, which will provide many new breakthroughs and novel insights on matter, energy, and spacetime.
The role of DNA damage as a therapeutic target in autosomal dominant polycystic kidney disease
Jennifer Q. J. Zhang, Sayanthooran Saravanabavan, Alexandra Munt, Annette T. Y. Wong, David C. Harris, Peter C. Harris, Yiping Wang, Gopala K. Rangan
Journal: Expert Reviews in Molecular Medicine / Volume 21 / 2019
Published online by Cambridge University Press: 26 November 2019, e6
Autosomal dominant polycystic kidney disease (ADPKD) is the most common monogenic kidney disease and is caused by heterozygous germ-line mutations in either PKD1 (85%) or PKD2 (15%). It is characterised by the formation of numerous fluid-filled renal cysts and leads to adult-onset kidney failure in ~50% of patients by 60 years. Kidney cysts in ADPKD are focal and sporadic, arising from the clonal proliferation of collecting-duct principal cells, but in only 1–2% of nephrons for reasons that are not clear. Previous studies have demonstrated that further postnatal reductions in PKD1 (or PKD2) dose are required for kidney cyst formation, but the exact triggering factors are not clear. A growing body of evidence suggests that DNA damage, and activation of the DNA damage response pathway, are altered in ciliopathies. The aims of this review are to: (i) analyse the evidence linking DNA damage and renal cyst formation in ADPKD; (ii) evaluate the advantages and disadvantages of biomarkers to assess DNA damage in ADPKD and finally, (iii) evaluate the potential effects of current clinical treatments on modifying DNA damage in ADPKD. These studies will address the significance of DNA damage and may lead to a new therapeutic approach in ADPKD.
Strategic investment in tuberculosis control in the Republic of Bulgaria
T. N. Doan, T. Varleva, M. Zamfirova, M. Tyufekchieva, A. Keshelava, K. Hristov, A. Yaneva, B. Gadzheva, S. Zhang, S. Irbe, R. Ragonnet, E. S. McBryde, J. M. Trauer
As Bulgaria transitions away from Global Fund grant, robust estimates of the comparative impact of the various response strategies under consideration are needed to ensure sustained effectiveness of the tuberculosis (TB) programme. We tailored an established mathematical model for TB control to the epidemic in Bulgaria to project the likely outcomes of seven intervention scenarios. Under existing programmatic conditions projected forward, the country's targets for achieving TB elimination in the coming decades will not be achieved. No interventions under consideration were predicted to accelerate the baseline projected reduction in epidemiological indicators significantly. Discontinuation of the 'Open Doors' program and activities of non-governmental organisations would result in a marked exacerbation of the epidemic (increasing incidence in 2035 by 6–8% relative to baseline conditions projected forward). Changing to a short course regimen for multidrug-resistant TB (MDR-TB) would substantially decrease MDR-TB mortality (by 21.6% in 2035 relative to baseline conditions projected forward). Changing to ambulatory care for eligible patients would not affect TB burden but would be markedly cost-saving. In conclusion, Bulgaria faces important challenges in transitioning to a primarily domestically-financed TB programme. The country should consider maintaining currently effective programs and shifting towards ambulatory care to ensure program sustainability.
Probing the cold magnetised Universe with SPICA-POL (B-BOP)
Exploring Astronomical Evolution with SPICA
Ph. André, A. Hughes, V. Guillet, F. Boulanger, A. Bracco, E. Ntormousi, D. Arzoumanian, A.J. Maury, J.-Ph. Bernard, S. Bontemps, I. Ristorcelli, J.M. Girart, F. Motte, K. Tassis, E. Pantin, T. Montmerle, D. Johnstone, S. Gabici, A. Efstathiou, S. Basu, M. Béthermin, H. Beuther, J. Braine, J. Di Francesco, E. Falgarone, K. Ferrière, A. Fletcher, M. Galametz, M. Giard, P. Hennebelle, A. Jones, A. A. Kepley, J. Kwon, G. Lagache, P. Lesaffre, F. Levrier, D. Li, Z.-Y. Li, S. A. Mao, T. Nakagawa, T. Onaka, R. Paladino, N. Peretto, A. Poglitsch, V. Revéret, L. Rodriguez, M. Sauvage, J. D. Soler, L. Spinoglio, F. Tabatabaei, A. Tritsis, F. van der Tak, D. Ward-Thompson, H. Wiesemeyer, N. Ysard, H. Zhang
Space Infrared Telescope for Cosmology and Astrophysics (SPICA), the cryogenic infrared space telescope recently pre-selected for a 'Phase A' concept study as one of the three remaining candidates for European Space Agency (ESA's) fifth medium class (M5) mission, is foreseen to include a far-infrared polarimetric imager [SPICA-POL, now called B-fields with BOlometers and Polarizers (B-BOP)], which would offer a unique opportunity to resolve major issues in our understanding of the nearby, cold magnetised Universe. This paper presents an overview of the main science drivers for B-BOP, including high dynamic range polarimetric imaging of the cold interstellar medium (ISM) in both our Milky Way and nearby galaxies. Thanks to a cooled telescope, B-BOP will deliver wide-field 100–350 $\mu$m images of linearly polarised dust emission in Stokes Q and U with a resolution, signal-to-noise ratio, and both intensity and spatial dynamic ranges comparable to those achieved by Herschel images of the cold ISM in total intensity (Stokes I). The B-BOP 200 $\mu$m images will also have a factor $\sim $30 higher resolution than Planck polarisation data. This will make B-BOP a unique tool for characterising the statistical properties of the magnetised ISM and probing the role of magnetic fields in the formation and evolution of the interstellar web of dusty molecular filaments giving birth to most stars in our Galaxy. B-BOP will also be a powerful instrument for studying the magnetism of nearby galaxies and testing Galactic dynamo models, constraining the physics of dust grain alignment, informing the problem of the interaction of cosmic rays with molecular clouds, tracing magnetic fields in the inner layers of protoplanetary disks, and monitoring accretion bursts in embedded protostars.
In situ Electric Field Manipulation of Ferroelectric Vortices
Christopher T. Nelson, Zijian Hong, Cheng Zhang, Ajay K. Yadav, Sujit Das, Shang-Lin Hsu, Miaofang Chi, Philip D Rack, Long-Qing Chen, Lane W. Martin, Ramamoorthy Ramesh
Journal: Microscopy and Microanalysis / Volume 25 / Issue S2 / August 2019
Published online by Cambridge University Press: 05 August 2019, pp. 1844-1845
Print publication: August 2019
Global higher integrability for very weak solutions to nonlinear subelliptic equations
Guangwei Du and Junqiang Han
In this paper we consider the following nonlinear subelliptic Dirichlet problem:
$$ \textstyle\begin{cases} X^{*}A(x,u,Xu)+ B(x,u,Xu)=0,& x\in\Omega,\\ u-u_{0}\in W_{X,0}^{1,r}(\Omega), \end{cases} $$
where \(X=\{X_{1},\ldots,X_{m}\}\) is a system of smooth vector fields defined in \(\mathbf{R}^{n}\) with globally Lipschitz coefficients satisfying Hörmander's condition, and we prove the global higher integrability for the very weak solutions.
Introduction and main result
The theory of very weak solutions was introduced in the work of Iwaniec and Sbordone [1]. Iwaniec and Sbordone realized that the usual Sobolev assumption for weak solutions to the p-harmonic equation can be relaxed to a slightly weaker Sobolev space, and proved that very weak solutions are actually classical weak solutions by using the nonlinear Hodge decomposition to construct suitable test functions. Based on Whitney's extension theorem and the theory of \(A_{p}\) weights, Lewis [2] gave a completely different proof and obtained the same result for certain elliptic systems. After [1] and [2], many authors have devoted their energy to the study of the regularity of such solutions; see for example [3–5] and the references therein. We mention here that Xie and Fang [5] obtained the global higher integrability of very weak solutions to a class of nonlinear elliptic systems with a Lipschitz boundary condition by using the Hodge decomposition to construct a suitable test function. Recently, the authors in [6] proved a global regularity result for second-order degenerate elliptic systems of p-Laplacian type in the Euclidean setting.
In 2005, Zatorska-Goldstein [7] showed the local higher integrability of very weak solutions to the nonlinear subelliptic equations
$$ X^{*}A(x,u,Xu)+ B(x,u,Xu)=0, \quad x\in\Omega, $$
where \(\Omega\subset\mathbf{R}^{n}\) is a bounded domain, \(X=\{X_{1},\ldots,X_{m}\}\) (\(m\leq n\)) is a system of smooth vector fields in \(\mathbf{R}^{n}\) with globally Lipschitz coefficients satisfying Hörmander's condition, and \(X^{*}=(X_{1}^{*},\ldots,X_{m}^{*})\) is the family of operators formally adjoint to the \(X_{j}\) in \(L^{2}\).
In this work we are concerned with the boundary value problem for (1.1) with the boundary condition \(u-u_{0}\in W_{X,0}^{1,r}(\Omega)\), i.e.,
$$ \textstyle\begin{cases} X^{*}A(x,u,Xu)+ B(x,u,Xu)=0,& x\in\Omega,\\ u-u_{0}\in W_{X,0}^{1,r}(\Omega), \end{cases} $$
and we establish the global higher integrability of very weak solutions. We assume that the functions \(A=(A_{1},\ldots,A_{m}):\mathbf{R}^{n}\times\mathbf{R}\times\mathbf{R}^{m}\rightarrow\mathbf{R}^{m}\) and \(B:\mathbf{R}^{n}\times\mathbf{R}\times\mathbf{R}^{m}\rightarrow\mathbf{R}\) are both Carathéodory functions satisfying
$$\begin{aligned} &\bigl\vert A(x,u,\xi) \bigr\vert \leq\alpha\bigl(\vert u \vert ^{p-1}+\vert \xi \vert ^{p-1}\bigr) , \end{aligned}$$
$$\begin{aligned} &\bigl\vert B(x,u,\xi) \bigr\vert \leq\alpha\bigl(\vert u \vert ^{p-1}+\vert \xi \vert ^{p-1}\bigr) , \end{aligned}$$
$$\begin{aligned} &\bigl\langle A(x,u,\xi)-A(x,v,\zeta),\xi-\zeta\bigr\rangle \geq \beta \vert \xi-\zeta \vert ^{2}\bigl(\vert \xi \vert +\vert \zeta \vert \bigr)^{p-2} , \end{aligned}$$
for a.e. \(x\in\mathbf{R}^{n}\), \(u\in\mathbf{R}\) and \(\xi\in\mathbf {R}^{m}\). Here \(p\geq2\), α, β are positive constants.
A function \(u\in W_{X}^{1,r}(\Omega)\) (\(r< p\)) is called a very weak solution to (1.1) if
$$ \int_{\Omega}A(x,u,Xu)\cdot X\varphi\,dx+ \int_{\Omega}B(x,u,Xu)\varphi\,dx=0 $$
holds for all \(\varphi\in C_{0}^{\infty}(\Omega)\).
In the above definition, 'very weak' means that the integrability exponent \(r\) is strictly lower than the natural exponent \(p\); when \(r=p\), this is the classical definition of a weak solution to (1.1).
To obtain our result, some regularity assumptions introduced in [8] must be imposed on Ω. Let us first recall the notion of uniform \((X,p)\)-fatness, which can be found in [9]: a set \(E\subset\mathbf{R}^{n}\) is called uniformly \((X,p)\)-fat if there exist constants \(C_{0},R_{0}>0\) such that
$$ \operatorname{cap}_{p}\bigl(E\cap\bar{B}(x,R),B(x,2R)\bigr)\geq C_{0} \operatorname{cap}_{p}\bigl(\bar{B}(x,R),B(x,2R) \bigr) $$
for all \(x\in\partial E\) and \(0< R< R_{0}\), where \(\operatorname{cap}_{p}\) is the variational p-capacity defined in Section 2.
We consider the following hypotheses on Ω:
(\(H_{1}\)):
there exists a constant \(C_{1}\geq1\) such that, for all \(x\in\Omega\),
$$ \vert B_{\rho(x)} \vert \leq C_{1}\bigl\vert B_{\rho(x)}\cap\bigl( \mathbf{R}^{n}\setminus\Omega\bigr) \bigr\vert , $$
where \(\rho(x)=2\operatorname{dist}(x,\mathbf{R}^{n}\setminus\Omega)\);
(\(H_{2}\)):
the complement \(\mathbf{R}^{n}\setminus\Omega\) of Ω is uniformly \((X,p)\)-fat.
Under the hypotheses stated above, we prove the following.
Theorem 1.1
Assume that \(u_{0}\in W_{X}^{1,s}(\Omega)\), \(s>p\). Then there exists a \(\delta>0\) such that if \(u\in W_{X}^{1,p-\delta}(\Omega)\) is a very weak solution to the Dirichlet problem (1.2), we have \(u\in W_{X}^{1,p+\tilde{\delta}}(\Omega)\) for some \(\tilde{\delta}>0\).
The key technical tool in proving Theorem 1.1 is a Sobolev type inequality with a capacity term. With it we can prove a reverse Hölder inequality for the generalized gradient Xu of a very weak solution, which allows us to get the global higher integrability of Xu. This paper is organized as follows. In Section 2 we collect some known results on Carnot-Carathéodory spaces and prove a Sobolev type inequality characterized by capacity. Section 3 is devoted to the proof of Theorem 1.1.
Some known results and a Sobolev type inequality
Let \(\{X_{1},\ldots,X_{m}\}\) be a system of \(C^{\infty}\)-smooth vector fields in \(\mathbf{R}^{n} (n\geq3)\) satisfying Hörmander's condition (see [10]):
$$ \operatorname{rank} \bigl( \operatorname{Lie}\{X_{1},\ldots ,X_{m}\} \bigr) =n. $$
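For instance (a standard illustration, not needed in the sequel), the Heisenberg vector fields in \(\mathbf{R}^{3}\),
$$ X_{1}=\partial_{x_{1}}-\frac{x_{2}}{2}\partial_{x_{3}}, \qquad X_{2}=\partial_{x_{2}}+\frac{x_{1}}{2}\partial_{x_{3}}, $$
satisfy Hörmander's condition with \(m=2\) and \(n=3\): a direct computation gives \([X_{1},X_{2}]=\partial_{x_{3}}\), so that \(X_{1}\), \(X_{2}\) and their commutator span \(\mathbf{R}^{3}\) at every point.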
The generalized gradient is denoted by \(Xu=(X_{1}u,\ldots,X_{m}u)\) and its length is given by
$$ \bigl\vert Xu(x) \bigr\vert = \Biggl( \sum_{j=1}^{m} \bigl\vert X_{j} u(x) \bigr\vert ^{2} \Biggr) ^{\frac{1}{2}}. $$
An absolutely continuous curve \(\gamma:[a,b]\rightarrow\mathbf {R}^{n}\) is said to be admissible with respect to the system \(\{X_{1},\ldots,X_{m}\}\), if there exist functions \(c_{i}(t), a\leq t\leq b\), satisfying
$$ \sum_{i=1}^{m} c_{i}(t)^{2} \leq1 \quad\mbox{and}\quad\gamma'(t)=\sum _{i=1}^{m} c_{i}(t)X_{i}\bigl( \gamma(t)\bigr). $$
The Carnot-Carathéodory distance \(d(x,y)\) generated by \(\{ X_{1},\ldots,X_{m}\}\) is defined as the infimum of those \(T>0\) for which there exists an admissible path \(\gamma:[0, T]\rightarrow\mathbf{R}^{n}\) with \(\gamma(0)=x\), \(\gamma(T)=y\).
By the accessibility theorem of Chow [11], the distance d is a metric and therefore \((\mathbf{R}^{n},d)\) is a metric space which is called the Carnot-Carathéodory space associated with the system \(\{ X_{1},\ldots,X_{m}\}\). The ball is denoted by
$$ B(x_{0},R)=\bigl\{ x\in\mathbf{R}^{n}:d(x,x_{0})< R \bigr\} . $$
For \(\sigma>0\) and \(B=B(x_{0}, R)\), we will write σB to indicate \(B(x_{0},\sigma R)\) and diamΩ the diameter of Ω with respect to d.
It was proved in [12] that the identity map is a homeomorphism of \((\mathbf{R}^{n},d)\) into \(\mathbf{R}^{n}\) with the usual Euclidean metric, and every set which is bounded with respect to the Euclidean metric is also bounded with respect to d. Moreover, by a result of Garofalo and Nhieu [13], Proposition 2.11, if the given vector fields have globally Lipschitz coefficients in addition, then a subset of \(\mathbf{R}^{n}\) is bounded with respect to d if and only if it is bounded with respect to the Euclidean metric.
Hereafter we assume that the vector fields \(X_{1},\ldots,X_{m}\) satisfy the Hörmander condition and have globally Lipschitz coefficients.
Lemma 2.1 ([12, 14])
For every bounded open set \(\Omega\subset\mathbf{R}^{n}\) there exists \(C_{d}\geq1\) such that
$$\begin{aligned} \bigl\vert B(x,2R) \bigr\vert \leq C_{d}\bigl\vert B(x,R) \bigr\vert \end{aligned}$$
for any \(x\in\Omega\) and \(0< R\leq5\operatorname{diam}\Omega\).
Here, \(\vert B(x,R) \vert \) denotes the Lebesgue measure of \(B(x,R)\). The best constant \(C_{d}\) in (2.1) is called the doubling constant, the measure such that (2.1) holds is called a doubling measure and the homogeneous dimension relative to Ω is \(Q=\log_{2}C_{d}\).
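As a simple illustration (not used later), in the Euclidean case \(X=\nabla\) one has
$$ \bigl\vert B(x,2R) \bigr\vert =2^{n}\bigl\vert B(x,R) \bigr\vert , \quad\mbox{so that } C_{d}=2^{n} \mbox{ and } Q=n, $$
while for the Heisenberg vector fields recalled above the Carnot-Carathéodory balls satisfy \(\vert B(x,R) \vert \approx cR^{4}\), giving \(Q=4>n=3\).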
Given \(1\leq p<\infty\), we define the Sobolev space \(W_{X}^{1,p}(\Omega)\) by
$$ W_{X}^{1,p}(\Omega)= \bigl\{ u\in L^{p}(\Omega ):X_{j}u\in L^{p}(\Omega), j=1,2,\ldots,m \bigr\} , $$
endowed with the norm
$$ \Vert u \Vert _{W_{X}^{1,p}(\Omega)}=\Vert u \Vert _{L^{p}(\Omega)}+\Vert Xu \Vert _{L^{p}(\Omega)}. $$
Here, \(X_{j}u\) is the distributional derivative of \(u\in L_{\operatorname{loc}}^{1}(\Omega)\) given by the identity
$$ \langle X_{j}u,\varphi\rangle= \int_{\Omega}u X_{j}^{*}\varphi\,dx, \quad \varphi \in C_{0}^{\infty}(\Omega). $$
The space \(W_{X}^{1,p}(\Omega)\) is a Banach space which admits \(C^{\infty}(\Omega)\cap W_{X}^{1,p}(\Omega)\) as its dense subset. The completion of \(C_{0}^{\infty}(\Omega)\) under the norm \(\Vert \cdot \Vert _{W_{X}^{1,p}(\Omega)}\) is denoted by \(W_{X,0}^{1,p}(\Omega )\). The following Sobolev-Poincaré inequalities can be found in [14] and [15]:
Lemma 2.2
Let Q be the homogeneous dimension relative to Ω, \(B=B(x_{0},R)\subset\Omega, 0< R<\operatorname {diam}\Omega, 1\leq p<\infty\). There exists a constant \(C>0\) such that, for every \(u\in W^{1,p}_{X}(B)\),
$$ \biggl( \fint_{B}\vert u-u_{B} \vert ^{\kappa p}\,dx \biggr) ^{\frac{1}{\kappa p}}\leq CR \biggl( \fint_{B}\vert Xu \vert ^{p}\,dx \biggr) ^{\frac{1}{p}}, $$
where \(u_{B}=\fint_{B}u\,dx=\frac{1}{\vert B \vert }\int_{B}u\,dx\), and \(1\leq\kappa\leq{Q/(Q-p)}\), if \(1\leq p< Q\); \(1\leq \kappa<\infty\), if \(p\geq Q\). Moreover, for any \(u\in W^{1,p}_{X,0}(B)\),
$$ \biggl( \fint_{B}\vert u \vert ^{\kappa p}\,dx \biggr) ^{\frac{1}{\kappa p}}\leq CR \biggl( \fint_{B}\vert Xu \vert ^{p}\,dx \biggr) ^{\frac{1}{p}}. $$
Next we recall a Gehring lemma on the metric measure space \((Y,d,\mu )\), where d is a metric and μ is a doubling measure.
Lemma 2.3
Let \(q\in[q_{0},2Q]\), where \(q_{0}>1\) is fixed. Assume that the functions f, g are nonnegative and \(g\in L_{\operatorname{loc}}^{q}(Y,\mu)\), \(f\in L_{\operatorname {loc}}^{r_{0}}(Y,\mu)\), for some \(r_{0}>q\). If there exist constants \(b>1\) and θ such that for every ball \(B\subset\sigma B\subset Y\) the following inequality holds:
$$ \fint_{B}g^{q}\,d\mu\leq b \biggl[ \biggl( \fint_{\sigma B}g\,d\mu \biggr) ^{q}+ \fint_{\sigma B}f^{q}\,d\mu \biggr] +\theta \fint_{\sigma B}g^{q}\,d\mu, $$
then there exist nonnegative constants \(\theta_{0}=\theta _{0}(q_{0},Q,C_{d},\sigma)\) and \(\varepsilon_{0}=\varepsilon _{0}(b,q_{0},Q,C_{d},\sigma)\) such that if \(0<\theta<\theta_{0}\) then \(g\in L_{\operatorname{loc}}^{p}(Y,\mu)\) for \(p\in [q,q+\varepsilon_{0})\).
For the Hardy-Littlewood maximal functions
$$ M f(x)=\sup_{R>0}\frac{1}{\vert B(x,R) \vert } \int_{B(x,R)}\bigl\vert f(y) \bigr\vert \,dy $$
and
$$ M_{\Omega}f(x)=\sup_{R>0}\frac{1}{\vert B(x,R) \vert } \int_{B(x,R)\cap\Omega}\bigl\vert f(y) \bigr\vert \,dy, $$
we will use the following properties proved in [14] and [15].
Lemma 2.4
If \(f\in L^{p}(\Omega)\), \(1< p\leq\infty\), then \(M_{\Omega}f\in L^{p}(\Omega)\) and there exists a constant \(C=C(C_{d}, p)>0\) such that
$$ \Vert M_{\Omega}f \Vert _{L^{p}(\Omega)}\leq C\Vert f \Vert _{L^{p}(\Omega)}. $$
Lemma 2.5
If \(u\in W^{1,p}_{X,\mathrm{loc}}(\Omega)\), \(1< p <\infty\), then there exists \(C>0\) such that, for a.e. \(x, y\in \Omega\),
$$ \bigl\vert u(x)-u(y) \bigr\vert \leq Cd(x,y) \bigl( M_{\Omega }\vert Xu \vert (x)+M_{\Omega} \vert Xu \vert (y) \bigr) . $$
Moreover, for any \(B=B(x_{0},R)\subset\Omega\) and \(u\in W^{1,p}_{X}(B)\), we have
$$ \bigl\vert u(x)-u_{B} \bigr\vert \leq CRM_{B}\vert Xu \vert (x),\quad\textit{a.e.}~x\in B. $$
It is worth noting that from Lemma 2.5 and Lemma 2.2 we can infer that, for a.e. \(x\in B\) and \(u\in W^{1,p}_{X,0}(B)\),
$$ \bigl\vert u(x) \bigr\vert \leq CRM_{B}\vert Xu \vert (x). $$
Let \(\omega(x)\geq0\) be a locally integrable function, we say that \(\omega\in A_{p}\), \(1< p <\infty\), if there exists some positive constant A such that
$$ \sup_{B\subset\mathbf{R}^{n}} \biggl( \fint_{B}\omega\,dx \biggr) \biggl( \fint_{B}\omega^{\frac{1}{1-p}}\,dx \biggr) ^{p-1}\leq A< \infty. $$
Lemma 2.6
Assume \(\omega\in L_{\operatorname {loc}}^{1}(\mathbf{R}^{n})\) is nonnegative and \(1< p <\infty\). Then \(\omega\in A_{p}\) if and only if there exists a constant \(C>0\) such that
$$ \int_{\mathbf{R}^{n}}\vert Mf \vert ^{p}\omega\,dx\leq C \int_{\mathbf{R}^{n}}\vert f \vert ^{p}\omega\,dx, $$
for all \(f\in L^{p}(\omega(x)\,dx)\).
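A standard family of examples, recalled here only as an illustration in the Euclidean setting, is given by the power weights:
$$ \omega(x)=\vert x \vert ^{\gamma}\in A_{p}\bigl(\mathbf{R}^{n}\bigr) \quad\Longleftrightarrow\quad -n< \gamma< n(p-1); $$
in particular, \(\omega(x)=\vert x \vert ^{-1}\) belongs to \(A_{2}(\mathbf{R}^{n})\) for every \(n\geq2\).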
The \((X,p)\) -capacity of a compact set \(K\subset\Omega\) in Ω is defined by
$$ \operatorname{cap}_{p} (K,\Omega)=\inf\biggl\{ \int_{\Omega} \vert Xu \vert ^{p}\,dx:u\in C_{0}^{\infty}(\Omega), u=1 \mbox{ on } K \biggr\} $$
and for an arbitrary set \(E\subset\Omega\), the \((X,p)\)-capacity of E is
$$ \operatorname{cap}_{p} (E,\Omega)=\inf_{\substack{G\subset\Omega\operatorname{open}\\ E\subset G}}\sup _{\substack{K\subset G\\{K~\mathrm{compact}}}} \operatorname{cap}_{p}(K,\Omega). $$
We will use the following two-sided estimate of \((X,p)\)-capacity in [16]: For \(x\in\Omega\) and \(0< R<\operatorname{diam}\Omega\), there exist \(C_{1}, C_{2}>0\) such that
$$ C_{1}\frac{\vert B(x,R) \vert }{R^{p}}\leq {\operatorname{cap}}_{p} \bigl(\bar{B}(x,R),B(x,2R)\bigr)\leq C_{2}\frac{\vert B(x,R) \vert }{R^{p}}. $$
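In the Euclidean case this two-sided bound reduces to the familiar scaling of the variational p-capacity of concentric balls, which we record only as an illustration:
$$ {\operatorname{cap}}_{p}\bigl(\bar{B}(x,R),B(x,2R)\bigr)\approx\frac{\vert B(x,R) \vert }{R^{p}}\approx R^{n-p}, \quad 1< p< n. $$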
Lemma 2.7
If \(\mathbf {R}^{n}\backslash\Omega\) is uniformly \((X,p)\)-fat, then there exists \(1< q< p\) such that \(\mathbf {R}^{n}\backslash\Omega\) is also uniformly \((X,q)\)-fat.
The uniform \((X,q)\)-fatness also implies uniform \((X,p)\)-fatness for all \(p\geq q\), which is a simple consequence of Hölder's and Young's inequality.
At the end of this section we prove a Sobolev type inequality characterized by capacity. A similar inequality in the Euclidean setting can be found in [8].
Lemma 2.8
Let \(\Omega\subset\mathbf {R}^{n}\) be a bounded open set with the homogeneous dimension Q, \(1< q<\infty\) and \(0< R<\operatorname{diam}\Omega\). For any \(x\in \Omega\), denote \(B=B(x,R)\) and \(N(\varphi)=\{y\in\bar{B}:\varphi (y)=0\}\). Then there exists a constant \(C=C(Q,q)>0\) such that, for all \(\varphi\in C^{\infty}(2B)\cap W_{X}^{1,q}(2B)\),
$$ \biggl( \fint_{2B}\vert \varphi \vert ^{\kappa q}\,dx \biggr) ^{\frac{1}{\kappa q}}\leq C \biggl( \frac{1}{\operatorname{cap}_{q}(N(\varphi),2B)} \int_{2B}\vert X\varphi \vert ^{q}\,dx \biggr) ^{\frac{1}{q}}, $$
where \(1\leq\kappa\leq{Q/(Q-q)}\) if \(1\leq q< Q\) and \(1\leq\kappa <\infty\) if \(q\geq Q\).
We always assume \(\varphi_{2B}\neq0\); otherwise, (2.5) follows immediately from Lemma 2.2 and (2.4). Let \(\eta\in C_{0}^{\infty}(2B), 0\leq\eta\leq1\) such that \(\eta =1\) on B̄ and \(\vert X\eta \vert \leq\frac{c}{R}\). Denoting \(v=\eta(\varphi_{2B}-\varphi)/\varphi_{2B}\), then \(v\in C_{0}^{\infty}(2B)\) and \(v=1\) in \(N(\varphi)\). It follows from Lemma 2.2 that
$$\begin{aligned} {\operatorname{cap}}_{q}\bigl(N(\varphi),2B\bigr)&\leq \int_{2B}\vert Xv \vert ^{q}\,dx \\ &\leq \vert \varphi_{2B} \vert ^{-q} \int_{2B}\vert X\eta \vert ^{q}\vert \varphi -\varphi_{2B} \vert ^{q}\,dx+\vert \varphi _{2B} \vert ^{-q} \int_{2B}\vert X\varphi \vert ^{q}\,dx \\ &\leq C\vert \varphi_{2B} \vert ^{-q} \int_{2B}\vert X\varphi \vert ^{q}\,dx, \end{aligned}$$
$$ \vert \varphi_{2B} \vert \leq C \biggl( \frac{1}{\operatorname{cap}_{q}(N(\varphi),2B)} \int_{2B}\vert X\varphi \vert ^{q}\,dx \biggr) ^{\frac{1}{q}}. $$
Then Lemma 2.2 and (2.6) lead to
$$\begin{aligned} \biggl( \fint_{2B}\vert \varphi \vert ^{\kappa q}\,dx \biggr) ^{\frac{1}{\kappa q}}&\leq \biggl( \fint_{2B}\vert \varphi-\varphi_{2B} \vert ^{\kappa q}\,dx \biggr) ^{\frac{1}{\kappa q}}+\vert \varphi_{2B} \vert \\ &\leq CR \biggl( \fint_{2B}\vert X\varphi \vert ^{q}\,dx \biggr) ^{\frac{1}{q}}+C \biggl( \frac{1}{\operatorname{cap}_{q}(N(\varphi),2B)} \int_{2B}\vert X\varphi \vert ^{q}\,dx \biggr) ^{\frac{1}{q}} \\ &\leq C \biggl( \frac{1}{\operatorname{cap}_{q}(N(\varphi),2B)} \int_{2B}\vert X\varphi \vert ^{q}\,dx \biggr) ^{\frac{1}{q}}, \end{aligned}$$
where in the last step we used the estimate
$$ {\operatorname{cap}}_{q}\bigl(N(\varphi),2B\bigr)\leq{ \operatorname{cap}}_{q}(\bar{B},2B)\leq C\vert B \vert R^{-q}. $$
The proof is complete. □
Proof of Theorem 1.1
Assume that the function \(u\in W_{X}^{1,p-\delta}(\Omega)\) (\(\delta<\frac{1}{2}\)) is a very weak solution to the Dirichlet problem (1.2). Choose a ball \(B_{0}\) such that \(\overline{\Omega}\subset\frac {1}{2}B_{0}\) and let B be a ball of radius R with \(3B\subset B_{0}\) for fixed \(0< R<1\). There are two cases: (i) \(3B\subset\Omega\) or (ii) \(3B\backslash\Omega\neq\emptyset\). In case (i), the following estimate has been proved in [7]:
$$ \fint_{\frac{B}{2}}\vert Xu \vert ^{p-\delta}\,dx\leq\theta \fint_{3B}\vert Xu \vert ^{p-\delta}\,dx+b \biggl[ \fint_{3B}\vert u \vert ^{p-\delta}\,dx+ \biggl( \fint_{3B}\vert Xu \vert ^{t}\,dx \biggr) ^{\frac{p-\delta}{t}} \biggr] , $$
where θ is small enough, \(b>1\), and \(\max \{ 1,(p-\delta)_{*} \} < t< p-\delta\).
When \(3B\backslash\Omega\neq\emptyset\), a similar inequality (see (3.31) below) will be achieved.
Step 1. Let η be a smooth cut-off function on 2B, i.e. \(\eta\in C_{0}^{\infty}(2B)\) such that
$$ 0\leq\eta\leq1, \qquad\eta=1\quad \mbox{on } B \quad\mbox{and} \quad \vert X \eta \vert \leq c/R. $$
Define \(\hat{u}=\eta(u-u_{0})\) and
$$ E_{\mu}=\bigl\{ x\in\mathbf{R}^{n}:M\vert X \hat{u} \vert (x)\leq\mu\bigr\} , \quad\operatorname{for } \mu>0. $$
We conclude from Lemma 2.5 and the assumption \((H_{1})\) that û is Lipschitz continuous on \(E_{\mu}\cup(\mathbf {R}^{n}\setminus\Omega)\).
Indeed, if \(x,y\in E_{\mu}\cap\Omega\), then Lemma 2.5 implies \(\vert \hat{u}(x)-\hat{u}(y) \vert \leq c\mu d(x,y)\); if \(x,y\in\mathbf{R}^{n}\setminus\Omega\), then \(\hat{u}(x)=\hat {u}(y)=0\). We set \(B_{\rho_{x}}=B(x,\rho_{x})\) with \(\rho _{x}=2\operatorname{dist}(x,\mathbf{R}^{n}\setminus\Omega)\) for the case \(x\in E_{\mu}\cap\Omega\) and \(y\in\mathbf{R}^{n}\setminus\Omega\). Since û is zero on \(\mathbf{R}^{n}\setminus\Omega\), it follows that
$$\begin{aligned} \int_{{B_{{\rho_{x}}}} \cap({\mathbf{{R}}^{n}}\setminus\Omega )}\vert \hat{u} - {\hat{u}_{{B_{{\rho_{x}}}}}} \vert \,dz & =\int_{{B_{{\rho_{x}}}} \cap({\mathbf{{R}}^{n}}\setminus\Omega)} \vert {\hat{u}_{{B_{{\rho_{x}}}}}} \vert \,dz \\ &=\bigl\vert {B_{{\rho_{x}}}} \cap\bigl({\mathbf{{R}}^{n}} \setminus\Omega\bigr) \bigr\vert \vert {\hat{u}_{{B_{{\rho _{x}}}}}} \vert \end{aligned}$$
and then, from assumption \((H_{1})\) and Lemma 2.2,
$$\begin{aligned} \vert \hat{u}_{B_{\rho_{x}}} \vert &\leq C_{1}\frac{\vert B_{\rho_{x}}\cap(\mathbf{R}^{n}\setminus\Omega) \vert }{\vert B_{\rho_{x}} \vert }\vert \hat{u}_{B_{\rho_{x}}} \vert =\frac{C_{1}}{\vert B_{\rho_{x}} \vert } \int_{B_{\rho_{x}}\cap(\mathbf{R}^{n}\setminus\Omega)}\vert \hat{u}-\hat{u}_{B_{\rho_{x}}} \vert \,dz \\ &\leq C_{1} \fint_{B_{\rho_{x}}}\vert \hat{u}-\hat{u}_{B_{\rho_{x}}} \vert \,dz \leq cC_{1}\rho_{x} \fint_{B_{\rho_{x}}}\vert X\hat{u} \vert \,dz \\ &\leq cC_{1}\rho_{x}M\vert X\hat{u} \vert (x)\leq cC_{1}\mu\rho_{x}. \end{aligned}$$
Therefore, we have by (2.2) and (3.2)
$$\begin{aligned} \bigl\vert \hat{u}(x)-\hat{u}(y) \bigr\vert &=\bigl\vert \hat{u}(x) \bigr\vert \\ &\leq\bigl\vert \hat{u}(x)-\hat{u}_{B_{\rho_{x}}} \bigr\vert +\vert \hat{u}_{B_{\rho_{x}}} \vert \\ &\leq c{\rho_{x}}M\vert X\hat{u} \vert (x)+c C_{1}\mu {\rho_{x}} \\ &\leq c C_{1}\mu{\rho_{x}} \\ &\leq cC_{1}\mu d(x,y). \end{aligned}$$
It follows that û is a Lipschitz function on \(E_{\mu}\cup (\mathbf{R}^{n}\setminus\Omega)\) with the Lipschitz constant \(cC_{1}\mu\).
As in [7], we can use the Kirszbraun theorem (see e.g. [17]) to extend û to a Lipschitz function \(v_{\mu}\) defined on \(\mathbf{R}^{n}\) with the same Lipschitz constant. Moreover, there exists \(\mu_{0}\) such that, for every \(\mu\geq\mu_{0}\), \(\operatorname{supp}{v_{\mu}}\subset3B\cap\Omega\).
In fact, let \(D=2B\cap\Omega\) and \(x\in\mathbf{R}^{n}\backslash (3B\cap\Omega)\); then we have by Lemma 2.1
$$ M\vert X\hat{u} \vert (x)=\sup_{B'\ni x, B'\cap 2B\neq\emptyset} \fint_{B'}\vert X\hat{u} \vert (y)\,dy\leq \frac{C_{d}}{\vert 2B \vert } \int_{D}\vert X\hat{u} \vert (y)\,dy, $$
where \(\vert B' \vert >\vert B \vert \), \(C_{d}\) is the doubling constant. Setting
$$ \mu_{0}=\frac{C_{d}}{\vert 2B \vert } \int_{D}\vert X\hat{u} \vert (y)\,dy, $$
then \(M\vert X\hat{u} \vert (x)\leq\mu_{0}\leq\mu\) for every \(\mu\geq\mu_{0}\), which implies \(v_{\mu}(x)=\hat{u}(x)=0\) for \(x\in\mathbf {R}^{n}\backslash(3B\cap\Omega)\). So we can take the function \(v_{\mu}\) as a test function in (1.6).
Let \(\mu\geq\mu_{0}\) and take \(v_{\mu}\) as a test function in (1.6) to have
$$\begin{aligned} \int_{3B \cap\Omega}A(x,u,Xu) \cdot X{v_{\mu}}\,dx + \int_{3B \cap\Omega} B(x,u,Xu){v_{\mu}}\,dx=0. \end{aligned}$$
Noting that \({v_{\mu}}=\hat{u}\) on \((3B\cap\Omega)\cap{E_{\mu}}\) and that \(\operatorname{supp}\hat{u}\subset D\), we have by the structure conditions on \(A(x,u,\xi)\) and \(B(x,u,\xi)\)
$$\begin{aligned} & \int_{D\cap E_{\mu}} A(x,u,Xu)\cdot X\hat{u}\,dx+ \int_{D\cap E_{\mu}} B(x,u,Xu)\hat{u}\,dx \\ &\quad\leq \int_{(3B\cap\Omega)\backslash E_{\mu}} \bigl\vert A(x,u,Xu) \bigr\vert \vert Xv_{\mu} \vert \,dx+ \int_{(3B\cap\Omega)\backslash E_{\mu}} \bigl\vert B(x,u,Xu) \bigr\vert \vert v_{\mu} \vert \,dx \\ &\quad\leq c\mu \int_{(3B\cap\Omega)\backslash E_{\mu}} \bigl( \vert u \vert ^{p-1}+\vert Xu \vert ^{p-1} \bigr)\,dx, \end{aligned}$$
where in the last inequality we use the fact that \(\vert Xv_{\mu} \vert \leq c\mu\), \(\vert v_{\mu} \vert \leq cR\mu\) (see [7]).
Multiplying both sides of (3.3) by \(\mu^{-(1+\delta)}\) and integrating over \((\mu_{0},\infty)\), we get
$$\begin{aligned} L&:= \int_{\mu_{0}}^{\infty} \int_{D\cap E_{\mu}}{\mu}^{-(1+\delta)} \bigl( A(x,u,Xu)\cdot X \hat{u}+B(x,u,Xu)\hat{u} \bigr) \,dx\,d\mu \\ &\leq c \int_{\mu_{0}}^{\infty} \int_{(3B\cap\Omega)\backslash E_{\mu}}\mu^{-\delta} \bigl( \vert u \vert ^{p-1}+\vert Xu \vert ^{p-1} \bigr) \,dx\,d\mu:=P. \end{aligned}$$
Interchanging the order of integration and applying (3.2), we have
$$\begin{aligned} P &=c \int_{3B} \int_{{\mu_{0}}}^{M\vert X\hat{u} \vert } {{\mu^{ - \delta}} \bigl( { \vert u{ \vert ^{p - 1}} + \vert Xu{ \vert ^{p - 1}}} \bigr)\, d\mu\,dx} \\ &\leq\frac{c}{1-\delta} \int_{(3B\cap\Omega)\backslash E_{\mu_{0}}}\bigl(M\vert X\hat{u} \vert \bigr)^{1-\delta}\bigl(\vert u \vert ^{p-1}+\vert Xu \vert ^{p-1}\bigr)\,dx \\ &\leq c \int_{3B\cap\Omega} \bigl( \vert u \vert ^{p-\delta}+\vert Xu \vert ^{p-\delta} \bigr)\,dx+c \int_{3B\cap\Omega}\bigl(M\vert X\hat{u} \vert \bigr)^{p-\delta}\,dx. \end{aligned}$$
Using Lemma 2.4 and Lemma 2.8, we have
$$\begin{aligned} & c \int_{3B\cap\Omega}\bigl(M\vert X\hat{u} \vert \bigr)^{p-\delta}\,dx \\ &\quad \leq c \int_{D}\vert X\hat{u} \vert ^{p-\delta}\,dx \\ &\quad \leq c \int_{D}\vert Xu-Xu_{0} \vert ^{p-\delta}\,dx+\frac{c}{R^{p-\delta}} \int_{2B}\vert u-u_{0} \vert ^{p-\delta}\,dx \\ &\quad \leq c \int_{D}\vert Xu-Xu_{0} \vert ^{p-\delta}\,dx +\frac{c\vert 2B \vert }{R^{p-\delta}} \biggl( \frac{1}{\operatorname{cap}_{p-\delta}(N(u-u_{0}),2B)} \int_{2B}\vert Xu-Xu_{0} \vert ^{p-\delta}\,dx \biggr) , \end{aligned}$$
where \(N(u-u_{0})= \{ x\in\bar{B}:u(x)=u_{0}(x) \} \). Since \(u-u_{0}\) vanishes outside Ω, we have \(\mathbf{R}^{n}\setminus\Omega \subset\{u-u_{0}=0\}\). On the other hand, by Lemma 2.7 and assumption \((H_{2})\), there exists \(\delta_{0}\) such that if \(0<\delta<\delta _{0}\), \(\mathbf{R}^{n}\setminus\Omega\) is uniformly \((X,p-\delta )\)-fat, and hence
$$\begin{aligned} {\operatorname{cap}}_{p-\delta}\bigl(N(u-u_{0}),2B \bigr)&\geq{\operatorname{cap}}_{p-\delta}\bigl(\bar{B}\cap\bigl( \mathbf{R}^{n}\setminus\Omega\bigr),2B\bigr) \\ &\geq c \operatorname{cap}_{p-\delta}(\bar{B},2B)\geq c\vert B \vert R^{-(p-\delta)}. \end{aligned}$$
From (3.6) and the doubling condition, we derive
$$\begin{aligned} c \int_{3B\cap\Omega}\bigl(M\vert X\hat{u} \vert \bigr)^{p-\delta}\,dx&\leq c \int_{D}\vert X\hat{u} \vert ^{p-\delta}\,dx \\ &\leq c \int_{D}\vert Xu \vert ^{p-\delta}\,dx+c \int_{D}\vert Xu_{0} \vert ^{p-\delta}\,dx, \end{aligned}$$
and then (3.5) becomes
$$ P\leq c \int_{3B\cap\Omega} \vert u \vert ^{p-\delta}\,dx+c \int_{3B\cap\Omega} \vert Xu_{0} \vert ^{p-\delta}\,dx+c \int_{3B\cap\Omega} \vert Xu \vert ^{p-\delta}\,dx. $$
As regards the estimation of L, by changing the order of integration, we have
$$\begin{aligned} L&= \int_{\mu_{0}}^{\infty} \int_{D}\mu^{-(1 +\delta)}\bigl(A(x,u,Xu)\cdot X\hat{u} + B(x,u,Xu)\hat{u}\bigr)\chi_{\{M\vert X\hat{u} \vert (x)\leq\mu\} }\,dx\,d\mu \\ &= \int_{D\backslash{E_{\mu_{0}}}} \int_{M\vert X\hat{u} \vert }^{\infty}{\mu^{-(1 + \delta )}}\bigl(A(x,u,Xu) \cdot X\hat{u} + B(x,u,Xu)\hat{u}\bigr)\,dx\,d\mu \\ & \quad {}+ \int_{D \cap{E_{{\mu_{0}}}}} \int_{{\mu_{0}}}^{\infty}{\mu^{-(1 +\delta)}}\bigl(A(x,u,Xu) \cdot X\hat{u} + B(x,u,Xu)\hat{u}\bigr)\,dx\,d\mu \\ &=\frac{1}{\delta} \int_{D\backslash E_{\mu_{0}}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(A(x,u,Xu)\cdot X\hat{u}+B(x,u,Xu)\hat{u}\bigr)\,dx \\ &\quad{}+\frac{1}{\delta} \int_{D\cap E_{\mu_{0}}}\mu_{0}^{-\delta}\bigl(A(x,u,Xu) \cdot X\hat{u}+B(x,u,Xu)\hat{u}\bigr)\,dx. \end{aligned}$$
Since \(D\setminus E_{\mu_{0}}=D\setminus(D\cap E_{\mu_{0}})\), (1.3) and (1.4) imply
$$\begin{aligned} L&=\frac{1}{\delta} \int_{D}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}A(x,u,Xu)\cdot X\hat{u}\,dx-\frac{1}{\delta} \int_{D\cap E_{\mu_{0}}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}A(x,u,Xu)\cdot X\hat{u}\,dx \\ &\quad{}+\frac{1}{\delta} \int_{D\backslash E_{\mu_{0}}} \bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}B(x,u,Xu)\hat{u}\,dx \\ &\quad{}+\frac{1}{\delta} \int_{D\cap E_{\mu_{0}}}\mu_{0}^{-\delta}\bigl(A(x,u,Xu) \cdot X\hat{u}+B(x,u,Xu)\hat{u}\bigr)\,dx \\ &\geq\frac{1}{\delta} \int_{D}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}A(x,u,Xu)\cdot X\hat{u}\,dx \\ &\quad{}-\frac{2\alpha}{\delta} \int_{D\cap E_{\mu_{0}}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(\vert u \vert ^{p-1}+\vert Xu \vert ^{p-1}\bigr)\vert X\hat{u} \vert \,dx \\ &\quad{}-\frac{\alpha}{\delta} \int_{D}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(\vert u \vert ^{p-1}+\vert Xu \vert ^{p-1}\bigr)\vert \hat{u} \vert \,dx \\ &:=\frac{1}{\delta}(I_{1}-2\alpha I_{2}-\alpha I_{3}). \end{aligned}$$
Step 2. Next, we will estimate \(I_{i}\) (\(i=1,2,3\)) one by one.
We first estimate \(I_{1}\). To this end, define the sets
$$\begin{aligned}& {D_{1}} = \bigl\{ x \in D\setminus B:M\vert X\hat{u} \vert \le \delta\bigl({M_{D}}\vert Xu-Xu_{0} \vert \bigr)\bigr\} , \\& {D_{2}}= \bigl\{ x \in D\setminus B:M\vert X\hat{u} \vert > \delta \bigl({M_{D}}\vert Xu-Xu_{0} \vert \bigr)\bigr\} \end{aligned}$$
and \(B_{\Omega}=B\cap\Omega\). Thus
$$\begin{aligned} I_{1} &= \int_{B_{\Omega}\cup D_{2}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(A(x,u,Xu)-A(x,u_{0},Xu_{0})\bigr) \cdot\eta X(u-u_{0})\,dx \\ &\quad{} + \int_{B_{\Omega}\cup D_{2}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}A(x,u_{0},Xu_{0})\cdot\eta (Xu-Xu_{0})\,dx \\ &\quad{} + \int_{D_{2}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}A(x,u,Xu)\cdot X\eta(u-u_{0})\,dx \\ &\quad{} + \int_{D_{1}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}A(x,u,Xu)\cdot X\hat{u}\,dx. \end{aligned}$$
Since \(( M\vert X\hat{u} \vert ) ^{-\delta}\le \vert X\hat{u} \vert ^{-\delta}\) a.e., it follows from (1.5) and (1.3) that
$$\begin{aligned} I_{1}&\geq\beta \int_{B_{\Omega}}\bigl( M\vert X\hat{u} \vert \bigr)^{-\delta} \vert Xu-Xu_{0} \vert ^{p}\,dx \\ &\quad{} -\alpha\biggl( \int_{B_{\Omega}} \vert X\hat{u} \vert ^{-\delta} \bigl( \vert u_{0} \vert ^{p-1}+ \vert Xu_{0} \vert ^{p-1} \bigr) \\ &\quad{} \times \vert Xu-Xu_{0} \vert \,dx + \int_{D_{2}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(\vert u_{0} \vert ^{p-1}+\vert Xu_{0} \vert ^{p-1}\bigr)\vert Xu-Xu_{0} \vert \,dx \\ &\quad{} + \int_{D_{2}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(\vert u \vert ^{p-1}+\vert Xu \vert ^{p-1}\bigr)\bigl\vert X\eta(u-u_{0}) \bigr\vert \,dx \\ &\quad{} + \int_{D_{1}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(\vert u \vert ^{p-1}+\vert Xu \vert ^{p-1}\bigr)\vert X\hat{u} \vert \,dx \biggr) \\ &:=I_{11}-\alpha(I_{12}+I_{13}+I_{14}+I_{15}). \end{aligned}$$
Since the function \((M\vert X\hat{u} \vert )^{-\delta}\) is an \(A_{p}\)-weight, we obtain from Lemma 2.6 that
$$ I_{11}\geq c\beta \int_{B_{\Omega}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(M_{B_{\Omega}} \vert Xu-Xu_{0} \vert \bigr)^{p}\,dx. $$
By the doubling condition and Lemma 2.8 we see that, for \(x\in\frac{B}{2}\cap\Omega\),
$$\begin{aligned} M\vert X\hat{u} \vert (x)&\leq\sup_{B'\ni x, B'\subset B} \fint_{B'}\vert X\hat{u} \vert \,dy+\sup_{B'\ni x, B'\cap\partial B\neq\emptyset} \fint_{B'}\vert X\hat{u} \vert \,dy \\ &\leq M_{B_{\Omega}}\bigl\vert X(u-u_{0}) \bigr\vert +\frac{c}{R} \biggl( \fint_{2B}\vert u-u_{0} \vert ^{s'}\,dx \biggr) ^{\frac{1}{s'}}+c \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu-Xu_{0} \vert ^{s'}\,dx \biggr) ^{\frac{1}{s'}} \\ &\leq M_{B_{\Omega}}\bigl\vert X(u-u_{0}) \bigr\vert +\frac{c}{R} \biggl( \frac{1}{\operatorname{cap}_{s'}(N(u-u_{0}),2B)} \int_{2B}\bigl\vert X(u-u_{0}) \bigr\vert ^{s'}\,dx \biggr) ^{\frac{1}{s'}} \\ &\quad{}+c \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu-Xu_{0} \vert ^{s'}\,dx \biggr) ^{\frac{1}{s'}} \\ &\leq M_{B_{\Omega}}\bigl\vert X(u-u_{0}) \bigr\vert +c \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu-Xu_{0} \vert ^{s'}\,dx \biggr) ^{\frac{1}{s'}}, \end{aligned}$$
where \(\max \{ 1,(p-\delta)_{*} \} < s'< p-\delta\) is such that \(\mathbf{R}^{n}\setminus\Omega\) is uniformly \((X,s')\)-fat and the last inequality comes from an argument similar to (3.6).
To continue, we define
$$ G= \biggl\{ x\in\frac{B}{2}\cap\Omega:M_{B_{\Omega}}\bigl\vert X(u-u_{0}) \bigr\vert \geq c \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu-Xu_{0} \vert ^{s'}\,dx \biggr) ^{\frac{1}{s'}} \biggr\} . $$
So from (3.11) we see that \(M\vert X\hat{u} \vert \leq cM_{B_{\Omega}} \vert X(u-u_{0}) \vert \) on G, and then
$$\begin{aligned} I_{11}&\geq c \int_{G}\bigl(M_{B_{\Omega}} \vert Xu-Xu_{0} \vert \bigr)^{-\delta}\bigl(M_{B_{\Omega}} \vert Xu-Xu_{0} \vert \bigr)^{p}\,dx \\ &\geq c \int_{\frac{B}{2}\cap\Omega} \vert Xu-Xu_{0} \vert ^{p-\delta}\,dx-c\vert B \vert \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu-Xu_{0} \vert ^{s'}\,dx \biggr) ^{\frac{p-\delta}{s'}} \\ &\geq c \int_{\frac{B}{2}\cap\Omega} \vert Xu \vert ^{p-\delta}\,dx-c \int_{D}\vert Xu_{0} \vert ^{p-\delta}\,dx-c\vert B \vert \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu \vert ^{s'}\,dx \biggr) ^{\frac{p-\delta}{s'}}. \end{aligned}$$
Using the fact \(X\hat{u}=X(u-u_{0})\) on B and Young's inequality, we have
$$\begin{aligned} I_{12} &\leq c \int_{D} \bigl( \vert u_{0} \vert ^{p-\delta}+\vert Xu_{0} \vert ^{p-\delta} \bigr)\,dx +c \varepsilon \int_{D}\vert Xu-Xu_{0} \vert ^{p-\delta}\,dx \\ &\leq c \int_{D}\vert u_{0} \vert ^{p-\delta}\,dx+c \int_{D}\vert Xu_{0} \vert ^{p-\delta}\,dx+c\varepsilon \int_{D}\vert Xu \vert ^{p-\delta}\,dx. \end{aligned}$$
Next from the definition of \(D_{2}\) and Lemma 2.4, we see
$$\begin{aligned} I_{13} &\leq c\delta^{-\delta} \int_{D}\bigl(M_{D}\vert Xu-Xu_{0} \vert \bigr)^{1-\delta}\bigl(\vert u_{0} \vert ^{p-1}+\vert Xu_{0} \vert ^{p-1}\bigr)\,dx \\ &\leq c \int_{D} \bigl( \vert u_{0} \vert ^{p-\delta}+\vert Xu_{0} \vert ^{p-\delta} \bigr)\,dx +c \varepsilon \int_{D}\vert Xu-Xu_{0} \vert ^{p-\delta}\,dx \\ &\leq c \int_{D}\vert u_{0} \vert ^{p-\delta}\,dx+c \int_{D}\vert Xu_{0} \vert ^{p-\delta}\,dx+c\varepsilon \int_{D}\vert Xu \vert ^{p-\delta}\,dx. \end{aligned}$$
For \(I_{14}\), we have by using \(\vert X\eta(u-u_{0}) \vert \leq \vert X\hat{u} \vert +\vert Xu-Xu_{0} \vert \)
$$\begin{aligned} I_{14} &\leq c \int_{D_{2}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(\vert u \vert ^{p-1}+\vert Xu-Xu_{0} \vert ^{p-1}+\vert Xu_{0} \vert ^{p-1}\bigr)\bigl\vert X\eta(u-u_{0}) \bigr\vert \,dx \\ &\leq c \int_{D_{2}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(\vert u \vert ^{p-1}+\vert Xu_{0} \vert ^{p-1}\bigr) \bigl(\vert X\hat{u} \vert + \vert Xu-Xu_{0} \vert \bigr)\,dx \\ & \quad {} +c \int_{D_{2}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta} \vert Xu-Xu_{0} \vert ^{p-1}\vert X \eta \vert \vert u-u_{0} \vert \,dx \\ &\leq c \int_{D_{2}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(\vert u \vert ^{p-1}+\vert Xu_{0} \vert ^{p-1}\bigr)\vert X\hat{u} \vert \,dx \\ & \quad {} +c \int_{D_{2}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta}\bigl(\vert u \vert ^{p-1}+\vert Xu_{0} \vert ^{p-1}\bigr)\vert Xu-Xu_{0} \vert \,dx \\ & \quad {} +\frac{c}{R} \int_{D_{2}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta} \vert Xu-Xu_{0} \vert ^{p-1}\vert u-u_{0} \vert \,dx \\ &:=K_{1}+K_{2}+K_{3}. \end{aligned}$$
Using Young's inequality and (3.7), we get
$$\begin{aligned} K_{1}&\leq c \int_{D}\bigl(\vert u \vert ^{p-1}+\vert Xu_{0} \vert ^{p-1}\bigr)\vert X\hat{u} \vert ^{1-\delta}\,dx \\ &\leq c \int_{D}\vert u \vert ^{p-\delta}\,dx+c \int_{D}\vert Xu_{0} \vert ^{p-\delta}\,dx+c\varepsilon \int_{D}\vert X{u} \vert ^{p-\delta}\,dx. \end{aligned}$$
By the definition of \(D_{2}\) and noting that \(\vert X(u-u_{0}) \vert \le M_{D}\vert X(u-u_{0}) \vert \) a.e. D,
$$\begin{aligned} K_{2} &\leq c\delta^{-\delta} \int_{D}\vert Xu-Xu_{0} \vert ^{1-\delta} \bigl(\vert u \vert ^{p-1}+\vert Xu_{0} \vert ^{p-1}\bigr)\,dx \\ &\leq c \int_{D}\vert u \vert ^{p-\delta}\,dx+c \int_{D}\vert Xu_{0} \vert ^{p-\delta}\,dx+c\varepsilon \int_{D}\vert Xu \vert ^{p-\delta}\,dx. \end{aligned}$$
Finally, by Young's inequality,
$$\begin{aligned} K_{3} &\leq\frac{c\delta^{-\delta}}{R} \int_{D_{2}}\bigl(M_{D}\vert Xu-Xu_{0} \vert \bigr)^{-\delta} \vert Xu-Xu_{0} \vert ^{p-1} \vert u-u_{0} \vert \,dx \\ &\leq\frac{c\delta^{-\delta}}{R} \int_{D}\vert Xu-Xu_{0} \vert ^{p-1-\delta} \vert u-u_{0} \vert \,dx \\ &\leq c\varepsilon \int_{{D}}\vert Xu-Xu_{0} \vert ^{p -\delta}\,dx + c \int_{{D}}{\biggl\vert {\frac{{u-{u_{0}}}}{R}} \biggr\vert ^{p-\delta}}\,dx. \end{aligned}$$
In order to estimate the second component of the right-hand side, we let \(s''=(p-\delta)(1-\vartheta)\), where \(0<\vartheta<\frac{p-\delta }{p-\delta+Q}\) if \(p-\delta\leq Q\) and \(0<\vartheta<\min \{ \frac {p-\delta-Q}{p-\delta},\frac{1}{2} \} \) if \(p-\delta> Q\). Denote
$$ \kappa= \textstyle\begin{cases} \frac{Q}{Q-s''},&s''< Q,\\ 2,&s''>Q, \end{cases} $$
then \(\kappa s''\geq p-\delta\). Using Lemma 2.7 and Lemma 2.8, we derive
$$\begin{aligned} \biggl( \fint_{2B}\biggl\vert \frac{u-u_{0}}{R}\biggr\vert ^{p-\delta}\,dx \biggr) ^{\frac{1}{p-\delta}}&\leq cR^{-1} \biggl( \fint_{2B}\vert u-u_{0} \vert ^{\kappa s''}\,dx \biggr) ^{\frac{1}{\kappa s''}} \\ &\leq cR^{-1} \biggl( \frac{1}{\operatorname{cap}_{s''}(N(u-u_{0}),2B)} \int_{2B}\bigl\vert X(u-u_{0}) \bigr\vert ^{s''}\,dx \biggr) ^{\frac{1}{s''}} \\ &\leq c \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu-Xu_{0} \vert ^{s''}\,dx \biggr) ^{\frac{1}{s''}}, \end{aligned}$$
where the proof of the last inequality is similar to (3.6). Therefore,
$$ c \int_{2B}\biggl\vert \frac{u-u_{0}}{R}\biggr\vert ^{p-\delta}\,dx \leq c\vert 2B \vert \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu-Xu_{0} \vert ^{s''}\,dx \biggr) ^{\frac{{p-\delta}}{s''}}. $$
Inserting (3.18) into (3.17), we have
$$\begin{aligned} K_{3}&\leq c\varepsilon \int_{D} \vert Xu \vert ^{p -\delta}\,dx + c \int_{D} \vert Xu_{0} \vert ^{p - \delta}\,dx \\ &\quad {}+ c\vert 2B \vert \biggl( \frac{1}{\vert 2B \vert } \int_{D} \vert Xu \vert ^{s''}\,dx \biggr) ^{\frac{{p-\delta}}{s''}}. \end{aligned}$$
A combination of (3.15), (3.16) and (3.19) implies
$$\begin{aligned} I_{14}&\leq c \int_{D}\bigl(\vert u \vert ^{p-\delta}+\vert Xu_{0} \vert ^{p-\delta}\bigr)\,dx \\ &\quad{} +c\varepsilon \int_{D}\vert Xu \vert ^{p-\delta}\,dx + c\vert 2B \vert \biggl( \frac{1}{\vert 2B \vert } \int_{D} \vert Xu \vert ^{s''}\,dx \biggr) ^{\frac{{p-\delta}}{s''}}. \end{aligned}$$
The definition of \(D_{1}\) and Lemma 2.4 give
$$\begin{aligned} I_{15} &\leq c \int_{D_{1}}\bigl(M\vert X\hat{u} \vert \bigr)^{1-\delta}\bigl(\vert u \vert ^{p-1}+\vert Xu \vert ^{p-1}\bigr)\,dx \\ &\leq c\delta^{1-\delta} \int_{D}\bigl(M_{D}\vert Xu-Xu_{0} \vert \bigr)^{1-\delta}\bigl(\vert u \vert ^{p-1}+\vert Xu \vert ^{p-1}\bigr)\,dx \\ &\leq c\delta^{1-\delta} \biggl[ \int_{D}\vert Xu-Xu_{0} \vert ^{p-\delta}\,dx + \int_{D} \bigl( \vert u \vert ^{p-\delta}+\vert Xu \vert ^{p-\delta} \bigr)\,dx \biggr] \\ &\leq c \int_{D} \bigl( \vert u \vert ^{p-\delta}+\vert Xu_{0} \vert ^{p-\delta} \bigr)\,dx+c\delta \int_{D}\vert Xu \vert ^{p-\delta}\,dx. \end{aligned}$$
The previous estimates show that
$$\begin{aligned} I_{1}&\geq c \int_{\frac{B}{2}\cap\Omega} \vert Xu \vert ^{p-\delta}\,dx -c \int_{D} \bigl( \vert u \vert ^{p-\delta}+\vert u_{0} \vert ^{p-\delta}+\vert Xu_{0} \vert ^{p-\delta} \bigr)\,dx \\ & \quad{} - c(\varepsilon+\delta) \int_{D}\vert Xu \vert ^{p-\delta}\,dx-c\vert 2B \vert \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu \vert ^{t}\,dx \biggr) ^{\frac{p-\delta}{t}}, \end{aligned}$$
where \(t=\max\{s',s''\}< p-\delta\).
Now we address the estimation of \(I_{2}\). Using (3.7), we have
$$\begin{aligned} I_{2}&\leq \int_{D}\vert u \vert ^{p-1}\vert X\hat{u} \vert ^{1-\delta}\,dx + \int_{D\cap{E_{\mu_{0}}}}\bigl(M\vert X\hat{u} \vert \bigr)^{-\delta} \vert Xu \vert ^{p-1}\vert X\hat{u} \vert \,dx \\ &\leq c \int_{D}\vert u \vert ^{p-\delta}\,dx+c\varepsilon \int_{D}\vert X\hat{u} \vert ^{p-\delta}\,dx + \int_{D\cap E_{\mu_{0}}}\vert Xu \vert ^{p-1}\bigl(M\vert X \hat{u} \vert \bigr)^{1-\delta}\,dx \\ &\leq c \int_{D} \bigl( \vert u \vert ^{p-\delta}+\vert Xu_{0} \vert ^{p-\delta} \bigr)\,dx \\ &\quad{} + c\varepsilon \int_{D}\vert Xu \vert ^{p-\delta}\,dx + \int_{D\cap E_{\mu_{0}}}\vert Xu \vert ^{p-1}\bigl(M\vert X \hat{u} \vert \bigr)^{1-\delta}\,dx. \end{aligned}$$
To estimate the last integral in (3.23), let \(0<\tau<\frac {1}{2}\) and \(x\in D\cap E_{\mu_{0}}\). If \(\vert Xu \vert \geq \tau^{-1}\mu_{0}\), then \(M\vert X\hat{u} \vert \leq\mu_{0}\leq \tau \vert Xu \vert \) and
$$ \vert Xu{ \vert ^{p - 1}} {\bigl(M\vert X\hat{u} \vert \bigr)^{1 - \delta}} \le \vert Xu{ \vert ^{p - 1}} {\bigl( \tau \vert Xu \vert \bigr)^{1 - \delta}} = {\tau ^{1 - \delta}} \vert Xu{ \vert ^{p - \delta}}; $$
if \(\vert Xu \vert <\tau^{-1}\mu_{0}\), then
$$ \vert Xu{ \vert ^{p - 1}} {\bigl(M\vert X\hat{u} \vert \bigr)^{1 - \delta}} \le{ \bigl( {{\tau^{ - 1}} {\mu _{0}}} \bigr) ^{p - 1}}\mu_{0}^{1 - \delta} \le{ \tau^{1 - p}}\mu_{0}^{p - \delta}. $$
By (3.24) and (3.25), we deduce that, for any \(x\in D\cap E_{\mu_{0}}\),
$$ \vert Xu \vert ^{p-1}\bigl(M\vert X\hat{u} \vert \bigr)^{1-\delta}\leq c \bigl( \tau^{1-\delta} \vert Xu \vert ^{p-\delta}+\tau^{1-p}\mu_{0}^{p-\delta} \bigr) . $$
For the second term in (3.26), we first observe from the proof of (3.11) that
$$ \frac{1}{R} \biggl( \fint_{2B}\vert u-u_{0} \vert ^{s'}\,dx \biggr) ^{\frac{1}{s'}}\leq c \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu-Xu_{0} \vert ^{s'}\,dx \biggr) ^{\frac{1}{s'}}. $$
Noticing \(\mu_{0}=\frac{c}{\vert 2B \vert }\int_{D}\vert X\hat {u} \vert \,dx\), we have from Hölder's inequality
$$\begin{aligned} \tau^{1-p}\mu_{0}^{p-\delta}&\leq c\tau^{1-p} \biggl( \frac{1}{\vert 2B \vert } \int_{D}\bigl\vert X\eta(u-u_{0})+\eta X(u-u_{0}) \bigr\vert \,dx \biggr) ^{p-\delta} \\ &\leq c\tau^{1-p} \biggl( \frac{1}{R} \biggl( \fint_{2B}\vert u-u_{0} \vert ^{s'}\,dx \biggr) ^{\frac{1}{s'}} \biggr) ^{p-\delta}+c\tau^{1-p} \biggl( \frac{1}{\vert 2B \vert } \int_{D}\bigl\vert X(u-u_{0}) \bigr\vert ^{s'}\,dx \biggr) ^{\frac{p-\delta}{s'}} \\ &\leq c\tau^{1-p} \biggl( \frac{1}{\vert 2B \vert } \int_{D}\bigl\vert X(u-u_{0}) \bigr\vert ^{s'}\,dx \biggr) ^{\frac{p-\delta}{s'}} \\ &\leq c\tau^{1-p} \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu_{0} \vert ^{p-\delta}\,dx \biggr) +c\tau^{1-p} \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu \vert ^{s'}\,dx \biggr) ^{\frac{p-\delta}{s'}}. \end{aligned}$$
By (3.26) and (3.27), it follows that
$$\begin{aligned} & \int_{D\cap E_{\mu_{0}}}\vert Xu \vert ^{p-1}\bigl(M\vert X \hat{u} \vert \bigr)^{1-\delta}\,dx \\ &\quad\leq c\tau^{1-\delta} \int_{D}\vert Xu \vert ^{p-\delta}\,dx+c \int_{D}\vert Xu_{0} \vert ^{p-\delta}\,dx +c\tau^{1-p}\vert 2B \vert \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu \vert ^{s'}\,dx \biggr) ^{\frac{p-\delta}{s'}}. \end{aligned}$$
Taking (3.28) into (3.23), we have
$$\begin{aligned} I_{2}&\leq c \int_{D}\vert u \vert ^{p-\delta}\,dx+ c \int_{D}\vert Xu_{0} \vert ^{p-\delta}\,dx \\ &\quad{} +c \bigl( \varepsilon+\tau^{1-\delta} \bigr) \int_{D}\vert Xu \vert ^{p-\delta}\,dx+c\tau ^{1-p}\vert 2B \vert \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu \vert ^{s'}\,dx \biggr) ^{\frac{p-\delta}{s'}}. \end{aligned}$$
For the estimation of \(I_{3}\): From (2.3), Lemma 2.8 and a similar process to the proof of (3.18), we have
$$\begin{aligned} I_{3}&\leq c \int_{D}\bigl(\vert u \vert ^{p-1}+\vert Xu \vert ^{p-1}\bigr)\vert \hat{u} \vert ^{1-\delta}\,dx \\ &\leq \int_{D} \vert u{ \vert ^{p - \delta}}+ c\varepsilon \int_{D} \vert Xu{ \vert ^{p - \delta}}\,dx + c \int_{D}\vert u - u_{0} \vert ^{p -\delta}\,dx \\ &\leq c \int_{D}\bigl(\vert u \vert ^{p - \delta}+ \vert X{u_{0}} { \vert ^{p - \delta}}\bigr)\,dx \\ &\quad{} + c\varepsilon \int_{D}\vert Xu \vert ^{p -\delta}\,dx + c\vert 2B \vert \biggl( \frac{1}{\vert 2B \vert } \int_{D}\vert Xu \vert ^{t}\,dx \biggr) ^{\frac{{p -\delta} }{{t}}}, \end{aligned}$$
Step 3. Taking into account (3.4), (3.8), substituting (3.22), (3.29) and (3.30) into (3.9), and letting \(\varepsilon=\tau^{1-\delta}\), it follows that
$$\begin{aligned} &\int_{\frac{B}{2}\cap\Omega} \vert Xu \vert ^{p-\delta}\, dx \\ & \quad \leq c \int_{3B \cap\Omega} \bigl( \vert u \vert ^{p - \delta} +\vert {u_{0}} \vert ^{p - \delta} +\vert X{u_{0}} \vert ^{p - \delta} \bigr)\,dx \\ &\quad \quad{}+c \bigl( \delta+\tau^{1-\delta} \bigr) \int_{3B\cap\Omega} \vert Xu \vert ^{p-\delta}\,dx +c\tau ^{1-p}\vert 2B \vert \biggl( \frac{1}{\vert 2B \vert } \int_{3B\cap\Omega} \vert Xu \vert ^{t}\,dx \biggr) ^{\frac{p-\delta}{t}}. \end{aligned}$$
To sum up the cases \(3B\subset\Omega\) and \(3B\backslash\Omega\neq \emptyset\), we let
$$ g(x)= \textstyle\begin{cases} \vert Xu \vert ^{t}, &x\in\Omega,\\ 0, & x\in\mathbf{R}^{n}\backslash\Omega, \end{cases} $$
$$ f(x)= \textstyle\begin{cases} ( \vert u-u_{0} \vert +\vert u_{0} \vert +\vert Xu_{0} \vert ) ^{t}, & x\in\Omega,\\ 0, & x\in\mathbf{R}^{n}\backslash\Omega. \end{cases} $$
Thus we have from (3.1) and (3.31)
$$ \fint_{\frac{B}{2}}g^{q}\,dx\leq b \biggl[ \biggl( \fint_{3B}g\,dx \biggr) ^{q}+ \fint_{3B}f^{q}\,dx \biggr] +\theta \fint_{3B}g^{q}\,dx, $$
where \(q=\frac{p-\delta}{t}\), \(\theta=c ( \delta+\tau^{1-\delta} ) \) and \(b = c{\tau^{1 - p}}\). Choosing τ, δ small enough, we see by Lemma 2.3 that there exists \(t_{1}=p-\delta+\varepsilon_{0}\), for some \(\varepsilon_{0}>0\), such that \(\vert Xu \vert \in L^{t_{1}}(\Omega)\).
Furthermore, we will show that there exists \({t_{2}}>r=p-\delta\) such that \(u \in L^{t_{2}}(\Omega)\). Since \(u-u_{0}\in W_{X,0}^{1,r}(\Omega)\), we obtain from Lemma 2.2 that, for \(r< Q\), \(r^{*}=Qr/(Q-r)\),
$$ \biggl( \int_{\Omega} \vert u-u_{0} \vert ^{r^{*}}\,dx \biggr) ^{\frac{1}{r^{*}}}\leq C(\Omega) \biggl( \int_{\Omega}\bigl\vert X(u-u_{0}) \bigr\vert ^{r}\,dx \biggr) ^{\frac{1}{r}}< \infty. $$
Taking \(t_{2}=\min\{s,r^{*}\}>r\), we have
$$\begin{aligned} \biggl( \int_{\Omega} \vert u \vert ^{t_{2}}\,dx \biggr) ^{\frac{1}{t_{2}}} &\leq\biggl( \int_{\Omega} \vert u-u_{0} \vert ^{t_{2}}\,dx \biggr) ^{\frac{1}{t_{2}}}+ \biggl( \int_{\Omega} \vert u_{0} \vert ^{t_{2}}\,dx \biggr) ^{\frac{1}{t_{2}}} \\ &\leq C \biggl( \int_{\Omega} \vert u-u_{0} \vert ^{r^{*}}\,dx \biggr) ^{\frac{1}{r^{*}}}+ \biggl( \int_{\Omega} \vert u_{0} \vert ^{t_{2}}\,dx \biggr) ^{\frac{1}{t_{2}}} \end{aligned}$$
and then \(u\in L^{t_{2}}(\Omega)\) by \(u_{0}\in L^{s}(\Omega)\). If \(r\geq Q\) then we can apply the above reasoning for any \(r^{*}<\infty\) to obtain \(u\in L^{t_{2}}(\Omega)\).
We set \(\tilde{p}=\min\{t_{1},t_{2}\}>p-\delta\) and \(u\in W_{X}^{1,\tilde{p}}(\Omega)\). Repeating the preceding reasoning, we know that there exists \(\tilde{\delta}>0\) such that \(u\in W_{X}^{1,p+\tilde{\delta}}(\Omega)\) and the proof is complete.
Conclusions
In this paper we obtained the global higher integrability of very weak solutions to the Dirichlet problem for a nonlinear subelliptic equation on Carnot-Carathéodory spaces, which implies that such solutions are in fact classical weak solutions. This generalizes the corresponding result in the classical Euclidean setting.
Iwaniec, T, Sbordone, C: Weak minima of variational integrals. J. Reine Angew. Math. 454, 143-161 (1994)
Lewis, JL: On very weak solutions of certain elliptic systems. Commun. Partial Differ. Equ. 18(9-10), 1515-1537 (1993)
Giannetti, F, Passarelli di Napoli, A: On very weak solutions of degenerate p-harmonic equations. Nonlinear Differ. Equ. Appl. 14(5-6), 739-751 (2007)
Kinnunen, J, Lewis, JL: Very weak solutions of parabolic systems of p-Laplacian type. Ark. Mat. 40(1), 105-132 (2002)
Xie, S, Fang, A: Global higher integrability for the gradient of very weak solutions of a class of nonlinear elliptic systems. Nonlinear Anal. 53(7-8), 1127-1147 (2003)
Fattorusso, L, Molica Bisci, G, Tarsia, A: A global regularity result for some degenerate elliptic systems. Nonlinear Anal. 125, 54-66 (2015)
Zatorska-Goldstein, A: Very weak solutions of nonlinear subelliptic equations. Ann. Acad. Sci. Fenn., Math. 30(2), 407-436 (2005)
Kilpeläinen, T, Koskela, P: Global integrability of the gradients of solutions to partial differential equations. Nonlinear Anal. 23(7), 899-909 (1994)
Danielli, D, Garofalo, N, Phuc, NC: Inequalities of Hardy-Sobolev type in Carnot-Carathéodory spaces. In: Sobolev Spaces in Mathematics. I: Sobolev Type Inequalities, pp. 117-151. Springer, New York (2009)
Hörmander, L: Hypoelliptic second order differential equations. Acta Math. 119, 147-171 (1967)
Chow, WL: Über systeme von linearen partiellen differentialgleichungen erster ordnung. Math. Ann. 117, 98-105 (1939)
Nagel, A, Stein, EM, Wainger, S: Balls and metrics defined by vector fields. I: basic properties. Acta Math. 155, 103-147 (1985)
Garofalo, N, Nhieu, DM: Lipschitz continuity, global smooth approximations and extension theorems for Sobolev functions in Carnot-Carathéodory spaces. J. Anal. Math. 74, 67-97 (1998)
Hajłasz, P, Koskela, P: Sobolev met Poincaré. Mem. Am. Math. Soc. 688, 1-101 (2000)
Lu, G: Weighted Poincaré and Sobolev inequalities for vector fields satisfying Hörmander's condition and applications. Rev. Mat. Iberoam. 8(3), 367-439 (1992)
Danielli, D: Regularity at the boundary for solutions of nonlinear subelliptic equations. Indiana Univ. Math. J. 44(1), 269-286 (1995)
Federer, H: Geometric Measure Theory. Die Grundlehren der Mathematischen Wissenschaften, vol. 153. Springer, New York (1969)
The authors are grateful to anonymous reviewers for their careful reading of this paper and their insightful comments and suggestions, which improved the paper a lot. The current work is supported by the National Natural Science Foundation of China (No. 11271299).
Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, P.R. China
Guangwei Du & Junqiang Han
Correspondence to Junqiang Han.
Du, G., Han, J. Global higher integrability for very weak solutions to nonlinear subelliptic equations. Bound Value Probl 2017, 93 (2017). https://doi.org/10.1186/s13661-017-0825-6
nonlinear subelliptic equations
very weak solutions
global higher integrability | CommonCrawl |
How to determine the characteristic length in Reynolds number calculations in general?
I understand that the Reynolds number is given by the expression $Re=\frac{\rho v L}{\mu}$, where $\rho$ is the density, $v$ is the fluid velocity and $\mu$ is the dynamic viscosity. For any given fluid dynamics problem, $\rho$, $v$, and $\mu$ are trivially given. But what exactly is the characteristic length $L$? How exactly do I calculate it? What can I use from a given problem to determine the characteristic length automatically?
Could you explain why the Reynolds number is the similarity parameter that describes your flow problem? – rul30 Oct 10 '15 at 15:09
I would like to approach this question from a mathematical perspective, which can be fruitful as discussed in some of the comments and answers. The given answers are useful; however, I would like to add:
In general the smallest available length scale is the characteristic length scale.
Sometimes (e.g. in dynamic systems) there is no fixed length scale to choose as a characteristic length scale. In such cases often a dynamic length scale can be found.
Characteristic length scales:
TL;DWTR: for $R/L\ll1$, $R$ is the characteristic length scale; for $R/L\gg1$, $L$ is the characteristic length scale. This implies that the smaller length scale is (usually) the characteristic length scale.
Consider the pipe flow case discussed in the other answers; there is the radius $R$ but also the length $L$ of the pipe. Usually we take the pipe diameter to be the characteristic length scale, but is this always the case? Well, let's look at this from a mathematical perspective and define the dimensionless coordinates: $$\bar{x}=\frac{x}{L} \quad \bar{y}=\frac{y}{R} \quad \bar{u}=\frac{u}{U} \quad \bar{v}=\frac{v}{V} \quad \bar{p}=\frac{p}{\rho U^2}$$
Here, $L$, $R$, $U$, $V$ are $x$-$y$ coordinate and velocity scales but not necessarily their characteristic scales. Note that the choice of the pressure scale $P=\rho U^2$ is only valid for $\mathrm{Re}\gg1$. The case $\mathrm{Re}\ll1$ requires a rescaling.
Transforming the continuity equation to dimensionless quantities:
$$\boldsymbol{\nabla}\cdot\boldsymbol{u}=0 \rightarrow \partial_{\bar{x}}\bar{u}+\partial_{\bar{y}}\bar{v}=0$$
which can only be the case when we assume $\frac{U}{V}\frac{R}{L}\sim1$ or $\frac{V}{U}\sim\frac{R}{L}$. Knowing this, the Reynolds number may be redefined:
$$\mathrm{Re}=\frac{UR}{\nu}=\frac{U}{V}\frac{R}{L}\frac{VL}{\nu}=\frac{VL}{\nu}=\hat{\mathrm{Re}}$$
Similarly, let's transform the Navier-Stokes equations ($x$-component only to keep it short): $$\boldsymbol{u}\cdot\boldsymbol{\nabla u}=-\frac{1}{\rho}\boldsymbol{\nabla}p+\nu\triangle\boldsymbol{u}$$ $$\bar{u}\partial_{\bar{x}}\bar{u}+\bar{v}\partial_{\bar{y}}\bar{u}=-\partial_{\bar{x}}\bar{p}+\frac{1}{\mathrm{Re}}\left[\frac{R}{L}\partial_{\bar{x}}^{2}\bar{u}+\frac{L}{R}\partial_{\bar{y}}^{2}\bar{u}\right]$$ We see here the Reynolds number occurring naturally as part of the scaling process. However, depending on the geometric ratio $R/L$, the equations may require rescaling. Consider the two cases:
The pipe radius is much smaller than the pipe length (i.e. $R/L\ll1$):
The transformed equation then reads: $$\bar{u}\partial_{\bar{x}}\bar{u}+\bar{v}\partial_{\bar{y}}\bar{u}=-\partial_{\bar{x}}\bar{p}+\frac{1}{\mathrm{Re}}\frac{L}{R}\partial_{\bar{y}}^{2}\bar{u}$$ Here we have a problem because the term $\frac{1}{\mathrm{Re}}\frac{L}{R}$ could be very large and a properly scaled equation only has coefficients $O(1)$ or smaller. So we require a rescaling of the $\bar{x}$ coordinate, $\bar{v}$ velocity and $\bar{p}$ pressure: $$\hat{x}=\bar{x}\left(\frac{R}{L}\right)^{\alpha}\quad\hat{v}=\bar{v}\left(\frac{R}{L}\right)^{-\alpha}\quad\hat{p}=\bar{p}\left(\frac{R}{L}\right)^{\beta}$$ This choice of rescaled quantities ensures that the continuity equation remains of the form: $$\partial_{\hat{x}}\bar{u}+\partial_{\bar{y}}\hat{v}=0$$ The Navier-Stokes equations in terms of the rescaled quantities yields: $$\bar{u}\partial_{\hat{x}}\bar{u}+\hat{v}\partial_{\bar{y}}\bar{u}=-\partial_{\hat{x}}\hat{p}+\frac{1}{\mathrm{Re}}\partial_{\bar{y}}^{2}\bar{u}$$ which is properly scaled with coefficients of $O(1)$ or smaller when we take the values $\alpha=-1,\,\beta=0$. This indicates the pressure scale didn't need any rescaling but the length and velocities scales have been redefined: $$\hat{x}=\bar{x}\frac{L}{R}=\frac{x}{R}\quad\hat{v}=\bar{v}\frac{R}{L}=\bar{v}\frac{V}{U}=\frac{v}{U}\quad\hat{p}=\bar{p}=\frac{p}{\rho U^{2}}$$ and we see that the characteristic length and velocity scale for respectively $x$ and $v$ isn't $L$ and $V$ as assumed at the beginning but $R$ and $U$.
The pipe radius is much larger than the pipe length (i.e. $R/L\gg1$):
The transformed equation then reads: $$\bar{u}\partial_{\bar{x}}\bar{u}+\bar{v}\partial_{\bar{y}}\bar{u}=-\partial_{\bar{x}}\bar{p}+\frac{1}{\mathrm{Re}}\frac{R}{L}\partial_{\bar{x}}^{2}\bar{u}$$ Likewise to the previous case, $\frac{1}{\mathrm{Re}}\frac{R}{L}$ could be very large and requires a rescaling. Except this time we require a rescaling of the $\bar{y}$ coordinate, $\bar{u}$ velocity and $\bar{p}$ pressure: $$\hat{y}=\bar{y}\left(\frac{R}{L}\right)^{\alpha}=\frac{y}{L}\quad\hat{u}=\bar{u}\left(\frac{R}{L}\right)^{-\alpha}\quad\hat{p}=\bar{p}\left(\frac{R}{L}\right)^{\beta}$$ This choice of rescaled quantities again ensures that the continuity equation remains of the form: $$\partial_{\bar{x}}\hat{u}+\partial_{\hat{y}}\bar{v}=0$$ The Navier-Stokes equations in terms of the rescaled quantities yields: $$\hat{u}\partial_{\bar{x}}\hat{u}+\bar{v}\partial_{\hat{y}}\hat{u}=-\partial_{\bar{x}}\hat{p}+\frac{1}{\mathrm{\hat{\mathrm{Re}}}}\partial_{\bar{x}}^{2}\hat{u}$$ which is properly scaled with coefficients of $O(1)$ or smaller when we take the values $\alpha=1\,\beta=-2$. This indicates the length, velocities and pressure scales have been redefined: $$\hat{y}=\bar{y}\frac{R}{L}=\frac{y}{L}\quad\hat{u}=\bar{u}\frac{L}{R}=\bar{u}\frac{U}{V}=\frac{u}{V}\quad\hat{p}=\bar{p}\left(\frac{L}{R}\right)^{2}=\bar{p}\left(\frac{U}{V}\right)^{2}=\frac{p}{\rho V^{2}}$$ and we see that the characteristic length, velocity and pressure scales for respectively $x$, $v$ and $p$ isn't $R$, $U$, $\rho U^{2}$ as assumed at the beginning but $L$, $V$ and $\rho V^{2}$.
In case you had forgotten the point of this all: for $R/L\ll1$, $R$ is the characteristic length scale; for $R/L\gg1$, $L$ is the characteristic length scale. This implies that the smaller length scale is (usually) the characteristic length scale.
Dynamic length scales:
Consider diffusion of a species into a semi-infinite domain. As it is infinite in one direction, it does not have a fixed length scale. Instead, a length scale is established by the 'boundary layer' slowly penetrating into the domain. This 'penetration length', as the characteristic length scale is sometimes called, is given as: $$\delta\left(t\right) = \sqrt{\pi D t}$$
where $D$ is the diffusion coefficient and $t$ is the time. As seen, there is no length scale $L$ involved as it is determined completely by the diffusion dynamics of the system. For an example of such a system see my answer to this question.
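For concreteness, here is a minimal numerical sketch (my own illustration, not part of the analysis above; the diffusivity value is an assumed, water-like number) of how this dynamic length scale grows in time:

```python
import math

def penetration_depth(D, t):
    """Dynamic length scale delta(t) = sqrt(pi * D * t) for diffusion
    into a semi-infinite domain."""
    return math.sqrt(math.pi * D * t)

D = 1.4e-7  # assumed thermal diffusivity of water, m^2/s
for t in (1.0, 60.0, 3600.0):  # seconds
    print(f"t = {t:7.0f} s  ->  delta = {penetration_depth(D, t) * 1e3:.2f} mm")
```

Any dimensionless group built for this problem would then naturally use $\delta(t)$ as its characteristic length.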
nluigi
What exactly do you mean by available when you say "smallest available length scale"? What exactly determines what is available and what isn't? – Paul Oct 15 '15 at 18:05
@Paul 'available' was meant in relation to obvious geometric length scales like length, height, width, diameter, etc. This in contrast to dynamic length scales which are much less obvious and are determined by the dynamics of the system. – nluigi Oct 15 '15 at 19:59
Is there any particular justification for generally using the "smallest available length" as opposed to any other length available? – Paul Feb 26 '17 at 2:59
@Paul The gradients are generally the largest there so most of the transport occurs at the small length scales – nluigi Feb 26 '17 at 11:28
This is a practical, empirical question, not a theoretical one that can be "solved" by mathematics. One way to answer it is to start from what Reynolds number means physically: it represents the ratio between "typical" inertia forces and viscous forces in the flow field.
So, you look at a typical flow pattern, and choose the best length measurement to represent that ratio of forces.
For example, in flow through a circular pipe, the viscous (shear) forces depend on the velocity profile from the axis of the pipe to the walls. If the velocity along the axis of the pipe remains the same, doubling the radius will (roughly) halve the rate of shear between the axis and the walls (where the velocity is zero). So the radius, or the diameter, is a good choice for the characteristic length.
Obviously Re will be different (by a factor of 2) if you choose the radius or the diameter, so in practice everybody makes the same choice and everybody uses the same critical value of Re for the transition from laminar to turbulent flow. From a practical engineering point of view, the size of a pipe is specified by its diameter since that is what is easy to measure, so you might as well use the diameter for Re also.
For a pipe that is approximately circular, you might decide (by a similar sort of physical argument) that the circumference of the pipe is really the most important length, and therefore compare the results with circular pipes by using an "equivalent diameter" defined as (circumference / pi).
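To make this concrete, here is a minimal sketch (my own addition, with assumed water-like property values) of the Reynolds number computed with the diameter, the radius, and the 'equivalent diameter' circumference/$\pi$ mentioned above:

```python
import math

def reynolds(rho, v, L, mu):
    """Re = rho * v * L / mu for a chosen characteristic length L."""
    return rho * v * L / mu

rho = 1000.0   # kg/m^3, assumed (water)
mu = 1.0e-3    # Pa*s, assumed (water)
v = 0.5        # m/s, assumed bulk velocity

D = 0.05  # circular pipe diameter, m
print(reynolds(rho, v, D, mu))      # 25000, the conventional diameter-based Re
print(reynolds(rho, v, D / 2, mu))  # 12500, radius-based: same physics, half the number

circumference = 0.16                # m, a nearly circular duct
D_eq = circumference / math.pi      # equivalent diameter
print(reynolds(rho, v, D_eq, mu))
```

Whichever length you pick, the transition criterion you compare against must use the same convention.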
On the other hand, the length of the pipe doesn't have much influence on the fluid flow pattern, so for most purposes that would be a poor choice of characteristic length for Re. But if you are considering flow in a very short "pipe" where the length is much less than the diameter, the length might be the best number to use as the parameter describing the flow.
I disagree with your statement that math can not help here. The procedure you describe would be of no use in many cases with no obvious length scales, such as a boundary layer. That is the question at hand. Dimensional analysis of the governing equations has proved quite helpful in finding relevant length scales in laminar and turbulent boundary layers, e.g., the laminar boundary layer thickness scaling and viscous length scales, respectively. The far-field scaling of thermal plumes is another case where it's much less obvious how to do the analysis you suggest, but dimensional analysis helps. – Ben Trettel Oct 11 '15 at 19:52
@BenTrettel - I agree that a dimensional analysis can greatly help in determining the characteristic length scale. See my answer for a 'simple' example. – nluigi Oct 15 '15 at 9:51
There are three main ways to determine which groups of terms (more general than just length or time scales) are relevant. The first is by math, which could involve solving a problem or an analogous or appropriate problem analytically and seeing which terms appear and making selections which simplify things as appropriate (more on this below). The second approach is by trial and error, more or less. The third is by precedent, usually when someone else in the past has already done some sort of the previously mentioned analysis in this problem or related ones.
There are a number of ways to do theoretical analysis, but one useful one in engineering is non-dimensionalizing governing equations. Sometimes, the characteristic length is obvious, as is the case in a pipe flow. But other times, there are no obvious characteristic lengths, as is the case in free shear flows, or a boundary layer. In these cases, you can make the characteristic length a free variable, and choose one which simplifies the problem. Here are some good notes on non-dimensionalization, which have the following suggestions for finding characteristic time and length scales:
(always) Make as many nondimensional constants equal to one as possible.
(usually) Make the constants that appear in the initial or boundary conditions equal to one.
(usually) If there is a nondimensional constant that, if we were to set it equal to zero, would simplify the problem significantly, allow it to remain free and then see when we can make it small.
The other main approach is to solve a problem entirely and see which groups of terms appear. Generally the relevant length is obvious if you are grabbing the term from this type of theoretical analysis, though this sort of analysis is often easier said than done.
But how do you figure out a good length if you don't have a theoretical analysis to go off of? Often, it doesn't matter too much which length you pick. Some people seem to think this is confusing, because they were taught that turbulence transition occurs at $Re$ of 2300 (for a pipe), or 500,000 (for a flat plate). Recognize that in the pipe case, it doesn't matter if you pick the diameter or radius. That just scales the critical Reynolds number by a factor of two. What does matter is making sure that any criteria you use are consistent with the definition of the Reynolds number you use, and the problem you are studying. It's tradition that dictates that we use the diameter for pipe flows.
Also, to be general, analysis or experimentation could suggest another number, say the Biot number, which also has a "characteristic length" in it. The procedures in this case are identical to those already mentioned.
Sometimes you can make a heuristic analysis to determine the relevant length. In the Biot number example, this characteristic length is usually given as the volume of an object divided by its surface area, because this makes sense for heat transfer problems. (Larger volume = slower heat transfer to center and larger surface area = faster heat transfer to center.) But I suppose it's possible to derive this from certain approximations. You can make a similar argument justifying the hydraulic diameter.
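As a sketch of that heuristic (my own illustration, with assumed values for the heat-transfer coefficient and conductivity), the characteristic length $L_c = V/A$ works out to $R/3$ for a sphere and to half the thickness for a slab cooled from both faces:

```python
import math

def biot(h, k, volume, area):
    """Bi = h * L_c / k with the characteristic length L_c = V / A."""
    L_c = volume / area
    return h * L_c / k, L_c

h, k = 25.0, 200.0  # W/(m^2 K) and W/(m K), assumed (air convection, aluminium)

R = 0.01  # sphere radius, m
Bi_sphere, Lc_sphere = biot(h, k, 4.0 / 3.0 * math.pi * R**3, 4.0 * math.pi * R**2)
print(Lc_sphere, Bi_sphere)  # L_c = R/3, Bi << 1 -> lumped-capacitance model is reasonable

t = 0.02      # slab thickness, m, cooled from both faces
A_face = 1.0  # consider a unit face area
Bi_slab, Lc_slab = biot(h, k, A_face * t, 2.0 * A_face)
print(Lc_slab, Bi_slab)      # L_c = t/2
```

For ducts, the analogous area-over-perimeter reasoning (with a conventional factor of 4) gives the hydraulic diameter $D_h = 4A/P$, which reduces to the ordinary diameter for a circular pipe.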
Ben Trettel
If I choose L arbitrarily and the problem is non-canonical such that the flow regimes and analytical solutions are not known a priori, then trial and error is really the only way? – Paul Oct 10 '15 at 2:01
I don't think so. You might be able to get something useful by non-dimensionalizing the relevant governing equations with arbitrary length and time scales. This is generally my first step when analyzing a problem with clear governing equations but no clear length or time scales. If you are confused about how to do this in your particular case, post it as a question on here and I'll give it a shot. – Ben Trettel Oct 10 '15 at 2:18
April 2018, 11(2): 337-355. doi: 10.3934/krm.2018016
On a Fokker-Planck equation for wealth distribution
Marco Torregrossa 1,, and Giuseppe Toscani 2,
Department of Mathematics, University of Pavia, Pavia, Italy
Department of Mathematics, University of Pavia, and IMATI-CNR, Pavia, Italy
* Corresponding author: Marco Torregrossa
Received January 2017 Revised May 2017 Published January 2018
Fund Project: This work has been written within the activities of the National Group of Mathematical Physics (GNFM) of INdAM (National Institute of High Mathematics), and partially supported by the MIUR-PRIN Grant 2015PA5MP7 "Calculus of Variations".
We study here a Fokker-Planck equation with variable diffusion coefficient and boundary conditions, which appears in the study of wealth distribution in a multi-agent society [2, 10, 22]. In particular, we analyze the large-time behavior of the solution, showing that convergence to the steady state can be obtained in various norms at different rates.
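For orientation, the prototypical Fokker-Planck equation of this type, as it appears in the kinetic wealth-distribution literature [2, 10, 22], reads (this is a sketch of the model class and may differ in details from the exact equation analyzed in the paper)

$$ \partial_t f(w,t) = \frac{\sigma}{2}\,\partial^2_{w}\bigl(w^2 f(w,t)\bigr) + \lambda\,\partial_w\bigl((w-m)\,f(w,t)\bigr), \qquad w \ge 0, $$

where $f(w,t)$ is the density of agents with wealth $w$ at time $t$, $m$ is the mean wealth, and $\sigma, \lambda>0$ measure the randomness of the market and the intensity of the redistribution towards the mean; its steady state is an inverse-Gamma density with a Pareto-type fat tail.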
Keywords: Wealth distribution, kinetic theory, Fokker-Planck equations, large-time behavior.
Mathematics Subject Classification: Primary: 91D10; Secondary: 35Q84, 82B21, 94A17.
Citation: Marco Torregrossa, Giuseppe Toscani. On a Fokker-Planck equation for wealth distribution. Kinetic & Related Models, 2018, 11 (2) : 337-355. doi: 10.3934/krm.2018016
A. Arnold, P. Markowich, G. Toscani and A. Unterreiter, On convex Sobolev inequalities and the rate of convergence to equilibrium for Fokker-Planck type equations, Comm. Partial Differential Equations, 26 (2001), 43-100. doi: 10.1081/PDE-100002246. Google Scholar
J. F. Bouchaud and M. Mézard, Wealth condensation in a simple model of economy, Physica A, 282 (2000), 536-545. doi: 10.1016/S0378-4371(00)00205-3. Google Scholar
M. J. Cáceres and G. Toscani, Kinetic approach to long time behavior of linearized fast diffusion equations, J. Statist. Phys., 128 (2007), 883-925. doi: 10.1007/s10955-007-9329-6. Google Scholar
J. A. Carrillo and G. Toscani, Contractive probability metrics and asymptotic behavior of dissipative kinetic equations, Riv. Mat. Univ. Parma (7), 6 (2007), 75-198. Google Scholar
J. A. Carrillo, S. Cordier and G. Toscani, Over-populated tails for conservative-in-the-mean inelastic Maxwell models, Discr. Cont. Dynamical Syst. A, 24 (2009), 59-81. doi: 10.3934/dcds.2009.24.59. Google Scholar
A. Chakraborti, Distributions of money in models of market economy, Int. J. Modern Phys. C, 13 (2002), 1315-1321. doi: 10.1142/S0129183102003905. Google Scholar
A. Chakraborti and B. K. Chakrabarti, Statistical mechanics of money: Effects of saving propensity, Eur. Phys. J. B, 17 (2000), 167-170. Google Scholar
A. Chatterjee, B. K. Chakrabarti and R. B. Stinchcombe, Master equation for a kinetic model of trading market and its analytic solution, Phys. Rev. E, 72 (2005), 026126. doi: 10.1103/PhysRevE.72.026126. Google Scholar
H. Chernoff, A note on an inequality involving the normal distribution, Ann. Probab., 9 (1981), 533-535. doi: 10.1214/aop/1176994428. Google Scholar
S. Cordier, L. Pareschi and G. Toscani, On a kinetic model for a simple market economy, J. Statist. Phys., 120 (2005), 253-277. doi: 10.1007/s10955-005-5456-0. Google Scholar
B. Düring, D. Matthes and G. Toscani, Kinetic Equations modelling Wealth Redistribution: A comparison of approaches, Phys. Rev. E, 78 (2008), 056103, 12pp. Google Scholar
B. Düring, D. Matthes and G. Toscani, A Boltzmann-type approach to the formation of wealth distribution curves, (Notes of the Porto Ercole School, June 2008), Riv. Mat. Univ. Parma, 1 (2009), 199-261. Google Scholar
W. Feller, Two singular diffusion problems, Ann. Math., 54 (1951), 173-182. doi: 10.2307/1969318. Google Scholar
W. Feller, An Introduction to Probability Theory and Its Applications, Vol. Ⅰ. John Wiley & Sons Inc., New York, 1968. Google Scholar
G. Furioli, A. Pulvirenti, E. Terraneo and G. Toscani, Fokker-Planck equations in the modelling of socio-economic phenomena, Math. Mod. Meth. Appl. Scie., 27 (2017), 115-158. doi: 10.1142/S0218202517400048. Google Scholar
G. Gabetta, G. Toscani and B. Wennberg, Metrics for probability distributions and the trend to equilibrium for solutions of the Boltzmann equation, J. Statist. Phys., 81 (1995), 901-934. doi: 10.1007/BF02179298. Google Scholar
O. Johnson and A. Barron, Fisher information inequalities and the central limit theorem, Probab. Theory Related Fields, 129 (2004), 391-409. doi: 10.1007/s00440-004-0344-0. Google Scholar
C. A. Klaassen, On an inequality of Chernoff, Ann. Probability, 13 (1985), 966-974. doi: 10.1214/aop/1176992917. Google Scholar
C. Le Bris and P. L. Lions, Existence and uniqueness of solutions to Fokker-Planck type equations with irregular coefficients, Comm. Partial Differential Equations, 33 (2008), 1272-1317. doi: 10.1080/03605300801970952. Google Scholar
D. Matthes, A. Juengel and G. Toscani, Convex Sobolev inequalities derived from entropy dissipation, Arch. Rat. Mech. Anal., 199 (2011), 563-596. doi: 10.1007/s00205-010-0331-9. Google Scholar
D. Matthes and G. Toscani, On steady distributions of kinetic models of conservative economies, J. Statist. Phys., 130 (2008), 1087-1117. doi: 10.1007/s10955-007-9462-2. Google Scholar
[22] L. Pareschi and G. Toscani, Interacting Multiagent Systems. Kinetic Equations & Monte Carlo Methods, Oxford University Press, Oxford, 2013. Google Scholar
V. Pareto, Cours d'Économie Politique, Tome Premier, Rouge Éd., Lausanne 1896; Tome second, Pichon Éd., Paris, 1897. doi: 10.3917/droz.paret.1964.01. Google Scholar
G. Toscani, Entropy dissipation and the rate of convergence to equilibrium for the Fokker-Planck equation, Quart. Appl. Math., 57 (1999), 521-541. doi: 10.1090/qam/1704435. Google Scholar
G. Toscani and C. Villani, Probability Metrics and Uniqueness of the Solution to the Boltzmann Equation for a Maxwell Gas, J. Statist. Phys., 94 (1999), 619-637. doi: 10.1023/A:1004589506756. Google Scholar
The mighty MMM concept in hyper-casual game design
by Narek Aghekyan on 12/17/19 01:36:00 pm
This article discusses an approach to make more interesting and pleasurable hyper-casual games. The aim is to achieve both higher retention rates and lower CPI numbers for hyper-casual games. The article discusses general approaches to design interesting games, and then projects that knowledge on hyper-casual genre by considering its peculiarities. It is based on several psychological experiments presented by professor Daniel Kahneman in his book Thinking, Fast and Slow [1], who received the 2002 Nobel Prize in Economic Sciences for his pioneering work with Amos Tversky on decision making.
Today the hyper-casual genre is very popular among players and game developers. Although there are predictions that the market is shifting from hyper-casual to hybrid-casual games [2], this does not mean hyper-casual games are dying; it means the genre is actively transforming and evolving. Nowadays there are hundreds of game development studios and teams that make hyper-casual games, and dozens of publishers who offer hyper-casual publishing services. Therefore, it is critical to analyze and understand the reasons behind successful titles.
So why do some games succeed and make it to the top while others don't? What key points might we miss while making a game, hyper-casual games specifically? Why are some games good, but never make that important transition from good to great?
In 2008 the App Store was born. There was a time when people could make a game or an app overnight and earn money. In 2017 the App Store featured over 2.1 million apps [3]. By then the store was so saturated, and the competition for exposure and attention so high, that developers had to work long and hard on their games and still needed significant download traffic, obtained through marketing or store featuring, to hope for something. It looked like the dream days were gone, when small teams with very small budgets had a chance to succeed. But, fortunately, life is not linear.
In 2017 Voodoo significantly contributed to popularizing the hyper-casual genre. Although Voodoo didn't pioneer hyper-casual games, it took several important steps towards establishing the genre: it understood customer needs, created design guidelines for the genre and a successful marketing and monetization scheme, named the genre and started to educate development studios on how to make good hyper-casual games. Soon it also created a dashboard where hundreds of studios can test their games quickly and transparently. The latter is also a big step forward, as even today some well-known publishers test a game's KPIs and hide from the developer which creatives were used for testing, or even hide the CPI numbers behind vague information.
Pioneering of hyper-casual games is usually attributed to Ketchapp, and sometimes people argue that this is not a new genre at all, just the renaissance of the arcade games of the 70s [4]. But in this context it is not important when the first blossoms appeared. It is important that, thanks to Voodoo, by 2017 the hyper-casual genre had become well formed and established and had captured the top charts of the US App Store.
This was a real second wind for small game studios like ours, reviving the hope of making small games and earning a significant amount of money [5, 6]. Soon many other publishers joined the new trend and offered publishing services for hyper-casual games.
Today, only two years later, the hyper-casual game market is estimated at about $2.5 billion in annual revenue. Hundreds of millions of dollars have been invested in hyper-casual games [6], and dozens of publishers, well known and unknown, are looking for game development studios.
But how much control do game studios really have when making hyper-casual games? Are they rolling the dice, or can professional game designers craft the desired experience for the audience and make a hit game for the market? One particular problem with hyper-casual games is that they are so small and so simple that it is often hard to see how the historically accumulated game design knowledge applies to them [7]. One such piece of knowledge is the concept of the interest curve. In this article interest curves are discussed, and it is explained how a game designer can make a game more interesting even when the game is very small.
The Interest Curve
Jesse Schell in his book The Art of Game Design discusses the interest curve of an entertainment experience [8]. The interest curve is a simple concept: it is the dependence of the customer's interest on time while consuming the entertainment experience.
In that chapter Jesse Schell tells a very exciting story. At the age of 16 he started his career as a professional entertainer in an amusement park. One day the head of his show troupe, a magician named Mark Tripp, taught him what the interest curve of a performance should look like. Namely, he taught that keeping the same content (events) but re-ordering them can significantly enhance the quality of the experience for a magic or juggling show. Based on that advice, Jesse Schell describes a good pattern for the interest curve of a well-thought-out entertainment experience (see Figure 1, taken from the book).
Figure 1, An example of an interest curve for a successful entertainment experience
The main takeaways from this graph are that:
The customer (game player, theme park guest, movie/theatre customer) comes with some initial non-zero interest, maybe because of the Ads, or because of friends' referrals (point A).
As an entertainer, during the first moments you need to increase his interest in order to create some expectations for the whole show. This is called "the hook". (point B)
Later there should be no flat part on the interest curve, because otherwise the consumer may leave the experience. The interest should rise and fall, but only to rise again (points C, D, E, F).
The last part of the experience should be the grand climax. This is where the experience should become the most interesting and this is where the story is resolved (point G). It is desirable that the customer leaves the experience with some interest still left, in order to return again. Jesse Schell mentions that the leftover interest is what show business veterans say "leave them wanting more" (point H).
This is a very important lesson for a game designer to know how to order interesting moments along the experience timeline.
This topic is also discussed in "Game Design Workshop" [9]. Tracy Fullerton describes how a classic dramatic arc is constructed (see Figure 2 - taken from the book).
Figure 2: Classic dramatic arc
It all starts with an exposition, where the consumer gets acquainted with the characters, the situation and the initial conflict, i.e. the premise. This creates a tension that engages the consumer, who waits for its resolution (the hook). This relies on a psychological phenomenon called Need For Closure (NFC), which describes "an individual's desire for a firm answer to a question and an aversion toward ambiguity" (from Wikipedia) [10]. The plot then develops by escalating the conflict. At some point the tension is at its maximum - the climax point. Then the resolution follows, where the built-up tension is released.
So, basically, this is the same concept, whether you call it the interest curve or the dramatic arc.
But what if I make a hyper-casual game? There is no story, no plot, and very little narrative is present, if any at all. How can we use this very important knowledge in hyper-casual games?
What does psychology teach us?
To understand what tools we have to work with the player's interest in the scope of hyper-casual games, we need to understand some psychology. Here I will refer to results described by prof. Daniel Kahneman in his book Thinking, Fast and Slow. In this book he writes about many concepts and experiments that are directly useful to game designers. Those experiments cover many important aspects of decision making, and game designers should know about them for the following reasons:
While making games we make decisions, and we need to understand what hidden forces may act on us while we make our decisions.
While making games we work with a multi-profile team, with diversity of opinions and ways of thinking. We need to understand how our colleagues make their decisions, and how we can help them make better decisions.
While playing a game a player needs to make decisions. If the player does not make decisions while playing, the game becomes static content consumption, like a book or a movie [11]. Hence designers should understand how players make decisions and nudge them towards better, more pleasurable ones. As Sid Meier once said, we need to protect the players from themselves [12].
In the following two subsections I will present psychological experiments that, hopefully, will change your attitude towards game development forever. I strongly recommend reading the book Thinking, Fast and Slow, if not the whole book then at least Chapter 35 "Two Selves", to fully understand the obtained results. Here I will summarize only the main details and cite some experiments from the book. We will learn about two concepts - the Peak-End rule and its generalization, the Less Is More rule. Let's examine them one by one, starting with the Peak-End rule.
Peak-End rule
Prof. Kahneman wanted to understand how people experience pain (or pleasure) and how they remember it. For that reason, in one set of experiments the intensity of pain of colonoscopy patients was measured. Back when those experiments were done, colonoscopy was administered without an anesthetic or an amnesic drug and was painful.
Citing from the book:
The patients were prompted every 60 seconds to indicate the level of pain they experienced at the moment. The data shown are on a scale where zero is "no pain at all" and 10 is "intolerable pain." As you can see, the experience of each patient varied considerably during the procedure, which lasted 8 minutes for patient A and 24 minutes for patient B (the last reading of zero pain was recorded after the end of the procedure). A total of 154 patients participated in the experiment; the shortest procedure lasted 4 minutes, the longest 69 minutes.
Next, consider an easy question: Assuming that the two patients used the scale of pain similarly, which patient suffered more? No contest. There is general agreement that patient B had the worse time. Patient B spent at least as much time as patient A at any level of pain, and the "area under the curve" is clearly larger for B than for A. The key factor, of course, is that B's procedure lasted much longer.
When the procedure was over, all participants were asked to rate "the total amount of pain" they had experienced during the procedure. The wording was intended to encourage them to think of the integral of the pain they had reported, reproducing the hedonimeter totals. Surprisingly, the patients did nothing of the kind. The statistical analysis revealed two findings, which illustrate a pattern we have observed in other experiments:
Peak-end rule: The global retrospective rating was well predicted by the average of the level of pain reported at the worst moment of the experience and at its end.
Duration neglect: The duration of the procedure had no effect whatsoever on the ratings of total pain.
You can now apply these rules to the profiles of patients A and B. The worst rating (8 on the 10-point scale) was the same for both patients, but the last rating before the end of the procedure was 7 for patient A and only 1 for patient B. The peak-end average was therefore 7.5 for patient A and only 4.5 for patient B. As expected, patient A retained a much worse memory of the episode than patient B. It was the bad luck of patient A that the procedure ended at a bad moment, leaving him with an unpleasant memory.
Prof. Kahneman explains that we have two selves - an experiencing self and a remembering self - i.e. we experience a process differently from how we remember it. Now the question arises: which self decides? If we are given a choice about which experience we want to repeat, who decides - our experiencing or our remembering self? Prof. Kahneman explains that the decision-making power is in the hands of our remembering self, and here is a clear experiment to demonstrate that. Again citing from his brilliant book Thinking, Fast and Slow.
To demonstrate the decision-making power of the remembering self, my colleagues and I designed an experiment, using a mild form of torture that I will call the cold-hand situation (its ugly technical name is cold-pressor). Participants are asked to hold their hand up to the wrist in painfully cold water until they are invited to remove it and are offered a warm towel. The subjects in our experiment used their free hand to control arrows on a keyboard to provide a continuous record of the pain they were enduring, a direct communication from their experiencing self. We chose a temperature that caused moderate but tolerable pain: the volunteer participants were of course free to remove their hand at any time, but none chose to do so.
Each participant endured two cold-hand episodes:
The short episode consisted of 60 seconds of immersion in water at 14° Celsius, which is experienced as painfully cold, but not intolerable. At the end of the 60 seconds, the experimenter instructed the participant to remove his hand from the water and offered a warm towel.
The long episode lasted 90 seconds. Its first 60 seconds were identical to the short episode. The experimenter said nothing at all at the end of the 60 seconds. Instead he opened a valve that allowed slightly warmer water to flow into the tub. During the additional 30 seconds, the temperature of the water rose by roughly 1°, just enough for most subjects to detect a slight decrease in the intensity of pain.
Our participants were told that they would have three cold-hand trials, but in fact they experienced only the short and the long episodes, each with a different hand. The trials were separated by seven minutes. Seven minutes after the second trial, the participants were given a choice about the third trial. They were told that one of their experiences would be repeated exactly, and were free to choose whether to repeat the experience they had had with their left hand or with their right hand. Of course, half the participants had the short trial with the left hand, half with the right; half had the short trial first, half began with the long, etc. This was a carefully controlled experiment.
The experiment was designed to create a conflict between the interests of the experiencing and the remembering selves, and also between experienced utility and decision utility. From the perspective of the experiencing self, the long trial was obviously worse. We expected the remembering self to have another opinion. The peak-end rule predicts a worse memory for the short than for the long trial, and duration neglect predicts that the difference between 90 seconds and 60 seconds of pain will be ignored. We therefore predicted that the participants would have a more favorable (or less unfavorable) memory of the long trial and choose to repeat it. They did. Fully 80% of the participants who reported that their pain diminished during the final phase of the longer episode opted to repeat it, thereby declaring themselves willing to suffer 30 seconds of needless pain in the anticipated third trial.
The subjects who preferred the long episode were not masochists and did not deliberately choose to expose themselves to the worse experience; they simply made a mistake. If we had asked them, "Would you prefer a 90-second immersion or only the first part of it?" they would certainly have selected the short option. We did not use these words, however, and the subjects did what came naturally: they chose to repeat the episode of which they had the less aversive memory. The subjects knew quite well which of the two exposures was longer — we asked them — but they did not use that knowledge. Their decision was governed by a simple rule of intuitive choice: pick the option you like the most, or dislike the least.
Prof. Kahneman mentions that this is a particular case of the Less Is More rule - i.e. less overall torture was remembered as more by the remembering self. He then wraps up the chapter by writing that there were classic studies on rats experiencing both pain and pleasure. Those experiments showed the same results - duration neglect; only the intensity was important. This was a huge finding for me as a game designer. But before I summarize this knowledge and go further, there is something more about the Less Is More rule that I have read in this book and want to share with you.
Less Is More rule
In 1998 the social psychologist Christopher Hsee published a paper titled "Less Is Better: When Low-value Options Are Valued More Highly than High-value Options" in the Journal of Behavioral Decision Making [13]. Prof. Kahneman accumulated in his book not only his own experiments, but also other groundbreaking experiments that explain the current state of knowledge in the psychology of decision making. Here I have copied the parts about Hsee's above-mentioned experiment, as well as an experiment done by the experimental economist John List related to this topic:
Christopher Hsee, of the University of Chicago, asked people to price sets of dinnerware offered in a clearance sale in a local store, where dinnerware regularly runs between $30 and $60. There were three groups in his experiment. The display below was shown to one group; Hsee labels that joint evaluation, because it allows a comparison of the two sets. The other two groups were shown only one of the two sets; this is single evaluation. Joint evaluation is a within-subject experiment, and single evaluation is between-subjects.
                     Set A: 40 pieces              Set B: 24 pieces
Dinner plates        8, all in good condition      8, all in good condition
Soup/salad bowls     8, all in good condition      8, all in good condition
Dessert plates       8, all in good condition      8, all in good condition
Cups                 8, 2 of them broken           -
Saucers              8, 7 of them broken           -
Assuming that the dishes in the two sets are of equal quality, which is worth more? This question is easy. You can see that Set A contains all the dishes of Set B, and seven additional intact dishes, and it must be valued more. Indeed, the participants in Hsee's joint evaluation experiment were willing to pay a little more for Set A than for Set B: $32 versus $30.
The results reversed in single evaluation, where Set B was priced much higher than Set A: $33 versus $23. We know why this happened. Sets (including dinnerware sets!) are represented by norms and prototypes. You can sense immediately that the average value of the dishes is much lower for Set A than for Set B, because no one wants to pay for broken dishes. If the average dominates the evaluation, it is not surprising that Set B is valued more. Hsee called the resulting pattern less is more. By removing 16 items from Set A (7 of them intact), its value is improved.
Hsee's finding was replicated by the experimental economist John List in a real market for baseball cards. He auctioned sets of ten high-value cards, and identical sets to which three cards of modest value were added. As in the dinnerware experiment, the larger sets were valued more than the smaller ones in joint evaluation, but less in single evaluation. From the perspective of economic theory, this result is troubling: the economic value of a dinnerware set or of a collection of baseball cards is a sum-like variable. Adding a positively valued item to the set can only increase its value.
Now that we know about the Less Is More rule and the Peak-End rule, we can discuss how to make our games more interesting.
So here are the main takeaways:
Humans feel but don't remember the duration of the pain or pleasure. Instead they remember the intensity of the peak.
People decide not based on how they have experienced, but how they remember their experience.
Adding smaller value items may decrease the value of the overall thing.
So while making a game it is really important to make the peak as high as possible. And we should remove all the flat parts from the interest curve, as they might damage the overall experience.
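To make the Peak-End arithmetic concrete, here is a small sketch in Python (my own toy example with made-up moment-by-moment ratings, not Kahneman's data), contrasting what the experiencing self accumulates with what the remembering self keeps:

# Hypothetical pain ratings on a 0-10 scale, sampled over time.
short_episode = [2, 5, 8, 7]             # ends at a bad moment
long_episode = [2, 5, 8, 7, 4, 2, 1]     # same start, then the pain tapers off

def experienced_total(ratings):
    # "Hedonimeter" total: what the experiencing self actually goes through.
    return sum(ratings)

def peak_end_score(ratings):
    # What the remembering self keeps: average of the worst moment and the end.
    return (max(ratings) + ratings[-1]) / 2

for name, episode in [("short", short_episode), ("long", long_episode)]:
    print(name, experienced_total(episode), peak_end_score(episode))

The long episode has the larger experienced total (29 versus 22) but the lower peak-end score (4.5 versus 7.5), so it is remembered as less unpleasant - exactly the pattern of patients A and B above.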
Jesse Schell even talks about Less Is More in his book. He explains how they enhanced the Aladdin's Magic Carpet VR experience for Disneyland by removing the flat part of the interest curve from the experience (providing a shortcut past it); see Figure 3, taken from The Art of Game Design.
Figure 3: Interest curve for Aladdin's Magic Carpet VR experience for Disneyland
You can consider this as another proof of Less Is More rule taken from the entertainment industry.
One more exciting thing that I would like to mention about The Art of Game Design book is that in its first edition it was missing one concept, which was added in the second edition. It is called The Lens of Moments. Here it is shown in Figure 4
Figure 4: The Lens of Moments from The Art of Game Design: A Deck of Lenses, Second Edition
Basically, Schell is telling us to create key moments - memories. He is suggesting that we use the properties of the remembering self - the one that is responsible for our decisions. In this context, I think, Jesse Schell has described surprisingly well what needs to be done to enhance the interest curve of an experience, considering that he didn't use the theoretical basis that psychology provides. As we already know, the more precise, scientific picture is described by the Peak-End and the Less Is More rules.
Projection of psychological knowledge onto hyper-casual games: The MMM concept
Now, finally, let's discuss this in the context of hyper-casual games. There is no story in them, no interconnected levels, no chain of dramatic events. What can we do to make them interesting? At this point the answer should be pretty obvious. In hyper-casual games we deal with one or two simple and intuitively understandable mechanics. What we can do is make those mechanics very memorable by creating emotional peaks through them.
If that sounds unclear let me give you some examples. When someone talks about The Matrix movie what do you recall from that movie first? Maybe this?
In order to be more on the subject, let's discuss hyper-casual hit games published this year by Voodoo. When you talk about Aquapark.io what is the first thing you recall? Maybe these shortcut jumps?
What about Crazy Kick!?
These are not just my guesses. There is data showing that adding those mechanics to the game changed its KPIs significantly. Below I will present some examples of games published by Voodoo. I have permission from Voodoo to present retention rates but, unfortunately, I can't present CPIs in absolute numbers. Instead I will present the relative improvement of CPIs in percent, which is enough to understand how much the game was affected by the change.
The first version of Aquapark.io without the jump mechanic had D1 47%, CPI C1. With the jump (as Voodoo calls it, the hack) and a bit of refinement to make the jump more understandable (adjusting the sea color so that the character is visible during a jump; adjusting the jump so that it does not land and rotate), Cassette Studio achieved D1 48%, CPI C2. For calculating the relative improvements I use the following formula:
\(RelativeImpr =\frac{|C1 - C2|}{C1} \cdot 100\%\)
According to this formula the relative improvement of CPI was about 44%. With further enhancement of the avatar and making the jump even more satisfying (jumping right into the pool), they achieved D1 47%, and the relative improvement of CPI was 59%. I think the numbers speak for themselves.
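For readers who want to plug in their own numbers, here is the same formula as a small Python sketch (the CPI values below are hypothetical placeholders, since the real C1 and C2 cannot be published):

def relative_improvement(before, after):
    # |C1 - C2| / C1 * 100, as in the formula above.
    return abs(before - after) / before * 100

# Hypothetical CPI values, for illustration only.
cpi_before, cpi_after = 0.50, 0.28
print(f"CPI improved by {relative_improvement(cpi_before, cpi_after):.0f}%")  # prints 44%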
What about the Crazy Kick! metrics? The first test with the dribbling mechanic showed pretty good results: D1 25%, CPI C3. But when the kicking mechanic shown in the GIF above (the hack) was added, the numbers changed as follows: D1 36%, CPI C4. Here the relative improvement of CPI was 38%.
Another case is Roller Splat! The first test had only 6 levels, 6 puzzles, and D1 was 35%. When Neon Play made the view part more exciting:
Replaced the paint roller (the avatar) with a ball
Made the ball movement much more reactive (respond to the input faster, move faster)
Feeling of hitting a wall
The paint trail effect
the same game changed D1 from 35% to 55%, i.e. a 57% improvement in D1, just by making the same thing more exciting and memorable - by enhancing the peak pleasure. No level design was changed, no levels were added.
You might think that the amount of content is what creates retention. But I think this is not the only way to go. Indeed, the amount of content might help long-term retention, because people get hedonic fatigue from experiencing the same pleasure again and again (Sellers [11]). But for D1 retention you need a very pleasurable moment. Aquapark.io in its initial test had only 1 level, with D1 equal to 47%. Voodoo's Roller Splat! had only 6 puzzles - only 6 levels - and D1 was 35%. Think of it this way: when you listen to a great song, what makes you listen to it again the next day? Did the content change? What about eating a very tasty apple every morning? The content is the same, you just wish to experience it again.
Therefore, while making hyper-casual games think about what would be the Most Memorable Moment of your game - the MMM. Make it more exciting, enhance the peak of the feelings created by that episode. This is because the peak of the feelings stays in the memory and then becomes the deciding factor for wanting more. Make the Triple-M of your game shine and both D1 and CPI will improve significantly.
I want to state clearly that games are complex systems. I am not saying that you should take your game and stuff it with a bunch of memorable moments. It should all be nicely connected and deliver a single experience as a whole (keep the gestalt). Hyper-casual games in particular have two other very important components: clarity and choices. These are also very important concepts and they are not understood identically by everyone. By clarity I mean how clear the following are:
The goal of the game
The means of pursuing the goal - the mechanics and the control.
And by choices I mean:
Breadth - The number of different things you can do within a game (e.g. dribble and kick, or slide and jump)
Depth - The number of reasons you have to do the same thing (e.g. in Aquapark.io you can slide in order to 1) dodge an obstacle, 2) catch a speeder boost, 3) jump from the slide and 4) kick an opponent)
Sometimes operations and mechanics can be combined to create even more choices - this is called synergy.
In every experience people remember the peak of their feelings and the end. The deciding factor, both for downloading and for returning to your game, is what remains in the player's memory after seeing the game's Ad or playing the game. In hyper-casual games there are no clear endings, hence we really need to think about building the Most Memorable Moment. Games are significantly affected by the MMM. In particular, in the examples above, the corresponding changes resulted in improvements in both CPI and D1. The improvements varied from 38% to 59%, which is a significant factor in transforming a good game into a great one.
For good retention and marketability of your game you should consider introducing one very exciting and hence memorable moment into the player's experience - the MMM.
Thinking, Fast and Slow by Daniel Kahneman, 2013
https://www.deconstructoroffun.com/blog/four-reasons-why-the-hypercasual-gold-rush-is-coming-to-an-end
https://en.wikipedia.org/wiki/App_Store_(iOS)
https://www.ironsrc.com/blog/what-are-hyper-casual-games-and-how-do-you-monetize-them/
https://mobilefreetoplay.com/5-reasons-why-voodoo-beats-small-game-developers-on-the-app-store/
https://venturebeat.com/2019/03/24/the-truth-about-hypercasual-games/
https://www.gamasutra.com/blogs/NarekAghekyan/20190812/348478/7_Mustread_Books_for_Game_Designers.php
The Art of Game Design: A Book of Lenses by Jesse Schell, 2nd edition, 2015, Chapter 16 Experiences Can be Judged by Their Interest Curves
Game Design Workshop: A Playcentric Approach to Creating Innovative Games by Tracy Fullerton, 4 edition 2018, Chapter 4 Working with Dramatic Elements
https://en.wikipedia.org/wiki/Closure_(psychology)
Advanced Game Design: A Systems Approach, by Michael Sellers, 1st edition, 2017
https://www.gdcvault.com/play/1012186/The-Psychology-of-Game-Design
Hsee, Christopher K. (1998). "Less Is Better: When Low-value Options Are Valued More Highly than High-value Options" (PDF). Journal of Behavioral Decision Making. 11 (2): 107–121
Stable Knots and Links in Electromagnetic Fields
Benjamin Bode
Communications in Mathematical Physics, volume 387, pages 1757–1770 (2021)
Persistent topological structures in physical systems have become increasingly important over the last years. Electromagnetic fields with knotted field lines play a special role among these, since they can be used to transfer their knottedness to other systems like plasmas and quantum fluids. In null electromagnetic fields the electric and the magnetic field lines evolve like unbreakable elastic filaments in a fluid flow. In particular, their topology is preserved for all time, so that all knotted closed field lines maintain their knot type. We use an approach due to Bateman to prove that for every link L there is such an electromagnetic field that satisfies Maxwell's equations in free space and that has closed electric and magnetic field lines in the shape of L for all time. The knotted and linked field lines turn out to be projections of real analytic Legendrian links with respect to the standard contact structure on the 3-sphere.
Knotted structures appear in physical fields in a wide range of areas of theoretical physics; in liquid crystals [24, 29, 30], optical fields [14], Bose-Einstein condensates [32], fluid flows [17, 18], the Skyrme-Faddeev model [39], quantum mechanics [4, 8, 9] and several others.
Mathematical constructions of initially knotted configurations in physical fields make experiments and numerical simulations possible. However, the knot typically changes or disappears as the field evolves with time as prescribed by some differential equation or energy functional. There are some results regarding the motion of knotted vortex lines and the existence of stationary solutions of the harmonic oscillator [8, 15], the hydrogen atom [16] and nonlinear Schrödinger equations with harmonic forces [6]. Moreover, there exist smooth solutions to certain Schrödinger equations (such as the Gross–Pitaevskii equation) that describe any prescribed time evolution of a knot [20], i.e. any possible sequence of reconnection events as (topologically) described by a surface, a cobordism between the knot at time \(t_0\) and the knot at time \(t_1\), in \(3+1\) dimensions. In particular, this implies the existence of solutions that contain a given knot for all time, i.e., the knot is stable or robust. However, more general (i.e., regarding more general differential equations) explicit analytic constructions of such solutions are not known.
In the case of electromagnetic fields and Maxwell's equations, the first knotted solution was discovered by Synge (cf. p. 366 in [40]) and its topological properties were found by Rañada [33]. This field contains closed magnetic and electric field lines that form the Hopf link for all time. Using methods from [10, 27] we can algorithmically construct for any given link L a vector field \(\mathbf {B}:\mathbb {R}^3\rightarrow \mathbb {R}^3\) that has a set of closed field lines in the shape of L and that can be taken as an initial configuration of the magnetic part of an electromagnetic field, say at time \(t=0\). However, these links cannot be expected to be stable, since they usually undergo reconnection events as time progresses and the field evolves according to Maxwell's equations, or they disappear altogether. While there are not many rigorous results concerning the time evolution and reconnections of knots in electromagnetic fields, the setting of the Navier-Stokes equations has been analysed in some detail [19]. Necessary and sufficient conditions for the stability of knotted field lines in electromagnetic fields are known [28], but so far only the family of torus links has been constructed and thereby been proven to arise as stable knotted field lines in electromagnetism.
In [26] Kedia et al. offer a construction of null electromagnetic fields with stable torus links as closed electric and magnetic field lines using an approach developed by Bateman [3]. In this article we prove that their construction can be extended to any link type, implying the following result:
Theorem 1
There is a smooth family of diffeomorphisms \(\varPhi _t:\mathbb {R}^3\rightarrow \mathbb {R}^3\), with \(\varPhi _0\) equal to the identity map, such that for every n-component link type \(L=L_1\cup L_2\cup \cdots \cup L_n\) and every subset \(I\subset \{1,2,\ldots ,n\}\) there is a representative of the link type L, which we also denote by L, and an electromagnetic field \(\mathbf {F}\) that satisfies Maxwell's equations in free space with \(\varPhi _t(L)\) being a set of closed field lines (electric or magnetic) of \(\mathbf {F}\) for all time t, with closed electric field lines that are \(\bigcup _{i\in I}\varPhi _t(L_i)\) and closed magnetic field lines that are \(\bigcup _{i\notin I}\varPhi _t(L_i)\).
This shows not only that every pair of links \(L_1\) and \(L_2\) can arise as a set of robust closed electric and magnetic field lines, respectively, but also that any linking between the components of \(L_1\) and \(L_2\) can be realised.
We would like to point out that the subset I of the set of components of L does not need to be non-empty or proper for the theorem to hold. As a special case, we may choose L and I such that \(\bigcup _{i\in I}L_i\) and \(\bigcup _{i\notin I}L_i\) are ambient isotopic, which shows the following generalisation of the results in [26].
Corollary 1
There is a smooth family of diffeomorphisms \(\varPhi _t:\mathbb {R}^3\rightarrow \mathbb {R}^3\), with \(\varPhi _0\) equal to the identity map, such that for any link L there is an electromagnetic field \(\mathbf {F}\) that satisfies Maxwell's equations in free space and whose electric and magnetic field both have a set of closed field lines for all values of time t, given by \(\varPhi _t(L')\) and \(\varPhi _t(L'')\), respectively, where both \(L'\) and \(L''\) are ambient isotopic to L.
The proof of the theorem relies on the existence of certain holomorphic functions, whose explicit construction eludes us at this moment. As a consequence, Theorem 1 guarantees the existence of the knotted fields, but does not allow us to provide any new examples beyond the torus link family.
The closed field lines at time \(t=0\) turn out to be projections into \(\mathbb {R}^3\) of real analytic Legendrian links with respect to the standard contact structure in \(S^3\). This family of links has been studied by Rudolph in the context of holomorphic functions as totally tangential \(\mathbb {C}\)-links [35, 36].
The remainder of the article is structured as follows. In Sect. 2 we review some key mathematical concepts, in particular Bateman's construction of null electromagnetic fields and knots and their role in contact geometry. Section 3 summarises some observations that relate the problem of constructing knotted field lines to a problem on holomorphic extendability of certain functions. The proof of Theorem 1 can be found in Sect. 4, where we use results by Rudolph, Burns and Stout to show that the functions in question can in fact be extended to holomorphic functions. In Sect. 5 we offer a brief discussion of our result and some properties of the resulting electromagnetic fields.
Knots and links
For \(m\in \mathbb {N}\) we write \(S^{2m-1}\) for the \((2m-1)\)-sphere of unit radius:
$$\begin{aligned} S^{2m-1}=\{(z_1,z_2,\ldots ,z_m)\in \mathbb {C}^m:\sum _{i=1}^m |z_i|^2=1\}. \end{aligned}$$
Via stereographic projection we have \(S^3\cong \mathbb {R}^3\cup \{\infty \}\). A link with n components in a 3-manifold M is (the image of) a smooth embedding of n circles \(S^1\sqcup S^1\sqcup \ldots \sqcup S^1\) in M. A link with only one component is called a knot. The only 3-manifolds that are relevant for this article are \(M=S^3\) and \(M=\mathbb {R}^3\).
Knots and links are studied up to ambient isotopy or, equivalently, smooth isotopy, that is, two links are considered equivalent if one can be smoothly deformed into the other without any cutting or gluing. This defines an equivalence relation on the set of all links and we refer to the equivalence class of a link L as its link type or, in the case of a knot, as its knot type. It is very common to be somewhat lax with the distinction between the concept of a link and its link type (cf. for example Theorem 1, where L is used to denote both the link type and a representative of that equivalence class). When there is no risk of confusion we will for example refer to a link L even though we really mean the link type, i.e., the equivalence class, represented by L.
One special family of links/link types is the family of torus links \(T_{p,q}\) and the equivalence classes that they represent. It consists of all links that can be drawn on the surface of an unknotted torus \(\mathbb {T}=S^1\times S^1\) in \(\mathbb {R}^3\) or \(S^3\) and they are characterised by two integers p and q, the number of times the link winds around each \(S^1\). This definition leaves an ambiguity regarding the sign of p and q, i.e., which direction is considered as positive wrapping around the meridian and the longitude. This ambiguity is removed by the standard convention to choose
$$\begin{aligned} (\rho \mathrm {e}^{\mathrm {i}q\varphi }, \sqrt{1-\rho ^2}\mathrm {e}^{\mathrm {i}p\varphi }) \end{aligned}$$
as a parametrisation of the (p, q)-torus knot in the unit 3-sphere \(S^3\subset \mathbb {C}^2\) with \(p,q>0\), where the parameter \(\varphi \) ranges from 0 to \(2\pi \) and \(\rho \) is the solution to \(\rho ^{|p|}=\sqrt{1-\rho ^2}^{|q|}\). It follows that for positive p and q the complex curve \(z_1^p-z_2^q=0\) intersects \(S^3\) in the (p, q)-torus knot \(T_{p,q}\) [31].
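As a quick check of this last claim (a computation added here for the reader's convenience, not part of the original argument), substituting the parametrisation into the defining polynomial gives

$$\begin{aligned} z_1^p-z_2^q=\rho ^{p}\mathrm {e}^{\mathrm {i}pq\varphi }-\left(\sqrt{1-\rho ^2}\right)^{q}\mathrm {e}^{\mathrm {i}pq\varphi }=\left(\rho ^{p}-\sqrt{1-\rho ^2}^{\,q}\right)\mathrm {e}^{\mathrm {i}pq\varphi }=0, \end{aligned}$$

which vanishes precisely because of the defining relation \(\rho ^{|p|}=\sqrt{1-\rho ^2}^{\,|q|}\) for positive p and q.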
Knot theory is now a vast and quickly developing area of mathematics with many connections to biology, chemistry and physics. For a more extensive introduction we refer the interested reader to the standard references [1, 34]. The role that knots play in physics is discussed in more detail in [2, 25].
Bateman's construction
Our exposition of Bateman's work follows the relevant sections in [26]. In electromagnetic fields that are null for all time the electric and magnetic field lines evolve like unbreakable elastic filaments in an ideal fluid flow. They are dragged in the direction of the Poynting vector field with the speed of light [23, 28]. This means that the link types of any closed field lines remain unchanged for all time. In the following we represent a time-dependent electromagnetic field by its Riemann-Silberstein vector \(\mathbf {F}=\mathbf {E}+\mathrm {i}\mathbf {B}\), where \(\mathbf {E}\) and \(\mathbf {B}\) are time-dependent real vector fields on \(\mathbb {R}^3\), representing the electric and magnetic part of \(\mathbf {F}\), respectively.
The Riemann–Silberstein vector (or RS vector, for short) goes back to lectures on partial differential equations by Riemann, which were edited and published by Weber [41], and two articles by Silberstein [37, 38]. Its main advantage lies in the simplification of descriptions and equations for electromagnetic fields, both in the classical and the quantum setting. An overview on the history of the RS vector and many examples of its applications can be found in [7].
It was shown in [28] that the nullness condition
$$\begin{aligned} \mathbf {E}\cdot \mathbf {B}=0,\qquad \mathbf {E}\cdot \mathbf {E}-\mathbf {B}\cdot \mathbf {B}=0, \qquad \text {for all }t\in \mathbb {R} \end{aligned}$$
is equivalent to \(\mathbf {F}\) being both null and shear-free at \(t=0\), that is,
$$\begin{aligned} (\mathbf {E}\cdot \mathbf {B})|_{t=0}=0,\qquad (\mathbf {E}\cdot \mathbf {E}-\mathbf {B}\cdot \mathbf {B})|_{t=0}=0, \end{aligned}$$
$$\begin{aligned} ((E^i E^j-B^i B^j)\partial _j V_i)|_{t=0}&=0,\nonumber \\ ((E^i B^j+E^j B^i)\partial _j V_i)|_{t=0}&=0, \end{aligned}$$
where \(\mathbf {V}=\mathbf {E}\times \mathbf {B}/|\mathbf {E}\times \mathbf {B}|\) is the normalised Poynting field and the indices \(i,j=1,2,3\) enumerate the components of the fields \(\mathbf {E}=(E_1,E_2,E_3)\), \(\mathbf {B}=(B_1,B_2,B_3)\) and \(\mathbf {V}=(V_1,V_2,V_3)\).
It is worth pointing out that the Poynting vector field \(\mathbf {V}\) of a null field satisfies the Euler equation for a pressure-less flow:
$$\begin{aligned} \partial _t \mathbf {V}+(\mathbf {V}\cdot \nabla )\mathbf {V}=0. \end{aligned}$$
More analogies between null light fields and pressure-less Euler flows are summarised in [28].
The transport of field lines by the Poynting field of a null electromagnetic field was made precise in [28]. We write \(W=\frac{1}{2}(\mathbf {E}\cdot \mathbf {E}+\mathbf {B}\cdot \mathbf {B})\) for the electromagnetic density. The normalised Poynting vector field \(\mathbf {V}\) transports (where it is defined) \(\mathbf {E}/W\) and \(\mathbf {B}/W\). In the following construction V can be defined everywhere, even where \(W=0\). Note that since \(\partial _t W + \nabla \cdot (W\mathbf {V})=0\), the nodal set of W is also transported by \(\mathbf {V}\). This implies that if \(L_1\) is a link formed by closed electric field lines at time \(t=0\) and \(L_2\) is a link formed by closed magnetic field lines of such an electromagnetic field at \(t=0\) (and in particular \(W\ne 0\) on \(L_1\) and \(L_2\)), then their time evolution according to Maxwell's equations does not only preserve the link types of \(L_1\) and \(L_2\), but also the way in which they are linked, i.e., the link type of \(L_1\cup L_2\). Likewise, the topology of any field line, open or closed, is preserved with time.
Bateman discovered a construction of null electromagnetic fields [3], which guarantees the stability of links and goes as follows. Take two functions \(\alpha , \beta :\mathbb {R}\times \mathbb {R}^3\rightarrow \mathbb {C}\) that satisfy
$$\begin{aligned} \nabla \alpha \times \nabla \beta =\mathrm {i}(\partial _t\alpha \nabla \beta -\partial _t\beta \nabla \alpha ), \end{aligned}$$
where \(\nabla \) denotes the gradient with respect to the three spatial variables.
Then for any pair of holomorphic functions \(f,g:\mathbb {C}^2\rightarrow \mathbb {C}\) the field defined by
$$\begin{aligned} \mathbf {F}=\mathbf {E}+\mathrm {i}\mathbf {B}=\nabla f(\alpha ,\beta )\times \nabla g(\alpha ,\beta ) \end{aligned}$$
satisfies Maxwell's equations and is null for all time. The field \(\mathbf {F}\) can be rewritten as
$$\begin{aligned} \mathbf {F}=h(\alpha , \beta )\nabla \alpha \times \nabla \beta , \end{aligned}$$
where \(h=\partial _{z_1} f\partial _{z_2} g-\partial _{z_2} f\partial _{z_1} g\) and \((z_1,z_2)\) are the coordinates in \(\mathbb {C}^2\). Since f and g are arbitrary holomorphic functions, we obtain a null field for any holomorphic function \(h:\mathbb {C}^2\rightarrow \mathbb {C}\).
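For context, here is a short verification, not spelled out in the text, of why such fields satisfy the nullness condition (3) for all time. Using Eqs. (7) and (9) and the complex bilinear dot product,

$$\begin{aligned} \mathbf {F}\cdot \mathbf {F}=h^2\,(\nabla \alpha \times \nabla \beta )\cdot (\nabla \alpha \times \nabla \beta )=\mathrm {i}h^2\,\big (\partial _t\alpha \,\nabla \beta -\partial _t\beta \,\nabla \alpha \big )\cdot (\nabla \alpha \times \nabla \beta )=0, \end{aligned}$$

since a scalar triple product with a repeated factor vanishes. Because \(\mathbf {F}\cdot \mathbf {F}=\mathbf {E}\cdot \mathbf {E}-\mathbf {B}\cdot \mathbf {B}+2\mathrm {i}\,\mathbf {E}\cdot \mathbf {B}\), this is exactly condition (3) at every time t.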
Bateman's construction also has a description in terms of spinors, which connects the construction of knots in electromagnetism to knotted solutions of the Weyl and the Dirac equations [5]. However, little is known in this direction of research.
Kedia et al. [26] used Bateman's construction to find concrete examples of electromagnetic fields with knotted electric and magnetic field lines. In their work both the electric and the magnetic field lines take the shape of torus knots and links. They consider
$$\begin{aligned} \alpha&=\frac{x^2+y^2+z^2-t^2-1+2\mathrm {i}z}{x^2+y^2+z^2-(t-\mathrm {i})^2},\nonumber \\ \beta&=\frac{2(x-\mathrm {i}y)}{x^2+y^2+z^2-(t-\mathrm {i})^2}, \end{aligned}$$
where x, y and z are the three spatial coordinates and t represents time. It is a straightforward calculation to check that \(\alpha \) and \(\beta \) satisfy Eq. (7). Note that for any value of \(t=t_*\), the function \((\alpha ,\beta )|_{t=t_*}:\mathbb {R}^3\rightarrow \mathbb {C}^2\) gives a diffeomorphism from \(\mathbb {R}^3\cup \{\infty \}\) to \(S^{3}\subset \mathbb {C}^2\).
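A minimal computational sketch of that "straightforward calculation" (my own addition, assuming SymPy is available; it simply checks Eq. (7) symbolically for the \(\alpha ,\beta \) of Eq. (10)):

import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
denom = x**2 + y**2 + z**2 - (t - sp.I)**2
alpha = (x**2 + y**2 + z**2 - t**2 - 1 + 2*sp.I*z) / denom
beta = 2*(x - sp.I*y) / denom

def grad(f):
    # spatial gradient with respect to (x, y, z)
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

lhs = grad(alpha).cross(grad(beta))
rhs = sp.I*(sp.diff(alpha, t)*grad(beta) - sp.diff(beta, t)*grad(alpha))

# Each component should simplify to zero, confirming Eq. (7).
print(sp.simplify(lhs - rhs))

Each entry of the printed vector should simplify to the zero expression; the simplification of these rational functions may take a few seconds.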
The construction of stable knots and links in electromagnetic fields therefore comes down to finding holomorphic functions f and g, or equivalently one holomorphic function h. Since the image of \((\alpha , \beta )\) is \(S^3\), it is not necessary for these functions to be holomorphic (or even defined) on all of \(\mathbb {C}^2\). It suffices to find functions that are holomorphic on an open neighbourhood of \(S^3\) in \(\mathbb {C}^2\).
Kedia et al. find that for \(f(z_1,z_2)=z_1^p\) and \(g(z_1,z_2)=z_2^q\) the resulting electric and magnetic fields both contain field lines that form the (p, q)-torus link \(T_{p,q}\). Hence there is a construction of flow lines in the shape of torus links that are stable for all time.
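For the record, the corresponding function h from Eq. (9) can be written down explicitly (a one-line computation included here for convenience): for this choice of f and g,

$$\begin{aligned} h(z_1,z_2)=\partial _{z_1}f\,\partial _{z_2}g-\partial _{z_2}f\,\partial _{z_1}g=pq\,z_1^{p-1}z_2^{q-1},\qquad \text {so}\qquad \mathbf {F}=pq\,\alpha ^{p-1}\beta ^{q-1}\,\nabla \alpha \times \nabla \beta . \end{aligned}$$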
It was wrongly stated in [11, 26] that for \(t=0\) the map \((\alpha ,\beta )\) in Eq. (10) is the inverse of the standard stereographic projection. In fact, the inverse of the standard stereographic projection is given by \((u,v):\mathbb {R}^3\rightarrow S^3\),
$$\begin{aligned} u&=\frac{x^2+y^2+z^2-1+2\mathrm {i}z}{x^2+y^2+z^2+1},\nonumber \\ v&=\frac{2(x+\mathrm {i}y)}{x^2+y^2+z^2+1}, \end{aligned}$$
so that \((\alpha ,\beta )|_{t=0}\) is actually the inverse of the standard stereographic projection followed by a mirror reflection that sends \(\text {Im}(z_2)\) to \(-\text {Im}(z_2)\) or equivalently it is a mirror reflection in \(\mathbb {R}^3\) along the \(y=0\)-plane followed by the inverse of the standard stereographic projection.
Kedia et al.'s choice of f and g was (in their own words) 'guided' by the hypersurface \(z_1^p\pm z_2^q=0\). Complex hypersurfaces like this and their singularities have been extensively studied by Milnor and others [12, 31] and it is well-known that the hypersurface intersects \(S^3\) in the (p, q)-torus knot \(T_{p,q}\). Even though this made the choice of f and g somewhat intuitive (at least for Kedia et al.), there seems to be no obvious relation between the hypersurface and the electromagnetic field that would enable us to generalise their approach. Since their fields contain the links \(T_{p,q}\) in \(\mathbb {R}^3\), the corresponding curves on \(S^3\) are actually the mirror image \(T_{p,-q}\). Therefore, it seems more plausible that (if there is a connection to complex hypersurfaces at all) the relevant complex curve is \(z_1^pz_2^q-1=0\), which intersects a 3-sphere of an appropriate radius in \(T_{p,-q}\) [35]. However, in contrast to Milnor's hypersurfaces, this intersection is totally tangential, i.e., at every point of intersection the tangent plane of the hypersurface lies in the tangent space of the 3-sphere. This is an interesting property that plays an important role in the generalisation of the construction to arbitrarily complex link types in the following sections.
Since Bateman fields are null, their field lines are transported by the normalised Poynting field. With our choice of \(\alpha \) and \(\beta \) in Eq. (10) the time evolution of the field lines can be made quite explicit. Recall that \((\alpha ,\beta )|_{t=t_*}\) is a diffeomorphism between \(\mathbb {R}^3\cup \{\infty \}\) and \(S^3\) mapping the point at infinity to \((1,0)\in S^3\subset \mathbb {C}^2\) for all \(t_*\). We define \(\varphi _{t_*}:=(\alpha ,\beta )|_{t=t_*}\) and \(\varPhi _{t}:=\varphi _t^{-1}\circ \varphi _0\). We find that
$$\begin{aligned}&(\varPhi _{t=t_*})_*\text{ Re }(\nabla \alpha |_{t=0}\times \nabla \beta |_{t=0})\nonumber \\ {}&\quad \qquad =\frac{(1+x^2+y^2+(t_*-z)^2)^2}{t_*^4-2t_*^2(|q|^2-1)+(|q|^2+1)^2}\text{ Re }(\nabla \alpha |_{t=t_*}\times \nabla \beta |_{t=t_*}), \end{aligned}$$
$$\begin{aligned}&(\varPhi _{t=t_*})_*\text{ Im }(\nabla \alpha |_{t=0}\times \nabla \beta |_{t=0})\nonumber \\ {}&\quad \qquad =\frac{(1+x^2+y^2+(t_*-z)^2)^2}{t_*^4-2t_*^2(|q|^2-1)+(|q|^2+1)^2}\text{ Im }(\nabla \alpha |_{t=t_*}\times \nabla \beta |_{t=t_*}), \end{aligned}$$
where \(|q|^{2}=x^{2}+y^{2}+z^{2}\). For the electric and magnetic part of a general Bateman field with our choice of \(\alpha \) and \(\beta \) this implies
$$\begin{aligned} (\varPhi _{t=t_*})_*\mathbf {E}|_{t=0}&=(\varPhi _{t=t_*})_*(\text {Re}(h(\alpha |_{t=0},\beta |_{t=0}))\text {Re}(\nabla \alpha |_{t=0}\times \nabla \beta |_{t=0})\nonumber \\&\quad -\,\text {Im}(h(\alpha |_{t=0},\beta |_{t=0}))\text {Im}(\nabla \alpha |_{t=0}\times \nabla \beta |_{t=0}))\nonumber \\&=\text {Re}(h(\alpha |_{t=t_*},\beta |_{t=t_*}))(\varPhi _{t=t_*})_*\text {Re}(\nabla \alpha |_{t=0}\times \nabla \beta |_{t=0})\nonumber \\&\quad -\text {Im}(h(\alpha |_{t=t_*},\beta |_{t=t_*}))(\varPhi _{t=t_*})_*\text {Im}(\nabla \alpha |_{t=0}\times \nabla \beta |_{t=0})\nonumber \\&=\frac{(1+x^2+y^2+(t_*-z)^2)^2}{t_*^4-2t_*^2(x^2+y^2+z^2-1)+(x^2+y^2+z^2+1)^2}\mathbf {E}|_{t=t_*} \end{aligned}$$
$$\begin{aligned} (\varPhi _{t=t_*})_*\mathbf {B}|_{t=0}&=(\varPhi _{t=t_*})_*(\text {Re}(h(\alpha |_{t=0},\beta |_{t=0}))\text {Im}(\nabla \alpha |_{t=0}\times \nabla \beta |_{t=0})\nonumber \\&\quad +\text {Im}(h(\alpha |_{t=0},\beta |_{t=0}))\text {Re}(\nabla \alpha |_{t=0}\times \nabla \beta |_{t=0}))\nonumber \\&=\text {Re}(h(\alpha |_{t=t_*},\beta |_{t=t_*}))(\varPhi _{t=t_*})_*\text {Im}(\nabla \alpha |_{t=0}\times \nabla \beta |_{t=0})\nonumber \\&\quad +\text {Im}(h(\alpha |_{t=t_*},\beta |_{t=t_*}))(\varPhi _{t=t_*})_*\text {Re}(\nabla \alpha |_{t=0}\times \nabla \beta |_{t=0})\nonumber \\&=\frac{(1+x^2+y^2+(t_*-z)^2)^2}{t_*^4-2t_*^2(x^2+y^2+z^2-1)+(x^2+y^2+z^2+1)^2}\mathbf {B}|_{t=t_*}. \end{aligned}$$
In both cases we have used that
$$\begin{aligned} h(\alpha |_{t=0}(\varPhi _{t=t_*}^{-1}(x,y,z)),\beta |_{t=0}(\varPhi _{t=t_*}^{-1}(x,y,z)))&=h\circ \varphi _{t=0}\circ (\varphi _{t=t_*}^{-1}\varphi _{t=0})^{-1}(x,y,z)\nonumber \\&=h\circ \varphi _{t=t_*}(x,y,z)\nonumber \\&=h(\alpha |_{t=t_*}(x,y,z),\beta |_{t=t_*}(x,y,z)). \end{aligned}$$
This shows that the electric field and the magnetic field at time \(t=t_*\) are non-zero multiples of the pushforward of the fields at \(t=0\) by the diffeomorphism \(\varPhi _{t=t_*}\). In particular, it provides another argument that the zeros of \(\mathbf {F}\) are also transported by a smooth family of diffeomorphisms and no field line ever changes its topology. One interpretation of this calculation is that the field lines of the electric part \(\mathbf {E}\) or magnetic part \(\mathbf {B}\) of a (time-dependent) Bateman field correspond to the field lines of one fixed vector field on the 3-sphere, \((\varphi _0)_*\mathbf {E}|_{t=0}\) and \((\varphi _0)_*\mathbf {B}|_{t=0}\), respectively, where the time evolution in \(\mathbb {R}^3\) corresponds to a smooth change of the diffeomorphism \(\varphi _t^{-1}\) that is used to project the field lines from \(S^3\) to \(\mathbb {R}^3\).
Note that \(\varPhi _{t}\) is independent of the holomorphic function h, that is, we have the same family of smooth diffeomorphisms for all of these Bateman fields. This should not really surprise us, since it is already known that the field lines are transported along the (normalised) Poynting vector field, which for our choice of \(\alpha \) and \(\beta \) does not depend on h either.
Since the energy density \(W=(\mathbf {E}^2+\mathbf {B}^2)/2\) for the Hopfion solution \((h=1)\) at \(t=0\) is \(\tfrac{16}{(1+x^2+y^2+z^2)^4}\), the energy density for a general Bateman field is
$$\begin{aligned} W=\frac{16(\text {Re}(h(\alpha ,\beta ))^2+\text {Im}(h(\alpha ,\beta ))^2)}{(1+x^2+y^2+z^2)^4}. \end{aligned}$$
In particular, all Bateman fields have finite energy and since \(\mathbf {E}\) and \(\mathbf {B}\) have the same norm, they both go to 0 at spatial infinity as \(r^{-4}\) (if \(h(1,0)\ne 0\)) or faster (if \(h(1,0)=0\)). While the energy density depends on t, the asymptotic behaviour of \(\mathbf {E}\) and \(\mathbf {B}\) at spatial infinity is not affected by this.
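The quoted Hopfion energy density is easy to reproduce numerically by evaluating \(\mathbf {F}=\nabla \alpha \times \nabla \beta \) (i.e. \(h=1\)) with finite differences at \(t=0\) and comparing \(W=(\mathbf {E}^2+\mathbf {B}^2)/2\) with \(16/(1+x^2+y^2+z^2)^4\). A self-contained sketch; the sample points and step size are arbitrary choices:

```python
import numpy as np

def grad(f, x, y, z, t=0.0, h=1e-6):
    """Central-difference spatial gradient of a complex-valued function f(x, y, z, t)."""
    return np.array([
        (f(x + h, y, z, t) - f(x - h, y, z, t)) / (2*h),
        (f(x, y + h, z, t) - f(x, y - h, z, t)) / (2*h),
        (f(x, y, z + h, t) - f(x, y, z - h, t)) / (2*h),
    ])

def alpha(x, y, z, t):
    r2 = x**2 + y**2 + z**2
    return (r2 - t**2 - 1 + 2j*z) / (r2 - (t - 1j)**2)

def beta(x, y, z, t):
    r2 = x**2 + y**2 + z**2
    return 2*(x - 1j*y) / (r2 - (t - 1j)**2)

for (x, y, z) in [(0.0, 0.0, 0.0), (0.3, -0.7, 1.1), (1.5, 0.2, -0.4)]:
    F = np.cross(grad(alpha, x, y, z), grad(beta, x, y, z))   # Hopfion: h = 1
    E, B = F.real, F.imag
    W = (np.dot(E, E) + np.dot(B, B)) / 2
    W_expected = 16 / (1 + x**2 + y**2 + z**2)**4
    assert abs(W - W_expected) < 1e-4 * W_expected
```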
Contact structures and Legendrian links
A contact structure on a 3-manifold M is a smooth, completely non-integrable plane distribution \(\xi \subset TM\) in the tangent bundle of M. It can be given as the kernel of a differential 1-form, a contact form \(\alpha \), for which the non-integrability condition reads
$$\begin{aligned} \alpha \wedge \mathrm {d}\alpha \ne 0. \end{aligned}$$
It is a convention to denote contact forms by \(\alpha \). This should not be confused with the first component of the map \((\alpha ,\beta )\) in Eq. (10). Within this subsection \(\alpha \) refers to a contact form, in all other sections it refers to Eq. (10). The choice of \(\alpha \) for a given \(\xi \) is not unique, but the non-integrability property is independent of this choice.
In other words, for every point \(p\in M\) we have a plane (a 2-dimensional linear subspace) \(\xi _p\) in the tangent space \(T_p(M)\) given by \(\xi _p=\text {ker}_p\alpha \), which is the kernel of \(\alpha \) when \(\alpha \) is regarded as a map \(T_pM\rightarrow \mathbb {R}\). The non-integrability condition ensures that there is a certain twisting of these planes throughout M. We call the pair of manifold M and contact structure \(\xi \) a contact manifold \((M,\xi )\).
The standard contact structure \(\xi _0\) on \(S^3\) is given by the contact form
$$\begin{aligned} \alpha _0=\sum _{j=1}^2 (x_j\mathrm {d}y_j-y_j\mathrm {d}x_j), \end{aligned}$$
where we write the complex coordinates \((z_1,z_2)\) of \(\mathbb {C}^2\) in terms of their real and imaginary parts: \(z_j=x_j+\mathrm {i}y_j\).
There are two interesting geometric interpretations of the standard contact structure \(\xi _0\). Firstly, the planes are precisely the normals to the fibers of the Hopf fibration \(S^3\rightarrow S^2\). Secondly, the planes are precisely the complex tangent lines to \(S^3\).
A link L in a contact manifold \((M,\xi )\) is called a Legendrian link with respect to the contact structure \(\xi \), if it is everywhere tangent to the contact planes, i.e., \(T_pL\subset \xi _p\). It is known that every link type in \(S^3\) has representatives that are Legendrian. In other words, for every link L in \(S^3\) there is a Legendrian link with respect to the standard contact structure on \(S^3\) that is ambient isotopic to L.
More details on contact geometry and the connection to knot theory can be found in [21, 22].
Legendrian Field Lines
In this section we would like to point out some observations on Bateman's construction. Bateman's construction turns the problem of constructing null fields with knotted field lines into a problem of finding appropriate holomorphic functions \(h:\mathbb {C}^2\rightarrow \mathbb {C}\). Our observations turn this into the question whether for a given Legendrian link L with respect to the standard contact structure on \(S^3\) a certain function defined on L admits a holomorphic extension.
Lemma 1
Let \(h:\mathbb {C}^2\rightarrow \mathbb {C}\) be a function that is holomorphic on an open neighbourhood of \(S^3\) and let \(\mathbf {F}=h(\alpha ,\beta )\nabla \alpha \times \nabla \beta \) be the corresponding electromagnetic field with \((\alpha ,\beta )\) as in Eq. (10). Suppose L is a set of closed magnetic field lines or a set of closed electric field lines of \(\mathbf {F}\) at time \(t=0\). Then \((\alpha ,\beta )|_{t=0}(L)\) is a Legendrian link with respect to the standard contact structure on \(S^3\).
It is known that all fields that are constructed with the same choice of \((\alpha ,\beta )\) have the same Poynting field, independent of h. For \((\alpha ,\beta )\) as in Eq. (10) with \(t=0\) its pushforward by \((\alpha ,\beta )|_{t=0}\) is tangent to the fibers of the Hopf fibration. By the definition of the Poynting field, the electric and magnetic field are orthogonal to the Poynting field and it is a simple calculation that their pushforwards by \((\alpha ,\beta )|_{t=0}\) are orthogonal as well. Therefore, they must be normal to the fibers of the Hopf fibration. Hence the pushforward of all electric and magnetic field lines by \((\alpha ,\beta )\) are tangent to the standard contact structure on \(S^3\). In particular, any closed electric or magnetic field line is a Legendrian link with respect to the standard contact structure. \(\quad \square \)
A more general statement of Lemma 1 is proven in [11] and follows immediately from the existence of \(\varPhi _t\) in Sect. 2.2. It turns out that \((\alpha ,\beta )\) define a contact structure for each value of t, where time evolution is given by a 1-parameter family of contactomorphisms, and all sets of closed flow lines at a fixed moment in time are (the images in \(\mathbb {R}^3\) of) Legendrian links with respect to the corresponding contact structure.
Lemma 1 tells us that (the projection of) closed field lines form Legendrian links. We would like to go in the other direction, starting with a Legendrian link and constructing a corresponding electromagnetic field for it.
We define the map \(\varphi =(\alpha ,\beta )|_{t=0}:\mathbb {R}^3\cup \{\infty \}\rightarrow S^3\). The particular choice of \((\alpha ,\beta )\) in Eq. (10) does not only determine a contact structure, but also provides us with an explicit orthonormal basis of the plane \(\xi _p\) in \(T_pS^3\) for all \(p\in S^3\backslash \{(1,0)\}\), given by
$$\begin{aligned} \xi _p=\text {span}\{v_1,v_2\} \end{aligned}$$
where \(v_1\) and \(v_2\) are given by
$$\begin{aligned} v_1&=\varphi _*\left( \frac{(x^2+y^2+z^2+1)^3}{8} \text {Re}\left( \nabla \alpha \bigr |_{t=0} \times \nabla \beta \bigr |_{t=0}\right) \right) \nonumber \\&=-x_2 \frac{\partial }{\partial x_1}+y_2\frac{\partial }{\partial y_1}+x_1\frac{\partial }{\partial x_2}-y_1\frac{\partial }{\partial y_2},\nonumber \\ v_2&=\varphi _*\left( \frac{(x^2+y^2+z^2+1)^3}{8}\text {Im}\left( \nabla \alpha \bigr |_{t=0} \times \nabla \beta \bigr |_{t=0}\right) \right) \nonumber \\&=-y_2 \frac{\partial }{\partial x_1}-x_2\frac{\partial }{\partial y_1}+y_1\frac{\partial }{\partial x_2}+x_1\frac{\partial }{\partial y_2}. \end{aligned}$$
They are pushforwards of multiples of \(\text {Re}(\nabla \alpha \times \nabla \beta )|_{t=0}\) and \(\text {Im}(\nabla \alpha \times \nabla \beta )|_{t=0}\) by \(\varphi \). It is easy to see from these expressions that \(v_1\) and \(v_2\) are orthonormal and span the contact plane \(\xi _p\) at each point \(p\in S^3\backslash \{(1,0)\}\). The point \(p=(1,0)\) is excluded, since it is \((1,0)=\varphi (\infty )\).
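Orthonormality of \(v_1\) and \(v_2\), their tangency to \(S^3\) and the fact that they span the contact plane (i.e. are annihilated by \(\alpha _0\)) follow by direct computation from the expressions above. A short numerical check at a random point of \(S^3\), purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.normal(size=4)
p /= np.linalg.norm(p)                     # a random point (x1, y1, x2, y2) on S^3
x1, y1, x2, y2 = p

# components in the basis (d/dx1, d/dy1, d/dx2, d/dy2)
v1 = np.array([-x2,  y2, x1, -y1])
v2 = np.array([-y2, -x2, y1,  x1])

def alpha0(p, v):
    """Standard contact form alpha_0 = sum_j (x_j dy_j - y_j dx_j) applied to v at p."""
    x1, y1, x2, y2 = p
    return x1*v[1] - y1*v[0] + x2*v[3] - y2*v[2]

assert np.isclose(np.dot(v1, v1), 1) and np.isclose(np.dot(v2, v2), 1)
assert np.isclose(np.dot(v1, v2), 0)                                   # orthonormal
assert np.isclose(np.dot(v1, p), 0) and np.isclose(np.dot(v2, p), 0)   # tangent to S^3
assert np.isclose(alpha0(p, v1), 0) and np.isclose(alpha0(p, v2), 0)   # span the contact plane
```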
A magnetic field \(\mathbf {B}\) constructed using Bateman's method satisfies
$$\begin{aligned} \mathbf {B}&=\text {Im}(\mathbf {F})\nonumber \\&=\text {Re}(h(\alpha ,\beta ))\text {Im}(\nabla \alpha \times \nabla \beta )+\text {Im}(h(\alpha ,\beta ))\text {Re}(\nabla \alpha \times \nabla \beta ), \end{aligned}$$
while the electric field \(\mathbf {E}\) satisfies
$$\begin{aligned} \mathbf {E}&=\text {Re}(\mathbf {F})\nonumber \\&=\text {Re}(h(\alpha ,\beta ))\text {Re}(\nabla \alpha \times \nabla \beta )-\text {Im}(h(\alpha ,\beta ))\text {Im}(\nabla \alpha \times \nabla \beta ). \end{aligned}$$
In particular, both fields are at every point a linear combination of \(\text {Re}(\nabla \alpha \times \nabla \beta )\) and \(\text {Im}(\nabla \alpha \times \nabla \beta )\) and their pushforwards by \(\varphi \) are linear combinations of \(v_1\) and \(v_2\). The fact that \(v_1\) and \(v_2\) are a basis for the contact plane \(\xi _p\) for all \(p\in S^3\backslash \{(1,0)\}\) implies that Eqs. (22) and (23) provide an alternative proof of Lemma 1. Hence every closed field line must be a Legendrian knot and the holomorphic function h describes the coordinates of the field with respect to this preferred basis.
Suppose now that we have an n-component Legendrian link \(L=L_1\cup L_2\cup \ldots \cup L_n\) with respect to the standard contact structure on \(S^3\), with \((1,0)\not \in L\), a subset \(I\subset \{1,2,\ldots ,n\}\), and a non-zero section X of its tangent bundle \(TL\subset \xi _0\subset TS^3\). We can define a complex-valued function \(H:L\rightarrow \mathbb {C}\) given by
$$\begin{aligned} \text {Re}(H(z_1,z_2))&=X_{(z_1,z_2)}\cdot v_1,&\text {for all }(z_1,z_2)\in L_i, i\in I\nonumber \\ \text {Im}(H(z_1,z_2))&=-X_{(z_1,z_2)} \cdot v_2,&\text {for all }(z_1,z_2)\in L_i, i\in I,\nonumber \\ \text {Re}(H(z_1,z_2))&=X_{(z_1,z_2)}\cdot v_2,&\text {for all }(z_1,z_2)\in L_i, i\notin I\nonumber \\ \text {Im}(H(z_1,z_2))&=X_{(z_1,z_2)} \cdot v_1,&\text {for all }(z_1,z_2)\in L_i, i\notin I, \end{aligned}$$
where \(\cdot \) denotes the standard scalar product in \(\mathbb {R}^4=T_{(z_1,z_2)}\mathbb {C}^2\).
If there is an open neighbourhood U of \(S^3\subset \mathbb {C}^2\) and a holomorphic function \(h:U\rightarrow \mathbb {C}\) with \(h|_L=H\), then the corresponding electromagnetic field \(\mathbf {F}=h(\alpha , \beta )\nabla \alpha \times \nabla \beta \) at \(t=0\) has closed field lines ambient isotopic to (the mirror image of) L, with closed electric field lines in the shape of (the mirror image of) \(\bigcup _{i\in I}L_i\) and magnetic field lines in the shape of (the mirror image of) \(\bigcup _{i\notin I}L_i\).
For every point \(q\in \varphi ^{-1}(\bigcup _{i\notin I}L_i)\) we have
$$\begin{aligned} \mathbf {B}|_{t=0}(q)&=\left( \text {Re}(h(\alpha ,\beta ))\text {Im}(\nabla \alpha \times \nabla \beta )\right. \nonumber \\&\quad \left. +\text {Im}(h(\alpha ,\beta ))\text {Re}(\nabla \alpha \times \nabla \beta )\right) \Bigr |_{t=0,(x,y,z)=q}\nonumber \\&=\frac{8}{(|q|^2+1)^3}\left( \text {Re}(H(\alpha ,\beta ))(\varphi ^{-1})_*(v_2)\right. \nonumber \\&\quad \left. +\text {Im}(H(\alpha ,\beta ))(\varphi ^{-1})_*(v_1)\right) \Bigr |_{t=0,(x,y,z)=q}\nonumber \\&=\frac{8}{(|q|^2+1)^3} (\varphi ^{-1})_*(X_{(\alpha ,\beta )})\Bigr |_{t=0,(x,y,z)=q}, \end{aligned}$$
where \(|\cdot |\) denotes the Euclidean norm in \(\mathbb {R}^3\). The second equality follows from \(h|_L=H\) and Eq. (21). The last equality follows from the orthonormality of the basis \(\{v_1,v_2\}\), the definition of H and the fact that L is Legendrian. Equation (25) states that at \(t=0\) the field \(\mathbf {B}\) is everywhere tangent to \(\varphi ^{-1}(\bigcup _{i\notin I}L_i)\). In particular, at \(t=0\) the field \(\mathbf {B}\) has a set of closed flow lines that is ambient isotopic to the mirror image of \(\bigcup _{i\notin I}L_i\) (cf. Remark 1).
Similarly, for every \(q\in \varphi ^{-1}(\bigcup _{i\in I}L_i)\) we have
$$\begin{aligned} \mathbf {E}|_{t=0}(q)=&\left( \text {Re}(h(\alpha ,\beta ))\text {Re}(\nabla \alpha \times \nabla \beta )\right. \nonumber \\&\left. -\text {Im}(h(\alpha ,\beta ))\text {Im}(\nabla \alpha \times \nabla \beta )\right) \Bigr |_{t=0,(x,y,z)=q}\nonumber \\ =&\frac{8}{(|q|^2+1)^3}\left( \text {Re}(H(\alpha ,\beta ))(\varphi ^{-1})_*(v_1)\right. \nonumber \\&\left. -\text {Im}(H(\alpha ,\beta ))(\varphi ^{-1})_*(v_2)\right) \Bigr |_{t=0,(x,y,z)=q}\nonumber \\&=\frac{8}{(|q|^2+1)^3} (\varphi ^{-1})_*(X_{(\alpha ,\beta )})\Bigr |_{t=0,(x,y,z)=q}. \end{aligned}$$
The same arguments as above imply that at \(t=0\) the field \(\mathbf {E}\) is everywhere tangent to \(\varphi ^{-1}(\bigcup _{i\in I}L_i)\), so that at \(t=0\) the field \(\mathbf {E}\) has a set of closed flow lines that is ambient isotopic to \(\bigcup _{i\in I}L_i\). \(\quad \square \)
Since the constructed fields are null for all time, the topology of the electric and magnetic field lines does not change, and the fields contain L for all time. We hence have the following corollary.
Let \(L=L_1\cup L_2\cup \ldots \cup L_n\) be an n-component Legendrian link with respect to the contact structure in \(S^3\) with \(I\subset \{1,2,\ldots ,n\}\) and a non-vanishing section of its tangent bundle such that the corresponding function \(H:L\rightarrow \mathbb {C}\) allows a holomorphic extension \(h:U\rightarrow \mathbb {C}\) to an open neighbourhood U of \(S^3\). Then Theorem 1 holds for the mirror image of L and the subset I with \(\mathbf {F}=h(\alpha ,\beta )\nabla \alpha \times \nabla \beta \).
We already know that the time evolution of all these Bateman fields is determined by the same smooth family of diffeomorphisms \(\varPhi _t\). Therefore, what we have to show in order to prove Theorem 1 is that every link type (with every choice of a subset of its components) has a Legendrian representative as in the corollary.
The Proof of the Theorem
We have seen in the previous section that Theorem 1 can be proven by showing that every link type has a Legendrian representative for which a certain function has a holomorphic extension. Questions like this, regarding the existence of holomorphic extensions of functions defined on a subset of \(\mathbb {C}^m\), are important in the study of complex analysis in m variables and are in general much more challenging when \(m>1\). In this section, we first prove that every link type has a Legendrian representative with certain properties regarding real analyticity. We then review a result from complex analysis by Burns and Stout that guarantees that for this class of real analytic submanifolds of \(\mathbb {C}^2\) contained in \(S^3\) the desired holomorphic extension exists, thereby proving Theorem 1.
Every link type has a real analytic Legendrian representative L that admits a non-zero section of its tangent bundle, such that for any given subset I of its set of components the corresponding function \(H:L\rightarrow \mathbb {C}\) as in Eq. (24) is real analytic.
The lemma is essentially proved in [36], where it is shown that every link has a Legendrian representative L (with respect to the contact structure in \(S^3\)) that is the image of a smooth embedding, given by a Laurent polynomial \(\eta _i=(\eta _{i,1},\eta _{i,2}):S^1\rightarrow S^3\subset \mathbb {C}^2\) in \(\mathrm {e}^{\mathrm {i}\chi }\) for each component \(L_i\). The set of functions \(\eta _i\) in [36] is obtained by approximating some smooth embedding, whose image is a Legendrian link \(L'\) of the same link type as L. It is a basic exercise in contact topology to show that we can assume that \((1,0)\not \in L'\) [21] and hence also \((1,0)\not \in L\).
Since each \(\eta _i\) is a real analytic embedding, the inverse \(\eta _i^{-1}:L\rightarrow S^1\) is real analytic in \(x_1\), \(y_1\), \(x_2\) and \(y_2\) for all \(i=1,2,\ldots ,n\). Likewise \(\partial _\chi \eta _i:S^1\rightarrow TL_i\) is real analytic in \(\chi \) and non-vanishing, since \(\eta _i\) is an embedding. It follows that the composition \(X:=(\partial _\chi \eta _i)\circ \eta _i^{-1}:L_i\rightarrow TL_i\) is a real analytic non-vanishing section of the tangent bundle of \(L_i\) for all \(i=1,2,\ldots ,n\). Equations (24) and (21) then directly imply that H is also real analytic, no matter which subset I of the components of L is chosen. \(\quad \square \)
It was shown in [35] that a link L in \(S^3\) is a real analytic Legendrian link if and only if it is a totally tangential \(\mathbb {C}\)-link, i.e., L arises as the intersection of a complex plane curve and \(S^3\) that is tangential at every point. Recall from Remark 1 that the torus links constructed in [26] arise in this way, where the complex plane curve is \(z_1^pz_2^q-1=0\) and the radius of the 3-sphere is chosen appropriately. Links that arise as transverse intersections of complex plane curves and the 3-sphere, so-called transverse \(\mathbb {C}\) -links or, equivalently, quasipositive links, have been studied as stable vortex knots in null electromagnetic fields in [11].
Following Burns and Stout [13] we call a real analytic submanifold \(\Sigma \) of \(\mathbb {C}^2\) that is contained in \(S^3\) an analytic interpolation manifold (relative to the 4-ball B) if every real analytic function \(\Sigma \rightarrow \mathbb {C}\) is the restriction to \(\Sigma \) of a function that is holomorphic on some neighbourhood of B. The neighbourhood depends on the function in question.
(Burns–Stout [13]). \(\Sigma \) is an analytic interpolation manifold if and only if \(T_p(\Sigma )\subset T_p^{\mathbb {C}}(S^3)\) for every \(p\in \Sigma \), where \(T_p^{\mathbb {C}}(S^3)\) denotes the maximal complex subspace of \(T_p(S^3)\).
The result stated in [13] holds in fact for more general ambient spaces and their boundaries, namely strictly pseudo-convex domains with smooth boundaries. The open 4-ball B with boundary \(\partial B=S^3\) is easily seen to be an example of such a domain.
Proof of Theorem 1
By Lemma 2 every link type can be represented by a real analytic Legendrian link L. It is thus a real analytic submanifold of \(\mathbb {C}^2\) that is contained in \(S^3\). The condition \(T_pL\subset T_p^{\mathbb {C}}(S^3)\) is equivalent to L being a Legendrian link with respect to the standard contact structure on \(S^3\). Hence L is an analytic interpolation manifold. Since Lemma 2 also implies that for every choice of I the function \(H:L\rightarrow \mathbb {C}\) can be taken to be real analytic, Theorem 2 implies that H is the restriction of a holomorphic function \(h:U\rightarrow \mathbb {C}\), where U is some neighbourhood of \(S^3\).
The discussion in Sect. 3 shows that the electromagnetic field
$$\begin{aligned} \mathbf {F}=h(\alpha ,\beta )\nabla \alpha \times \nabla \beta \end{aligned}$$
has a set of closed electric field lines in the shape of the mirror image of \(\bigcup _{i\in I}L_i\) and a set of closed magnetic field lines in the shape of the mirror image of \(\bigcup _{i\notin I}L_i\) at time \(t=0\). Since the constructed field is null for all time, \(\mathbf {F}\) contains these links for all time, which concludes the proof of Theorem 1, since every link has a mirror image. \(\quad \square \)
We showed that every link type arises as a set of stable electric and magnetic field lines in a null electromagnetic field. Since these fields are obtained via Bateman's construction, they share some properties with the torus link fields in [26]. They are for example shear-free and have finite energy.
However, since the proof of Theorem 1 only asserts the existence of such fields, via the existence of a holomorphic function h, other desirable properties of the fields in [26] are more difficult to investigate. The electric and magnetic field lines in [26] lie on the level sets of \(\text {Im}(\alpha ^p\beta ^q)\) and \(\text {Re}(\alpha ^p\beta ^q)\). At this moment, it is not clear (and doubtful) if the fields in Theorem 1 have a similar integrability property. It is, however, very interesting that the relevant function \(z_1^pz_2^q\), whose real/imaginary part is constant on integral curves of the (pushforward of the) magnetic/electric field, is (up to an added constant) exactly the complex plane curve whose totally tangential intersection with \(S^3\) gives the \((p,-q)\)-torus link. In light of this observation, we might conjecture about the fields in Theorem 1, which contain L, that if the electric/magnetic field lines really lie on the level sets of a pair of real functions, then the real and imaginary parts of F would be natural candidates for such functions, where \(F=0\) intersects \(S^3\) totally tangentially in the mirror image of L. So far \(z_1^pz_2^q-1=0\) is the only explicit example of such a function (resulting in the \((p,-q)\)-torus link) that the author is aware of, even though it is known to exist for any link. It is this lack of explicit examples and concrete constructions that makes it difficult to investigate this conjecture and other properties of the fields from Theorem 1.
Kedia et al. also obtained concrete expressions for the helicity of their fields [26]. Again, the lack of concrete examples makes it difficult to obtain analogous results.
Since the fields in Theorem 1 are obtained via Bateman's construction, all their Poynting fields at \(t=0\) are tangent to the fibers of the Hopf fibration. It is still an open problem to modify the construction, potentially via a different choice of \(\alpha \) and \(\beta \), to obtain knotted fields, whose underlying Poynting fields give more general Seifert fibrations.
Adams, C.C.: The Knot Book. W.H. Freeman and Company, New York (1994)
Atiyah, M.: The Geometry and Physics of Knots. Cambridge University Press, Cambridge (1990)
Bateman, H.: The Mathematical Analysis of Electrical and Optical Wave-Motion. Dover, New York (1915)
Berry, M.V.: Knotted zeros in the quantum states of hydrogen. Found. Phys. 31, 659–667 (2001)
Bialynicki-Birula, I.: New solutions of the Dirac, Maxwell, and Weyl equations from the fractional Fourier transform. Phys. Rev. D 103, 085001 (2021)
Bialynicki-Birula, I., Bialynicka-Birula, Z.: Motion of vortex lines in nonlinear wave mechanics. Phys. Rev. A 65, 014101 (2001)
Bialynicki-Birula, I., Bialynicka-Birula, Z.: The role of the Riemann–Silberstein vector in classical and quantum theories of electromagnetism. J. Phys. A 46, 053001 (2013)
Bialynicki-Birula, I., Bialynicka-Birula, Z., Śliwa, C.: Motion of vortex lines in quantum mechanics. Phys. Rev. A 61, 032110 (2000)
Bialynicki-Birula, I., Młoduchowski, T., Radożycki, T., Śliwa, C.: Vortex lines in motion. Acta Physica Polonica A 100(Supplement), 29–41 (2001)
Bode, B., Dennis, M.R.: Constructing a polynomial whose nodal set is any prescribed knot or link. J. Knot Theory Ramif. 28(1), 1850082 (2019)
Bode, B.: Quasipositive links and electromagnetism. In: Topology and its Applications (in press)
Brauner, K.: Zur Geometrie der Funktionen zweier komplexer Veränderlichen II, III, IV. Abh. Math. Sem. Univ. Hambg. 6, 8–54 (1928)
Burns, D., Jr., Stout, E.L.: Extending functions from submanifolds of the boundary. Duke Math. J. 43(2), 391–404 (1976)
Dennis, M.R., King, R.P., Jack, B., O'Holleran, K., Padgett, M.: Isolated optical vortex knots. Nat. Phys. 6, 118–121 (2010)
Enciso, A., Hartley, D., Peralta-Salas, D.: A problem of Berry and knotted zeros in the eigenfunctions of the harmonic oscillator. J. Eur. Math. Soc. 20, 301–314 (2018)
Enciso, A., Hartley, D., Peralta-Salas, D.: Dislocations of arbitrary topology in Coulomb eigenfunctions. Rev. Mat. Iberoam. 34, 1361–1371 (2018)
Enciso, A., Peralta Salas, D.: Knots and links in steady solutions of the Euler equations. Ann. Math. 175, 345–367 (2012)
Enciso, A., Peralta Salas, D.: Existence of knotted vortex tubes in steady Euler flows. Acta Math. 214, 61–134 (2015)
Enciso, A., Lucà, R., Peralta-Salas, D.: Vortex reconnections in the three dimensional Navier–Stokes equations. Adv. Math. 309, 452–486 (2017)
Enciso, A., Peralta-Salas, D.: Approximation theorems for the Schrödinger equation and quantum vortex reconnection. arXiv:1905.02467 (2019)
Etnyre, J.B.: Legendrian and transversal knots. In: Menasco, W., Thistlethwaite, M. (eds.) Handbook of Knot Theory, pp. 105–185. Elsevier Science, Amsterdam (2005)
Geiges, H.: An introduction to contact topology. In: Cambridge Studies in Advanced Mathematics, vol. 109, Cambridge University Press, Cambridge (2008)
Irvine, W.T.M.: Linked and knotted beams of light, conservation of helicity and the flow of null electromagnetic fields. J. Phys. A 43, 385203 (2010)
Kamien, R.D., Mosna, R.A.: The topology of dislocations in smectic liquid crystals. New J. Phys. 18, 053012 (2016)
Kauffman, L.H.: Knots and Physics. World Scientific, Singapore (1991)
Kedia, H., Bialynicki-Birula, I., Peralta-Salas, D., Irvine, W.T.M.: Tying knots in light fields. Phys. Rev. Lett. 111, 150404 (2013)
Kedia, H., Foster, D., Dennis, M.R., Irvine, W.T.M.: Weaving knotted vector field with tunable helicity. Phys. Rev. Lett. 117, 274501 (2016)
Kedia, H., Peralta-Salas, D., Irvine, W.T.M.: When do knots in light stay knotted? J. Phys. A 51, 025204 (2017)
Machon, T., Alexander, G.P.: Knotted defects in nematic liquid crystals. Phys. Rev. Lett. 113, 027801 (2014)
Machon, T., Alexander, G.P.: Global defect topology in nematic liquid crystals. Proc. R. Soc. A 472, 20160265 (2016)
Milnor, J.: Singular Points of Complex Hypersurfaces. Princeton University Press, Princeton (1968)
Proment, D., Onorato, M., Barenghi, C.F.: Vortex knots in a Bose–Einstein condensate. Phys. Rev. E 85(3), 036306 (2012)
Rañada, A.F.: A topological theory of the electromagnetic field. Lett. Math. Phys. 18, 97–106 (1989)
Rolfsen, D.: Knots and Links. Publish or Perish, Berkeley (1976)
Rudolph, L.: Totally tangential links of intersection of complex plane curves with round spheres. In: Apanasov, B.N., Neumann, W.D., Reid, A.W., Siebenmann, L. (eds.) Topology, vol. 90, pp. 343–349. De Gruyter, Berlin (1992)
Rudolph, L.: An obstruction to sliceness via contact geometry and classical gauge theory. Invent. Math. 119, 155–163 (1995)
Silberstein, L.: Elektromagnetische Grundgleichungen in bivectorieller Behandlung. Ann. Phys. 327, 579–586 (1907)
Silberstein, L.: Nachtrag zur Abhandlung über Elektromagnetische Grundgleichungen in bivectorieller Behandlung. Ann. Phys. 329, 783–784 (1907)
Sutcliffe, P.: Knots in the Skyrme–Faddeev model. Proc. R. Soc. A 463, 3001–3020 (2007)
Synge, J.L.: Relativity: The Special Theory. North-Holland Pub. Co., Amsterdam (1956)
Weber, H.: Die Partiellen Differential-Gleichungen Der Mathematischen Physik: Nach Riemanns Vorlesungen Bearbeitet von Heinrich Weber. Friedrich Vieweg und Sohn, Braunschweig (1901)
The author is grateful to Mark Dennis, Daniel Peralta-Salas and Vera Vertesi for helpful discussions.
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Instituto de Ciencias Matemáticas, Consejo Superior de Investigaciones Científicas, 28049, Madrid, Spain
Benjamin Bode
Correspondence to Benjamin Bode.
B.B. was supported by JSPS KAKENHI Grant Number JP18F18751, a JSPS Postdoctoral Fellowship as JSPS International Research Fellow, and the Severo Ochoa Postdoctoral Programme at ICMAT.
The author declares that he has no conflict of interest.
Communicated by P. Chrusciel.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Bode, B. Stable Knots and Links in Electromagnetic Fields. Commun. Math. Phys. 387, 1757–1770 (2021). https://doi.org/10.1007/s00220-021-04219-3
Issue Date: November 2021 | CommonCrawl |
What physically determines the point-set topology of a spacetime manifold?
Like any manifold, the pseudo-Riemannian manifold of spacetime in special or general relativity is a topological space, so there is a notion of open sets (or equivalently, neighborhoods) that allows us to talk about continuity, connectedness, etc. We implicitly use this structure whenever we frame the equivalence principle as saying that any spacetime "locally looks like Minkowski space" - the "locally" really means "in very small neighborhoods within the manifold". This point-set-topological structure is in a sense even more fundamental than anything relating to the metric, because any manifold has such a structure, whether or not it is pseudo-Riemannian (or even differentiable).
But what physically defines these open sets? For a Riemannian manifold (or more generally any metric space), in practice we always use the topology induced by the metric. But this doesn't work for a pseudo-Riemannian manifold, because the indefinite metric signature prevents it from being a metric space (in the mathematical sense). For example, if I emit a photon which "later" gets absorbed in the Andromeda Galaxy, then there is clearly a physical sense in which the endpoints of the null photon world line are "not infinitesimally close together", even though the spacetime interval separating them is zero (e.g. we could certainly imagine a physical field whose value varies significantly over the null trajectory). Is there a physical, coordinate- and Lorentz-invariant way to define the open sets of the spacetime?
(Note that I'm not talking about the global/algebraic topology of the spacetime, which is a completely separate issue.)
general-relativity spacetime metric-tensor topology causality
tparker
Related: math.stackexchange.com/q/1379732 – tparker May 2 '18 at 19:46
There's no need to define the topology of the manifold from the metric. While a nice feature, the topology of the manifold is defined primarily by its atlas, which, from a physical perspective, corresponds to the coordinates. A spacetime with a set of coordinates $\{ x^i \}$ will have a topology defined by the mapping of open sets from $\mathbb{R}^n$ to the manifold via the chart $\phi$.
If you wish, though, there are some things in general relativity that do define the spacetime topology.
A common construction of the spacetime topology is the Alexandrov topology. If your spacetime is strongly causal, the Alexandrov topology is equivalent to the manifold topology. Its basis is defined by the set of causal diamonds:
$$\{ C | \forall p, q \in M, C = I^+(p) \cap I^-(q) \}$$
It's easy to find counterexamples (the Alexandrov topology is just $\varnothing$ and $M$ for the Gödel spacetime), but if it is strongly causal, that will give you back the manifold topology.
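For concreteness, here is a small Python sketch (flat Minkowski space with signature $(-,+,+,+)$; purely an illustration of the basis sets, not of the general construction) that tests whether an event lies in a causal diamond $I^+(p) \cap I^-(q)$:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, signature (-,+,+,+)

def chronological_future(p, x):
    """True if x lies in I^+(p): the separation is timelike and future-directed."""
    dx = np.asarray(x) - np.asarray(p)
    return dx @ eta @ dx < 0 and dx[0] > 0

def in_causal_diamond(p, q, x):
    """True if x is in the Alexandrov basis set I^+(p) ∩ I^-(q)."""
    return chronological_future(p, x) and chronological_future(x, q)

p = (0.0, 0.0, 0.0, 0.0)                  # events given as (t, x, y, z)
q = (2.0, 0.0, 0.0, 0.0)
print(in_causal_diamond(p, q, (1.0, 0.2, 0.0, 0.0)))   # True: deep inside the diamond
print(in_causal_diamond(p, q, (1.0, 1.0, 0.0, 0.0)))   # False: on the light cone of p
```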
Slereah
There are lots of different possible ways of defining a manifold, some of which are not quite equivalent but all of which are equivalent for physics purposes. E.g., you can define a manifold in terms of a triangulation.
You could just start with the manifold, say defined using a triangulation. Then it has a definite topology, and only after that do you need to worry about putting a metric on it.
If you use the definition of a manifold in terms of a chart with smooth transition maps, then you get a topology for free from the charts. I think this is essentially what enumaris is saying.
But we should also be able to talk about these things in a coordinate-independent way. A metric can just exist on a manifold, regardless of whether the manifold was ever defined in terms of any coordinate charts. Then I think you still get a topology induced by the metric. This is because the metric defines geodesics, and it also defines affine parameters along those geodesics. So in your example of sending a photon to the Andromeda galaxy, the photon travels along a geodesic, we can define an affine parameter, and we can tell that the emission and reception of the photon do not lie in an arbitrarily small neighborhood of one another, because they lie at a finite affine distance.
Ben Crowell
Yes, I was thinking the same thing about using the affine parameter as a "distance" measure along null geodesics, but I couldn't find any references to topologies induced by pseudo-Riemannian "metrics". Do you know of any? – tparker May 2 '18 at 19:06
I don't know about "physically" what defines open sets since open sets are a (afaik) purely mathematical construction, but what defines the open sets on the spacetime manifold is simply the open sets in $\mathbb{R}^4$. Open sets in $\mathbb{R}^4$ gets mapped to open sets in the manifold by definition. The topology of manifolds is induced naturally this way.
enumaris
So for Minkowski space, the topology is generated by the balls $({\bf x} - {\bf x}_0)^2 + (t - t_0)^2 < r$, even though these sets aren't Lorentz invariant, since that's the Euclidean topology of the domain of the coordinate chart? – tparker May 2 '18 at 18:09
Yes, by definition, the induced topology on a manifold comes from the underlying Euclidean space to which the manifold locally maps. Maybe another way to put it is that the maps which define the atlas must be homeomorphisms. – enumaris May 2 '18 at 18:24
Perhaps this thread can shed some more light on your question: mathoverflow.net/q/266903 It appears that if we restrict ourselves to strongly causal spacetimes then the topology induced by the metric will be equal to the topology induced by the charts. Good question though, I had not come across this subtle detail before. – enumaris May 2 '18 at 19:51
The indefinite metric of a pseudo-Riemannian manifold does prevent it from being a metric space, and hence from using this route to define a topological space.
However, we can still relax the axioms of a metric space and still be able to define a topological space. In this case, we have the definition of a pseudo-metric and then the construction of the topology goes through as with the usual case.
Mathematically, a more important aporia (a missing but important property) is that manifolds do not have the exponential property:
If $M$ and $N$ are manifolds, then $M+N$ and $MN$ are manifolds (the former is the disjoint union and the latter the Cartesian product). However, whilst the exponential $M^N$ exists both at the point-set and topological level, it does not as a manifold. There are many attempts to get around this, but a method that seems to be finding increasing favour, and which was first put forward by Souriau and later named diffeology, uses techniques inspired by sheaf theory.
Could you clarify what you mean by "the construction of the topology goes through as with the usual case"? In the usual case, the topology is generated by the balls $|{\bf x} - {\bf x}_0| < r$. In the Lorentzian case, these "balls" always include the entire future and past light cones and their interiors. Surely open sets are allowed to be compact in the time direction. – tparker May 2 '18 at 23:38
@tparker: well, the same definition works; it's simplest to show that the pseudo-metric defines a topology as in this article (see under the sub-heading topology), and then show that a Lorentz metric is actually a pseudo-metric. Open sets are never compact. – Mozibur Ullah May 2 '18 at 23:56
I can't find any sub-heading "topology" in that article. – tparker May 2 '18 at 23:58
@tparker: Sorry, I linked to the wrong article; it's this one, on pseudo-metrics. – Mozibur Ullah May 3 '18 at 0:10
The metric tensor does not actually induce a pseudometric structure on a pseudo-Riemannian manifold, because a pseudometric is required to be nonnegative and the spacetime interval on a pseudo-Riemannian manifold can be negative. (It's unfortunate that the same prefix "pseudo-" is used in incompatible ways in "pseudo-Riemannian manifold" and in "pseudometric".) I take your point that we can still use the invariant interval to define a topology on an arbitrary pseudo-Riemannian manifold, but it seems so vastly coarser than the usual topology as to be pretty useless for any applications. – tparker May 3 '18 at 0:18
April 2001, 7(2): 431-445. doi: 10.3934/dcds.2001.7.431
Global structure of 2-D incompressible flows
Tian Ma 1, and Shouhong Wang 2,
Department of Mathematics, Sichuan University, Chengdu
Department of Mathematics, Indiana University, Bloomington, IN 47405
Revised November 2000 Published January 2001
The main objective of this article is to classify the structure of divergence-free vector fields on general two-dimensional compact manifolds with or without boundary. First we prove a Limit Set Theorem, Theorem 2.1, a generalized version of the Poincaré-Bendixson theorem for divergence-free vector fields on 2-manifolds of nonzero genus. Namely, the $\omega$ (or $\alpha$) limit set of a regular point of a regular divergence-free vector field is either a saddle point, or a closed orbit, or a closed domain with boundaries consisting of saddle connections. We call this closed domain an ergodic set. The ergodic set is then fully characterized in Theorem 4.1 and Theorem 5.1. Finally, we obtain a global structural classification theorem (Theorem 3.1), which amounts to saying that the phase structure of a regular divergence-free vector field consists of a finite union of circle cells, circle bands, ergodic sets and saddle connections.
Keywords: Compact manifold, global structural classification theorem, Limit Set Theorem.
Mathematics Subject Classification: 34D, 35Q35, 58F, 76, 86A1.
Citation: Tian Ma, Shouhong Wang. Global structure of 2-D incompressible flows. Discrete & Continuous Dynamical Systems - A, 2001, 7 (2) : 431-445. doi: 10.3934/dcds.2001.7.431
Universal atom interferometer simulation of elastic scattering processes
Florian Fitzek1,2,
Jan-Niclas Siemß1,2,
Stefan Seckmeyer1,
Holger Ahlers1,
Ernst M. Rasel1,
Klemens Hammerer2 &
Naceur Gaaloul1
Scientific Reports volume 10, Article number: 22120 (2020)
Atomic and molecular physics
Matter waves and particle beams
Quantum metrology
Ultracold gases
In this article, we introduce a universal simulation framework covering all regimes of matter-wave light-pulse elastic scattering. Applied to atom interferometry as a study case, this simulator solves the atom-light diffraction problem in the elastic case, i.e., when the internal state of the atoms remains unchanged. Taking this perspective, the light-pulse beam splitting is interpreted as a space and time-dependent external potential. In a shift from the usual approach based on a system of momentum-space ordinary differential equations, our position-space treatment is flexible and scales favourably for realistic cases where the light fields have an arbitrary complex spatial behaviour rather than being mere plane waves. Moreover, the solver architecture we developed is effortlessly extended to the problem class of trapped and interacting geometries, which has no simple formulation in the usual framework of momentum-space ordinary differential equations. We check the validity of our model by revisiting several case studies relevant to the precision atom interferometry community. We retrieve analytical solutions when they exist and extend the analysis to more complex parameter ranges in a cross-regime fashion. The flexibility of the approach, the insight it gives, its numerical scalability and accuracy make it an exquisite tool to design, understand and quantitatively analyse metrology-oriented matter-wave interferometry experiments.
The commonly used approach for treating light-pulse beam-splitter and mirror dynamics in matter-wave systems consists in solving a system of ordinary differential equations (ODE) with explicit couplings between the relevant momentum states.
This formulation starts by identifying the relevant diffraction processes and extracting their corresponding coupling terms in the ODE1,2. In the elastic scattering case, each pair of light plane waves can drive a set of two-photon transitions from one momentum class j to the next neighboring orders \(j \pm 2\). The presence of multiple couplings allows for higher order transitions, and the system is simplified by choosing a cutoff omitting small transition strengths. This ODE approach works well for simple cases, leading to analytical solutions in the deep Bragg and Raman-Nath regimes1,2. Using a perturbative treatment, it was generalised to the intermediate, so-called quasi-Bragg regime3. The numerical solution in this regime has been extended to the case of a finite momentum width4. In a different approach, Siemß et al.5 developed an analytic theory for Bragg atom interferometry based on the adiabatic theorem for quasi-Bragg pulses. Realistically distorted light beams or mean-field interactions, however, sharply increase the number of plane wave states and their couplings required for an accurate description. The formulation of the ODE becomes increasingly large and inflexible, with a set of coupling terms for each relevant pair of light plane waves.
Here, we take an alternative approach and solve the system in its partial differential equation (PDE) formulation following the Schrödinger equation. This time-dependent perspective6 has several advantages in terms of ease of formulation and implementation, flexibility and numerical efficiency for a broad range of cases. Indeed, this treatment is valid for different types of beam splitters (Bloch, Raman-Nath, deep Bragg and any regime in between) and pulse arrangements. Combining successive light-pulse beam-splitters naturally promoted our solver to a cross-regime or universal atom interferometry simulator that could cope with a wide range of non-ideal effects such as light spatial distortions or atomic interactions, yet being free of commonly-made approximations incompatible with a metrological use.
The position-space representation seems underutilised in the treatment of atom interferometry problems in favor of the momentum-space description although several early attempts of using it were reported for specific cases7,8,9,10,11. In this paper, we show the unique insights this approach can deliver and, contrary to widespread belief, its great numerical precision and scalability. In addition we illustrate our study with relevant examples from the precision atom interferometry field.
Theoretical model
Light-pulse beam splitting as an external potential
We start with a semi-classical model of Bragg diffraction, where a two-level atom is interacting with a classical light field1,2. This light field consists of a pair of counter-propagating laser beams, realised for example by a retro-reflection mirror setup. Assuming that the detuning of the laser light \(\Delta\) is much larger than the natural line width of the atom, one may perform the adiabatic elimination of the excited state. This yields an effective Schrödinger equation for the lower-energy atomic state \(\psi (x, t)\) with an external potential proportional to the intensity of the electric field
$$\begin{aligned} i \hbar \partial _t \psi (x, t)= \left( \frac{-\hbar ^2}{2m}\frac{\partial ^2}{\partial x^2} + 2\hbar \Omega \cos ^2(kx) \right) \psi (x, t) \end{aligned}$$
with the two-photon Rabi frequency \(\Omega\) and wave vector \(k=2\pi /\lambda\) in a simplified 1D geometry along the x-direction. For the present study, we consider a \(^{87}Rb\) atom that is addressed at the D2 transition with \(\lambda =780\) nm resulting in a recoil frequency and velocity12 of \(\omega _r=\hbar k^2/2m=2 \pi \cdot 3.8\) kHz and \(v_r=\hbar k/m=5.9\) mm/s, respectively.
In the context of realistic precision atom interferometric setups, it is necessary to include Rabi frequencies \(\Omega (x,t)\) and wave vectors k(x, t) which are space and time-dependent. This allows one to account for important experimental ingredients such as the Doppler detuning or the beam shapes including wavefront curvatures13,14,15 and Gouy phases16,17,18,19. Moreover, this generalisation allows one to effortlessly include the superposition of more than two laser fields interacting with the atoms as in the promising case of double Bragg diffraction20,21,22, and to model complex atom-light interaction processes where spurious light reflections or other experimental imperfections are present23.
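The position-space formulation translates directly into a split-operator integrator. The following sketch (Python/numpy; the grid size, box length and initial momentum width are illustrative choices, not values tied to a particular experiment) propagates \(\psi (x,t)\) under the potential \(2\hbar \Omega (t)\cos ^2(kx)\):

```python
import numpy as np

# constants for 87Rb on the D2 line, as quoted in the text
hbar = 1.054571817e-34                 # J s
m = 86.909180 * 1.66053907e-27         # kg
lam = 780e-9                           # m
k = 2 * np.pi / lam                    # laser wave vector

# position and momentum grids (sizes are illustrative choices)
N, L = 2**14, 400e-6
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)   # momentum grid in FFT ordering

# initial Gaussian wave packet with momentum width sigma_p = 0.01 hbar k
sigma_p = 0.01 * hbar * k
sigma_x = hbar / (2 * sigma_p)
psi = np.exp(-x**2 / (4 * sigma_x**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

def split_step(psi, t, dt, Omega):
    """One Strang-splitting step for i*hbar dpsi/dt = (p^2/2m + 2*hbar*Omega(t)*cos^2(k*x)) psi."""
    V = 2 * hbar * Omega(t + dt/2) * np.cos(k * x)**2
    psi = np.exp(-0.5j * V * dt / hbar) * psi                              # half potential kick
    psi = np.fft.ifft(np.exp(-0.5j * p**2 * dt / (m * hbar)) * np.fft.fft(psi))  # full kinetic step
    return np.exp(-0.5j * V * dt / hbar) * psi                             # half potential kick
```

Space-dependent Rabi frequencies or wave vectors enter simply by evaluating \(\Omega (x,t)\) and \(k(x,t)\) on the grid inside the potential line, without any change to the propagation scheme.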
Atom interferometer geometries
The light-pulse representation presented in the previous section is the elementary component necessary to generate arbitrary geometries of matter-wave interferometers operating in the elastic diffraction limit. Indeed, since the atom-light interaction in this regime conserves the internal state of the atomic system, a scalar Schrödinger equation is sufficient to describe the physics of the problem in contrast to the model adopted in Ref.10.
For example, a Mach–Zehnder-like interferometer geometry can be generated by a succession of \(\frac{\pi }{2}-\pi -\frac{\pi }{2}\) Bragg pulses (beam-splitter, mirror, beam-splitter pulses) of order n separated by a free drift time of T between each pair of pulses. In the case of Gaussian temporal pulses, this leads to a time-dependent Rabi frequency
$$\begin{aligned} \Omega (t)=\Omega _{bs}e^{\frac{-t^2}{2\tau _{bs}^2}}+\Omega _{m}e^{\frac{-(t-T)^2}{2\tau _{m}^2}}+\Omega _{bs}e^{\frac{-(t-2T)^2}{2\tau _{bs}^2}}, \end{aligned}$$
where \(\Omega _{bs}\), \(\tau _{bs}\) and \(\Omega _{m}\), \(\tau _{m}\) are the peak Rabi frequencies and their respective durations associated with the beam-splitter and mirror pulses, respectively. We numerically solve the corresponding time-dependent Schrödinger equation using the split-operator method24 to propagate the atomic wave packets along the two arms. The populations in the two output ports \(\vert +\rangle =\vert 0\hbar k\rangle\) and \(\vert -\rangle =\vert 2n\hbar k\rangle\) are evaluated after the last recombination pulse, waiting for a time of flight \(\tau _{ToF}\) long enough that the atomic wave packets spatially separate. They are obtained by the integration
$$\begin{aligned} P^{unnormalised}_{\pm }&=\int _{\pm }\mathrm {d}x\;|\psi (x,\tau _{ToF})|^2, \end{aligned}$$
where the integration domains extend over a space interval with non-vanishing probability density of the states \(\vert \pm \rangle\). These probabilities are further normalised to account for the loss of atoms to other parasitic momentum classes
$$\begin{aligned} P_{\pm }&=\frac{P^{unnormalised}_{\pm }}{P_{+}^{unnormalised}+P_{-}^{unnormalised}}. \end{aligned}$$
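For completeness, the Gaussian pulse sequence and the normalised port populations defined above can be written down directly; the window positions and widths below are illustrative parameters that must be chosen so that only the two output ports, and not the parasitic orders, fall inside the integration windows:

```python
import numpy as np

def rabi_sequence(Omega_bs, tau_bs, Omega_m, tau_m, T):
    """Pi/2 - pi - pi/2 Gaussian pulse sequence Omega(t) centred at t = 0, T and 2T."""
    def Omega(t):
        return (Omega_bs * np.exp(-t**2 / (2 * tau_bs**2))
                + Omega_m * np.exp(-(t - T)**2 / (2 * tau_m**2))
                + Omega_bs * np.exp(-(t - 2*T)**2 / (2 * tau_bs**2)))
    return Omega

def port_populations(psi, x, x_plus, x_minus, w):
    """Normalised populations of the |+> and |-> output ports after the time of flight.

    x_plus, x_minus are the expected wave-packet centres of the two ports and w the
    half-width of the integration windows; atoms lost to parasitic momentum classes
    fall outside both windows, so the final normalisation matches the definition above.
    """
    dx = x[1] - x[0]
    dens = np.abs(psi)**2
    P_plus = np.sum(dens[np.abs(x - x_plus) < w]) * dx
    P_minus = np.sum(dens[np.abs(x - x_minus) < w]) * dx
    return P_plus / (P_plus + P_minus), P_minus / (P_plus + P_minus)
```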
Using Feynman's path integral approach, the resulting phase shift between the two arms can be decomposed as25,26
$$\begin{aligned} \Delta \phi&= \Delta \phi _{propagation} + \Delta \phi _{laser} + \Delta \phi _{separation}. \end{aligned}$$
The propagation phase is calculated by evaluating the classical action along the trajectories of the wave packet's centers. The laser phase corresponds to the accumulated phase imprinted by the light pulses at the atom-light interaction position and time. Finally, the separation phase is different from zero if the final wave packets are not overlapping at the time of the final beam splitter, \(t=2T\).
To extract the relative phase \(\Delta \phi\) between the two conjugate ports and the contrast C, one can scan a laser phase \(\phi _0\in [0,2\pi ]\) at the last beam splitter and evaluate the populations1 varying as
$$\begin{aligned} P_{\pm }&=\frac{1}{2}\left( 1\pm C\cos (\Delta \phi + n\phi _0)\right) . \end{aligned}$$
The resulting fringe pattern is then fitted with \(\Delta \phi\) and \(C\le 1\) as fit parameters. This method, analogous to experimental procedures, allows one to determine the relative phase modulo \(2\pi\).
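The fit itself is a small least-squares problem; a sketch using scipy, where the Bragg order n is assumed known and the initial guesses are arbitrary:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_fringe(phi0, P_plus, n):
    """Fit P_+(phi0) = (1 + C*cos(dphi + n*phi0))/2 and return (dphi mod 2*pi, C)."""
    def model(phi0, dphi, C):
        return 0.5 * (1 + C * np.cos(dphi + n * phi0))
    (dphi, C), _ = curve_fit(model, phi0, P_plus, p0=[0.0, 0.9],
                             bounds=([-np.pi, 0.0], [np.pi, 1.0]))
    return dphi % (2 * np.pi), C

# usage: phi0 = np.linspace(0, 2*np.pi, 24); P_plus obtained from repeated interferometer runs
```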
Raman-Nath beam splitter
The Raman-Nath regime, characterised by a spatially symmetric beam splitting, is the limit of elastic diffraction for very short interaction times of \(\tau \ll \frac{1}{\sqrt{2\Omega \omega _r}}\). The dynamics of the system can, in this case, be analytically captured following Refs.1,2
$$\begin{aligned} |g_n(t)|^2=J_n^2(\Omega t), \end{aligned}$$
where \(g_n(t)\) denotes the amplitude of the momentum state \(\vert 2n\hbar k\rangle\) and \(J_n\) the Bessel functions of the first kind. Such experiments are at the heart of investigations such as the one reported in Ref.27, where a Raman-Nath beam splitter was used to initialise a three-path contrast interferometer offering the possibility of measuring the recoil frequency \(\omega _r\).
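The Bessel-function populations above can be checked directly with SciPy, here for the pulse parameters of Fig. 1 (\(\Omega =50\) \(\omega _r\), \(\tau =1\) \(\upmu\)s); the physical constants are standard \(^{87}\)Rb values at 780 nm and the truncation at \(\pm 12\hbar k\) is an assumption of this sketch.

```python
import numpy as np
from scipy.special import jv

# Raman-Nath populations |g_n|^2 = J_n^2(Omega * t) for the pulse of Fig. 1
# (Omega = 50 omega_r, tau = 1 mus); standard 87Rb constants at 780 nm.
hbar, m, lam = 1.0546e-34, 1.4432e-25, 780e-9
k = 2 * np.pi / lam
omega_r = hbar * k ** 2 / (2 * m)        # recoil frequency, ~2*pi x 3.8 kHz
omega, tau = 50 * omega_r, 1e-6

orders = np.arange(-6, 7)                # truncation at +/- 12 hbar k (assumed)
populations = jv(orders, omega * tau) ** 2
for n_, p_ in zip(orders, populations):
    print(f"|{2 * n_:+d} hbar k> : {p_:.3f}")
print("sum over shown orders:", populations.sum())
```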
To demonstrate the validity of our position-space approach, we compare our results with the analytical ones, adopting the parameters of Ref.27. Figure 1 shows the outcome of a symmetric Raman-Nath beam splitter targeting the preparation of three momentum states: \(50\%\) in \(\vert 0\hbar k\rangle\) and \(25\%\) in each of the \(\vert \pm 2\hbar k\rangle\) momentum classes. As a feature of our solver, we directly observe the losses to higher momentum states (\(p=\pm 4\hbar k\) and \(p=\pm 6 \hbar k\)) due to the finite pulse fidelity. An excellent agreement is found with the analytical predictions (green filled circles) for the populations of the momentum states.
Probability density after a Raman-Nath pulse with \(\Omega =50\) \(\omega _r\), \(\tau =1\) \(\upmu\)s and a rectangular temporal profile as implemented in Ref.27. This pulse is designed to create a beam splitter with roughly \(50\%\) in \(\vert 0\hbar k\rangle\) and \(25\%\) in each of the \(\vert \pm 2 \hbar k\rangle\) momentum states; an added time of flight of \(\tau _{ToF}=20\) ms clearly separates the wavepackets in position space. The left and right panels show the position- and momentum-space probability density, respectively. The initial momentum width of the Gaussian wavepacket is chosen to be \(\sigma _p=0.01\) \(\hbar\)k. Numerical results of this work (continuous blue lines) agree well with the analytical solution of the Raman-Nath regime (green dots, momentum space) given by the Bessel functions of the first kind.
Bragg-diffraction Mach–Zehnder interferometers
To simulate a Mach–Zehnder atom interferometer based on Bragg diffraction, we consider a pair of two counter-propagating laser beams with a relative frequency detuning \(\Delta \omega =\omega _1-\omega _2=2 nkv_r\) and a phase jump \(\phi _0\in [0,2\pi ]\). This gives rise to the following running optical lattice
$$\begin{aligned} V_{Bragg}(x,t)&= 2 \hbar \Omega (t) \cos ^2(k(x-n v_r t) + \frac{\phi _0}{2}). \end{aligned}$$
For sufficiently long atom-light interaction times, i.e. in the quasi- and deep-Bragg regimes2,3,28,29, the driven Bragg order n with momentum transfer \(\Delta p=2n\hbar k\) is determined by the relative frequency detuning \(\Delta \omega\) of the two laser beams. The relative velocity between the initially prepared atom and the optical lattice is \(v = nv_r\). In the rest frame of the optical lattice, the atom has a momentum \(p=-n\hbar k\). The difference in kinetic energy between the initial (\(p=-n\hbar k\)) and target (\(p=+n\hbar k\)) states vanishes; the transition is therefore energetically allowed and leads to a momentum transfer of \(\Delta p=n\hbar k-(-n\hbar k)=2n\hbar k\).
We now realise beam splitters and mirrors by finding the right combination of peak Rabi frequency and interaction time \((\Omega , \tau )\), either by numerical population optimisation or analytically, when we work in the deep Bragg regime. Recent advances by Siemß et al.5 generalise this to the quasi-Bragg regime in an analytical description of Bragg pulses based on the adiabatic theorem. For the pulses used in this paper, the two approaches give the same result for the optimised Rabi frequencies and pulse durations.
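To make the numerical procedure concrete, the following minimal sketch propagates a wave packet through the running lattice defined above with the symmetric split-operator scheme of Ref.24, using the beam-splitter parameters quoted in Fig. 2 (\(\Omega =1.0573\) \(\omega _r\), \(\tau _{bs}=25\) \(\upmu\)s, \(\sigma _p=0.1\) \(\hbar\)k). The grid size, time step and momentum windows are assumptions of this sketch and not the settings of our solver.

```python
import numpy as np

# Sketch of a single Bragg beam-splitter pulse propagated with the symmetric
# split-operator scheme (kinetic half-step, potential step, kinetic half-step).
# Not the authors' solver; grid, time step and momentum windows are assumptions.
hbar, m, lam = 1.0546e-34, 1.4432e-25, 780e-9
k = 2 * np.pi / lam
omega_r = hbar * k ** 2 / (2 * m)
v_r = hbar * k / m
n_bragg = 1

# position grid and initial Gaussian wave packet at rest (sigma_p = 0.1 hbar k)
N_grid, L = 2 ** 14, 400e-6
x = np.linspace(-L / 2, L / 2, N_grid, endpoint=False)
p = 2 * np.pi * hbar * np.fft.fftfreq(N_grid, d=x[1] - x[0])
sigma_p = 0.1 * hbar * k
sigma_x = hbar / (2 * sigma_p)
psi = np.exp(-x ** 2 / (4 * sigma_x ** 2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (x[1] - x[0]))

# Gaussian pulse expected to act as an approximate pi/2 beam splitter
omega_peak, tau, dt = 1.0573 * omega_r, 25e-6, 1e-7
kin_half = np.exp(-1j * p ** 2 / (2 * m * hbar) * dt / 2)
for t in np.arange(-4 * tau, 4 * tau, dt):
    psi = np.fft.ifft(kin_half * np.fft.fft(psi))
    rabi = omega_peak * np.exp(-t ** 2 / (2 * tau ** 2))
    V = 2 * hbar * rabi * np.cos(k * (x - n_bragg * v_r * t)) ** 2
    psi *= np.exp(-1j * V * dt / hbar)
    psi = np.fft.ifft(kin_half * np.fft.fft(psi))

# momentum-space populations near 0 and 2 hbar k (crude +/- 1 hbar k windows)
phi = np.fft.fft(psi)
prob_p = np.abs(phi) ** 2 / np.sum(np.abs(phi) ** 2)
for target in (0.0, 2 * hbar * k):
    window = np.abs(p - target) < hbar * k
    print(f"P(p ~ {target / (hbar * k):.0f} hbar k) = {prob_p[window].sum():.3f}")
```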
(a) Rabi frequency \(\Omega (t)\) over time of a \(2\hbar k\)-Bragg Mach–Zehnder interferometer according to Eq. (2). (b) Corresponding space-time diagram of the probability density \(|\psi (x,t)|^2\). The initial momentum width is chosen to be \(\sigma _p=0.1\) \(\hbar\)k and the splitter and mirror Gaussian pulses have peak Rabi frequencies of \(\Omega =1.0573\) \(\omega _r\) with pulse lengths of \(\tau _{bs}=25\) \(\upmu\)s and \(\tau _{m}=50\) \(\upmu\)s, respectively. The separation time between the pulses is \(T=10\) ms with a final time of flight after the exit beam splitter of \(\tau _{ToF}=20\) ms. Due to the velocity selectivity of the Bragg pulses, several trajectories can be observed after each pulse. Inset: Additional insight into the dynamics of the mirror pulse of the upper arm. The peak amplitude of the Gaussian pulse is reached at \(t=11\) ms. The interference fringes of the density plot indicate the overlap between the atoms in momentum class \(p=2\hbar k\) and the atoms lost due to velocity selectivity remaining at \(p=0\hbar k\).
In Fig. 2, we simulate a Mach–Zehnder geometry and illustrate the diffraction outcome by showing a space-time diagram of the density distribution \(|\psi (x,t)|^2\). For the parameters chosen here, a clear feature of the dynamics is the appearance of additional atomic channels after the mirror pulse, which can be attributed to the velocity selectivity arising from a pulse with a finite duration characterised by \(\tau\). The finite velocity acceptance can indeed be estimated via the Fourier width \(\sigma _f\) of the applied pulse as
$$\begin{aligned} \mathcal {F}(\Omega e^{-\frac{t^2}{2\tau ^2}})=\sqrt{2\pi \tau ^2\Omega ^2}e^{-2(\pi f\tau )^2}, \end{aligned}$$
with \(\sigma _f=1/(2\pi \tau )\) and f being the frequency variable. This yields the velocity acceptance30
$$\begin{aligned} \sigma _v^{pulse}=\frac{1}{8\,\omega _r \tau }v_r=0.11\,v_r. \end{aligned}$$
With an initial velocity width of the atomic probability distribution of \(\sigma _v^{atom}=0.1\) \(v_r,\) it is clear that velocity components with \(|v|= \sigma _v^{pulse}\) will have a much smaller excitation probability than the components at the centre of the cloud, which leads to the characteristic double well densities of the parasitic trajectories.
With momenta \(p_{upper}=0\hbar k\) and \(p_{lower}=2\hbar k\), both parasitic trajectories still fulfill the resonance condition of the final Bragg beam splitter, which leads to the emergence of ten trajectories after the exit beam splitter. For a measurement in position space, it is therefore important to apply a time of flight \(\tau _{ToF}\) long enough that the ports of the Mach–Zehnder interferometer do not overlap with the parasitic ports and bias the relative phase measurement. At large densities, the parasitic trajectories should not overlap with the Mach–Zehnder ports at all, since such an overlap already leads to density-dependent interaction phase shifts \(H_{int}\propto |\psi (x,t)|^2\). To circumvent these problems it is important to choose \(\sigma _v^{pulse} \gg \sigma _v^{atoms}\). State-of-the-art experiments23 with delta-kick collimated BEC sources31,32,33,34,35,36 use, for example, \(\sigma _v^{atoms}=0.03\) \(v_r\ll 0.14\) \(v_r=\sigma _v^{pulse}\), which strongly suppresses parasitic trajectories due to velocity selectivity.
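A quick numerical check of the velocity-acceptance estimate above, for the mirror-pulse duration \(\tau _m=50\) \(\upmu\)s and standard \(^{87}\)Rb constants at 780 nm, reads:

```python
import numpy as np

# Numerical check of the velocity-acceptance estimate sigma_v = v_r/(8 omega_r tau)
# for the Gaussian mirror pulse (tau_m = 50 mus); standard 87Rb constants at 780 nm.
hbar, m, lam = 1.0546e-34, 1.4432e-25, 780e-9
k = 2 * np.pi / lam
omega_r = hbar * k ** 2 / (2 * m)    # recoil frequency
v_r = hbar * k / m                   # recoil velocity

tau = 50e-6
sigma_f = 1 / (2 * np.pi * tau)              # Fourier width of the pulse
sigma_v_pulse = v_r / (8 * omega_r * tau)    # velocity acceptance
print(f"sigma_f = {sigma_f:.0f} Hz, sigma_v_pulse = {sigma_v_pulse / v_r:.2f} v_r")
```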
Implementing high-order Bragg diffraction is a natural avenue to increase the momentum separation of an atom interferometer, and therefore its sensitivity. In Fig. 3, we run our solver to observe the population distribution across the different ports of a Mach–Zehnder configuration with Bragg orders up to \(n=3\). This is done in a straightforward way by scanning the laser phase \(\phi _0\). We fit the data points corresponding to the population in the fast port \(\vert 2\hbar k\rangle\) for the different Bragg orders according to Eq. (6) and observe a clear sinusoidal signal of the simulated fringes, as expected. The resulting contrasts and phase shifts are directly obtained from our theory model and numerical solver, which include the ideal phase shifts commonly considered25,26 and go beyond them to comprise several non-ideal effects such as (i) finite momentum widths, (ii) finite pulse timings and (iii) multi-port Bragg diffraction37,38 with the resulting diffraction phase. The natural occurrence of these effects and the possibility to quantify them are a native feature of our simulator.
Scan of Mach–Zehnder interferometer phase for different Bragg transition orders of \(2\hbar k\) (red dots), \(4\hbar k\) (green dots) and \(6\hbar k\) (blue dots). The phase shift is applied as a laser phase jump \(\phi _0\in [0,2\pi ]\) at the last Bragg pulse. The lengths of the Gaussian splitting and mirror pulses are \(\tau _{bs}=25\) \(\upmu\)s and \(\tau _{m}=50\) \(\upmu\)s, respectively. The initial momentum width of the atomic sample is \(\sigma _p=0.01\) \(\hbar\)k. The corresponding Rabi frequencies for the higher order Bragg transitions were found by optimising for an ideal 50 : 50 population splitting of the \(\frac{\pi }{2}\) pulse. This leads to \(\Omega _{4\hbar k}=3.7\) \(\omega _r\) and \(\Omega _{6\hbar k}=8.4\) \(\omega _r\). The Rabi frequency for the \(2\hbar k\) transition is \(\Omega _{2\hbar k}=1.0573\) \(\omega _r\). The solid lines are the respective fringe scan fits from which the phase shifts and contrasts are directly extracted.
Symmetric double Bragg geometry
Scalable and symmetric atom interferometers based on double Bragg diffraction were theoretically studied22 and experimentally demonstrated21. This dual-lattice geometry has particular advantages, including an increased sensitivity due to the doubled scale factor compared to single-Bragg diffraction, as well as an intrinsic suppression of noise and certain systematic uncertainties due to the symmetric configuration21. Combining this technique with subsequent Bloch oscillations applied to the two interferometer arms led to reaching momentum separations of thousands of photon recoils as was recently shown in Ref.23.
In double Bragg diffraction schemes, two counter-propagating optical lattices are implemented in such a way that the recoil is simultaneously transferred in opposite directions, leading to a beam splitter momentum separation of \(\Delta p = 4n\hbar k\)21,22. To extend our simulator to this important class of interferometers, we merely have to add a term to the external potential
$$\begin{aligned} V_{double\;Bragg}(x,t)&= 2 \hbar \Omega (t) (\cos ^2(k(x-nv_rt))+\cos ^2(k(x+nv_rt))). \end{aligned}$$
The procedures of realising a desired \(4n\hbar k\) momentum transfer, as well as mirror or splitter pulses, are identical to the case of single-Bragg diffraction. A simple scan of the Rabi frequency and pulse timings was enough to obtain a full double Bragg interferometer, as shown in Fig. 4. The different resulting paths are illustrated in this space-time diagram of the density distribution \(|\psi (x,t)|^2\). Similarly to the single-Bragg Mach–Zehnder interferometer, we observe additional parasitic interferometers after the mirror pulse due to the finite velocity filter of the Bragg pulses. Due to the finite fidelity of the initial beam splitter, some atoms remain in the \(\vert 0\hbar k\rangle\) port and recombine at the last beam splitter with the trajectories of the interferometer. In a metrological study, it is highly important to quantify these effects. Our simulator gives access to all the quantitative details of such a realisation in a straightforward fashion.
(a) Rabi frequency \(\Omega (t)\) over time of a symmetric double Bragg interferometer according to Eq. (2). (b) The corresponding probability density \(|\psi (x,t)|^2\) is plotted for an initial momentum width of \(\sigma _p=0.1\) \(\hbar\)k. The timings of the Gaussian splitter and mirror pulses are set to \(\tau _{bs}=25\) \(\upmu\)s and \(\tau _{m}=50\) \(\upmu\)s, respectively. The corresponding Rabi frequencies are found by optimising the desired population transfer. The first \(\frac{\pi }{2}\) pulse corresponds to a \(2\hbar k\) transfer in two directions, realised by two counter-propagating optical lattices which results in a \(4\hbar k\) separation between the two interferometer arms. The mirror pulse is a \(4\hbar k\) Bragg transition with a Rabi frequency of \(\Omega =1.9\) \(\omega _r\) such that both arms make a transition from \(\vert \pm 2\hbar k\rangle \rightarrow \vert \mp 2\hbar k\rangle\). The last recombination pulse now realises a 50 : 50 split of the upper trajectory to \(\vert -2\hbar k\rangle\) and \(\vert 0\hbar k\rangle\) and the lower trajectory to \(\vert +2\hbar k\rangle\) and \(\vert 0\hbar k\rangle\). This leads to a final population of \(25\%\) in the \(\vert \pm 2 \hbar k\rangle\) ports and \(50\%\) in the \(\vert 0\hbar k\rangle\) port. The separation time between the pulses is \(T=10\) ms with a final time of flight after the exit beam splitter of \(\tau _{ToF}=20\) ms. Due to velocity selectivity of the Bragg pulses and a non-ideal fidelity of the initial beam splitter pulse, several parasitic interferometers can be observed.
Gravity gradient cancellation for a combined Bragg and Bloch geometry
Precision atom interferometry-based inertial sensors are sensitive to higher order terms of the gravitational potential, including gravity gradients. In particular, for atom interferometric tests of Einstein's equivalence principle (EP), gravity gradients pose a challenge by coupling to the initial conditions, i.e. position and velocity of the two test isotopes39. A finite initial differential position or velocity of the two species can, if unaccounted for, mimic a violation of the EP. By considering a gravitational potential of the form
$$\begin{aligned} V(x)&=-m g x-\frac{1}{2} m \Gamma x^2, \end{aligned}$$
where \(\Gamma =\Gamma _{xx}\) is the gravity gradient in the direction normal to the Earth's surface, the relative phase of a freely falling interferometer can be calculated as40
$$\begin{aligned} \Delta \phi&= k_{eff}|g-a_{Bragg}|T^2+k_{eff}\Gamma (x_0+v_0T)T^2, \end{aligned}$$
with \(k_{eff}=2nk\).
In Ref.40, it was shown that introducing a variation of the effective wave vector \(\Delta k_{eff}=\Gamma k_{eff}T^2/2\) at the \(\pi\) pulse can cancel the additional phase shift due to the gravity gradient. This was experimentally demonstrated in Refs.41,42.
The same principle applies to the gradiometer configuration in the left panel of Fig. 5, where the effect of a gravity gradient is compensated by the application of a wave-vector correction, reminiscent of another experimental cancellation of gravity gradient phase shifts41. In our example, we first consider a set of two Mach–Zehnder interferometers vertically separated by \(h=2\) m, realised with \(4\hbar k\) Bragg transitions, where the atoms start with the same initial velocities \(v_0\). Choosing a Doppler detuning according to \(a_{Bragg}=g\), the gradiometric phase reads
$$\begin{aligned} \Phi =4k\Gamma h T^2. \end{aligned}$$
By scanning the momentum of the applied \(\pi\) pulse, one can compensate the gradiometric phase. This is observed in our simulations at the analytically predicted value of \(\Delta k_{eff}=\Gamma k_{eff}T^2/2\) (red dashed curve crossing the zero horizontal line).
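For orientation, the gradiometric phase and the corresponding mirror-pulse wave-vector correction can be evaluated as in the short sketch below. The gravity gradient \(\Gamma\) and the pulse separation T used there are assumed, illustrative values; only the baseline \(h=2\) m is taken from the text.

```python
import numpy as np

# Illustrative numbers for the gradiometric phase Phi = 4 k Gamma h T^2 and the
# mirror-pulse correction dk_eff = Gamma k_eff T^2 / 2 for a 4 hbar k transfer.
# Gamma and T are assumed values; the baseline h = 2 m is taken from the text.
lam = 780e-9
k = 2 * np.pi / lam
n = 2                         # Bragg order of the 4 hbar k transfer
k_eff = 2 * n * k
gamma = 3.1e-6                # assumed Earth-like gravity gradient (s^-2)
T, h = 10e-3, 2.0             # assumed pulse separation; 2 m baseline

phi_grad = 4 * k * gamma * h * T ** 2
dk_eff = gamma * k_eff * T ** 2 / 2
print(f"gradiometric phase Phi = {phi_grad * 1e3:.1f} mrad")
print(f"wave-vector correction dk_eff = {dk_eff:.2e} m^-1 "
      f"({dk_eff / k_eff:.2e} k_eff)")
```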
It is particularly interesting to use our simulator to find this correction in the context of more challenging situations, such as a combined scalable Bragg and Bloch Mach–Zehnder interferometer or a symmetric Bloch beam splitter43, where analytic solutions are not easily found.
Bloch oscillations can be used to quickly impart a momentum of \(p=2n_{Bloch}\hbar k\) on the atoms44,45. This adiabatic process can be realised by loading the atoms into a co-moving optical lattice, then accelerating the optical lattice by applying a frequency chirp and finally by unloading the atom from the optical lattice. In our model, this corresponds to the following external potential
$$\begin{aligned} V_{Bloch}(x,t)&= 2 \hbar \Omega (t)\cos ^2(k(x-x(t))) \end{aligned}$$
$$\begin{aligned} x(0)&=0\end{aligned}$$
$$\begin{aligned} \dot{x}(0)&=2nv_r\end{aligned}$$
$$\begin{aligned} \ddot{x}(t)&= {\left\{ \begin{array}{ll} 0 &{}\quad 0<t<\tau _{load}\\ \frac{2n_{Bloch}v_r}{\tau _{chirp}} &{}\quad \tau _{load}<t<\tau _{load}+\tau _{chirp}\\ 0 &{}\quad \tau _{load}+\tau _{chirp}<t<\tau _{load}+\tau _{chirp}+\tau _{unload}\\ \end{array}\right. }\end{aligned}$$
$$\begin{aligned} \Omega (t)&= {\left\{ \begin{array}{ll} \Omega \frac{t}{\tau _{load}} &{}\quad 0<t<\tau _{load}\\ \Omega &{}\quad \tau _{load}<t<\tau _{load}+\tau _{chirp}\\ \Omega \left( 1-\frac{t-(\tau _{load}+\tau _{chirp})}{\tau _{unload}}\right) &{}\quad \tau _{load}+\tau _{chirp}<t<\tau _{load}+\tau _{chirp}+\tau _{unload}\\ \end{array}\right. } \end{aligned}$$
where \(\tau _{load}\), \(\tau _{chirp}\) and \(\tau _{unload}\) are the durations of the lattice loading, acceleration and unloading, respectively.
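A minimal sketch of these control functions, the depth ramp \(\Omega (t)\) and the lattice trajectory x(t) obtained by integrating the piecewise acceleration, is given below; the timings and lattice depth follow the values quoted later (\(\tau _{load}=\tau _{chirp}=\tau _{unload}=0.5\) ms, \(\Omega =4\) \(\omega _r\)), while \(n=n_{Bloch}=1\) and the simple Euler integration are assumptions of the sketch.

```python
import numpy as np

# Sketch of the Bloch-lattice control functions defined above: the Rabi-frequency
# ramp Omega(t) and the lattice trajectory x(t) obtained by integrating the
# piecewise acceleration. Timings and lattice depth follow the quoted values;
# n = n_Bloch = 1 and the Euler scheme are illustrative assumptions.
hbar, m, lam = 1.0546e-34, 1.4432e-25, 780e-9
k = 2 * np.pi / lam
omega_r = hbar * k ** 2 / (2 * m)
v_r = hbar * k / m
n, n_bloch, omega_peak = 1, 1, 4 * omega_r
tau_load, tau_chirp, tau_unload = 0.5e-3, 0.5e-3, 0.5e-3

def lattice_accel(t):
    """Piecewise lattice acceleration: nonzero only during the frequency chirp."""
    if tau_load < t < tau_load + tau_chirp:
        return 2 * n_bloch * v_r / tau_chirp
    return 0.0

def rabi(t):
    """Linear load / hold / linear unload ramp of the lattice depth."""
    if t < tau_load:
        return omega_peak * t / tau_load
    if t < tau_load + tau_chirp:
        return omega_peak
    return omega_peak * max(0.0, 1 - (t - tau_load - tau_chirp) / tau_unload)

# integrate x''(t) with x(0) = 0 and x'(0) = 2 n v_r (simple Euler scheme)
dt = 1e-6
x, v = 0.0, 2 * n * v_r
for t in np.arange(0.0, tau_load + tau_chirp + tau_unload, dt):
    v += lattice_accel(t) * dt
    x += v * dt

print(f"Omega at mid-chirp: {rabi(tau_load + tau_chirp / 2) / omega_r:.1f} omega_r")
print(f"final lattice velocity: {v / v_r:.2f} v_r "
      f"(expected {2 * n + 2 * n_bloch} v_r)")
```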
By ramping up the co-moving optical lattice, the atoms are loaded into the first Bloch band with a quasimomentum \(q=0\). An acceleration of the optical lattice acts as a constant force on the atoms which linearly increases the quasimomentum over time. When the criterion for an adiabatic acceleration of the optical lattice is met, the atoms stay in the first Bloch band and undergo a Bloch oscillation, which can be repeated \(n_{Bloch}\) times leading to a final momentum transfer of \(\Delta p=2n_{Bloch}\hbar k\).
The \(\pi\) pulse correction \(\Delta k_{eff}=\Gamma k_{eff}T^2/2\) is proportional to the space-time area \(\mathcal {A}_{Bragg}=\hbar k_{eff}T^2/m\) of the underlying \(2n\hbar k\) Mach–Zehnder geometry and does not compensate the gravity gradient effects in the Bloch case. Analysing the space-time area \(\mathcal {A}_{Bragg+Bloch}\) immediately shows a non-trivial correction compared to \(\mathcal {A}_{Bragg}\). The suitable momentum compensation factor is, however, readily found with our solver at the crossing of the dashed blue line with the zero-phase line (\(\Delta k_{eff}^{Bragg+Bloch}=0.932\) \(\Delta k_{eff}^{Bragg}\)). This straightforward implementation of our toolbox in a rather complex arrangement is promising for an extensive use of this framework to design, interpret or propose advanced experimental schemes.
Gravity gradient cancellation in the case of a combined Bragg-Bloch gradiometer scheme. (a) Schematic of the Bragg-Bloch interferometer geometry with a baseline of 2 m. This configuration allows one to independently imprint momenta of \(n\hbar k\) Bragg and \(n_{Bloch}\hbar k\). The Bragg mirror pulse is momentum-adapted to cancel the gravity gradient phase. (b) Gradiometric phase for a \(4\hbar k\) (red dots) Bragg momentum transfer and a \(2\hbar k\) Bragg + \(2\hbar k\) Bloch Mach–Zehnder interferometer (blue dots). For both interferometers, the Gaussian pulse lengths of the splitting and mirror pulses are \(\tau _{bs}=25\) \(\upmu\)s and \(\tau _{m}=50\) \(\upmu\)s, respectively. The initial momentum width of the atomic sample is \(\sigma _p=0.01\) \(\hbar\)k. The corresponding Rabi frequencies for the higher order Bragg transitions were found by optimising for an ideal 50 : 50 population splitting of the \(\frac{\pi }{2}\) pulse, which leads to a Rabi frequency of \(\Omega _{4\hbar k}=3.7\) \(\omega _r\). For the \((2+2)\hbar k\) Bragg+Bloch geometry, the Bloch sequence is implemented with an adiabatic loading time of \(\tau _{load}=0.5\) ms, a frequency chirp time of \(\tau _{chirp}=0.5\) ms during which the momentum transfer occurs and an adiabatic unloading time of \(\tau _{unload}=0.5\) ms. The Rabi frequency of the Bloch lattice is \(\Omega =4\) \(\omega _r\). For the \(4\hbar k\) Bragg geometry we find a vanishing gradiometer phase at \(\Delta k_{eff}=\frac{\Gamma }{2}k_{eff}T^2\), which agrees with the analytical calculation40. For the \((2+2)\hbar k\) Bragg+Bloch geometry we find a phase shift of \(\Phi =-3\) mrad at \(\Delta k_{eff}=\frac{\Gamma }{2}k_{eff}T^2\) due to the nontrivial correction of the space-time area of the \((2+2)\hbar k\) Bragg+Bloch compared to the \(4\hbar k\) Bragg geometry. The dashed lines are a guide to the eye.
Trapped interferometry of an interacting BEC
Employing Bose–Einstein condensate (BEC) sources46,47 for atom interferometry34,48,49 has numerous advantages, such as the possibility to start with very narrow momentum widths \(\sigma _p\)31,32,33,34,35,36, which enables high fidelities of the interferometry pulses4. For interacting atomic ensembles, it is necessary to take into account the scattering properties of the particles. The Schrödinger equation is no longer sufficient to describe the system dynamics, and the ODE approach becomes rather complex to use, as shown in the section on scalability and numerics. We therefore generalise our position-space approach and consider a trapped BEC atom interferometer including two-body scattering interactions described in a mean-field framework. The corresponding Gross–Pitaevskii equation reads50
$$\begin{aligned} i \hbar \partial _t \psi (x, t)&= \left( \frac{-\hbar ^2}{2m}\frac{\partial ^2}{\partial x^2} + 2 \hbar \Omega (t) \cos ^2(k(x-nv_rt)) + g_{1D}N|\psi (x, t)|^2 \right) \psi (x, t), \end{aligned}$$
where the quantum gas of N atoms is trapped in a quasi-1D guide aligned with the interferometry direction and characterised by a transverse trapping at an angular frequency \(\omega _{\perp }\) much stronger than the longitudinal one. In 1D, the interactions effectively reduce to a strength \(g_{1D}=2\hbar a_{s} \omega _{\perp }\). For our calculation, we set the s-wave scattering length of \(^{87}\)Rb to one Bohr radius, i.e. \(a_{s}=a_0=5.3\times 10^{-11}\) m. Experimentally, such a value can be realised using a Feshbach resonance technique51. This model is valid in the weakly interacting limit, i.e. when \(a_s N|\psi | ^2\ll 1\)52,53.
All atom interferometric considerations mentioned earlier, like the Bragg resonance conditions, construction of interferometer geometries, the implementation of Doppler detunings, phase calculations and population measurements are also valid in this case without any extra theoretical effort. The non-linear Gross–Pitaevskii equation is solved following the split-operator method as in the Schrödinger case24.
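The sketch below illustrates how the mean-field term enters the split-step propagation of Eq. (20): the nonlinear contribution \(g_{1D}N|\psi |^2\) is simply added to the external potential in the phase step. The Thomas–Fermi-like initial state, the grid parameters and the propagation in the bare longitudinal trap are illustrative assumptions; the trap and interaction parameters follow the values quoted in the text.

```python
import numpy as np

# Sketch of a split-step GPE propagation: the mean-field term g_1D*N*|psi|^2 is
# added to the potential in the nonlinear phase step. Trap and interaction
# parameters follow the text; the Thomas-Fermi-like initial state, the grid and
# the propagation in the bare longitudinal trap are illustrative assumptions.
hbar, m = 1.0546e-34, 1.4432e-25
a0 = 5.3e-11
omega_perp, omega_x, N_atoms = 2 * np.pi * 50, 2 * np.pi * 1, 6e4
g1d = 2 * hbar * a0 * omega_perp

N_grid, L = 2 ** 13, 400e-6
x = np.linspace(-L / 2, L / 2, N_grid, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftfreq(N_grid, d=dx)

# Thomas-Fermi-like initial guess in the weak longitudinal trap, normalised to 1
r_tf = (3 * N_atoms * g1d / (2 * m * omega_x ** 2)) ** (1 / 3)
psi = np.sqrt(np.maximum(0.0, 1 - (x / r_tf) ** 2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

def gpe_step(psi, dt, v_ext):
    """One symmetric split step of the GPE with external potential v_ext(x)."""
    kin = np.exp(-1j * p ** 2 / (2 * m * hbar) * dt / 2)
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    v_tot = v_ext + g1d * N_atoms * np.abs(psi) ** 2   # mean-field term
    psi = psi * np.exp(-1j * v_tot * dt / hbar)
    return np.fft.ifft(kin * np.fft.fft(psi))

v_trap = 0.5 * m * omega_x ** 2 * x ** 2
for _ in range(1000):                                  # 1 ms of evolution
    psi = gpe_step(psi, 1e-6, v_trap)
print("norm after propagation:", np.sum(np.abs(psi) ** 2) * dx)
```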
If the atom interferometer is perfectly symmetric in the two directions of the matter-wave guide, no phase shift should occur. In realistic situations, however, the finite fidelity of the beam splitters creates an imbalance \(\delta N\) of the particle numbers between the two interferometer arms. The phase shift in this case can be related to the differential chemical potential by
$$\begin{aligned} \Delta \phi _{MF}&=\frac{1}{\hbar }\int _0^{2T} \mathrm {d}t\;(\mu _{arm1}-\mu _{arm2}). \end{aligned}$$
We illustrate the capability of our approach to quantitatively predict this effect by contrasting it to the well-known treatment of this dephasing. Following Ref.48, we introduce \(\delta N \ne 0\) and analyse the dephasing by using the 1D Thomas–Fermi chemical potential of the harmonic oscillator potential
$$\begin{aligned} \mu ^{TF}_{arm1/arm2} = \left( \frac{3\sqrt{m}}{2^{5/2}} g_{1D}\frac{\omega _x}{\sqrt{2}}\right) ^{2/3}\left( \frac{N}{2}\pm \frac{\delta N}{2}\right) ^{2/3}. \end{aligned}$$
The \(+\) and − signs refer here to the arms 1 and 2, respectively. We assume the Thomas–Fermi radii before and after the atom-light interaction to be approximately the same. To this end, one needs to introduce the correction factor of \(1/\sqrt{2}\) which is a direct consequence of
$$\begin{aligned} R_{TF}^{initial}=R_{TF}^{arm1/arm2}=R_{TF}=\left( \frac{3 N g_{1D}}{2m\omega _x^2} \right) ^{1/3}. \end{aligned}$$
Using Eq. (21), one finds a phase shift of
$$\begin{aligned} \Delta \phi _{MF}^{TF}=\frac{2T}{\hbar }\left( \frac{3\sqrt{m}}{2^{5/2}} g_{1D}\frac{\omega _x}{\sqrt{2}}\right) ^{2/3}\left( \left( \frac{N}{2}+\frac{\delta N}{2}\right) ^{2/3}-\left( \frac{N}{2}-\frac{\delta N}{2}\right) ^{2/3}\right) . \end{aligned}$$
A Taylor expansion to second order in \(\delta N/N\) leads to the following phase shift formula
$$\begin{aligned} \Delta \phi _{MF}^{TF}&=\frac{2T}{\hbar }\frac{g_{1D}}{2R_{TF}}\delta N + \mathcal {O}(\delta N/N)^3. \end{aligned}$$
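Evaluating the Thomas–Fermi expression and its linearised form for the parameters quoted in the text (\(\omega _{\perp }=2\pi \times 50\) Hz, \(\omega _x=2\pi \times 1\) Hz, \(N=6\cdot 10^4\), \(a_s=a_0\), \(T=10\) ms) gives an idea of the expected size of the dephasing; the sketch below is not the numerical solver used for Fig. 6.

```python
import numpy as np

# Mean-field dephasing from the Thomas-Fermi expression above and its linearised
# form, for the parameters quoted in the text (omega_perp = 2*pi x 50 Hz,
# omega_x = 2*pi x 1 Hz, N = 6e4, a_s = a_0, T = 10 ms). A sketch, not the solver.
hbar, m = 1.0546e-34, 1.4432e-25
a0 = 5.3e-11
omega_perp, omega_x = 2 * np.pi * 50, 2 * np.pi * 1
g1d = 2 * hbar * a0 * omega_perp
N, T = 6e4, 10e-3

def mu_tf(n_arm):
    """1D Thomas-Fermi chemical potential of one arm (with the 1/sqrt(2) factor)."""
    return (3 * np.sqrt(m) / 2 ** 2.5 * g1d * omega_x / np.sqrt(2)) ** (2 / 3) \
        * n_arm ** (2 / 3)

r_tf = (3 * N * g1d / (2 * m * omega_x ** 2)) ** (1 / 3)
for frac in (0.01, 0.05, 0.10):
    dN = frac * N
    dphi_full = 2 * T / hbar * (mu_tf(N / 2 + dN / 2) - mu_tf(N / 2 - dN / 2))
    dphi_lin = 2 * T / hbar * g1d / (2 * r_tf) * dN
    print(f"dN/N = {frac:.2f}:  full = {dphi_full * 1e3:6.2f} mrad, "
          f"linearised = {dphi_lin * 1e3:6.2f} mrad")
```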
One retrieves the same expression for the dephasing, up to second order in \(\delta N / N\), if one uses the chemical potential of a uniform BEC
$$\begin{aligned} \mu ^{uniform}_{arm1/arm2}=\left( \frac{N}{2}\pm \frac{\delta N}{2}\right) \frac{g_{1D}}{2L}, \end{aligned}$$
where L denotes the half-width of the BEC, simply by setting \(L=R_{TF}\), as assumed in Ref.48. In Fig. 6a, the mean-field shift is plotted as a function of the atom number imbalance for both the numerical solution of the Gross–Pitaevskii equation and the analytical model based on the Thomas–Fermi approximation. It is worth noting that the dephasing is accompanied by a loss of contrast consistent with previous theoretical studies54. We performed a numerical optimisation to find the maximal particle number N up to which the contrast remains at \(C>99\,\%\), which is \(N\le 6\cdot 10^{4}\) in this case. In Fig. 6b, the absolute value of the difference between the numerical and the analytical solutions of \(\Delta \phi\) is plotted. For an imbalance of the order of \(10\,\%\), we observe an agreement at the mrad level. Several sources can explain the residual phase difference. First, the assumptions of the Thomas–Fermi approximation at the heart of the analytical method are not necessarily satisfied here with \(N\le 6\cdot 10^{4}\). Moreover, the analytical treatment neglects all time-dependent effects occurring during the light-atom interactions at the mirror and beam-splitter pulses. These effects, combined with a non-vanishing mean field, lead to additional phase shifts and shape deformations of the wave functions that are absent from a simple Thomas–Fermi treatment.
(a) Mean-field-driven phase shifts as a function of the particle imbalance \(\delta N\). The analytic solution is given by Eq. (24) (blue line). For the numerical solution (orange dots), we modelled the imbalance by considering a first \(\frac{\pi }{2}\) beam-splitter with a finite fidelity. The Gaussian splitter and mirror pulses have peak Rabi frequencies of \(\Omega =1.0573\) \(\omega _r\) with pulse lengths of \(\tau _{bs}=25\) \(\upmu\)s and \(\tau _{m}=50\) \(\upmu\)s, respectively. The transverse trapping frequency is realised with an angular frequency of \(2\pi \times 50\) Hz and the initial trap frequency in which the BEC is condensed is set to \(2\pi \times 1\) Hz with a number of atoms of \(N=6\cdot 10^4\) and a scattering length of \(a_s=a_0\) with \(a_0\) being the Bohr radius. (b) Absolute value of the phase difference between the analytic solution and the numerical simulation. Discrepancies with respect to the analytical model stem from the assumptions of the Thomas–Fermi approximation not being satisfied here (see main text).
Scalability and numerics
Numerical accuracy and precision
To gain a better understanding of the numerical accuracy of the simulations, we plot in Fig. 7 the dependence of the phase shift \(|\Delta \phi |\) on the momentum width of the atomic sample \(\sigma _p\) for a \(2\hbar k\) Bragg Mach–Zehnder interferometer. We study two realisations which differ in the peak Rabi frequency and the corresponding pulse lengths of the beam-splitter and mirror pulses. In both cases we observe a similar characteristic scaling of \(|\Delta \phi |\) with \(\sigma _p\): going to smaller initial momentum widths systematically decreases the phase shift until it reaches a plateau of \(1\times 10^{-7}\) rad for \(\Omega =1.06\omega _r\) and \(2.5\times 10^{-14}\) rad for \(\Omega =0.53\omega _r\).
This qualitative behaviour can be explained by considering the effect of parasitic trajectories. In Fig. 2 it is clearly visible that after the time of flight of \(\tau _{ToF}=2T\), there is no clear separation between the parasitic trajectories and the main ports of the Mach–Zehnder interferometer, which leads to interference between them. We choose the integration borders by setting up a symmetric interval around the peak value of each of the ports (see Eq. (3)), ensuring a minimal influence of the parasitic atoms on the interferometric ports. Nevertheless, the interference between the interferometric ports and the parasitic trajectories modifies the measured particle number and therefore also the inferred relative phase. This effect decreases with smaller initial momentum width since fewer atoms populate the parasitic trajectories overlapping with the main ports, which explains the decrease of the relative phase \(|\Delta \phi |\) from \(\sigma _p=0.1\hbar k\) to \(\sigma _p=0.05 \hbar k\) (\(\Omega =1.06\) \(\omega _r\)) and \(\sigma _p=0.03\hbar k\) (\(\Omega =0.53\) \(\omega _r\)). Another important contribution to the relative phase \(|\Delta \phi |\), which is not captured by Feynman's path integral approach25,26, is the diffraction phase, which is fundamentally linked to the excitation of non-resonant momentum states37,38. Using smaller Rabi frequencies leads to a reduced population of non-resonant momentum states (after a beam splitter pulse we find \(P(-2\hbar k,\;\Omega =1.06\,\omega _r)+P(4\hbar k,\;\Omega =1.06\,\omega _r)=1.3\times 10^{-7}\) and \(P(-2\hbar k,\;\Omega =0.53\,\omega _r)+P(4\hbar k,\;\Omega =0.53\,\omega _r)=1.9\times 10^{-18}\)) and therefore to a reduced diffraction phase, which explains why operating a Mach–Zehnder interferometer at \(\Omega =0.53\) \(\omega _r\) leads to a much smaller residual phase shift than at \(\Omega =1.06\) \(\omega _r\).
These results indicate that our simulator reaches a relative phase accuracy at least at the level of \(2.5\times 10^{-14}\) rad. It is worth mentioning that the numerical parameters chosen to reach this performance are very accessible on modestly powerful desktop computers. The computation took \(\tau _{CPUtime}=12.7\) s on an Intel Xeon X5670 processor using four cores (2.93 GHz, 12 MB last-level cache). Modelling precision atom interferometry problems with this method is therefore a practical, flexible and highly accurate approach. Using improved resolutions in position and time or higher-order operator-splitting schemes55 leads to even better numerical precision and accuracy.
Phase shift of a \(2\hbar k\) Mach–Zehnder interferometer as a function of the initial momentum width of an atomic sample. We evaluate the phase shift for pulse lengths of \(\tau _{bs}=25\) \(\upmu\)s and \(\tau _{m}=50\) \(\upmu\)s (blue dots) and for \(\tau _{bs}=50\) \(\upmu\)s and \(\tau _{m}=100\) \(\upmu\)s (red dots), using peak Rabi frequencies of \(\Omega _{25 \upmu \mathrm{s}}=1.06\) \(\omega _r\) and \(\Omega _{50 \upmu \mathrm{s}}=0.53\) \(\omega _r\). The dashed lines are a guide to the eye. We find a systematic decreasing behaviour of the relative phase offset \(|\Delta \phi |\) starting from an initial momentum width of \(\sigma _p=0.1\hbar k\) (far right) to \(\sigma _p=0.05 \hbar k\) (red dots) and \(\sigma _p=0.03\hbar k\) (blue dots). Reaching those critical initial momentum widths both curves show fixed relative phase offsets \(|\Delta \phi |\), which in the case of the interferometer with smaller Rabi frequency of \(\Omega =0.53\omega _r\) (red dots) reaches a value of \(2.5\times 10^{-14}\) rad (see text). The numerical simulations were performed with 65,536 grid points, an interaction time step of \(dt_{int}=1\) \(\upmu\)s and a free evolution time step of \(dt_{free}=10\) \(\upmu\)s, leading to a computational time of \(\tau _{CPUtime}=12.69\) s on four cores of an Intel Xeon X5670 processor with 2.93 GHz frequency and 12 MB of cache.
Numerical convergence
To analyse the numerical convergence, as well as the associated numerical precision and accuracy, of the split-operator method applied to the previously presented systems, we simulate three different interferometer settings on different space and time grids: a \(2\hbar k\) Mach–Zehnder interferometer, a \(2\hbar k\) Mach–Zehnder interferometer in a waveguide and, as a last example, a \((2+2)\hbar k\) Bragg+Bloch Mach–Zehnder interferometer, in order to quantify the necessary resolutions and grid sizes. In Figs. 8 and 9 we extract the relative phase for successively decreasing spatial and temporal steps and compare it to reference simulations with sufficiently fine resolutions, \(\mathrm {d}t=4\) ns and \(\mathrm {d}x=0.01\lambda\), by plotting the absolute value of the difference of the relative phases, i.e. \(|\Delta \phi -\Delta \phi (\mathrm {d}t=4\;\mathrm{ns})|\) and \(|\Delta \phi -\Delta \phi (\mathrm {d}x=0.01\lambda )|\). The choice of steps fulfilling the necessary resolution is motivated in the following by relating them to the physical quantities of the problem (optical lattice and atomic wave packet).
Numerical convergence analysis of three different interferometer realisations given by a \(2\hbar k\) Mach–Zehnder interferometer (blue and orange dots corresponding to the \(0\hbar k\) and \(2\hbar k\) ports), a \(2\hbar k\) Mach–Zehnder interferometer in a waveguide (green dots) and a \((2+2)\hbar k\) Bragg+Bloch Mach–Zehnder interferometer (red dots). We analyse the numerical convergence behaviour when changing the position step \(\mathrm {d}x\) expressed in units of \(\lambda =780\) nm of the numerical simulation using the third-order split-operator method with temporal steps of \(\mathrm {d}t_{int}=1\,\upmu \mathrm{s}\) and \(\mathrm {d}t_{free}=10\,\upmu \mathrm{s}\). The Gaussian splitter and mirror pulses have peak Rabi frequencies of \(\Omega =1.0573\) \(\omega _r\) with pulse lengths of \(\tau _{bs}=25\) \(\upmu\)s and \(\tau _{m}=50\) \(\upmu\)s, respectively. The Bloch sequence is implemented with an adiabatic loading time of \(\tau _{load}=0.5\) ms, a frequency chirp time of \(\tau _{chirp}=0.5\) ms during which the momentum transfer occurs and an adiabatic unloading time of \(\tau _{unload}=0.5\) ms. The Rabi frequency of the Bloch lattice is \(\Omega =4\) \(\omega _r\). In the case of the \(2\hbar k\) Mach–Zehnder and \((2+2)\hbar k\) Bragg+Bloch Mach–Zehnder interferometers, we use an initial momentum width of the atomic sample of \(\sigma _p=0.01\) \(\hbar\)k. In the case of the \(2\hbar k\) Mach–Zehnder interferometer in a waveguide the transverse trapping frequency is realised with an angular frequency of \(2\pi \times 50\) Hz and the initial trap frequency in which the BEC is condensed is set to \(2\pi \times 1\) Hz with a number of atoms of \(N=6\cdot 10^4\) and a scattering length of \(a_s=a_0\) with \(a_0\) being the Bohr radius.
Numerical convergence analysis of three different interferometer realisations given by a \(2\hbar k\) Mach–Zehnder interferometer (blue and orange dots corresponding to the \(0\hbar k\) and \(2\hbar k\) ports), a \(2\hbar k\) Mach–Zehnder interferometer in a waveguide (green dots) and a \((2+2)\hbar k\) Bragg+Bloch Mach–Zehnder interferometer (red dots). We analyse the numerical convergence behaviour when changing the temporal step \(\mathrm {d}t\) of the numerical simulation using the third-order split-operator method with a spatial step of \(\mathrm {d}x=4\times 10^{-2}\;\lambda\). The same parameters as Fig. 8 are used.
The fast Fourier transform (FFT) efficiently switches between momentum and position representations to apply kinetic and potential propagators. The corresponding position and momentum grids are defined by the number of grid points \(N_{grid}\) and the total size of the position grid \(\Delta x\) as
$$\begin{aligned} \mathrm {d}x = \frac{\Delta x}{N_{grid}-1}, \;\;\;\mathrm {d}p = \frac{2 \pi \hbar }{\Delta x}\;\;\; \mathrm {and}\;\;\; \Delta p = \frac{2 \pi \hbar }{\mathrm {d}x}, \end{aligned}$$
where \(\mathrm {d}p\) and \(\mathrm {d}x\) are the steps in momentum and position, respectively, and \(\Delta p\) the total size of the momentum grid.
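In practice these grid relations can be set up directly with NumPy, as in the sketch below; the total grid size \(\Delta x=2\) mm is an assumed example, while the 65,536 grid points follow the value quoted for Fig. 7.

```python
import numpy as np

# Conjugate position / momentum grids used by the FFT, following the relations above.
# Delta_x = 2 mm is an assumed example; N_grid = 65,536 follows the value of Fig. 7.
hbar, lam = 1.0546e-34, 780e-9
k = 2 * np.pi / lam

N_grid = 2 ** 16
delta_x = 2e-3                          # total size of the position grid (assumed)
dx = delta_x / (N_grid - 1)             # position step
dp = 2 * np.pi * hbar / delta_x         # momentum step
delta_p = 2 * np.pi * hbar / dx         # total size of the momentum grid

print(f"dx = {dx / lam:.4f} lambda")
print(f"dp = {dp / (hbar * k):.2e} hbar k, Delta p = {delta_p / (hbar * k):.1f} hbar k")
```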
To resolve a finite momentum width of the atomic cloud we are restricted to (\(\lambda =780\) nm)
$$\begin{aligned} \mathrm {d}p \ll \hbar k \Leftrightarrow \Delta x \gg \lambda , \end{aligned}$$
which sets a bound to the size of the position grid. Finally, \(\Delta x\) has to be chosen according to the maximal separation of the atomic clouds \(\Delta x_{sep}\). With this we find
$$\begin{aligned} \Delta x \gtrsim \Delta x_{sep} \gg \lambda . \end{aligned}$$
To include all momentum orders necessary to simulate the considered atom interferometric sequences, we are naturally bound by
$$\begin{aligned} \Delta p = \frac{2 \pi \hbar }{\mathrm {d}x} \Leftrightarrow \frac{\Delta p}{\hbar k}=\frac{\lambda }{\mathrm {d}x}. \end{aligned}$$
Hence, we find that
$$\begin{aligned} \frac{\Delta p}{\hbar k}=\frac{\lambda }{\mathrm {d}x} \gg 1 \Leftrightarrow \mathrm {d}x \ll \lambda , \end{aligned}$$
which is the natural condition imposed by the necessity of resolving the atomic dynamics in the optical lattice nodes and anti-nodes of the Bragg and Bloch beams.
The absence of data points for the \((2+2)\hbar k\) Bragg+Bloch Mach–Zehnder interferometer in Fig. 8 illustrates the limits given by Eq. (31). Choosing position steps of \(\mathrm {d}x=0.07\,\lambda\) leads to a maximal computed momentum of \(\pm \,7.1\) \(\hbar k\), which makes it impossible to evaluate probabilities at \(8\hbar k\). For this specific interferometer, however, it is critical to resolve those momenta, since they are residually populated during the atom-light interaction processes. Imposing a position step roughly one order of magnitude smaller than the wavelength (\(\mathrm {d}x \lesssim 0.06\) \(\lambda\)) results in a reasonable momentum truncation and resolution of the light potential, and therefore in the convergence of the numerical routine. Additionally, we observe that at spatial resolutions of \(\mathrm {d}x=0.01\lambda\), all three studied cases reach satisfactory numerical accuracy and precision, which in the worst case, the \((2+2)\hbar k\) Bragg+Bloch Mach–Zehnder interferometer, is approximately \(1\times 10^{-9}\) rad.
The typical time scales we need to consider are set on the one hand by the velocities of the optical lattice beams and the atomic cloud, and on the other hand by the duration of the atom-light interaction \(\tau\). The beams, as well as the atomic cloud, move with velocities which are proportional to the recoil velocity \(v_r\). Given that we want to drive Bragg processes of the order of n, we find the following bound on the time step \(\mathrm {d}t\)
$$\begin{aligned} \mathrm {d}t \ll \frac{\lambda }{nv_r}\approx \frac{100\, \upmu \mathrm{s}}{n}. \end{aligned}$$
The duration of a pulse in the quasi-Bragg regime is typically adapted to the momentum width of the atoms due to the spectral properties of the finite pulse. Here, we assume a lower bound of \(\tau =10\,\upmu\)s, which leads to \(\mathrm {d}t < \tau\). It is worth noting that this time step is only necessary during the atom-light interaction. One can simulate the free evolution between the pulses with a much larger time step (without external and interaction potentials a single step suffices) or by using scaling techniques34,56,57,58,59.
Figure 9 shows that, depending on the specific form of the simulated light potential or the inclusion of two-particle interactions, we observe a characteristic convergence behaviour which we directly connect to the propagation error of the split-operator routine55,60. For the \(2\hbar k\) Mach–Zehnder interferometer one observes the previously found level of convergence around \(1\times 10^{-13}\) rad (see Fig. 7). Interestingly, the \(2 \hbar k\) port reaches a level of \(1\times 10^{-13}\) rad at a time step of \(\mathrm {d}t\approx 0.1\) \(\upmu\)s, whereas the \(0\hbar k\) port already converges to that level at a time step of \(\mathrm {d}t=1\) \(\upmu\)s. Note that the diffraction phase, and therefore the relative phase, vanishes in the slow port but reaches a finite value of approximately \(1\times 10^{-7}\) rad in the fast port37,38 (see Fig. 7). The next analysed case is the \(2\hbar k\) Mach–Zehnder interferometer in a waveguide where, by introducing the non-linear interaction term (see Eq. (20)), one observes a more demanding convergence behaviour, leading to an initial precision of 1 \(\upmu\)rad at \(\mathrm {d}t=1\) \(\upmu\)s, which converges to \(1\times 10^{-10}\) rad at a time step of \(\mathrm {d}t\approx 10^{-8}\) s. Introducing additional Bloch oscillations shifts the convergence curve again by two orders of magnitude at the minimal time step of \(\mathrm {d}t=1\;\upmu\)s and reaches a level of approximately 1 \(\upmu\)rad at \(\mathrm {d}t=10^{-8}\) s. Note that the precision and accuracy of the split-operator algorithm strongly depend on the simulated potential60 and that in the case of a Bloch oscillation the optical potential linearly changes its velocity, whereas in the case of a Bragg transition the optical lattice moves with a constant velocity during the atom-light interaction process. Additionally, the atom-light interaction time of a Bloch oscillation is typically one order of magnitude larger than that of a Bragg transition, which explains the need for finer temporal grids in order to achieve reasonable precision and accuracy.
Time complexity analysis
In this section, we compare the time complexity behaviour of the commonly-used method of treating the beam splitter and mirror dynamics given by the ODE approach with the PDE formulation presented in this paper, based on a position-space approach to the Schrödinger equation. To assess the time complexity of the ODE treatment, we re-derive it from the Schrödinger equation
$$\begin{aligned} i \hbar \partial _t \psi (x, t)&= \left( \frac{-\hbar ^2}{2m}\frac{\partial ^2}{\partial x^2} + 2 \hbar \Omega \cos ^2(kx) \right) \psi (x, t). \end{aligned}$$
We decompose the wave function in a momentum state basis as done in Refs.2,3,4
$$\begin{aligned} \psi (x, t)&=\sum _{j, \delta } g_{j+\delta }(t)\,e^{i(j+\delta )kx}, \end{aligned}$$
where j denotes the momentum orders considered and \(\delta\) the discrete representation of momenta in the interval \([k_{j}-k/2,k_{j}+k/2]\), which captures the finite momentum width of the atoms around each momentum class \(k_{j}\). Writing \(\cos ^2(kx)\) in terms of its two exponential components, one obtains
$$\begin{aligned} i \hbar \dot{g}_{j+\delta }(t)&=\hbar ( (j+\delta )^2 \omega _{r}+ \Omega )g_{j+\delta }(t)+\frac{\hbar \Omega }{2} (g_{j+\delta +2}(t) + g_{j+\delta -2}(t)), \end{aligned}$$
which is a set of \(N_{eq}\) coupled ordinary differential equations. The number \(N_{eq}\) of equations to solve is equal to \(N_{j} N_{\delta }\), set by the truncation condition restricting the solution space to \(N_{j}\) momentum classes, each discretised into \(N_{\delta }\) sub-components. Using standard solvers for such systems, such as Runge–Kutta, multistep or Bulirsch–Stoer methods61, we generally need to evaluate the right-hand side of the system of equations over several iterations. With \(N_{eq}\) differential equations, each having only two coupling terms, one finds a time complexity of \(\mathcal {O}(N_{eq})\).
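A minimal sketch of this plane-wave ODE system (Eq. (35) with \(\delta =0\)), integrated with SciPy for a constant Rabi frequency, is given below; the truncation at \(\pm 16\hbar k\), the pulse parameters and the solver tolerances are illustrative assumptions. For the short constant pulse chosen here the populations approach the Raman-Nath Bessel result discussed earlier.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the plane-wave ODE system (delta = 0): even momentum orders
# j = -16, ..., 16 coupled by a constant Rabi frequency. Illustrative parameters;
# for this short constant pulse the populations approach the Bessel result.
hbar, m, lam = 1.0546e-34, 1.4432e-25, 780e-9
k = 2 * np.pi / lam
omega_r = hbar * k ** 2 / (2 * m)
omega = 50 * omega_r

orders = np.arange(-16, 17, 2)           # momentum orders j in units of hbar k

def rhs(t, g):
    dg = np.zeros_like(g, dtype=complex)
    for i, j in enumerate(orders):
        coupling = 0.0
        if i + 1 < len(orders):
            coupling += g[i + 1]
        if i - 1 >= 0:
            coupling += g[i - 1]
        dg[i] = -1j * ((j ** 2 * omega_r + omega) * g[i] + 0.5 * omega * coupling)
    return dg

g0 = np.zeros(len(orders), dtype=complex)
g0[len(orders) // 2] = 1.0               # start in |0 hbar k>
sol = solve_ivp(rhs, (0.0, 1e-6), g0, rtol=1e-10, atol=1e-12)
pops = np.abs(sol.y[:, -1]) ** 2
for j, p_ in zip(orders, pops):
    if p_ > 1e-3:
        print(f"|{j:+d} hbar k> : {p_:.3f}")
```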
In Fig. 10 we present a visualisation of different possible momentum couplings starting from \(0\hbar k\) to other momentum components. One starts with only three momentum states and two coupling elements in Fig. 10a corresponding to a vanishing momentum width (\(\delta =0\) in Eq. (35)). If the momentum width is introduced (Fig. 10b), the number of coupling elements increases since every \(\delta\) sub-momentum class of j is connected to the same sub-momentum class of \(j-2\) and \(j+2\) as suggested by Eq. (35). In order to reduce visual complexity we are only showing couplings that start from the \(0\hbar k\) wavepacket, while dropping coupling elements starting from \(\pm 2\hbar k\). We also fixed the number of momentum states per integer momentum class \(k_j\) to three, which in a realistic example is at least an order of magnitude larger.
Visualisation of different momentum couplings from the \(0\hbar k\) momentum wavepacket corresponding to different levels of complexity. (a) Zero momentum width and two coupling elements from \(0\hbar k\) to \(\pm 2\hbar k\). (b) Finite momentum widths with coupling elements for each momentum component in the \(0\hbar k\) wavepacket to the corresponding momentum component in the \(\pm 2\hbar k\) wavepackets with a momentum difference for each transition of \(\Delta p=2\hbar k\). The different colours indicate the separate momentum subspaces in which transitions can occur. (c) Finite momentum widths with multiple possible coupling elements from the \(0\hbar k\) wavepacket to the \(\pm 2\hbar k\) wavepacket with a broadening of the possible momentum difference \(\Delta p\). (d) Finite momentum widths with higher order coupling elements from the \(0\hbar k\) wavepacket to momentum components of the \(\pm 4\hbar k,\,\pm 6\hbar k,\dots\) wavepackets.
In a next step, the coupling terms are calculated for more general potentials with time and space-dependent Rabi frequencies \(\Omega (x,t)\) and wave vectors k(x, t). For this purpose, the momentum-space representation of the Schrödinger equation is more appropriate and can be written for the Fourier transform of the atomic wave function g(p, t)
$$\begin{aligned} i \hbar \dot{g}(p,t) = \frac{p^2}{2m}g(p,t)+V(p,t) *g(p,t), \end{aligned}$$
$$\begin{aligned} V(p,t) *g(p,t) := \int \mathrm {d}x \;\frac{e^{-i\frac{p}{\hbar }x}}{\sqrt{2 \pi \hbar }}V(x, t)\psi (x, t). \end{aligned}$$
Expressing the wave function in momentum space gives
$$\begin{aligned} V(p) *g(p,t) = \frac{1}{2\pi \hbar }\int \mathrm {d}p'&\underbrace{\int \mathrm {d}x\; e^{ix\frac{(p'-p)}{\hbar }} V(x,t)}_{=:F(p,p',t)\in \mathbb {C}} g(p',t). \end{aligned}$$
Discretising \(p \rightarrow (j+\delta )\hbar k\) and \(p' \rightarrow (l+\gamma )\hbar k\), one finds
$$\begin{aligned} V(p) *g(p,t) \approx \frac{1}{2\pi \hbar } \sum _{l,\gamma } F( (j+\delta )\hbar k,(l+\gamma )\hbar k,t)g_{l+\gamma }(t), \end{aligned}$$
where l and \(\gamma\) span the same indices ensembles as j and \(\delta\). The new equations to solve read
$$\begin{aligned} i \hbar \dot{g}_{j+\delta }(t) \approx \frac{((j+\delta )\hbar k)^2}{2m}g_{j+\delta }(t) + \frac{1}{2\pi \hbar } \sum _{l, \gamma } F( (j+\delta )\hbar k,(l+\gamma )\hbar k,t)g_{l+\gamma }(t), \end{aligned}$$
which yields the necessary momentum couplings for an arbitrary potential V(x, t). In the worst case, the sum in Eq. (40) runs over \(N_{eq}\) nonzero entries (\(N_{l} N_{\gamma }=N_{j} N_{\delta }=N_{eq}\)) which leads to a time complexity of \(\mathcal {O}(N_{eq}^2)\). This, however, is an extreme example that contrasts with commonly operated precision interferometric experiments since it would correspond to white light with speckle noise. Realistic scenarios rather involve time-dependent potentials with a smaller number of momentum couplings, i.e. \(N_{eq} \gg \# coupling\; terms \gtrsim 2\) as would be the case in Fig. 10c. To evaluate the momentum couplings, it is necessary to calculate the integral \(F(p,p',t)\) at each time step using the FFT, which leads to a final time complexity class for solving the ODE of \(\mathcal {O}(N_{eq}\log N_{eq})\).
The next important generalisation aims to include the effect of the two-body collisions analysed in the mean-field approximation, i.e. \(H_{int}=g_{1D}|\psi (x, t)|^2\). In this case, the equation describing the dynamics of the system and the couplings can be written as
$$\begin{aligned} i \hbar \dot{g}_{j+\delta }(t)&=\hbar ( (j+\delta )^2 \omega _{r} +\Omega )g_{j+\delta }(t)+\frac{\hbar \Omega }{2}(g_{j+\delta +2}(t) + g_{j+\delta -2}(t) ) \end{aligned}$$
$$\begin{aligned}&\quad + g_{1D} \left( \sum _{l,\gamma ,o,\nu }g^*_{l+\gamma }(t)g_{2o-l+2\nu -\gamma }(t) \right) g_{j+\delta }(t), \end{aligned}$$
where \(\nu\) and o are running indices over the same values as l and \(\gamma\). One ends up with \(N_{eq}\) differential equations where each has more than \(N_{eq}^2\) coupling terms, and finds a time complexity class of \(\mathcal {O}(N_{eq}^3)\). This shows the growth in numerical operations of the ODE treatment as reflected by the number of couplings in Fig. 10d.
We analyse now the time complexity class for the PDE approach, using the split-operator method24. Based on the application of the FFT, it is known that the complexity class of this method is scaling as \(\mathcal {O}(N_{grid}\log N_{grid})\), where \(N_{grid}\) is the number of grid points in the position or momentum representations. Since the discretisation of the problem for the ODE and PDE (Schrödinger equation) approaches is roughly the same (\(N_{eq} \approx N_{grid}\)), a direct comparison between the two treatments is possible.
The time complexity analysis is summarised in Table 1. It shows that the standard ODE approach is better suited only to the case of ideal plane-wave light fields. In every realistic case where the light field is allowed to be spatially inhomogeneous, the number of couplings increases and it is preferable to employ the PDE approach with its scaling of \(\mathcal {O}(N_{grid}\log N_{grid})\), independent of any further complexity to be modelled.
Table 1 Comparison of the different time complexity classes of the commonly-used ODE treatment with the position-space approach developed in this work (PDE-based). Including more and more realistic features of the atom-light system leads to an unfavourable scaling of the ODE time complexity. The PDE formulation, however, routinely scales with \(\mathcal {O}(N_{grid}\log N_{grid})\).
In this paper, we have shown that the position-space representation of light-pulse beam splitters is quite powerful for tackling realistic beam profiles in interaction with cold-atom ensembles. It was successfully applied across several relevant regimes, geometries and applications, and we showed its particular fitness for treating metrologically relevant investigations based on atomic sensors. Its high numerical precision and scalability make it a flexible tool of choice to design or interpret atom interferometric measurements without having to change the theoretical framework for every beam geometry, dimensionality, pulse length or atomic ensemble property. We anticipate the possibility of accurately implementing this approach to analyse important systematic effects in the field of precision light-pulse matter-wave interferometry, such as the ones related to wavefront aberrations, large momentum transfer, and inhomogeneity and fluctuations of the Rabi pulses. Finally, we would like to highlight the possibility of generalising this method to Raman or single-photon transitions, provided the change of the internal-state degree of freedom during diffraction is accounted for.
Berman, P. R. Atom Interferometry (Academic Press, London, 1997).
Meystre, P. Atom Optics Vol. 33 (Springer Science & Business Media, New York, 2001).
Müller, H., Chiow, S.-W. & Chu, S. Atom-wave diffraction between the Raman-Nath and the Bragg regime: Effective Rabi frequency, losses, and phase shifts. Phys. Rev. A 77, 023609. https://doi.org/10.1103/PhysRevA.77.023609 (2008).
Szigeti, S. S., Debs, J. E., Hope, J. J., Robins, N. P. & Close, J. D. Why momentum width matters for atom interferometry with Bragg pulses. N. J. Phys. 14, 023009. https://doi.org/10.1088/1367-2630/14/2/023009 (2012).
Siemß, J.-N. et al. Analytic theory for Bragg atom interferometry based on the adiabatic theorem. Phys. Rev. A 102, 033709. https://doi.org/10.1103/PhysRevA.102.033709 (2020).
Tannor, D. J. Introduction to Quantum Mechanics (University Science Books, Mill Valley, 2018).
Simula, T. P., Muradyan, A. & Mølmer, K. Atomic diffraction in counterpropagating Gaussian pulses of laser light. Phys. Rev. A 76, 063619. https://doi.org/10.1103/PhysRevA.76.063619 (2007).
Stickney, J. A., Kafle, R. P., Anderson, D. Z. & Zozulya, A. A. Theoretical analysis of a single- and double-reflection atom interferometer in a weakly confining magnetic trap. Phys. Rev. A 77, 043604. https://doi.org/10.1103/PhysRevA.77.043604 (2008).
Liu, C.-N., Krishna, G. G., Umetsu, M. & Watanabe, S. Numerical investigation of contrast degradation of Bose–Einstein-condensate interferometers. Phys. Rev. A 79, 013606. https://doi.org/10.1103/PhysRevA.79.013606 (2009).
Stuckenberg, F., Marojević, Z. & Rosskamp, J. H. Atus2. https://github.com/GPNUM/atus2/tree/master/doc. Accessed 12 Dec 2019.
Blakie, P. B. & Ballagh, R. J. Mean-field treatment of Bragg scattering from a Bose–Einstein condensate. J. Phys. B Atom. Mol. Opt. Phys. 33, 3961–3982. https://doi.org/10.1088/0953-4075/33/19/311 (2000).
Steck, D. A. Rubidium 87 D Line Data. http://steck.us/alkalidata (Revision 2.2.1, 21 November 2019).
Louchet-Chauvet, A. et al. The influence of transverse motion within an atomic gravimeter. N. J. Phys. 13, 065025. https://doi.org/10.1088/1367-2630/13/6/065025 (2011).
Schkolnik, V., Leykauf, B., Hauth, M., Freier, C. & Peters, A. The effect of wavefront aberrations in atom interferometry. Appl. Phys. B 120, 311–316. https://doi.org/10.1007/s00340-015-6138-5 (2015).
Zhou, M.-K., Luo, Q., Chen, L.-L., Duan, X.-C. & Hu, Z.-K. Observing the effect of wave-front aberrations in an atom interferometer by modulating the diameter of Raman beams. Phys. Rev. A 93, 043610. https://doi.org/10.1103/PhysRevA.93.043610 (2016).
Bade, S., Djadaojee, L., Andia, M., Cladé, P. & Guellati-Khelifa, S. Observation of extra photon recoil in a distorted optical field. Phys. Rev. Lett. 121, 073603. https://doi.org/10.1103/PhysRevLett.121.073603 (2018).
Wicht, A., Hensley, J. M., Sarajlic, E. & Chu, S. A preliminary measurement of the fine structure constant based on atom interferometry. Phys. Scr. T102, 82. https://doi.org/10.1238/physica.topical.102a00082 (2002).
Wicht, A., Sarajlic, E., Hensley, J. M. & Chu, S. Phase shifts in precision atom interferometry due to the localization of atoms and optical fields. Phys. Rev. A 72, 023602. https://doi.org/10.1103/PhysRevA.72.023602 (2005).
Cladé, P. et al. Precise measurement of \(h / {m}_{\rm Rb}\) using Bloch oscillations in a vertical optical lattice: Determination of the fine-structure constant. Phys. Rev. A 74, 052109. https://doi.org/10.1103/PhysRevA.74.052109 (2006).
Küber, J., Schmaltz, F. & Birkl, G. Experimental realization of double Bragg diffraction: robust beamsplitters, mirrors, and interferometers for Bose–Einstein condensates (2016). arXiv:1603.08826.
Ahlers, H. et al. Double Bragg interferometry. Phys. Rev. Lett. 116, 173601. https://doi.org/10.1103/PhysRevLett.116.173601 (2016).
Giese, E., Roura, A., Tackmann, G., Rasel, E. M. & Schleich, W. P. Double Bragg diffraction: A tool for atom optics. Phys. Rev. A 88, 053608. https://doi.org/10.1103/PhysRevA.88.053608 (2013).
Gebbe, M. et al. Twin-lattice atom interferometry (2019). arXiv:1907.08416.
Feit, M., Fleck, J. & Steiger, A. Solution of the Schrödinger equation by a spectral method. J. Comput. Phys. 47, 412–433. https://doi.org/10.1016/0021-9991(82)90091-2 (1982).
Hogan, J., Johnson, D. & Kasevich, M. Light-pulse atom interferometry. Proc. Int. School Phys. Enrico Fermi 168. https://doi.org/10.3254/978-1-58603-990-5-411 (2008).
Storey, P. & Cohen-Tannoudji, C. The Feynman path integral approach to atomic interferometry. A tutorial. J. Phys. II(4), 1999–2027. https://doi.org/10.1051/jp2:1994103 (1994).
Gupta, S., Dieckmann, K., Hadzibabic, Z. & Pritchard, D. E. Contrast interferometry using Bose–Einstein condensates to measure \(h/m\) and \(\alpha\). Phys. Rev. Lett. 89, 140401. https://doi.org/10.1103/PhysRevLett.89.140401 (2002).
Keller, C. et al. Adiabatic following in standing-wave diffraction of atoms. Appl. Phys. B 69, 303–309. https://doi.org/10.1007/s003400050810 (1999).
Giltner, D. M., McGowan, R. W. & Lee, S. A. Theoretical and experimental study of the Bragg scattering of atoms from a standing light wave. Phys. Rev. A 52, 3966–3972. https://doi.org/10.1103/PhysRevA.52.3966 (1995).
Kovachy, T., Chiow, S.-W. & Kasevich, M. A. Adiabatic-rapid-passage multiphoton Bragg atom optics. Phys. Rev. A 86, 011606. https://doi.org/10.1103/PhysRevA.86.011606 (2012).
Chu, S., Bjorkholm, J. E., Ashkin, A., Gordon, J. P. & Hollberg, L. W. Proposal for optically cooling atoms to temperatures of the order of \(10^{-6}\) K. Opt. Lett. 11, 73–75. https://doi.org/10.1364/OL.11.000073 (1986).
Ammann, H. & Christensen, N. Delta kick cooling: a new method for cooling atoms. Phys. Rev. Lett. 78, 2088–2091. https://doi.org/10.1103/PhysRevLett.78.2088 (1997).
Morinaga, M., Bouchoule, I., Karam, J.-C. & Salomon, C. Manipulation of motional quantum states of neutral atoms. Phys. Rev. Lett. 83, 4037–4040. https://doi.org/10.1103/PhysRevLett.83.4037 (1999).
Müntinga, H. et al. Interferometry with Bose–Einstein condensates in microgravity. Phys. Rev. Lett. 110, 093602. https://doi.org/10.1103/PhysRevLett.110.093602 (2013).
Kovachy, T. et al. Matter wave lensing to picokelvin temperatures. Phys. Rev. Lett. 114, 143004. https://doi.org/10.1103/PhysRevLett.114.143004 (2015).
Corgier, R. et al. Fast manipulation of Bose–Einstein condensates with an atom chip. N. J. Phys. 20, 055002. https://doi.org/10.1088/1367-2630/aabdfc (2018).
Büchner, M. et al. Diffraction phases in atom interferometers. Phys. Rev. A 68, 013607. https://doi.org/10.1103/PhysRevA.68.013607 (2003).
Estey, B., Yu, C., Müller, H., Kuan, P.-C. & Lan, S.-Y. High-resolution atom interferometers with suppressed diffraction phases. Phys. Rev. Lett. 115, 083002. https://doi.org/10.1103/PhysRevLett.115.083002 (2015).
Aguilera, D. N. et al. STE-QUEST—test of the universality of free fall using cold atom interferometry. Class. Quantum Gravity 31, 115010. https://doi.org/10.1088/0264-9381/31/11/115010 (2014).
ADS CAS Article MATH Google Scholar
Roura, A. Circumventing Heisenberg's uncertainty principle in atom interferometry tests of the equivalence principle. Phys. Rev. Lett. 118, 160401. https://doi.org/10.1103/PhysRevLett.118.160401 (2017).
ADS MathSciNet Article PubMed Google Scholar
D'Amico, G. et al. Canceling the gravity gradient phase shift in atom interferometry. Phys. Rev. Lett. 119, 253201. https://doi.org/10.1103/PhysRevLett.119.253201 (2017).
ADS Article PubMed Google Scholar
Overstreet, C. et al. Effective inertial frame in an atom interferometric test of the equivalence principle. Phys. Rev. Lett. 120, 183604. https://doi.org/10.1103/PhysRevLett.120.183604 (2018).
Pagel, Z. et al. Symmetric Bloch oscillations of matter waves (2019). arXiv:1907.05994.
Ben Dahan, M., Peik, E., Reichel, J., Castin, Y. & Salomon, C. Bloch oscillations of atoms in an optical potential. Phys. Rev. Lett. 76, 4508–4511. https://doi.org/10.1103/PhysRevLett.76.4508 (1996).
Wilkinson, S. R., Bharucha, C. F., Madison, K. W., Niu, Q. & Raizen, M. G. Observation of atomic Wannier–Stark ladders in an accelerating optical potential. Phys. Rev. Lett. 76, 4512–4515. https://doi.org/10.1103/PhysRevLett.76.4512 (1996).
Ketterle, W. Nobel lecture: When atoms behave as waves: Bose–Einstein condensation and the atom laser. Rev. Mod. Phys. 74, 1131–1151. https://doi.org/10.1103/RevModPhys.74.1131 (2002).
Cornell, E. A. & Wieman, C. E. Nobel lecture: Bose–Einstein condensation in a dilute gas, the first 70 years and some recent experiments. Rev. Mod. Phys. 74, 875–893. https://doi.org/10.1103/RevModPhys.74.875 (2002).
Debs, J. E. et al. Cold-atom gravimetry with a Bose–Einstein condensate. Phys. Rev. A 84, 033610. https://doi.org/10.1103/PhysRevA.84.033610 (2011).
Sugarbaker, A., Dickerson, S. M., Hogan, J. M., Johnson, D. M. S. & Kasevich, M. A. Enhanced atom interferometer readout through the application of phase shear. Phys. Rev. Lett. 111, 113002. https://doi.org/10.1103/PhysRevLett.111.113002 (2013).
Pethick, C. & Smith, H. Bose–Einstein Condensation in Dilute Gases (Cambridge University Press, Cambridge, 2002).
Chin, C., Grimm, R., Julienne, P. & Tiesinga, E. Feshbach resonances in ultracold gases. Rev. Mod. Phys. 82, 1225–1286. https://doi.org/10.1103/RevModPhys.82.1225 (2010).
Olshanii, M. Atomic scattering in the presence of an external confinement and a gas of impenetrable bosons. Phys. Rev. Lett. 81, 938–941. https://doi.org/10.1103/PhysRevLett.81.938 (1998).
Salasnich, L., Parola, A. & Reatto, L. Effective wave equations for the dynamics of cigar-shaped and disk-shaped Bose condensates. Phys. Rev. A 65, 043614. https://doi.org/10.1103/PhysRevA.65.043614 (2002).
Watanabe, S., Aizawa, S. & Yamakoshi, T. Contrast oscillations of the Bose–Einstein-condensation-based atomic interferometer. Phys. Rev. A 85, 043621. https://doi.org/10.1103/PhysRevA.85.043621 (2012).
Javanainen, J. & Ruostekoski, J. Symbolic calculation in development of algorithms: split-step methods for the Gross–Pitaevskii equation. J. Phys. A Math. Gen. 39, L179–L184. https://doi.org/10.1088/0305-4470/39/12/l02 (2006).
ADS MathSciNet Article MATH Google Scholar
Castin, Y. & Dum, R. Bose–Einstein condensates in time dependent traps. Phys. Rev. Lett. 77, 5315–5319. https://doi.org/10.1103/PhysRevLett.77.5315 (1996).
Kagan, Y., Surkov, E. L. & Shlyapnikov, G. V. Evolution of a Bose gas in anisotropic time-dependent traps. Phys. Rev. A 55, R18–R21. https://doi.org/10.1103/PhysRevA.55.R18 (1997).
van Zoest, T. et al. Bose–Einstein condensation in microgravity. Science 328, 1540–1543. https://doi.org/10.1126/science.1189164 (2010).
Meister, M. et al. Efficient description of Bose–Einstein condensates in time-dependent rotating traps, Chapter 6. In Advances In Atomic, Molecular, and Optical Physics, Advances in Atomic, Molecular, and Optical Physics Vol. 66 (eds Arimondo, E. et al.) 375–438 (Academic Press, New York, 2017). https://doi.org/10.1016/bs.aamop.2017.03.006.
Bandrauk, A. D. & Shen, H. Improved exponential split operator method for solving the time-dependent Schrödinger equation. Chem. Phys. Lett. 176, 428–432. https://doi.org/10.1016/0009-2614(91)90232-X (1991).
Press, W. H., Teukolsky, S. A., Vetterling, W. T. & Flannery, B. P. Numerical Recipes in Fortran 77: The Art of Scientific Computing Vol. 2 (Cambridge University Press, Cambridge, 1992).
We thank Sven Abend, Sina Loriani, Christian Schubert for insightful discussions and Eric Charron for carefully reading the manuscript. N.G. wishes to thank Alexander D. Cronin for fruitful indications about previous publications related to our current work. We also thank Matthew Glaysher and Heather Glaysher for proofreading the manuscript. This work was funded by the Deutsche Forschungsgemeinschaft (German Research Foundation) under Germany's Excellence Strategy (EXC-2123 QuantumFrontiers Grants No. 390837967) and through CRC 1227 (DQ-mat) within Projects No. A05 and No. B07, the Verein Deutscher Ingenieure (VDI) with funds provided by the German Federal Ministry of Education and Research (BMBF) under Grant No. VDI 13N14838 (TAIOL). We furthermore acknowledge financial support from "Niedersächsisches Vorab" through "Förderung von Wissenschaft und Technik in Forschung und Lehre" for the initial funding of research in the new DLR-SI Institute and the "Quantum- and Nano Metrology (QUANOMET)" initiative within the project QT3. Further support was possible by the German Space Agency (DLR) with funds provided by the Federal Ministry of Economic Affairs and Energy (BMWi) due to an enactment of the German Bundestag under grant No. 50WM1861 (CAL) and 50WM2060 (CARIOQA).
Open Access funding enabled and organized by Projekt DEAL.
Institut für Quantenoptik, Leibniz Universität Hannover, Welfengarten 1, 30167, Hannover, Germany
Florian Fitzek, Jan-Niclas Siemß, Stefan Seckmeyer, Holger Ahlers, Ernst M. Rasel & Naceur Gaaloul
Institut für Theoretische Physik, Leibniz Universität Hannover, Appelstraße 2, 30167, Hannover, Germany
Florian Fitzek, Jan-Niclas Siemß & Klemens Hammerer
F.F. implemented the numerical model, performed all numerical simulations, and prepared the figures. J.-N.S. and H.A. helped with the interpretation of the results. H.A. and N.G. designed the research goals and directions. E.M.R. and K.H. contributed to scientific discussions. F.F. and N.G. wrote the manuscript. S.S. critically reviewed the manuscript. All authors reviewed the results and the paper and approved the final version of the manuscript.
Correspondence to Naceur Gaaloul.
The authors declare no competing interests.
Fitzek, F., Siemß, JN., Seckmeyer, S. et al. Universal atom interferometer simulation of elastic scattering processes. Sci Rep 10, 22120 (2020). https://doi.org/10.1038/s41598-020-78859-1
Computational inference of the structure and regulation of the lignin pathway in Panicum virgatum
Mojdeh Faraji1, 2,
Luis L. Fonseca1, 2,
Luis Escamilla-Treviño2, 3,
Richard A. Dixon2, 3 and
Eberhard O. Voit1, 2
Biotechnology for Biofuels 2015, 8:151
© Faraji et al. 2015
Received: 28 January 2015
Accepted: 3 September 2015
Switchgrass is a prime target for biofuel production from inedible plant parts and has been the subject of numerous investigations in recent years. Yet, one of the main obstacles to effective biofuel production remains recalcitrance. Recalcitrance emerges in part from the 3-D structure of lignin as a polymer in the secondary cell wall. Lignin limits accessibility of the sugars in the cellulose and hemicellulose polymers to enzymes and ultimately decreases ethanol yield. Monolignols, the building blocks of lignin polymers, are synthesized in the cytosol and translocated to the plant cell wall, where they undergo polymerization. The biosynthetic pathway leading to monolignols in switchgrass is not completely known, and difficulties associated with in vivo measurements of these intermediates pose a challenge for a true understanding of the functioning of the pathway.
In this study, a systems biological modeling approach is used to address this challenge and to elucidate the structure and regulation of the lignin pathway through a computational characterization of alternate candidate topologies. The analysis is based on experimental data characterizing stem and tiller tissue of four transgenic lines (knock-downs of genes coding for key enzymes in the pathway) as well as wild-type switchgrass plants. These data consist of the observed content and composition of monolignols. The possibility of a G-lignin specific metabolic channel associated with the production and degradation of coniferaldehyde is examined, and the results support previous findings from another plant species. The computational analysis suggests regulatory mechanisms of product inhibition and enzyme competition, which are well known in biochemistry, but so far had not been reported in switchgrass. By including these mechanisms, the pathway model is able to represent all observations.
The results show that the presence of the coniferaldehyde channel is necessary and that product inhibition and competition over cinnamoyl-CoA-reductase (CCR1) are essential for matching the model to observed increases in H-lignin levels in 4-coumarate:CoA-ligase (4CL) knockdowns. Moreover, competition for 4-coumarate:CoA-ligase (4CL) is essential for matching the model to observed increases in the pathway metabolites in caffeic acid O-methyltransferase (COMT) knockdowns. As far as possible, the model was validated with independent data.
Biochemical systems theory
Lignin biosynthesis
Panicum virgatum
Pathway analysis
Recalcitrance
Switchgrass
About 440 million years ago plants started to leave the oceans and inhabit land [1, 2]. The emergence of lignin during this time was an adaptation to the new environment and, specifically, a response to gravity and to limitations in accessing water. The new life also demanded plants to store water and develop systems of water transfer. The plant furthermore needed to grow in height in order to have enough access to sunlight and oxygen. Plants ultimately accomplished these multiple tasks through their xylem structures, of which lignin is a key constituent. Lignin is a phenolic polymer that is woven around and between cellulose and hemicellulose within the secondary cell wall; it provides strength and facilitates water transfer in plants. A consequence of these significant benefits for plants is that lignin is very difficult to decompose, because it is an irregular polymer that contains aromatic rings. This resistance against decomposition and digestion is known as recalcitrance. It is arguably the most important barrier to industrializing second-generation biofuels, and in particular the production of ethanol from inedible plant parts as sustainable and affordable biofuels, because recalcitrance necessitates additional treatment steps, such as hot acid or ammonia baths, to loosen the lignin structure [3–5]. These steps require time and expense and therefore reduce feasibility and cost effectiveness. Moreover, most of the pretreatments are not environmentally friendly [6, 7]. Outside the biofuel industry, recalcitrance affects forage digestibility, and progress toward reducing recalcitrance could have a significant impact on the cattle and sheep industry [8].
Numerous attempts have been made in recent times to manipulate the lignin content and composition in candidate plants for biofuel production. Many of these studies relied on the assumption that the lignin biosynthesis pathway was known. However, this is not necessarily the case, especially in understudied plant species, and the precise pathway structure is often unclear and requires dedicated research for such species. For instance, Selaginella moellendorffi and Medicago truncatula have basically similar lignin pathways, which however differ in some of their metabolic branch points as well as their enzyme properties [9–11]. Beyond the topological structure, it is not surprising that different species have evolved distinct regulatory control patterns. The immediate consequence of such discrepancies for the biofuel industry is that the direct extrapolation of knowledge, methods and treatments from one species to another is not necessarily valid. Moreover, it is well known that pathway systems are highly nonlinear and difficult to predict with intuition alone. A feasible strategy is therefore to employ computational approaches of systems biology and metabolic engineering.
The design of suitable models for this purpose is not trivial. First, it is generically unclear which mathematical representations are optimal for describing a natural system. Second, one cannot be sure that information or data from one species can be assumed to be valid in another species, even if the two are closely related. Similarly, it has been shown many times that data obtained in vitro are not necessarily applicable in vivo [10–14]. At the same time, species-specific experiments are time consuming and expensive. Mechanistic models based on enzyme kinetics seem to be an intriguing choice, but it has been shown that mechanistic models are not always good solutions, for instance, if parameter values and enzymatic rate laws are based on strong assumptions like bulk reactivity that are not necessarily satisfied in vivo [12]. An alternative that was recently proposed is the characterization of in vivo-like kinetics [13], which however is costly and time consuming and would still require extensive validation, which however is seldom truly achieved [12]. An additional challenge for the design of models is the scarcity and quality of test and validation data, which pose a significant obstacle to all analyses of relatively understudied species.
In this study we analyze the lignin biosynthesis pathway in switchgrass, Panicum virgatum, with computational means of systems biology. The analysis is based on a dataset from stem and tiller tissue that consists of the lignin content (H, G and S lignin) and the S/G lignin ratio in wild type and in four transgenic lines (4CL, CCR1, CAD and COMT knockdowns). To some degree, details of the in vitro kinetics of some of the pathway enzymes have also been determined by one of our labs. Our approach here is to develop computational models that characterize the structure and regulatory control patterns of lignin biosynthesis in P. virgatum at a systemic level. The goals of this modeling approach are, first, to explain the experimental results from wild type and transgenic lines and, second, to devise a rational basis for strategies to manipulate the pathway toward reduced recalcitrance.
The results are described in a sequence that follows our step-by-step model design and conveys our rationale for utilizing the observations to remediate discrepancies with the data and for suggesting the investigation of new features to the model in the next step of the analysis. We begin by assessing the pathway structure in switchgrass as it is alleged in the current literature. Next, we examine possible channeling of CCR/CAD, which has been reported for the lignin pathway in alfalfa [5, 14], but not in switchgrass. Even accounting for the possibility of channeling, the experimental data regarding H lignin cannot be captured at this point. Thus, we investigate the effects of product inhibition and competitive inhibition. In the next phase, 4CL inhibition is added as a potential explanation for the accumulation of 4CL substrates, along with a simultaneous decrease in coniferaldehyde in the COMT knockdown. Finally, principal component analysis is performed to investigate the distribution of parameters within the high-dimensional parameter space and to reduce the feasible subspace of parameter values. The results section ends with a validation of the model.
Reaction system of lignin biosynthesis in switchgrass
The traditionally accepted lignin biosynthesis pathway branches at p-coumaroyl CoA to provide S and G-lignin precursors (Fig. 1). The hexagon in this figure shows the details of this branch point. It was also previously assumed, based on studies in the dicots A. thaliana and N. benthamiana, that p-coumaroyl CoA is converted to p-coumaroyl shikimate and p-coumaroyl quinic acid by HCT. Subsequently, both products, p-coumaroyl shikimate and p-coumaroyl quinic acid, were shown to be converted to caffeoyl shikimate and caffeoyl quinic acid, respectively [15]. The enzyme for these unidirectional reactions is C3′H. Downstream, HCT was proposed to operate in the reverse direction to convert caffeoyl shikimate and caffeoyl quinic acid into caffeoyl-CoA.
Lignin biosynthesis pathway. Dashed arrows represent the traditionally accepted pathway of lignin biosynthesis, while the arrow from caffeoyl shikimate to caffeic acid captures a newly discovered enzymatic activity [39] now known to be present in switchgrass. Caffeoyl shikimate esterase turns caffeoyl shikimate into caffeic acid and circumvents the previously accepted route. 4CL has recently been shown to exhibit activity towards caffeic acid and ferulic acid in switchgrass by which a new network topology is introduced for switchgrass lignin biosynthesis. Note that tyrosine is shown here, but not included in the model
A recent study demonstrated that this pathway organization is unlikely to occur in switchgrass [16]. Based on kinetic measurements of PvHCT1a, PvHCT2a and PvHCT-Like1, it was shown that caffeoyl shikimate is not converted to caffeoyl-CoA by the reverse HCT reaction, but is more likely converted into caffeic acid through caffeoyl shikimate esterase, and that this step is actually the main route of mass transfer into the pathway towards S and G monolignols. As indicated with dashed arrows in Fig. 1, HCT is not active in the formation of caffeoyl-CoA. This new information helps us reduce the steps in Fig. 1. It has furthermore been suggested that cinnamic acid is a precursor for salicylic acid; this process is represented by the thick grey arrow [5]. Similarly, a considerable portion of ferulic acid leaves the pathway [17]. Finally, the efflux out of p-coumaric acid acts to avoid accumulation of the metabolite in the 4CL knockdown strain (Fig. 1). These simplifications yield the pathway diagram in Fig. 2.
Revised and simplified pathway in switchgrass. By eliminating HCT from the diagram in Fig. 1 and adding CSE, the pathway system becomes simpler. The right branch in the grey box in Fig. 1 is merged into an efflux and the left branch is simplified to a one-step process. It is hypothesized that a specific functional channel could facilitate the conversion of feruloyl-CoA into coniferyl alcohol. Such a channel could be the result of co-localization of the involved pathway enzymes
At this point, it is not entirely clear whether the lignin pathway in switchgrass contains caffeyl aldehyde. It appears that this is not the case, and the following analysis assumes that caffeyl aldehyde is indeed not produced. Nonetheless, since other species do generate this intermediate, the Additional file 1: Text S1 analyzes this case.
Large-scale simulation studies with this pathway structure lead to irreconcilable differences between the experimental data and the model results, which indicate that the model has genuine flaws. In particular, the dynamics of the different lignin species cannot be explained for the various transgenics (data not shown).
Experimental and theoretical work in alfalfa has suggested that functional enzymatic channeling likely occurs at the coniferaldehyde node [5, 14]. According to this suggestion, the "G-channel" facilitates the use of feruloyl-CoA for the production of coniferyl alcohol, which is the precursor of the G monolignol (Fig. 2). We investigate the same channeling hypothesis here as a possibility. Specifically, we use pertinent experimental data from switchgrass to analyze the feasibility of different hypothetical pathway topologies. The potential existence of a functional complex consisting of CCR1/CAD leads to three possible pathway topologies that satisfy the requirement of mass conservation (Fig. 3).
Topological Configurations. Three pathway structures are plausible when a CCR1/CAD channel is considered. Configuration 2 lacks the channel, while the other two configurations represent alternatives involving the channel
Each of these so-far unregulated topologies was modeled as a generalized mass action (GMA) model, whose parameter values were obtained with a sophisticated large-scale sampling scheme (see "Methods"). Although all topologies were found to be consistent with most of the experimental results, no topology was compatible with the accumulation of H lignin in 4CL knockdown transgenics (Table 1); this situation could not be simulated by any of the candidate models, regardless of the presence or absence of the channel. This strong result suggests the existence of regulatory mechanisms, and considering the structure of the pathway and the branch toward H lignin in particular, we decided to analyze the possible role of product inhibition, which is frequently found in pathway systems in vivo.
Table 1 Fold change in lignin monomers, total lignin, and S/G in transgenic plants relative to wild-type plants. The table compares four transgenic lines, namely 4CL knockdown 40 % [3], CCR knockdown 50 % [40], COMT knockdown 30 % [5], and CAD knockdown 30 % [41], with respect to the degree of down-regulation (up to 75 %) and the fold changes in H lignin, G lignin (~0.75), S lignin, and the S/G ratio (decreased). NR not reported.
Product inhibition
Experimental results from transgenic plants have demonstrated that H lignin accumulates when the enzyme 4CL is down-regulated [3]. Analyzing this initially counterintuitive observation closer suggests that there might be a wave of accumulation in the metabolites preceding H lignin. Such a wave can be explained with product inhibition (Fig. 4). When an enzyme is down-regulated, the corresponding substrate accumulates. The secondary effect is that the accumulated substrate is by itself a product of a previous reaction whose increased concentration decreases its own rate of production. This backward cascade has an upstream domino effect along the pathway and, depending on the kinetics of the reactions, can lead to the accumulation of upstream metabolites. This observation can be explained by the following chain of events: Down-regulating 4CL leads to a decrease in the products of this enzyme, i.e., p-coumaroyl-CoA, caffeoyl-CoA, and feruloyl-CoA. At the same time, product inhibition leads to a backward accumulation in upstream metabolites, which compensates, at least partially, for the initial decrease in p-coumaroyl-CoA. Product inhibition is easily incorporated into the GMA model (see "Methods"). Thus, in a new round of simulations, a new set of 100,000 randomly sampled parameter values was generated as before, this time accounting for product inhibition. Again, the configurations satisfying the experimental results were recorded.
Substrate competition for a shared enzyme, combined with product inhibition. The accumulation of H lignin in the 4CL transgenic line calls for a regulatory mechanism that guides the flow towards the upper branch of the pathway. Direct activation or an inhibited inhibitor can achieve this result. Simulation results support the second option
Although the simulations showed an improvement regarding the H lignin accumulation in the 4CL knockdown, no topology reached the twofold increase that was reported in the literature [3].
Substrate competition for shared enzymes
Several enzymes in the lignin pathway catalyze multiple reactions with slightly different substrates, and it is reasonable to assume substrate competition for an enzyme among the multiple substrates. This competition can play an important role in altering the flow of mass in a mutant plant.
We explored the consequences of substrate competition with respect to the pertinent enzyme CCR. The analysis yielded the following result. If CCR favors p-coumaroyl-CoA over feruloyl-CoA, due to substrate competition, the flux towards H lignin is increased. In fact, simulation analysis shows that the increase in H lignin is strong enough to match the experimental data.
It could be possible that substrate competition alone would be sufficient for increased H lignin production. We tested this conjecture with a corresponding simulation, which revealed that only the combined model with product inhibition and substrate competition matches the experimental observations. The strength of inhibition is a priori unknown, but simply becomes a parameter value in the GMA model (see "Methods" section). For instance, consider the pathway in Fig. 4, where \(X_2\) and \(X_6\) share the same enzyme for fluxes \(V_2\) and \(V_4\). Blue arrows represent the competition between the substrates, while red arrows represent product inhibition. In this case the equation for \(V_2\) becomes
$$V_{2} = \alpha_{2} X_{2}^{g_{2,2}} X_{3}^{-g_{3,2}} X_{6}^{-g_{6,2}} Y_{2},$$
where \(Y_2\) is the enzyme catalyzing the reaction (CCR).
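To make the combined mechanism concrete, the following minimal sketch (in Python; the study itself was carried out in MATLAB) evaluates such a power-law flux with product inhibition and substrate competition. All rate constants and exponents below are illustrative assumptions, not parameters estimated in this work.

```python
def gma_flux_v2(X2, X3, X6, Y2, alpha2=0.05, g22=0.5, g32=0.3, g62=0.2):
    """Power-law (GMA) rate V2 = alpha2 * X2^g22 * X3^(-g32) * X6^(-g62) * Y2.

    X2: substrate; X3: product (inhibits via a negative exponent);
    X6: competing substrate for the shared enzyme (also a negative exponent);
    Y2: activity of the shared enzyme (CCR in the motif of Fig. 4).
    Exponent magnitudes are illustrative samples from (0, 1).
    """
    return alpha2 * X2**g22 * X3**(-g32) * X6**(-g62) * Y2

# Normalized concentrations (wild-type steady state = 100, as in the model)
print(gma_flux_v2(X2=100.0, X3=100.0, X6=100.0, Y2=1.0))  # baseline flux
print(gma_flux_v2(X2=100.0, X3=150.0, X6=100.0, Y2=1.0))  # product buildup slows V2
print(gma_flux_v2(X2=100.0, X3=100.0, X6=50.0, Y2=1.0))   # less competition speeds V2 up
```

Lowering X6 while holding everything else fixed increases V2, which mirrors the argument that a 4CL knockdown, by depleting feruloyl-CoA, relieves the competition for CCR1 and redirects flux toward the H lignin branch.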
Inhibition of 4CL in COMT knockdown transgenics
Although product inhibition and substrate competition improve the consistency between the experimental data and numerical results in CCR1 transgenic plants, the model does not match COMT knockdown data sufficiently well. Specifically, the model does not capture the observed 30 % increase in ferulic acid in COMT knockdowns [4]. This observation becomes even more difficult to explain if one considers the simultaneous 20 % decrease in coniferyl aldehyde. One could speculate that the high accumulation in 5-OH-ferulic acid might trigger a cascade of product inhibition that leads to the accumulation of ferulic acid, but computational results did not support the idea.
Further analysis with the model revealed that the reaction from ferulic acid to feruloyl-CoA, which is catalyzed by 4CL, is the bottleneck. Indeed, the computational results show that this reaction has a flux that is 10 times as large as the efflux from ferulic acid towards 5-OH-ferulic acid. Thus, if the flux towards ferulic acid decreases, any substantial accumulation is impossible unless the 4CL reaction is inhibited. This model-based deduction is indirectly supported by experimental data from one of our labs that exhibit a slight accumulation in the distant p-coumaric acid and caffeic acid, which is explained by 4CL inhibition as well (data not shown).
Accounting for the deduced 4CL inhibition in the model leads to simulations that faithfully capture all experimental data associated with the COMT knockdown; in particular, the 4CL substrates accumulate and the concentration of coniferaldehyde decreases, as observed. From a biochemical point of view, one might be interested in identifying the inhibiting agent. As mentioned earlier, the 5-OH-ferulic acid concentration increases by 70 % in COMT knockdown plants. While the metabolite has not been identified as a substrate for 4CL, it might be reasonable to assume that it binds to 4CL in high concentrations, due to its molecular similarity, and thereby inhibits the enzyme competitively (Fig. 5). While this hypothesis remains to be experimentally validated, the same type of substrate competition with respect to 4CL has recently been proposed by others [18]. To implement 4CL inhibition in the model in the most generic manner, we simply lowered the corresponding rate constants.
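Because 4CL inhibition was implemented in this generic way, it amounts in practice to scaling down the rate constants of the three fluxes catalyzed by 4CL in the model formulated in the Methods (V4, V11 and V14). A minimal sketch, with a purely hypothetical inhibition factor:

```python
import numpy as np

alpha = np.ones(26)             # placeholder rate constants for the 26 fluxes of the model
fluxes_4CL = [3, 10, 13]        # zero-based indices of V4, V11 and V14, the 4CL-catalyzed reactions
inhibition_factor = 0.5         # assumed strength of the putative inhibition in the COMT knockdown

alpha_COMT = alpha.copy()
alpha_COMT[fluxes_4CL] *= inhibition_factor   # lower the rate constants of all 4CL reactions
```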
Parallel reactions catalyzed by 4CL. The observed simultaneous accumulation of 4CL substrates and decrease in coniferaldehyde in COMT transgenic lines can be explained with the assumption of an inhibitory effect on the reactions catalyzed by 4CL. 5-OH-ferulic acid could be a candidate for this role. Although 5-OH-ferulic acid is not a substrate for 4CL in switchgrass, it has a similar molecular shape as ferulic acid, so that high concentrations of 5-OH-ferulic acid might exert competitive inhibition that is comparable to the inhibitory effects of ferulic acid
Compatible configurations
The mathematical model with universal product inhibition, substrate competition for CCR1, inhibition of 4CL, and the possibility of a metabolic channel was subjected to large-scale simulations aimed at inferring the most likely topology of the lignin pathway (recall Fig. 3). Similar to previous simulations, a sample of 100,000 parameter sets was generated to test model consistency with the experimental data and to provide likely kinetic orders for the model (see "Methods"). Intriguingly, the only pathway configuration that is compatible with all available data is Configuration 1 of Fig. 3. Note that the speculated coniferaldehyde channel is indeed present. In fact, no parameter set using Configuration 2 or 3 could reproduce the experimental data, which precludes a comparison of the relative performance of the configurations.
To gain a better understanding of the parameter space of the system, principal component analysis (PCA) was performed on the parameter sets that had been filtered by the model criteria. Once the principal components of the parameter space were identified, a new round of simulations was executed. Specifically, a sample of 100,000 parameter sets was generated along the principal directions and within the reduced space. The set was then transformed back to the original coordinates. The successful parameter sets were recorded and are depicted in Additional file 2: Figure S8. Ultimately, principal components 1 through 4 collectively account for 88 % of the variance.
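The PCA-based resampling can be sketched as follows (Python/NumPy; the size of the accepted set, its placeholder contents, and the handling of the variance threshold are illustrative assumptions, since only the overall procedure is described here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows = accepted parameter sets (kinetic orders), columns = parameters (placeholder data).
accepted = rng.uniform(0.0, 1.0, size=(500, 40))

# Principal components of the accepted sets
mean = accepted.mean(axis=0)
U, s, Vt = np.linalg.svd(accepted - mean, full_matrices=False)
explained = s**2 / np.sum(s**2)
n_pc = np.searchsorted(np.cumsum(explained), 0.88) + 1   # components needed for 88 % of the variance

# Scores of the accepted sets in the reduced PC space
scores = (accepted - mean) @ Vt[:n_pc].T

# Sample new candidates within the box spanned by the observed scores,
# then transform back to the original parameter coordinates.
lo, hi = scores.min(axis=0), scores.max(axis=0)
new_scores = rng.uniform(lo, hi, size=(100_000, n_pc))
new_params = new_scores @ Vt[:n_pc] + mean
```

Candidates that fall outside the admissible kinetic-order ranges after the back-transformation would still have to be discarded before being screened against the transgenic data.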
Model uniqueness
It is theoretically impossible to prove the uniqueness of a model for such a complex nonlinear problem, because it is always possible to invoke additional processes in such a fashion that the original model could be subsumed as a simpler special case. In our case, one should note that our large-scale simulation approach led to a structurally and numerically compact ensemble of similar solutions within the high-dimensional parameter space of the system. Given that we determined the ensemble with Monte Carlo simulations that cast a very wide net over the parameter space, it is difficult to imagine entirely different parameterizations that would capture all data as well as our ensemble and perform well in the validation studies we performed.
Moreover, considering that the available data were obtained from several independent transgenics, and that the stoichiometric system is underdetermined, the likelihood of substantially different solutions appears to be rather small. Also, our simulations show that the system converges to the same steady-state starting from a wide array of initial conditions. Some arbitrary initial conditions actually lead to steady-state values outside of the defined physiological bounds; however, among the initial conditions that lead to admissible steady-states, several rounds of screening showed identical results.
In summary, it is well understood that model design is an iterative procedure, and while our logical analysis of numerical results suggested the step-wise addition or elimination of new features, there is no mathematical proof that the model ensemble is truly unique.
Outside these purely mathematical arguments, we might also look at the biological reasonableness of the model. For instance, one could ask why only CCR was subjected to substrate competition, while there are other shared enzymes. The answer is a matter of simplicity, as suggested by Ockham's razor. Namely, we demonstrate that the substrate competition of CCR is needed to match the available data, while additional mechanisms are not necessary to explain the experimental data. Thus, we cannot exclude that additional regulatory mechanisms might exist, but we would need additional, independent data to confirm or refute such a hypothesis.
We also note that, although the model design progressed iteratively, we carefully investigated the necessity of including each individual mechanism a posteriori. For example, upon discovering that competitive inhibition over CCR improves H-lignin accumulation, we asked whether product inhibition was still vital for the model to explain the observations. We examined this hypothesis and determined that H-lignin accumulation could not be captured anymore. We therefore concluded that both mechanisms, product inhibition and CCR competition, are necessary. We found this conclusion reasonable, as both product inhibition and substrate competition are common in metabolic pathway systems.
Model validation
The model with parameter values described above was constructed based on experimental data from wild-type switchgrass and four transgenic lines (4CL, CCR1, CAD and COMT knock-downs). To validate the model, experimental data from a separate transgenic plant, which had not been used in any way during the model design, were used to investigate how well the system performs under untested conditions. Namely, in a recent study, the transcription inhibitor PvMYB4 was over-expressed in order to reduce enzyme expression in the lignin pathway [19]. While metabolite concentrations were not measured for any of the pathway intermediates, the published data contain H, G and S lignin levels, as well as comparisons of enzyme activities between the wild type and PvMYB4 plants. The overall result of the study is a global reduction in the expression of the enzymes of the pathway, which in turn leads to 40–70 % decreases in total lignin.
We tested our model against the profile of observed enzyme expression under overexpression of PvMYB4. We started with the already parameterized model without introducing any alterations or adjustments, except for resetting the appropriate enzyme activities, and tested how the system responded to the inhibition in comparison to the in vivo experiments [19]. Encouragingly, the altered G- and S-lignin amounts and their ratio, reported in the experimental study, are captured by the model with the compatible topological configuration quite well. The H-lignin was essentially unchanged in the experiment, while it slightly decreases in our model, in accordance with the data we used. However, H-lignin constitutes only about 3 % of the total lignin so that this difference is of no particular pertinence. Results are shown in Figs. 6 and 7. Figure 6 compares the fold change in lignin monomers between the experimental data and model results. The first row shows the fold change in G, S, the total lignin, and the S/G ratio comparing the wild type and PvMYB4 lines from the experiment; the second row corresponds to the computed configuration. As can be seen, the model results are quite consistent with experimental data.
Fold changes in lignin monomer concentrations in PvMYB4 transgenic plants. The top row represents the average of the PvMYB4 plants' experimental data normalized with respect to the average of the control plants. The second row represents the results of the model with settings corresponding to the PvMYB4 experiment in [19], normalized with respect to wild-type model results. Wild type is set to 1, which corresponds to white in the color bar. H lignin accounts for only 3 % of total lignin and is not shown here
Steady-state profiles of key pathway metabolites in PvMYB4 overexpression as predicted by the model. Concentrations are normalized and the base value is set to 100, which corresponds to white in the color bar. Any increases with respect to the wild-type steady state are reflected in the red spectrum and any decreases in the blue spectrum
This independent validation is very reassuring, especially with respect to future attempts to use metabolic engineering techniques to alter the S/G ratio in switchgrass. For instance, if further model predictions prove similarly reliable, the model could be used to simulate and optimize the outcome of combinatorial knockdowns, whose outcomes are not necessarily predictable with intuition alone. Such predictions would be very valuable, as a comprehensive combinatorial screening of double and triple knock-downs would neither be economical nor experimentally feasible.
While the published PvMYB4 data used for the first validation do not contain intermediate metabolite concentrations, a more recent study provides steady-state data for several of the pathway metabolites [20]. Comparing the published data in [20] with those in our model, we find that seven metabolites are represented in both, namely, caffeic acid, 5-OH-coniferyl alcohol, ferulic acid, sinapyl alcohol, coniferaldehyde, p-coumaric acid and coniferyl alcohol.
Figure 7 exhibits a comparison of the steady-state profiles. The top row shows the simulation results, while the bottom row represents experimentally measured steady-state concentrations in PvMYB4 normalized to wild type from [20]. The wild-type value for each concentration is set to 100 (white), and the red-blue spectrum represents increases or decreases in steady-state values of knockdowns. For five of these seven metabolites, our computational results of PvMYB4 conditions show the same semi-quantitative behavior in steady-state concentrations compared to the wild type; these are caffeic acid, 5-OH-coniferyl alcohol, sinapyl alcohol, coniferaldehyde and coniferyl alcohol. Discrepancies are seen in ferulic acid and p-coumaric acid. Here, the experimental data show a decrease in the steady-state concentrations, while our computational results predict an accumulation. Interestingly, these differences occur for metabolites whose effluxes out of the lignin pathway are ill defined, because their characteristics were not documented in the literature. It is therefore likely that they are not optimally parameterized in the model.
In this work, we developed an ensemble of models of lignin biosynthesis in stem and tiller tissue in switchgrass, P. virgatum. The model reflects the consequences of various enzyme knock-downs quite well and performed satisfactorily in two validation studies with experimental data that had not been used in the model design or implementation. We used as the modeling framework the generalized mass action (GMA) format within biochemical systems theory (BST) [21–25]. The power-law representation, which is the hallmark of this type of model, is arguably the least biased default formulation and by its mathematical nature avoids problems due to possibly invalid assumptions that may cast doubt on traditional Michaelis–Menten models in vivo [26]. Parameter values were, as always, difficult to obtain in a direct manner. We used for this purpose experimental knock-down data and a sophisticated Monte Carlo sampling strategy that has been used very successfully for similar systems before [14]. As a particular sub-goal, we investigated the regulatory mechanism of the pathway and the possible co-localization or coupling of the pair of enzymes CCR1/CAD, which was previously suggested for Medicago [5].
To elucidate the co-localization or coupling of these enzymes in switchgrass, we studied multiple configurations that seemed a priori plausible and identified those natural designs that were consistent with the experimental data. The consistent designs were further examined under different regulation scenarios. The main result from this study is a very robust model of lignin biosynthesis in switchgrass that is consistent with all available data. The model was, at least to some degree, validated with a formerly unused dataset. If this validation can be confirmed and expanded experimentally, the model proposed here may be used to predict responses of the natural pathway system to alterations that are difficult to assess with experimental means. For instance, a further validated model will allow the prediction of responses to combinatorial knockdowns that could be the basis for future designs of more sophisticated transgenic lines than are currently available.
The computational analysis suggests the co-localization or functional coupling of the two enzymes CCR1 and CAD. Metabolic channeling and compartmentalization in plants have been identified in many biochemical pathways [27]. Of importance here, it has been suggested that enzymes catalyzing early reactions in the monolignol pathway may be co-localized in their binding to the ER. For instance, a multi-protein complex has been identified between PAL and C4H, and it seems that most of the substrates use these channels, but that some substrate undergoes the metabolic conversion in two steps [28–30]. C4H can also form a complex with C3′H [31], and it has been suggested that different forms of 4CL form a complex in poplar [32]. Independent computational work on alfalfa came to a similar conclusion for channeling of enzymes associated with coniferaldehyde, which were proposed to form a metabolic channel [14]. Our results on switchgrass, presented in this article, are in line with the latter result and suggest moreover that channeling around coniferaldehyde is necessary to capture the available data.
The comparative study of different configurations revealed that consistency with the available experimental data was most difficult to achieve for transgenic 4CL down-regulated lines, in which, surprisingly, the H lignin concentration is increased. This observation is at first counterintuitive because 4CL is located directly upstream of the H lignin precursors, which would lead to the a priori expectation of a decrease in H lignin. The combination of two postulated types of regulatory mechanisms was able to explain this observation. The first is product inhibition, which is observed quite frequently in biochemical systems. While improving the data compatibility, this mechanism turned out to be insufficient, thus requiring additional signaling. Arguably the simplest explanation is a regulatory structure that works in either of the mechanisms below:
1. An intermediate in the pathway is increased in response to the 4CL knockdown and activates the precursors of H lignin synthesis. The most likely candidates for this scenario appear to be p-coumaric acid, caffeic acid, and ferulic acid (Fig. 8a).
2. There exists an inhibitor for the H lignin branch. This metabolite would have to be located such that its concentration is decreased due to the 4CL knockdown, which means that the inhibitor activity is inhibited and therefore exerts a net positive effect on the system (Fig. 8b). Feruloyl-CoA could be a good candidate for this scenario.
Two plausible explanations for an increase in the H lignin concentration in 4CL transgenic lines. a represents a putative increase in an activator located upstream of the enzyme 4CL, whereas b shows a putative decrease in an inhibitor located downstream of 4CL
The current literature does not support the first hypothesis. By contrast, multiple candidates are available for the second scenario. A reasonable scenario arises from the fact that the lignin pathway in switchgrass includes parallel fluxes that share the same enzymes. Indeed, 4CL, CAD, COMT, F5H and CCR1 all catalyze multiple reactions, and it is likely that the substrates exert competitive inhibition for the shared enzyme, as it was also suggested in [33]. Supporting this scenario, a targeted numerical analysis demonstrated that competition over CCR1 perfectly matches the results of the 4CL knockdown line in the model with product inhibition. One could surmise that the latter mechanism would suffice to represent the increase in H lignin concentration. To test this hypothesis, we simulated the model with enzyme competition but without product inhibition. The results showed that competitive inhibition by itself could not satisfactorily resolve the issue. By contrast, the combined model containing product inhibition and competitive inhibition matches the experimental results very well. One should also recall that the product inhibition and substrate competition mechanisms only work properly if the proposed metabolic channel is present (Fig. 3, Configuration 1).
Another aspect of the experimental data that was not captured well by the original model, even when product inhibition and substrate competition over CCR1 were taken into account, is the accumulation of 4CL substrates in COMT transgenic plants. Particularly counterintuitive appears to be the accumulation of ferulic acid as a product of a reaction catalyzed by COMT. The observed concomitant decrease in the steady-state concentration of coniferaldehyde supports the possible explanation that the observation is due to regulation that begins to inhibit the conversion of ferulic acid into coniferaldehyde when 4CL substrates are in excess. The simultaneous accumulation of p-coumaric acid and caffeic acid provides additional evidence that reactions catalyzed by 4CL are inhibited in COMT knockdown plants. After adding this feature to our model, all experimental data are represented well. The mechanism of the regulation remains a subject of further experimental investigations. Figure 9 shows the pathway including all inferred regulatory signals.
Full scheme of the lignin biosynthetic pathway in switchgrass suggested by the computational results of this study. All regulatory signals, i.e., universal product inhibition, substrate competition over CCR1, and 4CL inhibition are shown. The 4CL inhibiting agent is unknown and therefore denoted with X. 5-OH-ferulic acid might be a candidate for this role
The model proposed in this article captures all available data and performed well in independent PvMYB4 validation experiments. This good match with data is reason for cautious optimism, which however is to be supported with further experimental confirmation. Indeed, work is in progress to generate and analyze additional transgenic switchgrass lines and to incorporate further lignin compositional and enzyme activity and kinetic data into the model. If the model fares well in these additional validation studies, the results from the present study suggest that one might use the model for predictions, for instance, with respect to double knock-downs, and for optimization studies that could potentially affect the lignin-based recalcitrance in switchgrass in a favorable manner.
Model construction
Much of the analysis in this article consists of comparisons and simulations with different models. Each of these models consists of a system of differential equations that represent the rate of change in metabolite concentrations, which are represented as dependent variables. The right-hand side of each equation contains a set of fluxes which enter (influxes) or leave (effluxes) the metabolite pool. Enzymes are included in the model as independent variables; that is, they do not change in activity during any given computational experiment. The generic formulation of each equation is
$$\frac{{{\rm d}X_{i} }}{{\rm d}t} = \sum\limits_{j = 1}^{k} {s_{i,j} V_{j} }$$
where each \(X_i\) is a metabolite, the \(V_j\) are fluxes associated with \(X_i\), and the quantities \(s_{i,j}\) are stoichiometric coefficients, which here are simply 0, 1 or −1 and determine whether flux \(V_j\) affects \(X_i\) as influx or efflux or not at all. Each \(V_j\) is a function of some or potentially all of the \(X_i\). At the steady state, the left-hand side is equal to zero, and fluxes can be assessed with methods of linear algebra [34]. Because the system in our case is underdetermined, infinitely many solutions satisfy the steady-state condition. Following the tenets of Flux Balance Analysis (FBA), an objective function is chosen and the problem is solved as a linear programming problem [34]. In the present study, maximizing the total amount of lignin is set as the objective of the system. The optimization problem is solved using the MATLAB (version R2014a, The MathWorks, Natick, MA, USA) function linprog. The output is the set of fluxes at the steady state that maximizes the defined objective.
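The FBA step can be illustrated with a deliberately small toy network in place of the actual 26-flux lignin system (sketched here with SciPy's linprog, whereas the study used MATLAB; the stoichiometry, influx value and capacity bounds are invented for the example):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometry: A --v1--> B --v2--> lignin, plus a drain B --v3--> out.
# Rows = internal metabolites, columns = fluxes; A receives a fixed external influx of 10.
S = np.array([[-1.0,  0.0,  0.0],    # A: consumed by v1
              [ 1.0, -1.0, -1.0]])   # B: produced by v1, consumed by v2 and v3
b_eq = np.array([-10.0, 0.0])        # steady state: influx of 10 into A, no net change in B

# Objective: maximize v2, the flux towards lignin; linprog minimizes, hence the sign flip.
c = np.array([0.0, -1.0, 0.0])
bounds = [(0.0, 20.0)] * 3           # illustrative capacity bounds on each flux

res = linprog(c, A_eq=S, b_eq=b_eq, bounds=bounds)
print(res.x)                         # steady-state flux distribution maximizing lignin production
```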
The fluxes themselves are formulated as general mass action (GMA) models of the type
$$V_{j} = \alpha_{j} \prod\limits_{r = 1}^{n} {X_{r}^{{g_{r,j} }} } \prod\limits_{r = n + 1}^{n + m} {X_{r}^{{h_{r,j} }} }$$
within the modeling framework of BST [21, 22, 24, 35, 36]. Here, \(\alpha_j\) is the rate constant, and each \(X_r\), for \(1 \le r \le n\), is a metabolite or, for \(n + 1 \le r \le n + m\), an enzyme involved in the reaction. Thus, n is the number of metabolites and m is the number of enzymes in the pathway. The exponents \(g_{r,j}\) are kinetic orders that quantify the effect of \(X_r\) on \(V_j\). Similarly, \(h_{r,j}\) describes the effect of the enzyme on the reaction. It is customary to set each \(h_{r,j}\) to 0 or 1, thus merely reflecting absence or presence of an enzyme in a specific flux. This setting of \(h_{r,j} = 1\) is consistent with the underpinnings of Michaelis–Menten, mass-action, and other traditional models, where a reaction is assumed to be a linear function of enzyme activity. All other kinetic orders \(g_{r,j}\) are sampled from the range between 0 and 1 if \(X_r\) is a substrate or activator of the flux, or from the range between −1 and 0 if \(X_r\) is an inhibitor.
Due to the nature of the present experimental data for switchgrass, the real concentrations of metabolites and enzyme activities in vivo are unknown. As a remedy, we normalize these quantities with respect to the steady state and set all base values to 100. Thus, we set
$$Z_{i} = \frac{{100X_{i} }}{{X_{SS,i} }}$$
and express Eq. (4) as
$$\frac{{{\rm d}Z_{i} }}{{\rm d}t} = \frac{100}{{X_{SS,i} }}\frac{{{\rm d}X_{i} }}{{\rm d}t} = \frac{100}{{X_{SS,i} }}\sum\limits_{j = 1}^{k} {s_{i,j} V_{j} }$$
Since the constant \(X_{SS,i}\) refers to the steady state, simple algebra adjusts the rate constants to this steady state. Thus, we obtain
$$\frac{{{\rm d}Z_{i} }}{{\rm d}t} = \sum\limits_{j = 1}^{k} {s_{i,j} \alpha_{j} \prod\limits_{r = 1}^{n} {\left( {\frac{{100X_{r} }}{{X_{SS,r} }}} \right)^{{g_{r,j} }} } } \prod\limits_{r = n + 1}^{n + m} {\left( {\frac{{X_{r} }}{{X_{SS,r} }}} \right)^{{h_{r,j} }} }$$
The enzymes are independent variables and therefore constant for each experiment. Therefore, \(X_r = X_{SS,r}\) for \(n + 1 \le r \le n + m\) in the wild type, so that the corresponding normalized enzyme level equals 1, whereas for a transgenic line it takes a value between 0 and 1, according to the level of knockdown. At the steady state we have:
$$\begin{aligned} 0 &= \sum\limits_{j = 1}^{k} {s_{i,j} \alpha_{j} \prod\limits_{r = 1}^{n} {\left( {\frac{{100X_{r} }}{{X_{SS,r} }}} \right)^{{g_{r,j} }} } } \hfill \\ 0 &= \sum\limits_{j = 1}^{k} {s_{i,j} \alpha_{j} \prod\limits_{r = 1}^{n} {100^{{g_{r,j} }} } } \hfill \\ \end{aligned}$$
with this setting, each steady-state flux is given as
$$V_{j} = \alpha_{j} \prod\limits_{r = 1}^{n} {100^{{g_{r,j} }} } = \alpha_{j} 100^{{\sum\limits_{r = 1}^{n} {g_{r,j} } }} .$$
If the flux is known, the rate constant can be computed as
$$\alpha_{j} = V_{j} /100^{{\sum\limits_{r = 1}^{n} {g_{r,j} } }} .$$
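Numerically, this back-calculation of rate constants from the FBA fluxes and a sampled set of kinetic orders is straightforward; the sketch below (NumPy, with placeholder dimensions, fluxes and a single illustrative nonzero kinetic order) also verifies that the resulting model reproduces the steady-state fluxes at the normalized operating point Z = 100:

```python
import numpy as np

rng = np.random.default_rng(1)
n_metabolites, n_fluxes = 16, 26

V_ss = rng.uniform(1.0, 10.0, size=n_fluxes)     # placeholder for the FBA steady-state fluxes

# G[r, j] = kinetic order g_{r,j} of metabolite r in flux j (0 if r does not enter flux j);
# substrates/activators receive exponents in (0, 1), inhibitors in (-1, 0).
G = np.zeros((n_metabolites, n_fluxes))
G[0, 0] = rng.uniform(0.0, 1.0)                  # e.g. phenylalanine in V1 (illustrative)

# alpha_j = V_j / 100**(sum_r g_{r,j}), because every normalized Z_r equals 100 at steady state
alpha = V_ss / 100.0 ** G.sum(axis=0)

def gma_fluxes(Z, alpha, G, enzyme_levels=1.0):
    """Evaluate all GMA fluxes; enzyme_levels is 1 for wild type, <1 for knockdowns."""
    return alpha * np.prod(Z[:, None] ** G, axis=0) * enzyme_levels

Z_wt = np.full(n_metabolites, 100.0)
assert np.allclose(gma_fluxes(Z_wt, alpha, G), V_ss)   # the wild-type steady state is recovered
```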
With these settings, the set of the differential equations for the model takes the form below.
$$\begin{aligned} &\frac{{{\rm d}Z_{1} }}{{\rm d}t} = I_{1} - V_{1} \quad\quad\quad\quad\quad\quad\quad\;\; \frac{{{\rm d}Z_{9} }}{{\rm d}t} = V_{12} - V_{14} - V_{18}\\ &\frac{{{\rm d}Z_{2} }}{{\rm d}t} = V_{1} - V_{2} - V_{3} \quad\quad\quad\quad\;\;\; \frac{{{\rm d}Z_{10} }}{{\rm d}t} = V_{13} + V_{14} - V_{15} - V_{26}\\ &\frac{{{\rm d}Z_{3} }}{{\rm d}t} = I_{2} + V_{3} - V_{4} - V_{8} \quad\quad\quad \frac{{{\rm d}Z_{11} }}{{\rm d}t} = V_{15} - V_{16} - V_{19}\\ &\frac{{{\rm d}Z_{4} }}{{\rm d}t} = V_{4} - V_{5} - V_{9} - V_{10} \quad\quad\; \frac{{{\rm d}Z_{12} }}{{\rm d}t} = V_{16} + V_{26} - V_{17} - V_{20} \\ &\frac{{{\rm d}Z_{5} }}{{\rm d}t} = V_{5} - V_{6} \quad \quad\quad\quad\quad\quad\;\;\; \frac{{{\rm d}Z_{13} }}{{\rm d}t} = V_{19} - V_{22}\\ &\frac{{{\rm d}Z_{6} }}{{\rm d}t} = V_{6} - V_{7} \quad\quad\quad\quad\quad\quad\;\;\; \frac{{{\rm d}Z_{14} }}{{\rm d}t} = V_{20} - V_{21} - V_{23}\\ &\frac{{{\rm d}Z_{7} }}{{\rm d}t} = V_{9} - V_{11} - V_{12} \quad\quad\quad\quad \frac{{{\rm d}Z_{15} }}{{\rm d}t} = V_{22} - V_{24}\\ &\frac{{{\rm d}Z_{8} }}{{\rm d}t} = V_{11} - V_{13} \quad\quad\quad\quad\quad\quad \frac{{{\rm d}Z_{16} }}{{\rm d}t} = V_{23} + V_{24} - V_{25}\\ \end{aligned}$$
where the quantities \(I_{i}\) include the influxes into the pathway and the fluxes, \(V_{i}\), are defined as follows:
$$\begin{aligned} &V_{1} = \alpha_{1} Z_{1}^{{g_{1,1} }} Z_{2}^{{g_{2,1} }} Z_{17}^{{}} \quad\quad\quad\quad\; V_{14} = \alpha_{14} Z_{9}^{{g_{9,14} }} Z_{10}^{{g_{10,14} }} Z_{20}^{{}}\\ &V_{2} = \alpha_{2} Z_{2}^{{g_{2,2} }} Z_{18} \quad\quad\quad\quad\quad\quad V_{15} = \alpha_{15} Z_{10}^{{g_{10,15} }} Z_{11}^{{g_{11,15} }} Z_{4}^{{g_{4,15} }} Z_{21}^{{}}\\ &V_{3} = \alpha_{3} Z_{2}^{{g_{2,3} }} Z_{3}^{{g_{3,3} }} Z_{19}^{{}} \quad\quad\quad\quad\; V_{16} = \alpha_{16} Z_{11}^{{g_{11,16} }} Z_{12}^{{g_{12,16} }} Z_{22}^{{}}\\ &V_{4} = \alpha_{4} Z_{3}^{{g_{3,4} }} Z_{4}^{{g_{4,4} }} Z_{20}^{{}} \quad\quad\quad\quad\; V_{17} = \alpha_{17} Z_{12}^{{g_{12,17} }} Z_{29}^{{}}\\ &V_{5} = \alpha_{5} Z_{4}^{{g_{4,5} }} Z_{5}^{{g_{5,5} }} Z_{10}^{{g_{10,5} }} Z_{21}^{{}} \quad\quad\; V_{18} = \alpha_{18} Z_{9}^{{g_{9,18} }} Z_{30}^{{}}\\ &V_{6} = \alpha_{6} Z_{5}^{{g_{5,6} }} Z_{6}^{{g_{6,6} }} Z_{22}^{{}} \quad\quad\quad\quad\; V_{19} = \alpha_{19} Z_{11}^{{g_{11,19} }} Z_{13}^{{g_{13,19} }} Z_{31}^{{}}\\ &V_{7} = \alpha_{7} Z_{6}^{{g_{6,7} }} Z_{23}^{{}} \quad \quad\quad\quad\quad\quad V_{20} = \alpha_{20} Z_{12}^{{g_{12,20} }} Z_{14}^{{g_{14,20} }} Z_{31}^{{}}\\ &V_{8} = \alpha_{8} Z_{3}^{{g_{3,8} }} Z_{24}^{{}} \quad\quad\quad\quad\quad\quad V_{21} = \alpha_{21} Z_{14}^{{g_{14,21} }} Z_{32}^{{}}\\ &V_{9} = \alpha_{9} Z_{4}^{{g_{4,9} }} Z_{7}^{{g_{7,9} }} Z_{25}^{{}} \quad\quad\quad\quad\; V_{22} = \alpha_{22} Z_{13}^{{g_{13,22} }} Z_{15}^{{g_{15,22} }} Z_{27}^{{}}\\ &V_{10} = \alpha_{10} Z_{4}^{{g_{4,10} }} Z_{26}^{{}} \quad\quad\quad\quad\quad V_{23} = \alpha_{23} Z_{14}^{{g_{14,23} }} Z_{16}^{16,23} Z_{27}^{{}} \\ &V_{11} = \alpha_{11} Z_{7}^{{g_{7,11} }} Z_{8}^{{g_{8,11} }} Z_{20}^{{}} \quad\quad\quad V_{24} = \alpha_{24} Z_{15}^{{g_{15,24} }} Z_{16}^{{g_{16,24} }} Z_{22}^{{}}\\ &V_{12} = \alpha_{12} Z_{7}^{{g_{7,12} }} Z_{9}^{{g_{9,12} }} Z_{27}^{{}} \quad\quad\quad V_{25} = \alpha_{25} Z_{16}^{{g_{16,25} }} Z_{33}^{{}}\\ &V_{13} = \alpha_{13} Z_{8}^{{g_{8,13} }} Z_{10}^{{g_{10,13} }} Z_{28}^{{}}\quad\quad\;\;\; V_{26} = \alpha_{26} Z_{10}^{{g_{10,26} }} Z_{12}^{{g_{12,26} }} Z_{4}^{{g_{4,26} }} Z_{34}^{{}}\\ \end{aligned}$$
The metabolites of the pathway are
$$\begin{aligned} Z_{1} &:{\text{phenylalanine}} \quad\quad\quad\quad\;\;\; Z_{9} :{\text{ferulic acid}}\\ Z_{2} &:{\text{cinnamic acid}} \quad\quad\quad\quad\;\; Z_{10} :{\text{feruloyl-CoA}}\\ Z_{3} &:{\text{p-coumaric acid}}\quad\quad\quad\;\;\; Z_{11} :{\text{coniferaldehyde}}\\ Z_{4} &:p{\text{-coumaroyl CoA}} \quad\quad\quad Z_{12} :{\text{coniferyl alcohol }}\\ Z_{5} &:p{\text{-coumaryl aldehyde }} \quad\;\; Z_{13} : 5 {\text{-OH-coniferaldehyde}}\\ Z_{6} &:p{\text{-coumaryl alcohol}} \quad\quad\;\; Z_{14} : 5 {\text{-OH-coniferyl alcohol}}\\ Z_{7} &:{\text{caffeic acid}} \quad\quad\quad\quad\quad\;\;\; Z_{15} :{\text{sinapaldehyde}}\\ Z_{8} &:{\text{caffeoyl CoA}} \quad\quad\quad\quad\quad Z_{16} :{\text{sinapyl alcohol}}\\ \end{aligned}$$
while the enzymes of the pathway are
$$\begin{aligned} Z_{17}^{{}} &:{\text{PAL, }}\,{\text{L-phenylalanine ammonia-lyase}} \\ Z_{19}^{{}} &:{\text{C4H, }}\,{\text{cinnamate 4-hydroxylase}} \\ Z_{20}^{{}} &: 4 {\text{CL, }}\, 4 {\text{-coumarate:CoA ligase}} \\ Z_{21}^{{}} &:{\text{CCR1, }}\,{\text{cinnamoyl CoA reductase}} \\ Z_{22}^{{}} &:{\text{CAD, }}\,{\text{cinnamyl alcohol dehydrogenase}} \\ {\text{Z}}_{ 2 5} &:{\text{HCT,}}\,\,{\text{hydroxycinnamoyl-CoA:shikimate hydroxycinnamoyl transferase/}} \\ &\quad {\text{C3}}'{\text {H}},\,\,p{\text{-coumaroyl shikimate 3}}'{\text{-hydroxylase/}} \\ &\quad{\text{CSE,}}\,\,{\text{c}}\,{\text{affeoyl shikimate esterase}} \\ Z_{27}^{{}} &:{\text{COMT, }}\,{\text{caffeic acid}}\, O{\text{-methyltransferase}} \\ {\text{Z}}_{ 2 8} &:{\text{CCoAOMT, }}\,{\text{caffeoyl CoA}} \, O{\text{-methyltransferase}} \\ Z_{31}^{{}} &:{\text{F5H, ferulate 5-hydroxylase}} \end{aligned}$$
Note that the model does not account for the dynamics of tyrosine, which we consider constant here. The model scheme is shown in Fig. 10.
Lignin pathway in the notation of the model. Redundancy of enzymes, i.e., 4CL, CCR1, CAD, COMT and F5H in parallel fluxes reduces the dimension of state space. The enzymes HCT, C3′H and CSE in flux V9 are merged into one independent variable, Z25. Note that the presence of the G-channel, V26, is an inference from the computational simulations results
Parameter space and sampling
Similar to earlier work [14, 37, 38], flux rates are computed with FBA. Next, the parameters to be estimated are the kinetic orders and rate constants are in turn estimated from the FBA results and randomly sampled kinetic orders through the steps mentioned above. The kinetic order of a metabolite is positive if the metabolite is a substrate or activator of the flux and negative if it acts as an inhibitor. The kinetic order of each enzyme has a default value of 1, which is in line with traditional enzyme kinetics, because it is customary to assume that a flux has a linear relationship with the enzyme. This assumption is explicitly or implicitly made in essentially all traditional models of enzyme kinetics as, for instance, in the Michaelis–Menten formalism, where V max equals k cat times the enzyme concentration.
The down-regulation of an enzyme is modeled through the enzyme concentration, not the kinetic order. Since the concentrations of metabolites and enzymes are normalized, the concentration of an enzyme in the wild type has the default value of 1. In transgenics, the concentration of the corresponding enzyme is set to a value less than one if it is down-regulated. For example, to represent the 4CL knockdown, the concentration of the enzyme is set to 0.6 as the enzyme is down-regulated by 40 %.
To account for product inhibition, the inhibiting product is represented in each reaction by a factor consisting of its concentration, raised to a negative power. The result is as follows:
$$V = \alpha S^{{g_{S} }} P^{{g_{I} }} ,\,\,\,\,\,\,\, - 1 < \frac{{g_{I} }}{{g_{S} }} < 0$$
Here, S is the substrate, P is the product, g I is the kinetic order of the inhibiting product and g S is the kinetic order of the substrate. The ratio of kinetic orders could be derived directly [22] from the corresponding expression for a product-inhibited Michaelis–Menten reaction, which takes the form
$$V = \frac{{V_{\hbox{max} } \frac{S}{{K_{m} }}}}{{1 + \frac{S}{{K_{m} }} + \frac{P}{{K_{I} }}}}$$
The power-law form of Eq. 15 can directly be computed from the tenets of Biological Systems Theory (BST), which defines the kinetic orders as
$$\begin{aligned}g_{S} = \left. {\frac{\partial V}{\partial S} \cdot \frac{S}{V}} \right|_{\rm {OP}} = \left. {\frac{{1 + \frac{P}{{K_{I} }}}}{{1 + \frac{S}{{K_{m} }} + \frac{P}{{K_{I} }}}}} \right|_{\rm {OP}}\\ g_{I} = \left. {\frac{\partial V}{\partial P} \cdot \frac{P}{V}} \right|_{\rm {OP}} = \left. {\frac{{ - \frac{P}{{K_{I} }}}}{{1 + \frac{S}{{K_{m} }} + \frac{P}{{K_{I} }}}}} \right|_{\rm {OP}} \,\, \end{aligned}$$
Rearrangement of these equations gives the ratio of kinetic orders as follows:
$$- 1 < \left. {\frac{{g_{I} }}{{g_{S} }} = \frac{ - P}{{P + K_{I} }}} \right|_{\rm {OP}} < 0$$
The bounded ratio of kinetic orders provides a valuable constraint for the Monte Carlo simulations, because a fixed ratio does not affect the dimension of the parameter space.
For the initial set of simulations, the sampling space is chosen as a unit hypercube in ℝ\(^{n}\) where n is number of kinetic orders to be estimated. A set of 100,000 parameter sets is generated for each scenario simulation. 10,000 sets are randomly generated from the sampling space using Latin Hypercube Sampling to assure a homogeneous coverage of the space, while 90,000 sets are generated by the MATLAB (version R2014a, The MathWorks, Natick, MA, USA) function rand. Each parameter set is simulated to examine whether the model with this set can match the experimental results for the wild type and transgenics. The model is deemed a match for the experimental results if:
The model returns proper lignin contents and S/G ratios for the wild type and different transgenics, with down-regulation of 4CL (40 %), CCR1 (50 %), COMT (30 %), and CAD (30 %).
The model returns the proper decrease in lignin content in the case of knockdowns in 4CL, CCR1, COMT, and CAD.
The model demonstrates an increase in H lignin in 4CL transgenics.
The model matches the altered metabolite concentrations in the COMT transgenic.
If a parameter satisfies the above conditions, it is recorded along with the corresponding topological configuration.
While our model approach emphasizes ensembles of feasible models, the parameter values in Tables 2, 3, and 4 represent one implementation, which we used for further numerical exploration. This specific parameter set corresponds to the minimum error in the comparison of the model results in PvMYB4 and the experimental data.
A sample of rate constants from the ensemble of rate constants
\(\alpha_{1}\)
\(\alpha_{15}\)
A sample of kinetic orders from the ensemble of kinetic orders
\(g_{1,1}\)
\(g_{10,14}\)
−0.1023
\(g_{4,15}\)
\(g_{10,5}\)
Initial values
\(Z_{0,1}\)
\(Z_{0,13}\)
PAL:
l-phenylalanine ammonia-lyase
C4H:
cinnamate 4-hydroxylase
4CL:
4-coumarate:CoA-ligase
CCR1:
cinnamoyl CoA reductase
CAD:
cinnamyl alcohol dehydrogenase
HCT:
hydroxycinnamoyl-CoA:shikimate hydroxycinnamoyl transferase
C3′H:
p-coumaroyl shikimate 3′-hydroxylase
CSE:
caffeoyl shikimate esterase
COMT:
caffeic acid O-methyltransferase
CCoAOMT:
caffeoyl CoA O-methyltransferase
F5H:
ferulate 5-hydroxylase
ER:
BST:
GMA:
generalized mass action
PCA:
FBA:
flux balance analysis
MF designed the model as well as the computational experiments and wrote the article. LLF provided feedback and proposed some of the regulatory mechanisms. LET and RAD performed the laboratory experiments and provided feedback on the manuscript. EOV supervised the project and contributed to all computational and editorial aspects. All authors read and approved the final manuscript.
This work was supported by DOE-BESC grant DE-AC05-00OR22725 (PI: Paul Gilna). BESC, the BioEnergy Science Center, is a U.S. Department of Energy Bioenergy Research Center supported by the Office of Biological and Environmental Research in the DOE Office of Science.
Competing interests The authors declare that they have no competing interests.
13068_2015_334_MOESM1_ESM.docx Additional file 1. Analysis of the lignin pathway with inclusion of caffeyl aldehyde.
13068_2015_334_MOESM2_ESM.docx Additional file 2. Principal component analysis.
The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, 313, Ferst Drive, Atlanta, GA 30332, USA
BioEnergy Sciences Center (BESC), Oak Ridge National Lab, Oak Ridge, TN, USA
Department of Biological Sciences, University of North Texas, 1155 Union Circle #305220, Denton, TX 76203-5017, USA
Lang WH, Cookson IC. On a flora, including vascular land plants, associated with Monograptus, in rocks of Silurian age, from Victoria, Australia. Philos Trans R Soc Lond B. 1935;224(517):421–49.View ArticleGoogle Scholar
Kotyk ME, Basinger JF, Gensel PG, de Freitas TA. Morphologically complex plant macrofossils from the Late Silurian of Arctic Canada. Am J Bot. 2002;89:1004–13.View ArticleGoogle Scholar
Xu B, Escamilla-Trevino LL, Sathitsuksanoh N, Shen Z, Shen H, Zhang YH, Dixon RA, Zhao B. Silencing of 4-coumarate:coenzyme A ligase in switchgrass leads to reduced lignin content and improved fermentable sugar yields for biofuel production. New Phytol. 2011;192:611–25.View ArticleGoogle Scholar
Tschaplinski TJ, Standaert RF, Engle NL, Martin MZ, Sangha AK, Parks JM, Smith JC, Samuel R, Jiang N, Pu Y, et al. Down-regulation of the caffeic acid O-methyltransferase gene in switchgrass reveals a novel monolignol analog. Biotechnol Biofuels. 2012;5:71.View ArticleGoogle Scholar
Fu C, Mielenz JR, Xiao X, Ge Y, Hamilton CY, Rodriguez M Jr, Chen F, Foston M, Ragauskas A, Bouton J, et al. Genetic manipulation of lignin reduces recalcitrance and improves ethanol production from switchgrass. Proc Natl Acad Sci USA. 2011;108:3803–8.View ArticleGoogle Scholar
Zhang ZY, Rackemann DW, Doherty WOS, O'Hara IM. Glycerol carbonate as green solvent for pretreatment of sugarcane bagasse. Biotechnol Biofuels. 2013;6(1):153. doi:10.1186/1754-6834-6-153.
Naik SN, Goud VV, Rout PK, Dalai AK. Production of first and second generation biofuels: a comprehensive review. Renew Sust Energy Rev. 2010;14:578–97.View ArticleGoogle Scholar
Jung HG, Vogel KP. Influence of lignin on digestibility of forage cell wall material. J Anim Sci. 1986;62:1703–12.Google Scholar
Weng JK, Li X, Stout J, Chapple C. Independent origins of syringyl lignin in vascular plants. Proc Natl Acad Sci USA. 2008;105:7887–92.View ArticleGoogle Scholar
Weng JK, Akiyama T, Bonawitz ND, Li X, Ralph J, Chapple C. Convergent evolution of syringyl lignin biosynthesis via distinct pathways in the lycophyte Selaginella and flowering plants. Plant Cell. 2010;22:1033–45.View ArticleGoogle Scholar
Weng JK, Akiyama T, Ralph J, Chapple C. Independent recruitment of an O-methyltransferase for syringyl lignin biosynthesis in Selaginella moellendorffii. Plant Cell. 2011;23:2708–24.View ArticleGoogle Scholar
Bulik S, Grimbs S, Huthmacher C, Selbig J, Holzhutter HG. Kinetic hybrid models composed of mechanistic and simplified enzymatic rate laws: a promising method for speeding up the kinetic modelling of complex metabolic networks. FEBS J. 2009;276:410–24.View ArticleGoogle Scholar
van Eunen K, Bouwman J, Daran-Lapujade P, Postmus J, Canelas AB, Mensonides FI, Orij R, Tuzun I, van den Brink J, Smits GJ, et al. Measuring enzyme activities under standardized in vivo-like conditions for systems biology. FEBS J. 2010;277:749–60.View ArticleGoogle Scholar
Lee Y, Escamilla-Trevino L, Dixon RA, Voit EO. Functional analysis of metabolic channeling and regulation in lignin biosynthesis: a computational approach. PLoS Comput Biol. 2012;8:e1002769.View ArticleGoogle Scholar
Hoffmann L, Besseau S, Geoffroy P, Ritzenthaler C, Meyer D, Lapierre C, Pollet B, Legrand M. Silencing of hydroxycinnamoyl-coenzyme A shikimate/quinate hydroxycinnamoyltransferase affects phenylpropanoid biosynthesis. Plant Cell. 2004;16:1446–65.View ArticleGoogle Scholar
Escamilla-Trevino LL, Shen H, Hernandez T, Yin Y, Xu Y, Dixon RA. Early lignin pathway enzymes and routes to chlorogenic acid in switchgrass (Panicum virgatum L.). Plant Mol Biol. 2014;84:565–76.View ArticleGoogle Scholar
Ralph J, Grabber JH, Hatfield RD. Lignin-ferulate cross-links in grasses—active incorporation of ferulate polysaccharide esters into ryegrass lignins. Carbohydr Res. 1995;275:167–78.View ArticleGoogle Scholar
Lin CY, Wang JP, Li Q, Chen HC, Liu J, Loziuk P, Song J, Williams C, Muddiman DC, Sederoff RR, Chiang VL. 4-Coumaroyl and caffeoyl shikimic acids inhibit 4-coumaric acid: coenzyme a ligases and modulate metabolic flux for 3-hydroxylation in monolignol biosynthesis of Populus trichocarpa. Mol Plant. 2015;8(1):176–87. doi:10.1016/j.molp.2014.12.003.
Shen H, He X, Poovaiah CR, Wuddineh WA, Ma J, Mann DG, Wang H, Jackson L, Tang Y, Stewart CN Jr, et al. Functional characterization of the switchgrass (Panicum virgatum) R2R3-MYB transcription factor PvMYB4 for improvement of lignocellulosic feedstocks. New Phytol. 2012;193:121–36.View ArticleGoogle Scholar
Shen H, Poovaiah CR, Ziebell A, Tschaplinski TJ, Pattathil S, Gjersing E, Engle NL, Katahira R, Pu Y, Sykes R, et al. Enhanced characteristics of genetically modified switchgrass (Panicum virgatum L.) for high biofuel production. Biotechnol Biofuels. 2013;6:71.View ArticleGoogle Scholar
Savageau MA. Biochemical systems analysis: a study of function and design in molecular biology. Reading, Mass.: Addison-Wesley Pub. Co. Advanced Book Program; 1976.Google Scholar
Voit EO. Computational analysis of biochemical systems : a practical guide for biochemists and molecular biologists. New York: Cambridge University Press; 2000.Google Scholar
Voit EO. A First course in systems biology. New York: Garland Science; Taylor & Francis distributor; 2012.Google Scholar
Voit EO. Biochemical systems theory: a review. ISRN Biomath. 2013;2013:53.Google Scholar
Torres NV, Voit EO. Pathway analysis and optimization in metabolic engineering. New York: Cambridge University Press; 2002.View ArticleGoogle Scholar
Savageau MA: Enzyme kinetics in vitro and in vivo: Michaelis–Menten revisited. In: Bittar EE, Bittar N, editors. Greenwich Principles of medical biology, vol 4. Conn.: JAI Press Inc; 1995. p. 93–146.Google Scholar
Winkel BS. Metabolic channeling in plants. Annu Rev Plant Biol. 2004;55:85–107.View ArticleGoogle Scholar
Achnine L, Blancaflor EB, Rasmussen S, Dixon RA. Colocalization of L-phenylalanine ammonia-lyase and cinnamate 4-hydroxylase for metabolic channeling in phenylpropanoid biosynthesis. Plant Cell. 2004;16:3098–109.View ArticleGoogle Scholar
Rasmussen S, Dixon RA. Transgene-mediated and elicitor-induced perturbation of metabolic channeling at the entry point into the phenylpropanoid pathway. Plant Cell. 1999;11:1537–52.View ArticleGoogle Scholar
Zhou R, Jackson L, Shadle G, Nakashima J, Temple S, Chen F, Dixon RA. Distinct cinnamoyl CoA reductases involved in parallel routes to lignin in Medicago truncatula. Proc Natl Acad Sci USA. 2010;107:17803–8.View ArticleGoogle Scholar
Bassard JE, Richert L, Geerinck J, Renault H, Duval F, Ullmann P, Schmitt M, Meyer E, Mutterer J, Boerjan W, et al. Protein–protein and protein–membrane associations in the lignin pathway. Plant Cell. 2012;24:4465–82.View ArticleGoogle Scholar
Chen HC, Song J, Wang JP, Lin YC, Ducoste J, Shuford CM, Liu J, Li Q, Shi R, Nepomuceno A, et al. Systems biology of lignin biosynthesis in Populus trichocarpa: Heteromeric 4-coumaric acid: coenzyme A ligase protein complex formation, regulation, and numerical modeling. Plant Cell. 2014;26:876–93.View ArticleGoogle Scholar
Wang JP, Naik PP, Chen HC, Shi R, Lin CY, Liu J, Shuford CM, Li Q, Sun YH, Tunlaya-Anukit S, et al. Complete proteomic-based enzyme reaction and inhibition kinetics reveal how monolignol biosynthetic enzyme families affect metabolic flux and lignin in Populus trichocarpa. Plant Cell. 2014;26:894–914.View ArticleGoogle Scholar
Orth JD, Thiele I, Palsson BO. What is flux balance analysis? Nat Biotechnol. 2010;28:245–8.View ArticleGoogle Scholar
Savageau MA. Biochemical systems analysis. I. Some mathematical properties of the rate law for the component enzymatic reactions. J Theor Biol. 1969;25:365–9.View ArticleGoogle Scholar
Savageau MA. Biochemical systems analysis. II. The steady-state solutions for an n-pool system using a power-law approximation. J Theor Biol. 1969;25:370–9.View ArticleGoogle Scholar
Lee Y, Voit EO. Mathematical modeling of monolignol biosynthesis in Populus xylem. Math Biosci. 2010;228:78–89.View ArticleGoogle Scholar
Lee Y, Chen F, Gallego-Giraldo L, Dixon RA, Voit EO. Integrative analysis of transgenic alfalfa (Medicago sativa L.) suggests new metabolic control mechanisms for monolignol biosynthesis. PLoS Comput Biol. 2011;7:e1002047.View ArticleGoogle Scholar
Vanholme R, Cesarino I, Rataj K, Xiao Y, Sundin L, Goeminne G, Kim H, Cross J, Morreel K, Araujo P, et al. Caffeoyl shikimate esterase (CSE) is an enzyme in the lignin biosynthetic pathway in Arabidopsis. Science. 2013;341:1103–6.View ArticleGoogle Scholar
Shen H, Mazarei M, Hisano H, Escamilla-Trevino L, Fu C, Pu Y, Rudis MR, Tang Y, Xiao X, Jackson L, et al. A genomics approach to deciphering lignin biosynthesis in switchgrass. Plant Cell. 2013;25:4342–61.View ArticleGoogle Scholar
Fu CX, Xiao XR, Xi YJ, Ge YX, Chen F, Bouton J, Dixon RA, Wang ZY. Downregulation of cinnamyl alcohol dehydrogenase (CAD) leads to improved saccharification efficiency in switchgrass. Bioenerg Res. 2011;4:153–64.View ArticleGoogle Scholar | CommonCrawl |
Single-cell RNA-sequencing uncovers transcriptional states and fate decisions in haematopoiesis
Emmanouil I. Athanasiadis ORCID: orcid.org/0000-0002-2771-55621,2,3 na1,
Jan G. Botthof1,2,3 na1,
Helena Andres4,
Lauren Ferreira1,2,3 nAff5,
Pietro Lio4 &
Ana Cvejic1,2,3
Nature Communications volume 8, Article number: 2045 (2017) Cite this article
Haematopoietic stem cells
The success of marker-based approaches for dissecting haematopoiesis in mouse and human is reliant on the presence of well-defined cell surface markers specific for diverse progenitor populations. An inherent problem with this approach is that the presence of specific cell surface markers does not directly reflect the transcriptional state of a cell. Here, we used a marker-free approach to computationally reconstruct the blood lineage tree in zebrafish and order cells along their differentiation trajectory, based on their global transcriptional differences. Within the population of transcriptionally similar stem and progenitor cells, our analysis reveals considerable cell-to-cell differences in their probability to transition to another committed state. Once fate decision is executed, the suppression of transcription of ribosomal genes and upregulation of lineage-specific factors coordinately controls lineage differentiation. Evolutionary analysis further demonstrates that this haematopoietic programme is highly conserved between zebrafish and higher vertebrates.
Mammalian blood formation is the most intensely studied system of stem cell biology, with the ultimate aim to obtain a comprehensive understanding of the molecular mechanisms controlling fate-determining events. A single cell type, the haematopoietic stem cell (HSC), is responsible for generating more than 10 different blood cell types throughout the lifetime of an organism1. This diversity in the lineage output of HSCs is traditionally presented as a stepwise progression of distinct, transcriptionally homogeneous populations of cells along a hierarchical differentiation tree2,3,4,5,6. However, most of the data used to explain the molecular basis of lineage differentiation and commitment were derived from populations of cells isolated based on well-defined cell surface markers7. One drawback of this approach is that a limited number of markers are used simultaneously to define the blood cell identity. Consequently, only a subpopulation of the overall cellular pool is examined and isolated cells, although homogeneous for the selected markers, show considerable transcriptional and functional heterogeneity8,9,10,11,12. This led to the development of various refined sorting strategies in which new combinations of marker genes were considered to better 'match' the transcriptional and functional properties of the cells of interest.
The traditional model of haematopoiesis assumes a stepwise set of binary choices with early and irreversible segregation of lymphoid and myeloid differentiation pathways2, 3. However, the identification of lymphoid-primed multipotent progenitors4, which have granulocytic, monocytic and lymphoid potential, but low potential to form megakaryocyte and erythroid lineages prompted development of alternative models of haematopoiesis. More recently, it has been demonstrated that megakaryocyte–erythroid progenitors can progress directly from HSC without going through a common myeloid intermediate (CMP)13; or that the stem cell compartment is multipotent, while the progenitors are unipotent6. Clear consensus on the lineage branching map, however, is still lacking.
Recent advances in single-cell transcriptional methods have made it possible to investigate cellular states and their transitions during differentiation, allowing elucidation of cell fate decision mechanisms in greater detail. Computational ordering methods have proved to be particularly useful in reconstructing the differentiation process based on the transcriptional changes of cells at different stages of lineage progression14,15,16.
Here we create a comprehensive atlas of single-cell gene expression in adult zebrafish blood cells and computationally reconstructed the blood lineage tree in vivo. Conceptually, our approach differs from the marker-based method in that the identity of the cell type/state is determined in an unbiased way, i.e., without prior knowledge of surface markers. The transcriptome of each cell is projected on the reconstructed differentiation path giving complete insight into the cell state transitions occurring during blood differentiation. Importantly, development of this strategy allowed us, for the first time, to asses haematopoiesis in a vertebrate species in which surface marker genes/antibodies are not readily available. Finally, this study provides unique insight into the regulation of haematopoiesis in zebrafish and also, along with complementary data from mouse and human, addresses the question of interspecies similarities of haematopoiesis in vertebrates.
Single-cell RNA-sequencing of zebrafish haematopoietic cells
As an alternative to marker-based cellular dissection of haematopoietic hierarchy, we have set out to classify haematopoietic cells based on their unique transcriptional state. We started by combining FACS index sorting with single-cell RNA-seq to reveal the cellular properties and gene expression of a large number of blood cells simultaneously. To cover the entire differentiation continuum, kidney-derived blood cells from eight different zebrafish transgenic reporter lines and one non-transgenic line were FACS sorted (Fig. 1a and Supplementary Table 1). Each blood cell was collected in a single well of a 96-well plate. At the same time, information about the cell size (FSC) and granularity (SSC), as well as the level of the fluorescence, were recorded.
Pseudotime ordering reveals a gradual transition of cells from immature to more differentiated within the myeloid branch. a Experimental strategy for sorting single cells from transgenic zebrafish lines. Cells were collected from a single kidney of each line and sorted for expression of the fluorescent transgene. Index sorting was used to dispense single cells into a 96-well plate and these were subsequently processed for RNA-seq analyses. b Five cell states were predicted using the Monocle2 algorithm for temporal analyses of single-cell transcriptomes. c Analysis of genes that are differentially expressed across the five states (given the same colour code used in b) reveals GO terms (inner circle) that are highly pertinent to specific cell types. The outer circle shows examples of May–Grünwald Giemsa-stained cells from kidneys of transgenic lines that largely label each particular cell type. d Jitter plots showing the expression (y axis) of differentially expressed marker genes in each cell type (x axis). Each dot in the jitter plot shows the expression of the gene log10 (counts +1) in each cell
RNA from each cell was isolated and used to construct a single mRNA-seq library per cell, which was then sequenced to a depth of around 1 × 106 reads per library. Following quality control (QC), 1422 cells were used for further analysis and for benchmarking of different alignment methods (Supplementary Figs. 1, 2 and 3). Importantly, the average single-cell profiles showed good correlation with independent bulk samples (PCC = 0.7–0.9, Supplementary Fig. 3e). In addition, PCA, ICA and diffusion maps (Supplementary Fig. 4a) showed that cells were intermixed irrespective of the fish or the plate they originated from. This confirmed that the cells were separated in the analyses based on their biological differences rather than batch-induced biases.
HSPC fates through a single path in the state space
A dynamic repertoire of gene expression in thousands of cells during differentiation could be used to infer a single branched differentiation trajectory. Due to the unsynchronised nature of haematopoiesis, each single cell exhibits a different degree of differentiation along the differentiation continuum. Therefore, the generated trajectory could be used to infer the differentiation path of a single cell. To examine the transcriptional transition undergone by differentiating cells, we identified the 1845 most highly variable genes (Supplementary Fig. 4b) and performed expression-based ordering using Monocle215. Based on global gene expression profiles of the cells, we identified five (1–5) distinct cell 'states' (Fig. 1b). To ensure the robustness of this approach, we verified computationally that changes in the highly variable genes and Monocle2 settings only lead to minor differences in the trajectory, mainly around the branching points (Supplementary Fig. 5).
Differential expression analysis of each state vs. all other states, followed by gene ontology (GO) enrichment analysis (see 'Methods' section), provided clear insights into the cell types in each state (Fig. 1c). Specifically, state 1 contains GO terms relating to antigen processing, including genes that are highly expressed in the monocyte lineage, such as cd74a/b 17, ctss2.2 18 and mhc2dab 19 (Supplementary Data 1). The functionality of state 2 relates to leucocyte migration, including genes specific to neutrophils, e.g., cxcr4b 20, rac2 21 and wasb 22, 23 (Supplementary Data 1). State 3 is highly enriched for genes that are involved in ribosome biogenesis, including fbl (Fibrillarin) and pes (Pescadilo), both of which are critical for stem cell survival24, 25 (Supplementary Data 1). Since there is also enrichment for HSC homoeostasis, this state is most likely to be haematopoietic stem/progenitor cells (HSPCs). With GO terms that include gas exchange and erythrocyte differentiation involving the adult haemoglobins, ba1, ba1l and hbaa1 26 together with the erythroid-specific aquaporin gene, aqp1a 26, 27 (Supplementary Data 1), state 4 can be assigned to the erythroid lineage. Finally, state 5 has functionality that is relevant for circulatory system development and blood coagulation, both of which include itga2b (also known as cd41) together with its heterodimer itgb3b 28 (Supplementary Data 1). Since these gene lists include other genes that interact with this platelet integrin receptor complex, as well as additional genes relevant for platelet function, we assigned this cell state to thrombocytes. Mature lymphocytes could not be detected, most likely as T cells mature in the thymus and B cells are comparatively rare and were not enriched for.
To experimentally confirm our computational predictions, we sorted cells from transgenic lines that were the most abundant in each of the five states (Fig. 2) and stained them using May–Grünwald Giemsa staining. Indeed, the morphological properties of the sorted cells (Fig. 1c and Supplementary Figs. 6–7) matched the assigned cell types, therefore adding confidence to these cell-type assignations. As expected, the signature genes, such as marco, lyzC, hhex, alas2 and itga2b, were within the most differentially expressed genes in monocytes, neutrophils, HSPC, erythrocytes and thrombocytes, respectively (Fig. 1d).
The distribution of cells from different transgenic lines modelled by Monocle. a The trajectories of cell states predicted by Monocle are shown in grey for each transgenic line used, with the associated cell types labelled in blue. The percentage of cells from each transgenic line contributing to each state is given next to the relevant trajectory. b Pie charts showing the contribution of transgenic lines to each cell type. The colour code relates to the colours given in the headers for each transgenic line used in a
Taken together, the reconstructed branched tree revealed a gradual transition of myeloid cells from immature to more differentiated cells. Within this tree, HSPCs assumed a committed state through a single path, suggesting that during steady state haematopoiesis, HSPCs can reach a specific cell fate through only one type of intermediate progenitor.
Distinct state cells with different repopulation potential
Functional in vivo transplantation assays have been traditionally used to assess the differentiation potential of different haematopoietic populations. To examine the repopulation and lineage potential of the cells within different states, we sorted cells from Tg(mpx:EGFP) 29, Tg(gata1:EGFP) 30 and Tg(runx1:mCherry) 31 fish to enrich for neutrophil, erythroid and HSPC cell state, respectively. We next injected 500 donor cells into sublethally irradiated, immunocompromised rag2 E450fs−/− zebrafish32 and assessed their engraftment at 1 day, 4- and 14 weeks post injection (PI) (Fig. 3a).
Cells within distinct states have different repopulation potentials. a Experimental strategy for the adult transplantation experiment. Kidneys were dissected from transgenic donor fish and sorted for cells expressing the fluorescent transgene. Positive cells were collected and injected into sublethally irradiated rag2 E450fs−/− fish. b Assessment for engraftment was made 1 day, 4- and 14 weeks post transplantation using flow cytometry. Successfully engrafted fluorescent donor cells were isolated at 4 weeks PI by index sorting single cells into a microtitre plate for subsequent RNA-seq analyses. c Distribution of runx1+ cells, from non-transplanted (left) and transplanted fish at 4 (middle) and 14 wpt (right), modelled by Monocle
Analysis of kidney repopulation revealed that mpx+, gata1+ and runx1+ cells were able to home to the kidney 1 day PI (Fig. 3b). However, only progeny of runx1+ cells were detectable at 4 weeks PI in all examined recipients (Fig. 3b). No progeny of mpx+ and gata1+ cells were evident at the same time point. To examine the lineage output of runx1+ cells following transplantation, we sorted engrafted runx1+ kidney cells 4 and 14 weeks PI and processed them for scRNA-seq analysis. The scRNA-seq data from 302 engrafted runx1+ cells projected onto a Monocle trajectory revealed the multilineage potential of donor runx1 cells at both 4 and 14 weeks PI (Fig. 3c). These data strongly suggested that at least some of these cells were HSCs.
According to transplantation assays, cytospins and transcriptional profiling of cells prior and following transplantation, cells located in the branches of the Monocle tree show progression of lineage-restricted progenitors to mature blood cells with no repopulation potential. However, cells in the middle of the Monocle tree (state 3) are a mixture of progenitors and HSCs with long-term multilineage potential.
Heterogeneity within the HSPC branch of the lineage tree
To increase the number of HSCs in our data set and the resolution in the HSPC branch of the Monocle trajectory, we added the 302 transplanted runx1+ cells to our 1422 previously sequenced cells. We re-analysed the whole data set (1724 cells in total), and generated a new Monocle trajectory (Fig. 4a).
Transcriptionally similar cells display different probabilities of being stem cells. a Cells predicted to be stem cells in the middle part of the lineage tree according to their stemness index. The insert shows the new Monocle tree including transplanted cells (1724 single cells and 1871 highly variable genes). b Distribution of stemness scores in different branches of the tree showing the presence of potential HSCs exclusively in the HSPC branch. c Contribution of different transgenic lines to predicted stem cells
Next, we considered the frequency of potential HSCs in this data set. To do so, we computed the stemness Srel index33, using the Kullback–Leibler distance of the predicted probabilities compared to the expected one, for each of the four different branches (Fig. 4a, b). The lower the 'stemness' factor, the higher the confidence that a particular cell is a stem cell. Using the threshold of 3 sigma over the mean stemness value (0.05), our analysis predicted that 35 out of 214 cells in the middle part of the tree are potential HSCs. The majority of cells that were identified as stem cells originated from the cd41 (13 cells) and runx1 (14 cells) transgenic lines (Fig. 4c). It should be noted that both these lines have been previously identified to contain transplantable HSCs31, 34, lending further confidence to our computational prediction. This suggests that, although both stem and progenitor cells are intermixed on the trajectory due to their overall similar transcriptomes, their lineage potentials (and thus stemness scores) are distinct.
Ribosomal genes and lineage factors control differentiation
Differentiation generally involves specific regulated changes in gene expression. To understand the dynamics of transcriptional changes during the differentiation of myeloid cells, we examined trends in gene expression in each of the four branches (Fig. 5). Dynamically expressed genes within each of the branches showed two main trends (see 'Methods' section). These included genes gradually upregulated through pseudotime and genes gradually downregulated (Fig. 5a, b).
Lineage differentiation is defined by two main trends in gene expression. a Heatmap of genes whose expression changed dynamically during pseudotime in each of the four branches. b Graph showing the average expression pattern of the dynamically expressed genes that follow the same trend across pseudotime. For each of the cell states, one gene is presented that follows one of the two main trends. Standard error is shown as a grey area around the trend lines. c Heatmap of expression of 168 genes annotated as 'ribosomal proteins' genes in pseudotime in each of the four branches
Genes upregulated in pseudotime included well known genes related to the specific function of the relevant cell type (Fig. 5b). The majority of cells characterised as erythroid dynamically expressed genes such as alas2, aqp1a.1, ba1, ba1l, cahz and hbaa1. Similarly, cells in the monocyte branch dynamically expressed genes like c1qa, cd74a, ifngr1, marco, myod1 and spi1a; among other genes, the cebpb, cfl1, cxcr4b, illr4, mpx and ncf1 were upregulated in pseudotime in the neutrophil branch and thrombocytes dynamically expressed fn1b, gp1bb, itga2b, mpl, pbx1a and thbs1b. A complete list of all genes that were dynamically expressed across pseudotime can be found in Supplementary Data 1.
Interestingly, genes downregulated through pseudotime (Fig. 5b) in each of the four branches were consistently enriched for genes involved in ribosome biosynthesis, as revealed by GO terms 'biosynthetic process', 'ribosome' and 'translation' (Supplementary Data 1). This is an interesting finding, because previous studies suggested that HSCs have significantly lower rates of protein synthesis than other haematopoietic cells35. Therefore, we went on to investigate the expression of ribosomal proteins in pseudotime in greater depth (Fig. 5c).
Out of 168 genes annotated as 'ribosomal proteins' on Ensembl BioMart database (Supplementary Data 1), 89 genes had low, random expression in our data set (Fig. 5c). These genes encoded mainly mitochondrial ribosomal proteins (Fig. 5c). In contrast, 79 genes that showed high expression across all cells encoded cytoplasmic ribosomal proteins and were downregulated in pseudotime in all four branches (Fig. 5c). Importantly, the observed downregulation of ribosomal genes in pseudotime was not correlated with the cell cycle state of the cell, apart from a weak correlation in the erythrocytic lineage (Supplementary Fig. 8). These findings further indicate that there is a common developmental event in which suppression of transcription of ribosomal genes and upregulation of lineage-specific factors direct lineage commitment and terminal differentiation.
Next, we compared ribosomal gene expression between the predicted HSCs and the remaining progenitors in the middle branch. The absolute number of ribosomal transcript was similar between the two populations (CV score, 0.180 ± 0.033). In agreement with this, there was a high Pearson correlation (0.986) between predicted HSCs and progenitors. Overall, this suggests highly similar ribosomal gene expression between HSCs and more committed progenitors. However, it is possible that modest differences in ribosomal gene expression between HSCs and progenitors cannot be detected by the methodologies currently available.
In order to address whether this trend has been evolutionarily conserved from zebrafish to mammals, we considered the correlation in ribosomal gene expression between human phenotypic HSCs (CD34+ CD38− CD45RA− CD90+ CD49f+) and the different progenitor fractions (for details, please see 'Methods' section). We used a publicly available scRNA-seq data set from bone marrow-derived HSPCs and analysed the expression of genes that encode cytosolic ribosomal proteins. After calculating average log10 expression profiles for each of the six different cell types (HSC, MPP, MLP, CMP, GMP and MEP), we calculated the pairwise Pearson correlation. The analysis revealed very strong correlations (0.92–0.99) between the ribosomal gene expression in HSCs and all five progenitor populations (Supplementary Fig. 9). To quantitatively assess whether the absolute expression value of each gene fluctuates, we calculated the coefficient of variation (CV)36 for each ribosomal gene across the six different cell types. Our results suggest that for the case of cytosolic ribosomal genes, absolute gene expression values across different cell types showed low levels of fluctuation (CV score, 0.116 ± 0.047), whereas mitochondrial ribosomal genes were randomly expressed at different levels (CV score, 1.589 ± 1.033). This shows that ribosomal gene expression of human HSCs is highly similar to more mature progenitors, confirming an evolutionary conservation of this trend from zebrafish.
HSPC transcriptome is conserved compared to mouse and human
Zebrafish are an important model system in biomedical research and has been extensively used for the study of haematopoiesis. Although it has been demonstrated that many transcription factors and signalling molecules in haematopoiesis are well conserved between zebrafish and mammals, comparative analysis of the whole transcriptome was lacking.
In order to explore the evolution of blood cell-type-specific genes, we performed conservation analysis between zebrafish and other vertebrate species (see 'Methods' section). For this analysis, we enriched our initial data set with 81 natural killer (NK) and 109 T cells derived from the spleen of two adult zebrafish37. Our analysis revealed particularly high conservation of the HSPC transcriptome. For example, 90% of HSPC-specific genes in zebrafish had an ortholog in human and mouse compared to 70–80% of erythrocyte-, monocyte-, neutrophil- and thrombocyte-specific genes (Fig. 6a). The lowest conservation was observed for T cells (59%) and NK cells (68%), possibly reflecting their adaptation to fish-specific pathogens and virulence factors (Fig. 6a).
Conservation analysis of zebrafish genes differentially expressed in the main blood cell types. a Percentage of zebrafish protein-coding genes (specific for distinct blood cell types, as well as non-differentially expressed) with orthologs in other vertebrate species. b The total number of paralogs duplicated exclusively pre- (green) and post-ray-finned speciation (red). The numbers 1–7 mark the number of cell types (erythrocytes, monocytes, neutrophils, thrombocytes, HSPCs, T cells and NK cells) in which the duplicated genes are expressed. c The percentage of conserved vs diverged genes duplicated exclusively post speciation (fish-specific genes)
Gene duplication is the major process of gene divergence during the molecular evolution of species. We therefore analysed duplications that occurred exclusively before (referenced hereafter as pre-speciation genes) or after speciation (referenced hereafter as post-speciation genes) of the last common ancestor between fish (Actinopterygii) and mammals (Sarcopterygii)37, (see 'Methods' section). Out of 7424 paralogs that were expressed in our data set (see 'Methods' section), around 79% were duplicated pre- and 21% were duplicated post speciation (Fig. 6b). Following ray-finned-specific duplication, the paralogs were more likely to functionally diverge (88%) and show expression in different cell types than to remain expressed in the same cell type (conserved expression), 12% (Fig. 6b, c). Interestingly, HSPCs had the highest percentage of paralogs (19%) with a conserved expression pattern (Fig. 6c). This number was lowest for duplicated genes in innate (0% for the neutrophils and 6% in monocytes) and adaptive immune cells (8% for the NK and 6% for the T cells). Altogether, our findings further underline the relevance of the zebrafish model system in advancing our understanding of the genetic regulation of haematopoiesis in both normal and pathological states.
BASiCz
The characterisation of mouse and human haematopoietic cells is dependent on the presence of cell surface markers and availability of antibodies specific for diverse progenitor populations. The antibodies for these cell surface markers are thus used to isolate relatively homogeneous cell populations by flow cytometry. Transcriptional profiling of isolated cell populations38, 39 and more recently single cells40 have further allowed genome-wide identification of cell-type-specific genes. However, beyond mouse and human, less is known about the transcriptome of blood cell types, mainly due to the lack of suitable antibodies.
To overcome this knowledge gap, we have generated a user-friendly cloud repository, BASiCz (Blood Atlas of Single Cells in zebrafish) for interactive exploration and visualisation of 31,953 zebrafish genes in 1422 haematopoietic cells across five different cell types. The generated database (http://www.sanger.ac.uk/science/tools/basicz) allows easy access and retrieval of sequencing data from zebrafish blood cells.
Cell differentiation during normal blood formation is considered to be an irreversible process with a clear directionality of progression from HSCs to more than 10 different blood cell types. It is, however, widely debated to what extent the process is gradual or direct6, 13 on the cellular level; and in the case of the gradual model, what the intermediates of the increasingly restricted differentiation output of progenitor cells are2,3,4,5, 33. Although these models are very different in the way that they describe lineage progression, the identity of haematopoietic cells is determined based on the cell surface markers and the progression of cells during differentiation is defined on a cellular rather than transcriptional level.
Here we used a marker-free approach to order cells along their differentiation trajectory based on the transcriptional changes detected in the single-cell RNA-seq data set. Our analysis showed a gradual transition of cells on a global transcriptional level from multipotent to lineage restricted. The computationally reconstructed tree further revealed that differentiating cells moved along a single path in the 'state space'. This path included an early split of cells towards thrombocyte–erythrocyte and monocyte–neutrophil trajectories. However, cells in the 'middle' of the tree (HSPC state) showed considerable cell-to-cell variability in their probability to transition to any of the four cell types. This suggested that although global transcriptional changes before and after the branching point were continuous, the probability of a cell transitioning to any of the four committed states was determined only by a subset of highly relevant genes. Therefore, cells that were transcriptionally similar overall could have a high probability of differentiation to distinct cell types.
Interestingly, once the cell fate decision was executed, suppression of transcription of ribosomal genes and upregulation of genes which are relevant for the function of each cell type coordinately controlled lineage differentiation. Of all genes that were annotated as 'ribosomal proteins' on the Ensembl BioMart database, only those that encoded cytoplasmic ribosomal proteins showed dynamic expression in pseudotime in our data set. Importantly, this change was not linked to the expression of cell cycle-specific genes, excluding proliferation rates as a potential reason for these data35. Furthermore, our analysis of data obtained from human HSCs and progenitors revealed that ribosomal gene expression levels are highly similar between the different progenitor types and stem cells35.
Our comparative analysis between zebrafish and human across seven different haematopoietic cell types revealed a high overall conservation of blood cell-type-specific genes. Together with BASiCz, a user-friendly cloud repository, we generated a comprehensive atlas of single-cell gene expression in adult zebrafish blood. Data-driven classification of cell types provided high-resolution transcriptional maps of cellular states during differentiation. This allowed us to define the haematopoietic lineage branching map, for the first time, in zebrafish in vivo.
Zebrafish strains and maintenance
The maintenance of wild-type (Tubingen Long Fin) and transgenic zebrafish lines29,30,31, 41,42,43,44,45 (Supplementary Table 1) was performed in accordance with EU regulations on laboratory animals46. All experiments were approved by the Sanger Institute's Animal Welfare and Ethical Review Body.
Single-cell sorting
A single kidney from heterozygote transgenic or wild-type fish was dissected and placed in ice cold PBS/5% foetal bovine serum. At the same time, testes were dissected from the same fish. Single-cell suspensions were generated by first passing through a 40-µm strainer using the plunger of a 1 ml syringe as a pestle. These were then passed through a 20-µm strainer before adding 4′,6-diamidino-2-phenylindole (DAPI, Beckman Coulter, cat no B30437) for mCherry/dsRed2, or propidium iodide (PI, Sigma, cat no P4864) for GFP/EGFP. Individual cells were index sorted into wells of a 96-well plate using a BD Influx Index Sorter. Kidneys from a non-transgenic line were used as a control for gating16.
Whole transcriptome amplification
The Smart-seq2 protocol47, 48 was used for whole transcriptome amplification and library preparation16. In brief, single cells were lysed by incubation in a 0.2% Triton X-100 solution at 72 °C for 3 min. Next, cDNA was generated using SmartScribe enzyme (100 units per sample) and a template switching oligo (1 µM). At this step, we added 92 external RNA controls consortium (ERCC) spike-ins at a final dilution of 1:10. PCR pre-amplification was carried out using KAPA HiFi HotStart ReadyMix and 24 PCR cycles. The reaction product of the PCR was then purified using Ampure XP beads in conjunction with a magnetic stand, washing several times with 80% ethanol. At this step, cDNA quality was assessed using an Agilent Bioanalyzer and qPCR. Samples of sufficient quality were used for library preparation. For tagmentation, we used the Illumina Nextera XT DNA kit, incubating the mixture for 5 min at 55 °C. After stripping the transposase enzyme, adaptor ligation was carried out using the Nextera PCR master mix and index primers, cycling for 12 cycles. The library was then purified again using beads. Following a final Bioanalyzer quality check, the libraries were pooled and diluted to the concentration required by the different sequencers.
They were sequenced in paired-end mode on the Illumina Hi-Seq2500 or Hi-Seq4000 platforms.
Sorted transgene-positive or gated wild-type cells were concentrated by cytocentrifugation at 7 × g for 5 min onto SuperFrostPlus slides using a Shandon Cytospin 3 cytocentrifuge. Slides were fixed for 3 min in −20 °C methanol and stained with May–Grünwald Giemsa (Sigma). Images were captured using a Leica DM5000b microscope in conjunction with a ×63 oil-immersion lens and an Olympus DP72 camera.
Transplantation experiments
Adult rag2 E450fs−/− mutant fish32 were irradiated in an IBL 437 irradiator using a 10 Gy dose from a Caesium 137 source. After 1–2 days of recovery, donor cells were prepared from kidneys of transgenic fish as described above. Using the same gating strategy as employed for the single-cell sorting, fluorescent cells were collected by flow cytometry into microtubes containing 20 µl ice cold PBS/5% foetal bovine serum. Using a volume of 10 µl, 500 cells were transplanted into the anaesthetised (0.02% tricaine, Sigma A5040) rag2 E450fs−/− recipients via intraperitoneal injection. As described above, engraftment into the whole kidney marrow was analysed by FACS at 1 day, 4 and 14 weeks post transplantation. The engrafted cells at 4 and 14 weeks post transplantation were single-cell index sorted and processed for single-cell RNA-seq as described above.
Benchmarking single-cell RNA-sequencing methods
One of the most important components that contributes to errors during the alignment and quantification of single-cell RNA-sequencing data is the presence of multi-mapped (or ambiguous) reads49. Currently, there are many different bioinformatic strategies that can be used to align (e.g., STAR50, Tophat51, Bowtie52, Salmon53, Sailfish54, etc.) and quantify scRNA-seq data (e.g., htseq55, cufflinks56, Salmon53, Sailfish54, etc.).
However, independent of the method applied, one of two possible strategies can be used to align reads, namely unique and multi-mapped. A comprehensive comparative analysis across many different scRNA-seq approaches has recently been published. It suggests that both setups (i.e., single and multi-mapped reads) are able to cope with ambiguous reads effectively49.
In order to assess the impact of using a unique vs. multi-mapped reads alignment strategy on our data set, we re-analysed our raw data using STAR50 in uniquely aligned reads mode. Salmon53 was used next to quantify the transcripts. The Pearson correlation of the average gene expressions between Salmon and Sailfish at single-cell level ranged from 0.81 to 0.91, suggesting a strong correlation between alignments that included uniquely mapped reads and those that did not (Supplementary Fig. 1a). As expected, the number of detected genes (TPM > 1) was lower for Salmon compared to Sailfish (Supplementary Fig. 1b). However, the genes' variability distribution (CV) across single cells for each plate was comparable between the two methods (Supplementary Fig. 1c).
Extended analysis of the reconstructed lineage tree in zebrafish
To further investigate how robust our computational reconstruction of the lineage tree is, we applied different cutoffs to define variable genes. We next reconstructed the lineage tree using Monocle215. Specifically, the highly variable genes were calculated using: 5% biological variation, 25% (default analysis) and 95% biological variation (three components). We then analysed the overall structure of the tree and the percentage of the misclassified cells as compared to the default setting that we used in the initial submission.
Single-cell RNA-seq processing and QC
Reads were aligned to the zebrafish reference genome (Ensemble BioMart version 83) combined with the EGFP, mCherry, tdTomato and ERCC spike-ins sequences. Quantification was performed using Sailfish54 version 0.9.0 with the default parameters using paired-end mode (parameter –l IU).
Transcript per million (TPM) values reported by Sailfish were used for the QC of the samples. Wells with fewer than 1000 expressed genes (TPM > 1), or more than 60% of ERCC or mitochondrial content were initially annotated as poor quality cells (Supplementary Fig. 1). However, due to the lower number of expressed genes in erythroid cells, we further investigated the expression levels of adult globin genes, ba1 and hbaa1 26, in all erythroid cells. Based on comparison with the empty wells, samples that expressed both ba1 (>40,000 TPM) and hbaa1 (>9000 TPM) were considered to pass QC (Supplementary Fig. 2). Therefore, a total of 1422 single cells were selected for further analysis.
Average single-cell profiles compared to corresponding bulk wells revealed strong correlations (Pearson's correlation coefficient) ranging from 0.7 to 0.9 as illustrated in Supplementary Fig. 2, suggesting that the single-cell expression profiles were effectively quantified.
For each of the 1422 single cells, both gene and ERCC counts reported by Sailfish, were transformed into normalised counts per million (CPM). To do this, we divided the number of counts for each gene by the total number of counts (i.e., sum of all counts per cell) in each cell followed by multiplication of the resulting number by 1,000,000. The library size and cell-specific biases were removed (e.g., differences during amplification, ERCC concentration, batch effects, etc.) using the scran R package (version 1.3.0)57. Out of 31,953 genes, we retained those that were expressed in at least 1% of all cells (CPM > 1). Thus, a total of 20,960 genes were used for further analysis.
Technical noise fit and identification of highly variable genes
To distinguish biological variability from the technical noise in our single-cell experiments, we inferred the most highly variable genes using ERCCs as spike-in in all 1422 blood cells36. We used the scLVM58 R package (version 0.99.2) to identify the 1845 most highly variable genes (Supplementary Fig. 3).
Principal component analysis (pcaMethods59 (version 1.64.0)), independent component analysis (FastICA60 (version 1.2) and diffusion maps (destiny61 (version 1.3.4)) were used to verify that all cells were intermixed in the reconstructed 3D component space based on their transcriptional properties and not based on the fish or a plate they originated from.
Pseudotime ordering DE and dynamically expressed genes
The set of 1845 most highly variable genes was used to order the 1422 single cells along a trajectory using the Monocle215 R package (version 1.99.0). The 'tobit' expression family and 'DDRTree' reduction method were used with the default parameters. As illustrated in Fig. 1, cells ordered in the pseudotime created five distinct states. To assign identity to each of the five states, we performed differential expression (DE) analysis between each state vs. the remaining four using the 'differentialGeneTest' Monocle2 function. We modelled expression profiles of each state using a Tobit family generalised linear model15. For each state, statistically significant genes that scored P < 0.01, q < 0.1 (false discovery rate (FDR)) and were expressed in more than 50% of the cells were further used to perform GO analysis.
To enrich for HSPCs, we added 302 transplanted runx1+ cells to our previous data set for a total of 1724 cells. We re-analysed the data the same way as described above and used the 1871 most variable genes for the calculation of a new Monocle trajectory.
Finally, we identified genes that change as a function of pseudotime across each of the four branches by setting the 'fullModelFormulaStr' parameter equal to '~sm.ns(Pseudotime)'. Genes whose expression changed dynamically in pseudotime were selected using the same statistical criteria as described for DE genes. For each branch, we clustered dynamically expressed genes using the 'plot_pseudotime_heatmap' function with the default parameters. The number of clusters (trends) in each branch was determined by its silhouette plot score (cluster R package version 2.0.5)62. To generate the trend lines across different states (see Fig. 3b), we used the average expression pattern of the dynamically expressed genes that follow the same trend across pseudotime and fit them using the ggplot2 63 R package (version 2.2.1) stat_smooth() parameter. We used the Gaussian linear model and the formula 'y ~ poly(x,2)' at 0.95 of standard error (grey area of the plot).
For the analysis of ribosomal genes, we used Ensembl BioMart version 83 and selected all genes annotated with the term 'ribosomal protein'. We performed clustering using the pheatmap function (R pheatmap package version 1.0.8)64 with Euclidean distance and ward.D2 linkage.
To investigate the correlation between ribosomal and cell cycle gene expression, we identified a total of 342 zebrafish genes annotated as 'GO:0007049', i.e., 'cell cycle' using BioMart (version 83). Next, we performed clustering between a subset of the cell cycle genes expressed in more than 10% of cells in each of the branches of the Monocle trajectory and dynamically expressed ribosomal genes using the tools described above.
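The clustering itself was done with pheatmap; a comparable sketch in Python with SciPy, using Ward linkage on Euclidean distances (which roughly corresponds to R's ward.D2), is shown below (expr is an assumed genes × cells expression matrix and the cluster count of 4 is purely illustrative):

from scipy.cluster.hierarchy import linkage, fcluster

Z = linkage(expr, method="ward", metric="euclidean")   # hierarchical clustering of genes
clusters = fcluster(Z, t=4, criterion="maxclust")      # cut the tree into 4 clusters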
Analysis of human cells
In order to show the generalisability of our findings from zebrafish to humans, we used a publicly available human single-cell RNA-seq data set33, deposited in the Gene Expression Omnibus (GEO) under accession code GSE75478. This set contained data from 1344 single cells, which we aligned to the latest human reference genome (GRCh38p10 version 88), and gene expression was quantified using Sailfish (version 0.9.0). Following QC, we were left with 891 single cells, which included HSCs and various progenitor fractions (Supplementary Table 2). After normalising the resulting CPMs with the scran package (as for the zebrafish data), we identified 341 genes that were annotated as 'Ribosomal' using the BioMart database (GRCh38p10 version 88) and were expressed in more than 1% of all cells. Of these, 250 were expressed at a very low level in this data set; GO term enrichment analysis revealed that these genes encode mitochondrial ribosomal proteins. In contrast, the 91 genes that were expressed at a high level encoded cytosolic ribosomal proteins, as suggested by GO term enrichment analysis. Since our initial analysis of zebrafish cells focused only on genes that encode cytosolic ribosomes, we focused on the same population of genes in the human data set. Finally, we calculated the pairwise Pearson correlation between the cytosolic ribosomal genes for each progenitor population.
GO analysis
DE genes were ranked for each of the five states based on their mean log10 counts. Genes with an average lower than 2 and those expressed in more than one state were not included in the GO analysis. GO analysis was performed with the gProfileR65 package (version 0.6.1) using the gprofiler command with the following parameters: organism = 'drerio', hier_filtering = 'moderate', correction_method = 'fdr' and max_p_value = 0.05.
Conservation analysis of the cell-type genes in zebrafish
In order to perform the conservation analysis, we identified the orthologous genes (BioMart Ensembl version 83) between the zebrafish and other vertebrate species, including cave fish, tilapia, amazon molly, tetraodon, fugu, cod, human, chimpanzee, mouse, rat, dolphin, wallaby, chicken, lizard, Xenopus, coelacanth and lamprey. For this analysis, we enriched our initial data set with 81 NK and 109 T cells derived from the spleen of two adult zebrafish37. Following the same computational approach as we did with the initial data set, we re-calculated the DE genes for each of the seven different clusters. We only considered 'protein_coding' genes that were expressed in more than 50% of cells within each cluster and scored more than mean log10 counts. This resulted in 41 erythrocyte-, 113 monocyte-, 102 neutrophil-, 212 thrombocyte-, 60 HSPC-, 34 NK- and 34 T-specific genes that were used for further analysis. For the case of the non-DE genes, we included only 'protein_coding' annotated genes that were expressed in more than 1% of all cells (CPM > 1) and with average gene expression higher than the global mean of 0.10. The final list of the non-DE genes included 8127 genes.
Analysis of duplicated genes in zebrafish
In order to analyse duplicated genes37, we first identified all zebrafish 'protein_coding' paralog genes listed in Ensembl (BioMart Ensembl version 83) and split them into two groups: (1) 17,158 pre-ray-finned fish duplicated genes, including Euteleostomi, Bilateria, Chordata, Vertebrata and Opisthokonta parent taxa, and (2) 11,806 post-ray-finned fish duplicated genes, including Neopterygii, Otophysa, Clupeocephala and Danio rerio children taxa. We next removed duplicated genes that were found in common between the two groups. This resulted in 8601 pre-, and 3249 post-ray-finned fish genes that we used in further analysis.
For the analysis of expression pattern divergence, we focused on genes that were expressed in our data set. We analysed the expression patterns of all paralogs of DE genes (i.e., of erythrocytes, monocytes, neutrophils, thrombocytes, HSPCs, NK and T cells) that were expressed in more than 10% of cells in each of the branches (cell states). The expression pattern was considered conserved if duplicated genes and their annotated paralogs were all expressed in the same cell type. However, if at least one of the paralogs was expressed in a different cell type, this was considered an example of potential functional divergence.
Deep neural network (DNN) classifier
To generate the deep neural network (DNN) model, we used Keras66, a Python-based deep learning library for Theano67 and Tensorflow68. We worked with the Keras functional API, which allows the definition of complex systems, such as multi-output models.
The DNN was used to predict the probabilities of a specific gene expression profile being classified into one of the four differentiated cell types. We used the entire set of genes for all differentiated cells in the branches (1724 cells in total), i.e., erythrocytes, thrombocytes, neutrophils and monocytes. The input was therefore formed by 20,960 nodes (genes), which were normalised using z-values (standard scores). For the hyper-parametric fine-tuning of the DNN, we generated and evaluated models with different numbers of hidden layers and hidden nodes, network initialisations, regularisations and batch normalisation. The final hyper-parameters were chosen according to the optimal performance and convergence of the accuracy and loss values.
The model comprised two hidden layers with 100 and 50 nodes, using weight decay regularisation with a λ-value of 0.001 and Gaussian dropout of 0.8 between them. The chosen activation functions were 'relu' for the hidden layers and 'softmax' for the output. Validation was performed on 20% of the initial data set, using the 'categorical cross-entropy' loss. After convergence, the average classification accuracy was 0.998 ± 0.002, the cross-entropy loss 0.03 ± 0.004, the validation accuracy 0.964 ± 0.003 and the cross-entropy validation loss 0.15 ± 0.008.
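A sketch of such an architecture with the Keras functional API is given below; the optimiser, training schedule and the mapping of the λ = 0.001 weight decay onto an L2 kernel regulariser are assumptions rather than details reported above:

from tensorflow import keras
from tensorflow.keras import layers, regularizers

n_genes, n_classes = 20960, 4

inputs = keras.Input(shape=(n_genes,))                       # z-scored expression values
x = layers.Dense(100, activation="relu",
                 kernel_regularizer=regularizers.l2(0.001))(inputs)
x = layers.GaussianDropout(0.8)(x)                           # Gaussian dropout between the hidden layers
x = layers.Dense(50, activation="relu",
                 kernel_regularizer=regularizers.l2(0.001))(x)
outputs = layers.Dense(n_classes, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(X, y_onehot, validation_split=0.2, epochs=..., batch_size=...)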
The neural network output returns the probability that a gene expression input vector (cell) is classified as each of the differentiated cell types. We can use these probabilities and their distributions to generate a value that quantifies the 'Stemness' of the cells according to the network output. The 'Stemness value' is a measure of similarity between the input vector and the average distributions of each output class, which can then be used to indicate the differentiation state of the input cell.
This measure has previously been used for similar purposes33. It is based on the Kullback–Leibler distance between probabilities, and the 'Stemness value' (\(S_i\)) of cell i is determined by the equation:
$${\boldsymbol{S}}_i = \mathop {\sum}\limits_{j = 1}^{N_c} {{\boldsymbol{p}}_{ij}} {\mathrm{log}}\frac{{{\boldsymbol{p}}_{ij}}}{{{\bar{\boldsymbol p}}_j}}$$
where \(N_c\) is the number of classes, \({\bar{\boldsymbol p}}_j\) is the average probability of class j and \({\boldsymbol{p}}_{ij}\) is the probability that cell i belongs to class j.
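In code, the 'Stemness value' reduces to a Kullback–Leibler-style sum over the classifier's output probabilities; a minimal NumPy version (which takes the average probability of class j as the mean over all cells, an assumption about how the average is formed) is:

import numpy as np

def stemness(p, eps=1e-12):
    # p: (n_cells, n_classes) matrix of class probabilities returned by the classifier
    p_bar = p.mean(axis=0)                                   # average probability of each class
    return (p * np.log((p + eps) / (p_bar + eps))).sum(axis=1)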
We have generated a cloud repository to enable the research community to access single-cell gene expression profiles of the 1422 zebrafish blood cells across all 31,953 zebrafish genes. The cloud service was implemented using the shiny69 (version 0.14.2, https://shiny.rstudio.com) and plotly70 (version 4.5.6, https://plot.ly) R packages.
Statistics and reproducibility of experiments
Statistical tests were carried out using R software packages as indicated in the figure legends and in the 'Methods' section. No statistical method was used to predetermine sample sizes. No randomisation or blinding of samples was performed. The Pearson correlation coefficient was used to compare the average profiles of single cells against the bulk. Significance of differentially expressed genes was calculated with an approximate likelihood ratio test (Monocle2 differentialGeneTest() function) of the full model '~state' against the reduced model '~1'. For the dynamically expressed genes, the full model '~sm.ns(Pseudotime)' was tested against the reduced model of no pseudotime dependence. In both cases, P values were adjusted using the Benjamini–Hochberg FDR procedure, and statistically significant genes were selected with P < 0.01 and FDR < 0.1. For the GO analysis, the hypergeometric test (equivalent to the one-tailed Fisher's exact test) was used in the gProfileR65 R package to evaluate significant terms, and P values were corrected for multiple testing using the FDR approach, with FDR < 0.05 considered statistically significant.
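For reference, the Benjamini–Hochberg adjustment and the significance cut-offs described above can be sketched as follows (pvals is an assumed array of raw P values; this is a generic sketch, not the Monocle2 or gProfileR internals):

from statsmodels.stats.multitest import multipletests

reject, fdr, _, _ = multipletests(pvals, alpha=0.1, method="fdr_bh")
significant = (pvals < 0.01) & (fdr < 0.1)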
Raw data can be found under the accession number E-MTAB-5530 on ArrayExpress. Additional Zebrafish-related RNA-seq data that were used in the present study can be found in E-MTAB-4617, E-MTAB-3947 while human-related data were collected from the GEO under accession code GSE75478.
Orkin, S. H. & Zon, L. I. Hematopoiesis: an evolving paradigm for stem cell biology. Cell 132, 631–644 (2008).
Kondo, M., Weissman, I. L. & Akashi, K. Identification of clonogenic common lymphoid progenitors in mouse bone marrow. Cell 91, 661–672 (1997).
Akashi, K., Traver, D., Miyamoto, T. & Weissman, I. L. A clonogenic common myeloid progenitor that gives rise to all myeloid lineages. Nature 404, 193–197 (2000).
Adolfsson, J. et al. Identification of Flt3+ lympho-myeloid stem cells lacking erythro-megakaryocytic potential a revised road map for adult blood lineage commitment. Cell 121, 295–306 (2005).
Månsson, R. et al. Molecular evidence for hierarchical transcriptional lineage priming in fetal and adult stem cells and multipotent progenitors. Immunity 26, 407–419 (2007).
Notta, F. et al. Distinct routes of lineage development reshape the human blood hierarchy across ontogeny. Science 351, aab2116 (2016).
Spangrude, G. J., Heimfeld, S. & Weissman, I. L. Purification and characterization of mouse hematopoietic stem cells. Science 241, 58–62 (1988).
Guo, G. et al. Mapping cellular hierarchy by single-cell analysis of the cell surface repertoire. Cell Stem Cell 13, 492–505 (2013).
Wilson, N. K. et al. Combined single-cell functional and gene expression analysis resolves heterogeneity within stem cell populations. Cell Stem Cell 16, 712–724 (2015).
Jaitin, D. A. et al. Massively parallel single-cell RNA-seq for marker-free decomposition of tissues into cell types. Science 343, 776–779 (2014).
Paul, F. et al. Transcriptional heterogeneity and lineage commitment in myeloid progenitors. Cell 163, 1663–1677 (2015).
Psaila, B. et al. Single-cell profiling of human megakaryocyte-erythroid progenitors identifies distinct megakaryocyte and erythroid differentiation pathways. Genome Biol. 17, 83 (2016).
Yamamoto, R. et al. Clonal analysis unveils self-renewing lineage-restricted progenitors generated directly from hematopoietic stem cells. Cell 154, 1112–1126 (2013).
Treutlein, B. et al. Reconstructing lineage hierarchies of the distal lung epithelium using single-cell RNA-seq. Nature 509, 371–375 (2014).
Trapnell, C. et al. The dynamics and regulators of cell fate decisions are revealed by pseudotemporal ordering of single cells. Nat. Biotechnol. 32, 381–386 (2014).
Macaulay, I. C. et al. Single-cell RNA-sequencing reveals a continuous spectrum of differentiation in hematopoietic cells. Cell Rep. 14, 966–977 (2016).
Leng, L. et al. MIF signal transduction initiated by binding to CD74. J. Exp. Med. 197, 1467–1476 (2003).
Shi, G. P. et al. Human cathepsin S: chromosomal localization, gene structure, and tissue distribution. J. Biol. Chem. 269, 11530–11536 (1994).
Wittamer, V., Bertrand, J. Y., Gutschow, P. W. & Traver, D. Characterization of the mononuclear phagocyte system in zebrafish. Blood 117, 7126–7135 (2011).
Furze, R. C. & Rankin, S. M. Neutrophil mobilization and clearance in the bone marrow. Immunology 125, 281–288 (2008).
Rosowski, E. E., Deng, Q., Keller, N. P. & Huttenlocher, A. Rac2 functions in both neutrophils and macrophages to mediate motility and host defense in larval zebrafish. J. Immunol. 197, 4780–4790 (2016).
Kumar, S. et al. Cdc42 regulates neutrophil migration via crosstalk between WASp, CD11b, and microtubules. Blood 120, 3563–3574 (2012).
Jones, R. A. et al. Modelling of human Wiskott–Aldrich syndrome protein mutants in zebrafish larvae using in vivo live imaging. J. Cell Sci. 126, 4077–4084 (2013).
Grimm, T. et al. Dominant-negative Pes1 mutants inhibit ribosomal RNA processing and cell proliferation via incorporation into the PeBoW-complex. Nucleic Acids Res. 34, 3030–3043 (2006).
Brombin, A., Joly, J.-S. & Jamen, F. New tricks for an old dog: ribosome biogenesis contributes to stem cell homeostasis. Curr. Opin. Genet. Dev. 34, 61–70 (2015).
Ganis, J. J. et al. Zebrafish globin switching occurs in two developmental stages and is controlled by the LCR. Dev. Biol. 366, 185–194 (2012).
Denker, B. M., Smith, B. L., Kuhajda, F. P. & Agre, P. Identification, purification, and partial characterization of a novel Mr 28,000 integral membrane protein from erythrocytes and renal tubules. J. Biol. Chem. 263, 15634–15642 (1988).
Huang, H. & Cantor, A. B. Common features of megakaryocytes and hematopoietic stem cells: what's the connection? J. Cell. Biochem. 107, 857–864 (2009).
Renshaw, S. A. et al. A transgenic zebrafish model of neutrophilic inflammation. Blood 108, 3976–3978 (2006).
Long, Q. et al. GATA-1 expression pattern can be recapitulated in living transgenic zebrafish using GFP reporter gene. Development 124, 4105–4111 (1997).
Tamplin, O. J. et al. Hematopoietic stem cell arrival triggers dynamic remodeling of the perivascular niche. Cell 160, 241–252 (2015).
Tang, Q. et al. Optimized cell transplantation using adult rag2 mutant zebrafish. Nat. Methods 11, 821–824 (2014).
Velten, L. et al. Human haematopoietic stem cell lineage commitment is a continuous process. Nat. Cell Biol. 19, 271–281 (2017).
Ma, D., Zhang, J., Lin, H.-F., Italiano, J. & Handin, R. I. The identification and characterization of zebrafish hematopoietic stem cells. Blood 118, 289–297 (2011).
Signer, R. A. J., Magee, J. A., Salic, A. & Morrison, S. J. Haematopoietic stem cells require a highly regulated protein synthesis rate. Nature 509, 49–54 (2014).
Brennecke, P. et al. Accounting for technical noise in single-cell RNA-seq experiments. Nat. Methods 10, 1093–1095 (2013).
Carmona, S. J. et al. Single-cell transcriptome analysis of fish immune cells provides insight into the evolution of vertebrate immune cell types. Genome Res. 27, 451–461 (2017).
Novershtern, N. et al. Densely interconnected transcriptional circuits control cell states in human hematopoiesis. Cell 144, 296–309 (2011).
Chen, L. et al. Transcriptional diversity during lineage commitment of human blood progenitors. Science 345, 1251033 (2014).
Nestorowa, S. et al. A single-cell resolution map of mouse hematopoietic stem and progenitor cell differentiation. Blood 128, e20–31 (2016).
Lin, H.-F. et al. Analysis of thrombocyte development in CD41-GFP transgenic zebrafish. Blood 106, 3803–3810 (2005).
Zhang, X. Y. & Rodaway, A. R. F. SCL-GFP transgenic zebrafish: in vivo imaging of blood and endothelial development and identification of the initial site of definitive hematopoiesis. Dev. Biol. 307, 179–194 (2007).
Hall, C., Flores, M. V., Storm, T., Crosier, K. & Crosier, P. The zebrafish lysozyme C promoter drives myeloid-specific expression in transgenic fish. BMC Dev. Biol. 7, 42 (2007).
Walton, E. M., Cronan, M. R., Beerman, R. W. & Tobin, D. M. The macrophage-specific promoter mfap4 allows live, long-term analysis of macrophage behavior during mycobacterial infection in zebrafish. PLoS ONE 10, e0138949 (2015).
Dee, C. T. et al. CD4-transgenic zebrafish reveal tissue-resident Th2- and regulatory T cell-like populations and diverse mononuclear phagocytes. J. Immunol. 197, 3520–3530 (2016).
Bielczyk-Maczyńska, E. et al. A loss of function screen of identified genome-wide association study loci reveals new genes controlling hematopoiesis. PLoS Genet. 10, e1004450 (2014).
Picelli, S. et al. Full-length RNA-seq from single cells using Smart-seq2. Nat. Protoc. 9, 171–181 (2014).
Picelli, S. et al. Smart-seq2 for sensitive full-length transcriptome profiling in single cells. Nat. Methods 10, 1096–1098 (2013).
Robert, C. & Watson, M. Errors in RNA-seq quantification affect genes of relevance to human disease. Genome Biol. 16, 177 (2015).
Dobin, A. et al. STAR: ultrafast universal RNA-seq aligner. Bioinformatics 29, 15–21 (2013).
Kim, D. et al. TopHat2: accurate alignment of transcriptomes in the presence of insertions, deletions and gene fusions. Genome Biol. 14, R36 (2013).
Langmead, B. & Salzberg, S. L. Fast gapped-read alignment with Bowtie 2. Nat. Methods 9, 357–359 (2012).
Patro, R., Duggal, G., Love, M. I., Irizarry, R. A. & Kingsford, C. Salmon provides fast and bias-aware quantification of transcript expression. Nat. Methods 14, 417–419 (2017).
Patro, R., Mount, S. M. & Kingsford, C. Sailfish enables alignment-free isoform quantification from RNA-seq reads using lightweight algorithms. Nat. Biotechnol. 32, 462–464 (2014).
Anders, S., Pyl, P. T. & Huber, W. HTSeq--a Python framework to work with high-throughput sequencing data. Bioinformatics 31, 166–169 (2015).
Trapnell, C. et al. Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation. Nat. Biotechnol. 28, 511–515 (2010).
Lun, A. T., Bach, K. & Marioni, J. C. Pooling across cells to normalize single-cell RNA sequencing data with many zero counts. Genome Biol. 17, 75 (2016).
Buettner, F. et al. Computational analysis of cell-to-cell heterogeneity in single-cell RNA-sequencing data reveals hidden subpopulations of cells. Nat. Biotechnol. 33, 155–160 (2015).
Stacklies, W., Redestig, H., Scholz, M., Walther, D. & Selbig, J. pcaMethods--a bioconductor package providing PCA methods for incomplete data. Bioinformatics 23, 1164–1167 (2007).
Marchini, J. L., Heaton, C., Ripley, M. B. & Suggests, M. fastICA: FastICA Algorithms to Perform ICA and Projection Pursuit. https://cran.r-project.org/web/packages/fastICA/index.html (2017).
Haghverdi, L., Buettner, F. & Theis, F. J. Diffusion maps for high-dimensional single-cell analysis of differentiation data. Bioinformatics 31, 2989–2998 (2015).
Maechler, M., Rousseeuw, P., Struyf, A. & Hubert, M. Cluster: Cluster Analysis Basics and Extensions. https://cran.r-project.org/web/packages/cluster/index.html (2005).
Wickham, H. ggplot2: Elegant Graphics for Data Analysis (Springer, New York, 2009).
Kolde, R. Pheatmap: Pretty Heatmaps. https://cran.r-project.org/web/packages/pheatmap/index.html (2012).
Reimand, J. et al. g:Profiler-a web server for functional interpretation of gene lists (2016 update). Nucleic Acids Res. 44, W83–W89 (2016).
Chollet, F. et al. Keras https://keras.io (2015).
Al-Rfou, R. et al. Theano: a Python framework for fast computation of mathematical expressions. Preprint at arXiv e-prints abs/1605.02688 (2016).
Allaire, J. J., Eddelbuettel, D., Golding, N. & Tang, Y. tensorflow: R Interface to TensorFlow https://www.tensorflow.org (2016).
Chang, W., Cheng, J., Allaire, J. J., Xie, Y. & McPherson, J. shiny: Web Application Framework for R https://cran.r-project.org/web/packages/shiny/index.html (2017).
Sievert, C. et al. plotly: Create Interactive Web Graphics. https://plot.ly/javascript/ (2017).
The study was supported by Cancer Research UK grant number C45041/A14953 (to A.C. and E.A.), European Research Council project 677501—ZF_Blood (to A.C.) and a core support grant from the Wellcome Trust and MRC to the Wellcome Trust – Medical Research Council Cambridge Stem Cell Institute. The authors thank the WTSI Cytometry Core Facility for their help with index cell sorting and the Core Sanger Web Team for hosting the cloud web application. The authors would also like to thank the CRUK Cambridge Institute Genomics Core Facility for their contribution to sequencing the data.
Present address: Biotechnology Innovation Centre, Rhodes University, Grahamstown, 6139, South Africa
Emmanouil I. Athanasiadis and Jan G. Botthof contributed equally to this work.
Department of Haematology, University of Cambridge, Cambridge, CB2 0XY, UK
Emmanouil I. Athanasiadis, Jan G. Botthof, Lauren Ferreira & Ana Cvejic
Wellcome Trust Sanger Institute, Wellcome Trust Genome Campus, Cambridge, CB10 1SA, UK
Wellcome Trust – Medical Research Council Cambridge Stem Cell Institute, Cambridge, CB2 1QR, UK
Computer Laboratory, University of Cambridge, Cambridge, CB3 0FD, UK
Helena Andres & Pietro Lio
Emmanouil I. Athanasiadis
Jan G. Botthof
Helena Andres
Pietro Lio
Ana Cvejic
E.I.A. carried out the analysis; J.G.B. and L.F. performed the experiments; H.A. generated the DNN; P.L. oversaw implementation of the DNN; J.G.B., E.I.A. and A.C. contributed to the discussion of the results and designed the figures; A.C. conceived the study and wrote the manuscript. All authors approved the final version of the manuscript.
Correspondence to Ana Cvejic.
Peer Review File
Description of Additional Supplementary Files
Supplementary Data 1
Athanasiadis, E.I., Botthof, J.G., Andres, H. et al. Single-cell RNA-sequencing uncovers transcriptional states and fate decisions in haematopoiesis. Nat Commun 8, 2045 (2017). https://doi.org/10.1038/s41467-017-02305-6
Unsupervised Deep Embedding for Clustering Analysis
ICML 2016
Junyuan Xie, Ross Girshick, and Ali Farhadi
Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods. (Less)
A Diagram Is Worth A Dozen Images
ECCV 2016
Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi
Diagrams are common tools for representing complex concepts, relationships and events, often when it would be difficult to portray the same information with natural images. Understanding natural images has been extensively studied in computer vision, while diagram understanding has received little attention. In this paper, we study the problem of diagram interpretation, the challenging task of identifying the structure of a diagram and the semantics of its constituents and their relationships. We introduce Diagram Parse Graphs (DPG) as our representation to model the structure of diagrams. We define syntactic parsing of diagrams as learning to infer DPGs for diagrams and study semantic interpretation and reasoning of diagrams in the context of diagram question answering. We devise an LSTM-based method for syntactic parsing of diagrams and introduce a DPG-based attention model for diagram question answering. We compile a new dataset of diagrams with exhaustive annotations of constituents and relationships for about 5,000 diagrams and 15,000 questions and answers. Our results show the significance of our models for syntactic parsing and question answering in diagrams using DPGs. (Less)
"What happens if..." Learning to Predict the Effect of Forces in Images
Roozbeh Mottaghi, Mohammad Rastegari, Abhinav Gupta, and Ali Farhadi
What happens if one pushes a cup sitting on a table toward the edge of the table? How about pushing a desk against a wall? In this paper, we study the problem of understanding the movements of objects as a result of applying external forces to them. For a given force vector applied to a specific location in an image, our goal is to predict long-term sequential movements caused by that force. Doing so entails reasoning about scene geometry, objects, their attributes, and the physical rules that govern the movements of objects. We design a deep neural network model that learns long-term sequential dependencies of object movements while taking into account the geometry and appearance of the scene by combining Convolutional and Recurrent Neural Networks. Training our model requires a large-scale dataset of object movements caused by external forces. To build a dataset of forces in scenes, we reconstructed all images in SUN RGB-D dataset in a physics simulator to estimate the physical movements of objects caused by external forces applied to them. Our Forces in Scenes (ForScene) dataset contains 65,000 object movements in 3D which represent a variety of external forces applied to different types of objects. Our experimental evaluations show that the challenging task of predicting long-term movements of objects as their reaction to external forces is possible from a single image. The code and dataset are available at: https://prior.allenai.org/projects/what-happens-if (Less)
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in $32\times$ memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations (in terms of number of the high precision operations) and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. (Less)
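As a rough outside illustration of the Binary-Weight-Network idea summarised in this abstract (a per-filter scaling factor times a sign tensor; this is not code from the paper):

import numpy as np

def binarize_filter(W):
    # W: weights of a single convolutional filter
    alpha = np.abs(W).mean()            # scaling factor: mean absolute weight
    B = np.where(W >= 0, 1.0, -1.0)     # binary tensor in {-1, +1}
    return alpha, B                     # W is approximated by alpha * B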
Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding
Gunnar A. Sigurdsson, Gül Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta
Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. While most of such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation. Following this procedure we collect a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities. The dataset is composed of 9,848 annotated videos with an average length of 30 seconds, showing activities of 267 people from three continents. Each video is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacted objects. In total, Charades provides 27,847 video descriptions, 66,500 temporally localized intervals for 157 action classes and 41,104 labels for 46 object classes. Using this rich data, we evaluate and provide baseline results for several tasks including action recognition and automatic description generation. We believe that the realism, diversity, and casual nature of this dataset will present unique challenges and new opportunities for computer vision community. (Less)
Much Ado About Time: Exhaustive Annotation of Temporal Data
HCOMP 2016
Gunnar A. Sigurdsson, Olga Russakovsky, Ali Farhadi, Ivan Laptev, and Abhinav Gupta
Large-scale annotated datasets allow AI systems to learn from and build upon the knowledge of the crowd. Many crowdsourcing techniques have been developed for collecting image annotations. These techniques often implicitly rely on the fact that a new input image takes a negligible amount of time to perceive. In contrast, we investigate and determine the most cost-effective way of obtaining high-quality multi-label annotations for temporal data such as videos. Watching even a short 30-second video clip requires a significant time investment from a crowd worker; thus, requesting multiple annotations following a single viewing is an important cost-saving strategy. But how many questions should we ask per video? We conclude that the optimal strategy is to ask as many questions as possible in a HIT (up to 52 binary questions after watching a 30-second video clip in our experiments). We demonstrate that while workers may not correctly answer all questions, the cost-benefit analysis nevertheless favors consensus from multiple such cheap-yet-imperfect iterations over more complex alternatives. When compared with a one-question-per-video baseline, our method is able to achieve a 10% improvement in recall (76.7% ours versus 66.7% baseline) at comparable precision (83.8% ours versus 83.0% baseline) in about half the annotation time (3.8 minutes ours compared to 7.1 minutes baseline). We demonstrate the effectiveness of our method by collecting multi-label annotations of 157 human activities on 1,815 videos. (Less)
FigureSeer: Parsing Result-Figures in Research Papers
Noah Siegel, Zachary Horvitz, Roie Levin, Santosh Divvala, and Ali Farhadi
'Which are the pedestrian detectors that yield a precision above 95% at 25% recall?' Answering such a complex query involves identifying and analyzing the results reported in figures within several research papers. Despite the availability of excellent academic search engines, retrieving such information poses a cumbersome challenge today as these systems have primarily focused on understanding the text content of scholarly documents. In this paper, we introduce FigureSeer, an end-to-end framework for parsing result-figures, that enables powerful search and retrieval of results in research papers. Our proposed approach automatically localizes figures from research papers, classifies them, and analyses the content of the result-figures. The key challenge in analyzing the figure content is the extraction of the plotted data and its association with the legend entries. We address this challenge by formulating a novel graph-based reasoning approach using a CNN-based similarity metric. We present a thorough evaluation on a real-word annotated dataset to demonstrate the efficacy of our approach. (Less)
Deep3D: Fully Automatic 2D-to-3D Video Conversion with Deep Convolutional Neural Networks
We propose Deep3D, a fully automatic 2D-to-3D conversion algorithm that takes 2D images or video frames as input and outputs stereo 3D image pairs. The stereo images can be viewed with 3D glasses or head-mounted VR displays. Deep3D is trained directly on stereo pairs from a dataset of 3D movies to minimize the pixel-wise reconstruction error of the right view when given the left view. Internally, the Deep3D network estimates a probabilistic disparity map that is used by a differentiable depth image-based rendering layer to produce the right view. Thus Deep3D does not require collecting depth sensor data for supervision. (Less)
G-CNN: an Iterative Grid Based Object Detector
CVPR 2016
Mahyar Najibi, Mohammad Rastegari, and Larry Davis
We introduce G-CNN, an object detection technique based on CNNs which works without proposal algorithms. G-CNN starts with a multi-scale grid of fixed bounding boxes. We train a regressor to move and scale elements of the grid towards objects iteratively. G-CNN models the problem of object detection as finding a path from a fixed grid to boxes tightly surrounding the objects. G-CNN with around 180 boxes in a multi-scale grid performs comparably to Fast R-CNN which uses around 2K bounding boxes generated with a proposal technique. This strategy makes detection faster by removing the object proposal stage as well as reducing the number of boxes to be processed. (Less)
Beyond Parity Constraints: Fourier Analysis of Hash Functions for Inference
Tudor Achim, Ashish Sabharwal, and Stefano Ermon
Random projections have played an important role in scaling up machine learning and data mining algorithms. Recently they have also been applied to probabilistic inference to estimate properties of high-dimensional distributions; however , they all rely on the same class of projections based on universal hashing. We provide a general framework to analyze random projections which relates their statistical properties to their Fourier spectrum, which is a well-studied area of theoretical computer science. Using this framework we introduce two new classes of hash functions for probabilistic inference and model counting that show promising performance on synthetic and real-world benchmarks. (Less)
Cross-Sentence Inference for Process Knowledge
Samuel Louvan, Chetan Naik, Sadhana Kumaravel, Heeyoung Kwon, Niranjan Balasubramanian, and Peter Clark
For AI systems to reason about real world situations, they need to recognize which processes are at play and which entities play key roles in them. Our goal is to extract this kind of rolebased knowledge about processes, from multiple sentence-level descriptions. This knowledge is hard to acquire; while semantic role labeling (SRL) systems can extract sentence level role information about individual mentions of a process, their results are often noisy and they do not attempt create a globally consistent characterization of a process. To overcome this, we extend standard within sentence joint inference to inference across multiple sentences. This cross sentence inference promotes role assignments that are compatible across different descriptions of the same process. When formulated as an Integer Linear Program, this leads to improvements over within-sentence inference by nearly 3% in F1. The resulting role-based knowledge is of high quality (with a F1 of nearly 82). (Less)
Creating Causal Embeddings for Question Answering with Minimal Supervision
Rebecca Sharp, Mihai Surdeanu, Peter Jansen, and Peter Clark
A common model for question answering (QA) is that a good answer is one that is closely related to the question, where relatedness is often determined using generalpurpose lexical models such as word embeddings. We argue that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings. With causality as a use case, we implement this insight in three steps. First, we generate causal embeddings cost-effectively by bootstrapping cause-effect pairs extracted from free text using a small set of seed patterns. Second, we train dedicated embeddings over this data, by using task-specific contexts, i.e., the context of a cause is its effect. Finally, we extend a state-of-the-art reranking approach for QA to incorporate these causal embeddings. We evaluate the causal embedding models both directly with a casual implication task, and indirectly, in a downstream causal QA task using data from Yahoo! Answers. We show that explicitly modeling causality improves performance in both tasks. In the QA task our best model achieves 37.3% P@1, significantly outperforming a strong baseline by 7.7% (relative). (Less)
Semantic Parsing to Probabilistic Programs for Situated Question Answering
Jayant Krishnamurthy, Oyvind Tafjord, and Aniruddha Kembhavi
Situated question answering is the problem of answering questions about an environment such as an image or diagram. This problem requires jointly interpreting a question and an environment using background knowledge to select the correct answer. We present Parsing to Probabilistic Programs (P3), a novel situated question answering model that can use background knowledge and global features of the question/environment interpretation while retaining efficient approximate inference. Our key insight is to treat semantic parses as probabilistic programs that execute nondeterministically and whose possible executions represent environmental uncertainty. We evaluate our approach on a new, publicly-released data set of 5000 science diagram questions, outperforming several competitive classical and neural baselines. (Less)
What's in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams
COLING 2016
Peter Jansen, Niranjan Balasubramanian, Mihai Surdeanu, and Peter Clark
QA systems have been making steady advances in the challenging elementary science exam domain. In this work, we develop an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges. In particular, we model the requirements based on appropriate sources of evidence to be used for the QA task. We create requirements by first identifying suitable sentences in a knowledge base that support the correct answer, then use these to build explanations, filling in any necessary missing information. These explanations are used to create a fine-grained categorization of the requirements. Using these requirements, we compare a retrieval and an inference solver on 212 questions. The analysis validates the gains of the inference solver, demonstrating that it answers more questions requiring complex inference, while also providing insights into the relative strengths of the solvers and knowledge sources. We release the annotated questions and explanations as a resource with broad utility for science exam QA, including determining knowledge base construction targets, as well as supporting information aggregation in automated inference. (Less)
Examples are not enough. Learn to criticize! Criticism for Interpretability
NIPS 2016
Been Kim, Sanmi Koyejo and Rajiv Khanna
Example-based explanations are widely used in the effort to improve the interpretability of highly complex distributions. However, prototypes alone are rarely sufficient to represent the gist of the complexity. In order for users to construct better mental models and understand complex data distributions, we also need criticism to explain what are not captured by prototypes. Motivated by the Bayesian model criticism framework, we develop MMD-critic which efficiently learns prototypes and criticism, designed to aid human interpretability. A human subject pilot study shows that the MMD-critic selects prototypes and criticism that are useful to facilitate human understanding and reasoning. We also evaluate the prototypes selected by MMD-critic via a nearest prototype classifier, showing competitive performance compared to baselines. (Less)
Adaptive Concentration Inequalities for Sequential Decision Problems
Shengjia Zhao, Enze Zhou, Ashish Sabharwal, and Stefano Ermon
A key challenge in sequential decision problems is to determine how many samples are needed for an agent to make reliable decisions with good probabilistic guarantees. We introduce Hoeffding-like concentration inequalities that hold for a random, adaptively chosen number of samples. Our inequalities are tight under natural assumptions and can greatly simplify the analysis of common sequential decision problems. In particular, we apply them to sequential hypothesis testing, best arm identification, and sorting. The resulting algorithms rival or exceed the state of the art both theoretically and empirically. (Less)
AI assisted ethics
Amitai Etzioni and Oren Etzioni
The growing number of 'smart' instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided them by programmers. Hence, the question the makers and users of smart instrument (e.g., driver-less cars) face is how to ensure that these instruments will not engage in unethical conduct (not to be conflated with illegal conduct). The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable. (Less)
My Computer is an Honor Student — but how Intelligent is it? Standardized Tests as a Measure of AI
AI Magazine 2016
Peter Clark and Oren Etzioni
Given the well-known limitations of the Turing Test, there is a need for objective tests to both focus attention on, and measure progress towards, the goals of AI. In this paper we argue that machine performance on standardized tests should be a key component of any new measure of AI, because attaining a high level of performance requires solving significant AI problems involving language understanding and world modeling — critical skills for any machine that lays claim to intelligence. In addition, standardized tests have all the basic requirements of a practical test: they are accessible, easily comprehensible, clearly measurable, and offer a graduated progression from simple tasks to those requiring deep understanding of the world. Here we propose this task as a challenge problem for the community, summarize our state-of-the-art results on math and science tests, and provide supporting datasets (see www.allenai.org/data.html). (Less)
Selecting Near-Optimal Learners via Incremental Data Allocation
AAAI 2016
Ashish Sabharwal, Horst Samulowitz, and Gerald Tesauro
We study a novel machine learning (ML) problem setting of sequentially allocating small subsets of training data amongst a large set of classifiers. The goal is to select a classifier that will give near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. This is motivated by large modern datasets and ML toolkits with many combinations of learning algorithms and hyper- parameters. Inspired by the principle of "optimism under un- certainty," we propose an innovative strategy, Data Allocation using Upper Bounds (DAUB), which robustly achieves these objectives across a variety of real-world datasets. We further develop substantial theoretical support for DAUB in an idealized setting where the expected accuracy of a classifier trained on n samples can be known exactly. Under these conditions we establish a rigorous sub-linear bound on the regret of the approach (in terms of misallocated data), as well as a rigorous bound on suboptimality of the selected classifier. Our accuracy estimates using real-world datasets only entail mild violations of the theoretical scenario, suggesting that the practical behavior of DAUB is likely to approach the idealized behavior. (Less)
Exact Sampling with Integer Linear Programs and Random Perturbations
Carolyn Kim, Ashish Sabharwal, and Stefano Ermon
We consider the problem of sampling from a discrete probability distribution specified by a graphical model. Exact samples can, in principle, be obtained by computing the mode of the original model perturbed with an exponentially many i.i.d. random variables. We propose a novel algorithm that views this as a combinatorial optimization problem and searches for the extreme state using a standard integer linear programming (ILP) solver, appropriately extended to account for the random perturbation. Our technique, GumbelMIP, leverages linear programming (LP) relaxations to evaluate the quality of samples and prune large portions of the search space, and can thus scale to large tree-width models beyond the reach of current exact inference methods. Further, when the optimization problem is not solved to optimality, our method yields a novel approximate sampling technique. We empirically demonstrate that our approach parallelizes well, our exact sampler scales better than alternative approaches, and our approximate sampler yields better quality samples than a Gibbs sampler and a low-dimensional perturbation method. (Less) | CommonCrawl |
Optimization of Hessian Matrix in Modified Newton-Raphson Algorithm for Electrical Resistance Tomography
Liqing Xiao
School of Mechanical and Electrical Engineering, Huainan Normal University, Huainan 232038, China
[email protected]
To satisfy the accuracy requirement of image reconstruction, this paper carries out offline optimization of the Hessian matrix in the modified Newton-Raphson algorithm (MNRA) for image reconstruction in electrical resistance tomography (ERT). Firstly, the selection strategy for the regularization factor, which directly affects the accuracy of the reconstructed image, is discussed in detail. Next, an improved particle swarm optimization (PSO) algorithm is adopted to alleviate the ill-posedness of the Hessian matrix through offline optimization. The variables of the offline optimization include the radius ratio between each layer of the finite-element model (FE model) and the sensitive field (SF) during the γ-refinement of the ERT, and the positions of the nodes added through element subdivision. The experimental results show that, under the same conditions, the above optimization measure can improve the solution accuracy of the ERT's inverse problem by alleviating the ill-posedness of the Hessian matrix, which is used to correct the dielectric resistance distribution (DRD) in the SF, without sacrificing the real-time performance of the MNRA.
hessian matrix, regularization factor, ill-posedness, γ-refinement, element subdivision
Electrical resistance tomography (ERT) [1-10] is a branch of electrical tomography, alongside electrical capacitance tomography (ECT) [11-16], electrical impedance tomography (EIT) [17-25] and electromagnetic tomography (EMT) [26-29]. Compared with traditional detection methods, this novel real-time detection technique satisfies the high accuracy requirements of modern detection tasks and enjoys broad application prospects in fields such as two-phase/multi-phase flow, geophysical exploration and biomedicine.
Many algorithms have been developed to reconstruct images accurately without sacrificing real-time performance. Among them, the modified Newton-Raphson algorithm (MNRA) stands out as a theoretically complete iterative reconstruction algorithm for static images [30]. In the iterative process, the MNRA introduces a Hessian matrix, which contains a regularization factor, to correct the dielectric resistance distribution (DRD) in the sensitive field (SF), and thus effectively overcomes the ill-posedness of the sensitivity matrix. The regularization factor directly bears on the quality of the image reconstructed by the MNRA. Currently, the regularization factor is selected in one of two ways: empirical selection or online real-time calculation. Either approach has certain defects. Lacking a theoretical basis, empirical selection cannot guarantee that the image is reconstructed at the accuracy required by the system. Meanwhile, online real-time calculation solves the inverse problem of the ERT more accurately, but at the cost of real-time performance, as it increases the computing load of the image reconstruction algorithm. By refining the finite-element model (FE model), Xiao et al. [30] effectively alleviated the ill-posedness without affecting the real-time performance. However, their approach still faces two defects:
1. The topology of the FE model was not optimized. Under the same conditions, each topology of the FE model corresponds to a specific Hessian matrix with a unique level of ill-posedness, and a distinct solution accuracy of the inverse problem.
2. The positions of the nodes added in the element subdivision process were not optimized.
To overcome the above defects, this paper firstly explores the selection strategy of the regularization factor. On this basis, the ill-posedness of Hessian matrix was alleviated through offline optimization of the FE model topology during the γ-refinement of the ERT's forward problem, and the positions of the nodes added through element subdivision. In this way, the author greatly improved the accuracy of the MNRA in image reconstruction.
2. Principle of the MNRA
The principle of the MNRA is as follows [30]:
Step 1. Initialize the DRD $\rho^{(0)}$ in the SF.
Step 2. Calculate the effective boundary voltage of the SF corresponding to the DRD $\boldsymbol{\rho}^{(k)}$ in the $k$-th iteration of the MNRA: $\boldsymbol{v}^{(k)}=f\left(\boldsymbol{\rho}^{(k)}\right)$.
Step 3. Compute the error of the MNRA by:
$e_{error}=\frac{1}{2}\left(\left\|\boldsymbol{v}^{(k)}-\boldsymbol{v}_{0}\right\|_{2}\right)^{2}$ (1)
where, $\boldsymbol{v}_{0}$ is the measured effective boundary voltage of the SF.
Step 4. Judge whether the MNRA satisfies the termination condition. If so, terminate and output the current DRD as the reconstructed image; if not, go to Step 5.
Step 5. Introduce the Hessian matrix with the regularization factor, correct the DRD $\rho^{(k+1)}$ in the SF, and jump back to Step 2:
$\boldsymbol{\rho}^{(k+1)}=\boldsymbol{\rho}^{(k)}+\Delta \boldsymbol{\rho}^{(k+1)}$ (2)
$\begin{aligned} \Delta \rho^{(k+1)}=&-\left\{\left[f^{\prime}\left(\rho^{(k)}\right)\right]^{T} f^{\prime}\left(\rho^{(k)}\right)+\mu^{(k)} E\right\}^{-1} \\ &\left[f^{\prime}\left(\rho^{(k)}\right)\right]^{T}\left(f\left(\rho^{(k)}\right)-v_{0}\right) \end{aligned}$ (3)
where, $\boldsymbol{E}$ is the unit matrix; $\mu^{(k)}$ is the regularization factor adopted by the MNRA in the $k$ -th iteration; $f^{\prime}\left(\boldsymbol{\rho}^{(k)}\right)$ is the sensitivity matrix when the DRD of the SF is $\boldsymbol{\rho}^{(k)}$ ; $\left[f^{\prime}\left(\boldsymbol{\rho}^{(k)}\right)\right]^{T} f^{\prime}\left(\boldsymbol{\rho}^{(k)}\right)+\mu^{(k)} \boldsymbol{E}$ is the Hessian matrix used to correct the DRD.
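For clarity, the following Python sketch illustrates Steps 1–5 under stated assumptions: forward and jacobian are hypothetical stand-ins for the FE forward solver and the sensitivity matrix of the ERT model, and mu_schedule supplies the regularization factor $\mu^{(k)}$; it is an illustrative reading of formulas (1)–(3), not the authors' implementation.

import numpy as np

def mnra(rho0, v0, forward, jacobian, mu_schedule, tol=1e-6, max_iter=200):
    """Modified Newton-Raphson reconstruction of the DRD `rho`.

    rho0        : initial DRD in the SF (one value per element)
    v0          : measured effective boundary voltages
    forward     : rho -> simulated boundary voltages  (f in the paper)
    jacobian    : rho -> sensitivity matrix           (f'(rho) in the paper)
    mu_schedule : k -> regularization factor mu^(k)
    """
    rho = rho0.copy()
    for k in range(max_iter):
        v = forward(rho)                             # Step 2
        error = 0.5 * np.linalg.norm(v - v0) ** 2    # Step 3, formula (1)
        if error < tol:                              # Step 4: termination test
            break
        J = jacobian(rho)
        mu = mu_schedule(k)
        H = J.T @ J + mu * np.eye(J.shape[1])        # Hessian with regularization factor
        delta = -np.linalg.solve(H, J.T @ (v - v0))  # formula (3)
        rho = rho + delta                            # Step 5, formula (2)
    return rho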
3. Offline Optimization of Hessian Matrix
The offline optimization of the Hessian matrix used to correct the DRD in the SF is carried out for the iteration of the MNRA, based on the selection strategy of the regularization factor, which directly affects the ERT's image reconstruction accuracy. In this paper, this Hessian matrix is optimized by an improved particle swarm optimization (PSO) algorithm. The variables include the radius ratio of each FE model layer to the SF during the γ-refinement of the ERT's forward problem, and the positions of the nodes added through element subdivision. The specific flow of the offline optimization is as follows:
Step 1. Selection of regularization factor
In the MNRA, it is difficult to reconstruct a high-quality image with a fixed regularization factor. This factor should be adjusted reasonably according to the MNRA error (formula (1)) in the iterative process. In the early phase, the algorithm has a large error, i.e. low accuracy in image reconstruction. In this case, the regularization factor should be large enough to ensure the stability of the MNRA, and the solution accuracy of the inverse problem. In the late phase, the algorithm has a small error, i.e. high accuracy in image reconstruction. In this case, the regularization factor should be small enough, such that the reconstructed image reflects the actual DRD in the SF. Of course, the regularization factor should not be too small. Otherwise, the MNRA will diverge in the iterative process. Hence, the minimum value of the regularization factor should be identified rationally, considering the prior knowledge and the FE model accuracy in solving ERT's forward problem.
To sum up, during the iteration of the MNRA, the term $\left\{\left[f^{\prime}(\boldsymbol{\rho})\right]^{T} f^{\prime}(\boldsymbol{\rho})+\mu^{(k)} \boldsymbol{E}\right\}^{-1} \cdot\left[f^{\prime}(\boldsymbol{\rho})\right]^{T}$ should be calculated offline and saved, to facilitate the offline optimization of the Hessian matrix, which is used to correct the DRD in the SF. Meanwhile, the interval of small regularization-factor values should be expanded and the interval of large values narrowed. In this paper, the maximum and minimum values of the regularization factor are both configured, with the intermediate values following a log-uniform distribution:
$l=p / q^{k}$ (4)
$\mu^{(k)}=\left\{\begin{array}{ll}{a^{-d}} & {l<a^{-d}} \\ {a^{-b}} & {l \geq a^{-b}} \\ {a^{-h_{1}}} & {a^{-h_{1}} \leq l<a^{-h_{2}}}\end{array}\right.$ (5)
where, a, b, d, p and $q$ are positive numbers that satisfy $a>1$ , $d>1, q>1$ and $d>b$ ; $h_{1}$ and $h_{2}$ can be calculated by:
$h_{1}=b+\frac{d-b}{t-1} \cdot(m-1) \quad m=2, \cdots t$ (6)
where, $t$ is a positive integer greater than 2.
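As a hedged illustration, formulas (4)–(6) can be read as snapping the decaying quantity l onto the log grid $a^{-h_{m}}$ and clamping it to $[a^{-d}, a^{-b}]$. Formula (7) is not reproduced in the text, so the snapping rule in the Python sketch below is an assumption, chosen because with the simulation parameters quoted later (p = 0.50, q = 1.15, a = 10, b = 1, d = 8, t = 8) it yields the factors $10^{-1}, 10^{-2}, \ldots, 10^{-8}$.

def mu_schedule(k, p=0.50, q=1.15, a=10.0, b=1.0, d=8.0, t=8):
    l = p / q ** k                                    # formula (4)
    # formula (6): grid of exponents h_m; m = 1 is included here so that a^(-b) lies on the grid
    grid = [a ** -(b + (d - b) / (t - 1) * (m - 1)) for m in range(1, t + 1)]
    if l >= a ** -b:                                  # upper clamp in formula (5)
        return a ** -b
    if l < a ** -d:                                   # lower clamp in formula (5)
        return a ** -d
    # assumed middle branch: largest grid value not exceeding l (log-uniform steps)
    return max(g for g in grid if g <= l)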
Step 2. Offline optimization of Hessian matrix
The global element subdivision of the FE model [30] was adopted to improve the data transmission efficiency between the FE model adopted for ERT's forward problem and the FE model adopted to correct the DRD in the SF.
The previous experiments have shown that, under the same conditions, the MNRA can solve the inverse problem more accurately by updating the sensitivity matrix. Hence, the following strategy can be implemented to enhance the quality of reconstructed image without sacrificing the real-time performance of the MNRA: First, different DRDs should be set up in the SF according to rich prior knowledge (especially in biomedicine), and the corresponding regularization factors should be computed by formulas (4)~(7). On this basis, the Hessian matrices should be established corresponding to the DRDs and regularization factors. Once every few iterations, the correlation coefficient should be computed between the optimal reconstructed image of the MNRA and the reconstructed image corresponding to each DRD, and used to judge whether the Hessian matrix needs to be updated. The offline optimization of the Hessian matrix involves the following steps:
Step 1. Set up parameters like a, b, d, p, q, k and t.
Step 2. Take the following two items as the variables: the radius ratio of each FE model layer to the SF during the γ-refinement of the ERT's forward problem, and the positions of the nodes added through element subdivision. Set up the fitness function as:
$F(\boldsymbol{Y})=1\Big/\sum\limits_{i=1}^{w}{\eta_{i}\cdot \mathrm{cond}(\boldsymbol{H}_{i})}$ (8)
where, $\eta_{i}$ is a nonnegative weight; $\mathrm{cond}(\cdot)$ is the condition number; $\boldsymbol{H}_{i}$ is the Hessian matrix corresponding to each DRD in the SF and each regularization factor; w is the number of Hessian matrices; $\boldsymbol{Y}$ is the variable vector representing the radius ratio of each FE model layer to the SF during the γ-refinement of the ERT's forward problem, and the positions of the nodes added through element subdivision.
Moreover, conduct offline optimization of the Hessian matrix through improved PSO algorithm. During the optimization, update the particle velocity and position by:
$\left\{\begin{aligned} \boldsymbol{V}_{i} &= \omega \boldsymbol{V}_{i}+c_{1} \times \mathrm{rand1} \times\left(\mathrm{pbest}_{i}-\boldsymbol{X}_{i}\right)+c_{2} \times \mathrm{rand2} \times\left(\mathrm{gbest}_{g}-\boldsymbol{X}_{i}\right) \\ \boldsymbol{X}_{i} &= \boldsymbol{X}_{i}+\boldsymbol{V}_{i} \end{aligned}\right.$ (9)
where, $\boldsymbol{X}_{i}$ and $\boldsymbol{V}_{i}$ are the position vector and velocity vector of particle i, respectively; pbest$_i$ and gbest$_g$ are the individual best-known solution and the global best-known solution, respectively; ω is the inertia weight; $c_{1}$ and $c_{2}$ are learning factors; rand1 and rand2 are random numbers in (0, 1).
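A minimal Python sketch of the PSO loop for formulas (8)–(9) is given below; build_hessians is a hypothetical callback that assembles the Hessian matrices $\boldsymbol{H}_{i}$ for a candidate variable vector Y (layer radius ratios plus added-node positions), and the search bounds, swarm size and linearly decreasing inertia weight are illustrative assumptions rather than the authors' settings.

import numpy as np

def fitness(Y, build_hessians, eta):
    H_list = build_hessians(Y)                         # one H_i per DRD / regularization factor pair
    return 1.0 / sum(e * np.linalg.cond(H) for e, H in zip(eta, H_list))  # formula (8)

def pso(build_hessians, eta, dim, n_particles=30, iters=200,
        w_max=0.90, w_min=0.10, c1=2.0, c2=2.0, lb=0.0, ub=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_particles, dim))        # particle positions (candidate Y)
    V = np.zeros_like(X)                               # particle velocities
    pbest = X.copy()
    pbest_val = np.array([fitness(x, build_hessians, eta) for x in X])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for it in range(iters):
        w = w_max - (w_max - w_min) * it / iters       # linearly decreasing inertia weight
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # formula (9)
        X = np.clip(X + V, lb, ub)
        vals = np.array([fitness(x, build_hessians, eta) for x in X])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = X[better], vals[better]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest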
Step 3. To improve the real-time performance of the MNRA, carry out offline calculation and storage of the following term based on the optimization results in the previous step: $\left\{\left[f^{\prime}(\rho)\right]^{T} f^{\prime}(\rho)+\mu^{(k)} E\right\}^{-1} \cdot\left[f^{\prime}(\rho)\right]^{T}$ .
4. Simulation Experiment
In our simulation experiment, some parameters are configured as follows: the maximum number of iterations for the improved PSO algorithm for offline optimization of the Hessian matrix and that for the MNRA were both set to 200; the upper and lower bounds of the inertia weight were set to 0.90 and 0.10, respectively; for the efficiency of offline optimization, the DRDs in the SF were assumed to be continuous uniform distributions, and the weight of each Hessian matrix $\eta_{i}$ was set to 1.00; considering accuracy and time consumption, the FE model was designed with 8 layers, and the radius of the SF was normalized; the regularization factors of the MNRA iterations were set to $10^{-1}, 10^{-2}, \ldots, 10^{-8}$ according to parameters p = 0.50, q = 1.15, a = 10, b = 1, d = 8 and t = 8.
Figure 1. FE models
Under the above settings, the FE model topology corresponding to the optimal result of the offline optimization of the Hessian matrix, as well as the positions of the nodes added through element subdivision, is recorded as FE model 3 in Figure 1. FE model 1 was proposed by Xiao et al. [31] and can effectively enhance the solution accuracy of the forward problem. FE model 2 was developed by Xiao et al. [30] and can satisfactorily alleviate the ill-posedness of the Hessian matrix. In FE model 2, the nodes added through element subdivision are the centroids of the triangular elements. FE model 4 is an FE model that computes the measured effective boundary voltages under different model settings, without committing the inverse crime. To minimize the error between the calculated and theoretical values of the FE model corresponding to each DRD in the SF, FE model 4 performs offline optimization of the radius of the second outermost layer, and assumes that the DRD is uniform in the other layers.
When the regularization factor was set to $10^{-1}, 10^{-2}, \ldots, 10^{-8}$, respectively, the mean condition number of the Hessian matrices corresponding to FE model 1, FE model 2 and FE model 3 was $2.9720\times10^{5}$, $9.9067\times10^{4}$ and $8.2721\times10^{4}$, respectively. In other words, FE model 3, through offline optimization by the improved PSO algorithm, reduced the mean condition number of the Hessian matrices by 72.1666% from that of FE model 1 and by 16.4999% from that of FE model 2. This means FE model 3 effectively alleviated the ill-posedness of the Hessian matrix and thus enhanced the MNRA's solution accuracy of the inverse problem.
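The mean-condition-number comparison can in principle be reproduced as follows (a minimal sketch: the actual sensitivity matrices of FE models 1–3 are not available here, so a random placeholder matrix stands in for $f^{\prime}(\boldsymbol{\rho})$).

import numpy as np

def mean_condition_number(J, mus=tuple(10.0 ** -m for m in range(1, 9))):
    # average cond(J^T J + mu E) over the eight regularization factors 10^-1 ... 10^-8
    n = J.shape[1]
    return np.mean([np.linalg.cond(J.T @ J + mu * np.eye(n)) for mu in mus])

J = np.random.default_rng(1).normal(size=(104, 208))  # placeholder sensitivity matrix
print(mean_condition_number(J))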
Figure 2 shows how the ill-posedness of the Hessian matrix, which is adopted to correct the DRD in the SF, varies through the iterations of the MNRA. Note that Algorithms 1~3 denote the MNRA coupled with the Hessian matrices of FE model 1, FE model 2 and FE model 3, respectively; (a) means the regularization factors are computed by formulas (4)~(7); (b) means the regularization factors are only computed by formula (4).
Figure 2. Variation of the ill-posedness in the iterative process
As shown in Figure 2(a), the Hessian matrices optimized by our approach (Algorithm 3) achieved the smallest condition number at any number of iterations of the MNRA. This means our offline optimization strategy can ensure that the MNRA operates stably and that the resolution of the reconstructed image can be improved. It can be seen from Figure 2(b) that our local optimization strategy of the Hessian matrix also applies to regularization factors other than $10^{-1}, 10^{-2}, \ldots, 10^{-8}$. However, when the regularization factors were only computed by formula (4), the local optimization effect was suppressed. Thus, the offline calculation and storage of $\left\{\left[f^{\prime}(\boldsymbol{\rho})\right]^{T} f^{\prime}(\boldsymbol{\rho})+\mu^{(k)} \boldsymbol{E}\right\}^{-1} \cdot\left[f^{\prime}(\boldsymbol{\rho})\right]^{T}$ only suit systems with low requirements on real-time performance.
In addition, when the sensitivity matrix remained unchanged, the real-time performance of the MNRA was mainly affected by the computing load of correcting the DRD in the SF, under the same experimental conditions. Under our local optimization strategy of the Hessian matrix, the term $\left\{\left[f^{\prime}(\boldsymbol{\rho})\right]^{T} f^{\prime}(\boldsymbol{\rho})+\mu^{(k)} \boldsymbol{E}\right\}^{-1} \cdot\left[f^{\prime}(\boldsymbol{\rho})\right]^{T}$ is computed and saved offline. Under the same experimental conditions (Intel® Core™2 Duo Processor T8100; frequency: 2.10 GHz; memory: 3.00 GB), our strategy reduced the time consumption of each correction of the DRD in the SF from 0.8750–0.8900 s to $3.1000\times10^{-4}$–$4.7000\times10^{-4}$ s. In this way, the real-time performance of the MNRA is effectively improved, without sacrificing the solution accuracy of the inverse problem.
Next, six different models were constructed (Figure 3a) to verify the effectiveness of our local optimization strategy of Hessian matrix in enhancing the quality of the image reconstructed by the MNRA. The images reconstructed by different MNRAs are compared in Figures 3b~3e and Tables 1~2. Note that Algorithm 4 is the MNRA adopting the Hessian matrices of FE model 3 in Figure 1 and an empirical regularization factor.
Figure 3. Preset models and images reconstructed by different algorithms
The accuracy of image reconstruction was evaluated by two indices: relative error $e$ and correlation coefficient $\rho$ :
$e=\frac{\|g-\hat{g}\|_{2}}{\|g\|_{2}} \times 100 \%$ (10)
$\rho=\frac{\sum_{i=1}^{L}\left(\hat{\boldsymbol{g}}_{i}-\overline{\hat{g}}\right) \cdot\left(\boldsymbol{g}_{i}-\bar{g}\right)}{\sqrt{\sum_{i=1}^{L}\left(\hat{\boldsymbol{g}}_{i}-\overline{\hat{g}}\right)^{2} \sum_{i=1}^{L}\left(\boldsymbol{g}_{i}-\bar{g}\right)^{2}}}$ (11)
where, $\boldsymbol{g}$ is the preset DRD; $\widehat{g}$ is the reconstructed DRD; L is the number of triangular elements in each model; $\bar{g} \text { and } \overline{\hat{g}}$ are the mean values of $g \text { and } \widehat{g}$ , respectively.
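For reference, the two indices in formulas (10) and (11) can be computed as in the following Python sketch (g and g_hat are the preset and reconstructed DRD vectors over the L triangular elements).

import numpy as np

def relative_error(g, g_hat):
    # formula (10), expressed in percent
    return 100.0 * np.linalg.norm(g - g_hat) / np.linalg.norm(g)

def correlation_coefficient(g, g_hat):
    # formula (11): Pearson correlation between reconstructed and preset DRD
    gc, hc = g - g.mean(), g_hat - g_hat.mean()
    return np.sum(hc * gc) / np.sqrt(np.sum(hc ** 2) * np.sum(gc ** 2))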
Table 1. Comparison of relative errors (%)
Table 2. Comparison of correlation coefficients
As shown in Tables 1~2, when the regularization factors were all selected by formulas (4)-(7) in the MNRA iteration process, Algorithm 1 failed to reconstruct images desirably, due to the relatively large condition number of the Hessian matrix used to correct the DRD in the SF: the relative error was high (mean: 43.8995%) and the correlation coefficient was small (mean: 0.8519).
Algorithm 2 alleviated the ill-posedness of Hessian matrix by refining the FE model, thus enhancing the solution accuracy of the inverse problem: the mean relative error was down by 7.2598%, and the mean correlation coefficient was up by 2.4064%, from the levels of Algorithm 1.
Algorithm 3 further reduced the ill-posedness of Hessian matrix through offline optimization of the radius ratio between each FE model layer to the SF during the γ-refinement of the ERT's forward problem, and the positions of the nodes added through element subdivision, and thereby further improved the accuracy of image reconstruction: the mean relative error was reduced by 17.4119%, and the mean correlation coefficient was increased by 3.3929%, from the levels of Algorithm 2.
Algorithm 4 adopted the same FE model topology to solve the ERT's forward problem and added nodes at the same positions through element subdivision. However, the regularization factor of this algorithm was empirically selected and fixed throughout the iterative process. Hence, Algorithm 4 achieved an image reconstruction accuracy better than that of Algorithms 1 and 2, but poorer than that of Algorithm 3: the mean relative error was 10.4953% higher and the mean correlation coefficient was 3.2391% lower than that of Algorithm 3.
To verify if our offline optimization strategy can enhance the quality of the image reconstructed by the MNRA, the four algorithms were applied to reconstruct the preset models in the presence of noise. The results are compared in Figure 4 and Tables 3-4.
The results in Tables 3 and 4 show that the noise disturbed the solution accuracy of the four different MNRAs in the ERT's inverse problem. Based on the offline optimization of the Hessian matrix, Algorithm 3 achieved the highest reconstruction accuracy: the mean relative error was 20.5650%, 15.7860% and 8.3251% lower than that of Algorithms 1, 2 and 4, respectively, and the mean correlation coefficient was 7.5640%, 3.5398% and 3.1914% higher than that of Algorithms 1, 2 and 4, respectively. Hence, Algorithm 3 boasts the best quality of image reconstruction in the presence or absence of noise.
Figure 4. Images reconstructed by different algorithms in the presence of noise
Table 3. Comparison of relative errors in the presence of noise (%)
Table 4. Comparison of correlation coefficients in the presence of noise
In terms of real-time performance, this paper employs Xiao et al.'s strategy [30] for the iterative process of the MNRA, which ensures the simplicity and efficiency of data transmission between the FE model adopted for the ERT's forward problem and the FE model adopted to correct the DRD in the SF. Therefore, our offline optimization strategy does not affect the real-time performance of the MNRA.
5. Experimental Verification
During the experiment, the ERT system designed by Tianjin University was adopted to acquire the effective boundary voltage of the SF (Figure 5a). The images reconstructed by the four algorithms are compared in Figure 5b.
Figure 5. Experimental device and results
For Algorithms 1~4, the relative errors were 35.9679%, 34.3641%, 30.3435% and 33.5247%, respectively, and the correlation coefficients were 0.6900, 0.7197, 0.7477 and 0.7014, respectively. Compared with Algorithms 1, 2 and 4, Algorithm 3 successfully alleviated the ill-posedness of the Hessian matrix and enhanced the quality of the reconstructed image, thanks to its dynamic selection of the regularization factor and its local optimization of the Hessian matrix based on two variables (i.e. the radius ratio of each FE model layer to the SF during the γ-refinement of the ERT's forward problem, and the positions of the nodes added through element subdivision).
6. Conclusions
Among the various ERT techniques, the MNRA is a theoretically complete image reconstruction algorithm, known for its high imaging accuracy. To further enhance the MNRA's accuracy of image reconstruction, this paper explores the selection strategy of the regularization factor in the algorithm iteration, and then proposes a local optimization strategy to alleviate the ill-posedness of the Hessian matrix with the improved PSO algorithm. The variables of the strategy include the radius ratio of each FE model layer to the SF during the γ-refinement of the ERT's forward problem, and the positions of the nodes added through element subdivision. Experimental results show that our local optimization strategy of the Hessian matrix could effectively improve the accuracy of images reconstructed by the MNRA.
This paper is supported by 2018 Annual School-Level Key Scientific Research Project, Huainan Normal University (Grant No.: 2018xj18zd) and 2019 Key Project of Excellent Young Talents Supporting Program of Colleges and Universities, Anhui Province (Grant No.: gxyqZD2019065).
[1] Bobade, V., Evans, G., Eshtiaghi, N. (2019). Bubble rise velocity and bubble size in thickened waste activated sludge: Utilising electrical resistance tomography (ERT). Chemical Engineering Research and Design, 148: 119-128. https://doi.org/10.1016/j.cherd.2019.05.021
[2] Kazemzadeh, A., Ein-Mozaffari, F., Lohi, A. (2019). Mixing of highly concentrated slurries of large particles: Applications of electrical resistance tomography (ERT) and response surface methodology (RSM). Chemical Engineering Research and Design, 143: 226-240. https://doi.org/10.1016/j.cherd.2019.01.018
[3] Vadlakonda, B., Mangadoddy, N. (2018). Hydrodynamic study of three-phase flow in column flotation using electrical resistance tomography coupled with pressure transducers. Separation and Purification Technology, 203: 274-288. https://doi.org/10.1016/j.seppur.2018.04.039
[4] Díaz De Rienzo, M.A., Hou, R., Martin, P.J. (2018). Use of electrical resistance tomography (ERT) for the detection of biofilm disruption mediated by biosurfactants. Food and Bioproducts Processing, 110: 1-5. https://doi.org/10.1016/j.fbp.2018.03.006
[5] Low, S.C., Allitt, D., Eshtiaghi, N., Parthasarathy, R. (2018). Measuring active volume using electrical resistance tomography in a gas-sparged model anaerobic digester. Chemical Engineering Research and Design, 130: 42-51. https://doi.org/10.1016/j.cherd.2017.11.039
[6] Malik, D., Pakzad, L. Experimental investigation on an aerated mixing vessel through electrical resistance tomography (ERT) and response surface methodology (RSM). Chemical Engineering Research and Design, 129: 327-343. https://doi.org/10.1016/j.cherd.2017.11.002
[7] Ren, Z., Kowalski, A., Rodgers, T.L. (2017). Measuring inline velocity profile of shampoo by electrical resistance tomography (ERT). Flow Measurement and Instrumentation, 58: 31-37. https://doi.org/10.1016/j.flowmeasinst.2017.09.013
[8] Singh, B.K., Quiyoom, A., Buwa, V.V. (2017). Dynamics of gas-liquid flow in a cylindrical bubble column: Comparison of electrical resistance tomography and voidage probe measurements. Chemical Engineering Science, 158: 124-139. https://doi.org/10.1016/j.ces.2016.10.006
[9] Vadlakonda, B., Mangadoddy, N. (2017). Hydrodynamic study of two phase flow of column flotation using electrical resistance tomography and pressure probe techniques. Separation and Purification Technology, 184: 168-187. https://doi.org/10.1016/j.seppur.2017.04.029
[10] Son, Y., Kim, G., Lee, S., Kim, H., Min, K., Lee, K.S. (2017). Experimental investigation of liquid distribution in a packed column with structured packing under permanent tilt and roll motions using electrical resistance tomography. Chemical Engineering Science, 166: 168-180. https://doi.org/10.1016/j.ces.2017.03.044
[11] Gupta, S., Loh, K.J. (2018). Monitoring osseointegrated prosthesis loosening and fracture using electrical capacitance tomography. Biomedical engineering letters, 8(3): 291-300. https://doi.org/10.1007/s13534-018-0073-4
[12] Voss, A., Hänninen, N., Pour-Ghaz, M., Vauhkonen, M., Seppänen, A. (2018). Imaging of two-dimensional unsaturated moisture flows in uncracked and cracked cement-based materials using electrical capacitance tomography. Materials and Structures, 51(3): 1-10. https://doi.org/10.1617/s11527-018-1195-y
[13] Nied, C., Lindner, J.A., Sommer, K. (2017). On the influence of the wall friction coefficient on void fraction gradients in horizontal pneumatic plug conveying measured by electrical capacitance tomography. Powder Technology, 321: 310-317. https://doi.org/10.1016/j.powtec.2017.07.072
[14] Perera, K., Pradeep, C., Mylvaganam, S., Time, R.W. (2017). Imaging of oil-water flow patterns by Electrical Capacitance Tomography. Flow Measurement and Instrumentation, 56: 23-34. https://doi.org/10.1016/j.flowmeasinst.2017.07.002
[15] Kryszyn, J., Wróblewski, P., Stosio, M., Wanta, D., Olszewski, T., Smolik, W.T. (2017). Architecture of EVT4 data acquisition system for electrical capacitance tomography. Measurement, 101: 28-39. https://doi.org/10.1016/j.measurement.2017.01.020
[16] Voss, A., Pour-Ghaz, M., Vauhkonen, M., Seppänen, A. (2016). Electrical capacitance tomography to monitor unsaturated moisture ingress in cement-based materials. Cement and Concrete Research, 89: 158-167. https://doi.org/10.1016/j.cemconres.2016.07.011
[17] Varanasi, S.K., Manchikatla, C., Polisetty, V.G., Jampana, P. (2019). Sparse optimization for image reconstruction in Electrical Impedance Tomography. IFAC PapersOnLine, 52(1):34-39. https://doi.org/10.1016/j.ifacol.2019.06.033
[18] Yoshida, T., Piraino, T., Lima, C.A.S., Kavanagh, B.P., Amato, M.B.P., Brochard, L. (2019). Regional ventilation displayed by electrical impedance tomography as an incentive to decrease positive end-expiratory pressure. American Journal of Respiratory and Critical Care Medicine, 200(7): 933-937. https://doi.org/10.1164/rccm.201904-0797LE
[19] Gregory, H., Tanya, H., Jeffrey, D. (2019). Thoracic electrical impedance tomography to minimize right heart strain following cardiac arrest. Annals of Pediatric Cardiology, 12(3): 315-317. https://doi.org/10.4103/apc.APC_189_18
[20] Simone, S., Giacomo, B., Silvia, V., Ermes, L., Tommaso, M., Giuseppe, F. (2019). A calibration technique for the estimation of lung volumes in nonintubated subjects by electrical impedance tomography. Respiration; International Review of Thoracic Diseases, 98(3): 189-197. https://doi.org/10.1159/000499159
[21] Jiang, Y.D., Soleimani, M. (2019). Capacitively coupled electrical impedance tomography for brain imaging. IEEE transactions on Medical Imaging, 38(9): 2104-2113. https://doi.org/10.1109/TMI.2019.2895035
[22] Miedema, M., Adler A., McCall, Ka.E., Perkins, E.J., van Kaam, A.H., Tingay, D.G. (2019) Electrical impedance tomography identifies a distinct change in regional phase angle delay pattern in ventilation filling immediately prior to a spontaneous pneumothorax. Journal of Applied Physiology, 127(3): 707-712. https://doi.org/10.1152/japplphysiol.00973.2018
[23] Tomicic, V., Cornejo, R. (2019). Lung monitoring with electrical impedance tomography: technical considerations and clinical applications. Journal of Thoracic Disease, 11(7): 3122-3135. https://doi.org/10.21037/jtd.2019.06.27
[24] Murphy, E.K., Skinner, J., Martucci, M., Rutkove, S.B., Halter, R.J. (2019). Toward electrical impedance tomography coupled ultrasound imaging for assessing muscle health. IEEE Transactions on Medical Imaging, 38(6): 1409-1419. https://doi.org/10.1109/TMI.2018.2886152
[25] Marefatallah, M., Breakey, D., Sanders, R.S. (2019). Study of local solid volume fraction fluctuations using high speed electrical impedance tomography: Particles with low Stokes number. Chemical Engineering Science, 203: 439-449. https://doi.org/10.1016/j.ces.2019.03.075
[26] Kaur, C., Singh, P., Sahni, S. (2019). Electroencephalography-based source localization for depression using standardized low resolution brain electromagnetic tomography-variational mode decomposition technique. European neurology, 81: 63-75. https://doi.org/10.1159/000500414
[27] Prinsloo, S., Rosenthal, D.I., Lyle, R., Garcia, S.M., Gabel-Zepeda, S., Cannon, R., Bruera, E., Cohen, L. (2019). Exploratory study of low resolution electromagnetic tomography (LORETA) real-time Z-score feedback in the treatment of pain in patients with head and neck cancer. Brain Topography, 32(2): 283-285. https://doi.org/10.1007/s10548-018-0686-z
[28] Shiina, T., Takashima, R., Pascual-Marqui, R.D., Suzuki, K., Watanabe, Y., Hirata, K. (2018). Evaluation of electroencephalogram using exact low-resolution electromagnetic tomography during photic driving response in patients with migraine. Neuropsychobiology, 77: 1-6. https://doi.org/10.1159/000489715
[29] De Pascalis, V., Scacchia, P. (2017). The behavioural approach system and placebo analgesia during cold stimulation in women: A low-resolution brain electromagnetic tomography (LORETA) analysis of startle ERPs. Personality and Individual Differences, 118: 56-63. https://doi.org/10.1016/j.paid.2017.03.003
[30] Xiao, L.Q., Wang, H.X., Shao, X.G. (2014). Improved Newton-Raphson image reconstruction algorithm based on model refining. Chinese Journal of Scientific Instrument, 35(7): 1546-1554.
[31] Xiao, L.Q., Wang, H.X., Cheng, H.L., Xu, X.J. (2012). Topology optimization of ERT finite element model based on improved GA. Chinese Journal of Scientific Instrument, 33(7): 1490-1496. | CommonCrawl |
Farmers' perceptions of grassland management in Magui Khola basin of Madi Chitwan, Nepal
Shanker Raj Barsila (ORCID: orcid.org/0000-0001-6840-1503)1,
Niraj Prakash Joshi2,
Tuk Narayan Poudel3,
Badrika Devkota3,
Naba Raj Devkota4 &
Dev Raj Chalise5
Pastoralism volume 12, Article number: 40 (2022)
Management of grassland is one of the important factors in traditional livestock farming systems. A survey was conducted in Madi of Chitwan, Nepal, to understand the perceptions of the farmers/graziers about grassland and feed management. For that, a well-prepared, pretested set of questionnaires was used to collect information related to the feeds and grassland ecological knowledge of the farmers. The questionnaire consisted of a set of questions about the household, factors affecting grassland productivity and alternative feeding resources. The survey revealed variations in household livestock ownership, mostly for cattle (1–3) and buffalo (1–5), whilst goat ownership was similar across the survey sites. Grazing duration was similar in the study sites (about 7 months per year). Likewise, there was no conflict over grazing livestock, whereas it is believed that goat and buffalo have the same level of detrimental effect on grassland. A significantly higher number of respondents reported that flooding had a negative impact (p = 0.032) on grassland productivity. Imperata cylindrica (L.) P. Beauv., locally known as Siru, was a dominant forage species, followed by mosaics of Saccharum spontaneum L., locally known as Kaans in Nepali and Jhaksi in the Tharu language, and Saccharum bengalense Retz., locally known as Baruwa in Nepali and Narkat in the Tharu language. The respondents also pointed out that at least 2 to 3 years were needed for the recovery of grasslands when hampered by flooding and riverbank cut-off. Similar species dominated the recovered grasslands over time after flooding. Seasonal fodder plantation was a major grassland improvement issue across the survey sites. Graziers depended heavily on natural herbage and crop residues for feeding livestock in summer and winter, though the herbage species and preferences differed. This study provides the primary background on the biophysical factors of grassland management for sustainable use, which requires institutional support. The study further provides insight into the need for implementation of demand-based grassland technology interventions, possibly at a higher rate of adoption than the current local scale. However, the social-ecological consequences of grassland systems, i.e. the impact of climate change, herd dynamics and nutrient flow in vegetation and soil, have to be monitored in the long run.
Grasslands represent natural vegetation predominantly consisting of grasses, grass-like plants and forbs. They are found in the regions where the growth of trees is constrained by climatic and edaphic factors (CNP 2016). In some cases, these grasslands result from previous anthropogenic disturbance from cultivation and grazing of stock (Pokharel 1993; Thapa et al. 2021), but in others, the origin of the grasslands is unclear. Grasslands could be either tall or short. Grasslands are considered a primary source of the ecosystem cycle. Moreover, tall grassland has a crucial role in regulating soil water and nutrient cycle, thereby maintaining the biological stabilization mechanisms for soil surface (CNP 2016). Hence, it plays a vital role in mitigating climate change as an important carbon sequester. Besides, tall grasslands are considered an indicator of nutrient-rich soils. Similarly, tall grassland serves as a habitat for many important endangered animals, birds, reptiles, insects and plants (Thapa et al. 2021). Thus, grasslands are critical in enhancing biodiversity, maintaining a wide range of ecosystem services such as food, water, carbon storage and mitigation, pollination and cultural services (Bardgett et al. 2021). However, the widespread degradation of grasslands has been an important global concern (Kemp et al. 2013; CBD 2022; Bardgett et al. 2021).
Terai of Nepal is home to the tallest grasslands in the world. Mostly, the protected areas, and their surroundings, in the lowland Terai Nepal contain some of the few remaining tall grassland/forest mosaics and their fauna. Eight grassland associations have been known for Chitwan National Park (CNP) in Nepal (Lehmkuhl et al. 1988). These important grasslands support a wide range of biodiversity. Thus, these areas such as CNP are an important biodiversity hotspot (Dinerstein and Loucks 2002) and can play an important role in achieving Aichi Biodiversity Targets 14 and 15 with direct linkage with SDG#6 (MoFE 2018; CBD 2022). In addition, these grasslands are important resources for local people who rely largely on farming systems integrating livestock and forest (Lehmkuhl et al. 1988; Sætre 1993; Brown 1997; CNP 2016; MoFE 2018; Bardgett et al. 2021). The buffer zone around CNP also has numerous riverine grasslands. The productivity and traditional management information of such grasslands according to the perception of local farmers regarding both domestic and wild ungulates are less explored and documented. The usage history and productivity measurement activities have been poorly organized concerning the wild ungulates and domestic animals. The case is further worsened in the surrounding buffer zones where grazing pressure would be highly expected. In general, there are only shreds of evidence on grassland biodiversity, animal species-plant assemblages associations, the effect of cutting and burning, the spatial and temporal response of ungulates and the socioeconomics of livestock and grasslands. Under these circumstances, the conservation management of the remaining grasslands in Nepal remains a challenge (Peet et al. 1999), and there is a need for socio-ecological solutions (Bardgett et al. 2021) to balance the nature protection goals and livelihood needs of people. In this regard, understanding the local perceptions is an important aspect of grassland development planning, as it might help to increase the rate of adoption of new technologies by the farmers in the rural settings.
A few efforts have been made in the past to document the grassland ecology and conservation practices in the Terai region of Nepal. However, these efforts are limited to describing the grasslands around the national parks and wildlife reserves of Nepal, where anthropic activities might have affected the species distribution and grassland productivity. The interactive effects of grazing and burning have significant impacts on biomass production and the energy content of herbage. Fire is an important grassland management tool worldwide, but it is still used haphazardly in Nepal. Likewise, grazing influences the composition, quantity and quality of aboveground biomass (Lehmkuhl 1999). The regrowth potential and nutrient flow after the above factors are unknown, whilst the river is the focus of landscape dynamics; erosion, deposition and channel meandering have destroyed, created and modified the grasslands for a long time. Besides, these grasslands are threatened by illegal grass cutting, summer wildfire, uncontrolled grazing, natural succession of woody vegetation and human disturbances (CNP 2016). More importantly, the present state of the important grasslands of the CNP and peripheral buffer zones can result in the decline of biodiversity as well as loss of livelihood means of locals, which will have serious implications for our common effort to achieve the Sustainable Development Goals, specifically SDG#15 life on land, SDG#1 no poverty, and SDG#2 zero hunger (CNP 2016; MoFE 2018). Hence, this study aimed to assess farmers' perception of grassland management in two human settlements in the grasslands of the Magui Khola basin of Madi, Chitwan, Nepal.
The Magui Khola basin in Madi, Chitwan, Nepal, was selected as the study site (Fig. 1). The river basin consists of grasslands with distinct features of anthropic usage: one part is close to the Churia hills, another is close to the riverbanks and adjoining human settlements, and a third connects to the CNP in the north. Accordingly, based on the similar nature of vegetation distribution, the farmers were selected from two villages, namely Bankatta and Khairahani; one of them is close to the Churia hills, whilst the other is close to the riverbanks and human settlements adjacent to the CNP forest.
Map showing the survey sites, namely Bankatta and Khairahani in Madi valley, Chitwan, Nepal
Local knowledge holders
The local Tharu community (75% of respondents), speaking the native Tharu language as well as Nepali, and the hill migrants (25%) have a specific culture of integrating livestock into the farming system across the survey sites. The major domestic species were cattle, buffaloes, sheep and goats. The local traditional farming is based on crop cultivation integrated with animal husbandry (mainly buffalo farming, followed by goats, cattle and sheep). Cultivation of major staple food crops, supplemented with the production of garden vegetables and pulses, is also significant. The basic social unit is the family smallholding (Poudel et al. 2020), with a well-developed inter-familial cooperation system during grazing and other labour-intensive farm activities in both sites selected for the survey.
Forests were partially taken by the state in 2016 (Chitwan National Park, CNP). The study area is surrounded by the CNP and the community forest; villagers are allowed access to the forest land (known as buffer zones of the CNP) for grazing and for the collection of fodder, thatch, medicinal herbs, timber, etc. According to our estimates, the community members spend most of the year outdoors on activities related to farming (grazing, fodder collection, forest use, cultivation), whereas medicinal and wild food plants are gathered on holidays and more seasonally. About 70% of the transport is still done by wooden carts, whilst 95% of the food is self-produced. Hand mowing of herbage is mostly done by sickle, less often by machines and most rarely by tractors. Local people possess a deep understanding of the ecological knowledge that is utilized in their traditional farming activities. They can recognize herbage species and have deep knowledge about their habitat preferences and usages (Dangol and Gurung 1991).
As part of a long-term forage production analysis, the study of folk traditional social-ecological knowledge was initiated in 2016. Data were collected from 78 local farmers, 39 from each study site. Prior informed consent was obtained before all the interviews, and the ethical guidelines suggested by the International Society of Ethnobiology were followed. Data were collected by participatory methods, i.e. free listing and indoor (and partly outdoor) semi-structured interviews using a pre-tested and designed questionnaire (Newing et al. 2011). The interviews were conducted in the local Tharu and Nepali languages, recorded in a standard format and, for the native Tharu-speaking respondents, translated from Tharu to Nepali by a Tharu translator. The summary of the questionnaire sets has been presented in Table 1.
Table 1 Nature of questionnaires used to interview through semi-structured questionnaires survey in Madi, Chitwan, Nepal
Estimation of the number of respondents
The total population size in the given survey sites (villages) was obtained from the Madi municipality. Later, the minimum respondents' sample size from the population was determined using the sample determination formula with 95% confidence level by following Yamane (1964). The sample size was estimated as Eq. 1.
$$n = \frac{N}{1+N{e}^{2}}$$
where n is the sample size (number of respondents), N is the size of the population (census data collected from Madi Municipal Office, Nepal) and e is the expected allowable error, which is 5% or 0.05.
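As a small worked illustration of the formula above (the population figure used here is hypothetical, not the actual census value obtained from the municipality):

def yamane_sample_size(N, e=0.05):
    # Yamane (1964): n = N / (1 + N * e^2)
    return N / (1 + N * e ** 2)

print(round(yamane_sample_size(1000)))  # a hypothetical population of 1,000 gives about 286 respondents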
Ranking of feeding resources
The seasonal ranking of feed resources was obtained from a modified version of the Problem Confrontation Index (PCI), computed as used by Hossain and Miah (2011) and Saha et al. (2022).
The PCI was computed by using the following formula:
$$PCI = Ph\times 3 + Pm\times 2 + Pl\times 1 + Pn \times 0$$
where PCI is the Problem Confrontation Index, Ph is the number of respondents who rated a feedstuff as "abundantly available", Pm is the number who rated it as "less abundantly available", Pl is the number who rated it as "available in low quantities" and Pn is the number who rated it as "not available".
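As an illustration, the PCI-based ranking can be computed as in the Python sketch below; the respondent counts are hypothetical and only show how feedstuffs would be ordered from the four availability classes.

def pci(ph, pm, pl, pn):
    # weights 3/2/1/0 for "abundantly available" ... "not available"
    return 3 * ph + 2 * pm + 1 * pl + 0 * pn

counts = {  # hypothetical counts of respondents per availability class
    "Imperata cylindrica": (25, 10, 3, 1),
    "Saccharum spontaneum": (18, 12, 6, 3),
    "Rice straw": (10, 14, 9, 6),
}
ranking = sorted(counts, key=lambda name: pci(*counts[name]), reverse=True)
print(ranking)  # feedstuffs ordered by descending PCI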
Household sojourn
The details of the household sojourn of the survey sites are presented in Table 2. The average age of the respondents was about 52 years, and the average family size was about 6. The average landholding was about 1 ha. However, there was variation in livestock ownership. Cattle numbers were significantly higher in Khairahani than in Bankatta, whereas buffalo numbers were higher in Madi 1 than in Khairahani. Whilst the number of goats owned (about 4 goats/household) was similar across the survey sites (Table 2), the grazing duration was the same (about 7 months/year) at both survey sites.
Table 2 Household information on the survey sites
Perception of respondents on grassland issues
The survey results showed that there was no conflict among the grassland users over grazing livestock in the survey sites. Upon a question about the most detrimental domestic species for grazing, the respondents pointed to goats rather than buffalo (p = 0.048). Likewise, the respondents noted substantial changes in the herbage composition associated with grazing, as almost 65% of them responded to it positively (p = 0.032). Among the dominating species in the grasslands, Imperata cylindrica–S. spontaneum mixtures remained the most abundant (p = 0.003), followed by I. cylindrica–S. spontaneum–S. bengalense mixtures. The grasslands can recover in about 3 years after flooding or cutting. Later, the newly grown species remained similar to those preceding the cut-off and flood deposition (p = 0.011). In the perception of locals, river belt protection (about 44%) was the most desirable area of grassland protection and management (p = 0.014), followed by fodder cropping (41%), whilst tree plantation was the least preferred choice (about 15%). The details of the attitude of respondents to grassland management in Madi, Chitwan, are presented in Table 3.
Table 3 Attitude of local respondent farmers on grassland management in Madi valley of Chitwan, Nepal
Grazing and alternative feed resources
The survey data clearly show that the farmers rely mostly on natural grassland for feeding livestock. Based on the growing season and the available bulk, naturally available feedstuffs were the top priorities for both the summer and winter seasons, dominated by I. cylindrica (L.) P. Beauv., S. spontaneum L. and Cynodon dactylon (L.) Pers., respectively (Table 4). Straw was the crop residue most commonly mentioned by respondents for the winter lean season. The respondents further rated browse species, e.g. Morus and Thysanolaena, as low priorities for feeding livestock, mostly in the winter/lean season. Standing forages were of interest to most of the respondents (80.8%) in the summer season only. The details of the ranking of seasonally available feedstuffs are presented in Table 4.
Table 4 Ranking of seasonally available feedstuffs based on biomass availability in the study areas
Respondents' perception of grasslands management
The main purpose of this study was to examine the perception of local farmers within the context of local settings and assessment of local knowledge on livestock grazing and feed resources management. The study used household surveys, focal persons interviews and group discussions to elucidate the current state of livestock grazing on natural grasslands and seasonal feed resources available for round the year feeding.
It is known that the traditional knowledge as noted by the respondents in their perceptions in the present study is intrinsic and adaptive to some scales and could be passed through generations (Berkes et al. 2000) and can be applied well in ecological processes (Alcorn 1989), sustainable use of natural resources (Schmink et al. 1992; Berkes 1999) and rangeland assessments (Angassa & Beyene 2003).
The observation of the socio-economic data in the present study implies the need for strategic planning of grassland management in Madi, Chitwan, where natural disasters such as flooding and inundation govern the herbage species coverage and the aboveground biomass available for livestock grazing. Flooding is one of the driving factors of grassland diversity and productivity (Henry et al. 1996; Van Eck et al. 2006), which has been supported by the respondents' positive response (see Table 2). The time required for colonization, and the observed similarity of the herbage species on the floodplains to those of the pasture of origin at higher elevations, are some of the key determinants of grassland diversity and productivity as well.
The available scientific evidence suggests that Terai grassland is dominated by tall grasses in the subtropical Terai (Thapa et al. 2021), especially in and around the CNP (Lehmkuhl et al. 1988; Ghimire et al. 2019), and this is well represented in the farmers' indigenous knowledge in the present survey. There might be competition between the species, as it appears from the respondents' responses that Saccharum spontaneum L. dominated in the national park areas whilst Imperata cylindrica (L.) P. Beauv. dominated widely in the nearby forest areas and riverbanks. This also implies that further study is needed on the differentiation of the ecological processes of the available herbage species in the survey sites in response to different anthropic factors, for example, colonization and adaptation in the floodplains with respect to grazing. There is increasing evidence that biotic and abiotic disturbances are important natural factors affecting community composition and structure (Sousa 1984). Apart from goats being regarded as the more detrimental grazers, biotic factors were not perceived much by the local communities; flooding was instead taken as the main disturbance factor of grassland species in the present study.
Anthropic factors and grassland
Human interventions have characteristic effects on grassland productivity and species distribution. Such phenomena could not be illustrated in detail in the present study, although grazing is free and was reported as a continuous activity throughout the year in the study sites, even though grazable pastures were available for only about seven months.
Flooding occurs due to excess rainfall, and lands exposed to flooding may be better used for grazing than for cultivation (Henzell and 't Mannetje 1980). Farmers perceived this well in the present study sites. Furthermore, forage may be available for a longer time in these areas, as also found in the present survey, where Saccharum spontaneum L. (Kaans/Jhaksi) and Imperata cylindrica (L.) P. Beauv. were harvested year-round in variable quantities (see Table 4), most probably due to residual soil moisture. Scientific evidence has shown that flooding drastically reduces oxygen diffusion into the soil, causing hypoxia, which is the main limitation that reduces root aerobic respiration (Burdick & Mendelssohn 1990) and the absorption of minerals and water (Baruch 1994) and damages general metabolism (Crawford 1982), which may in turn reduce the herbage cover.
This might be the reason that the inundated flood plains in the survey sites have similar species, mostly dominated by the three-taxa associations (see Table 2), which might also have morphological adaptations in addition to biochemical ones (Jackson & Drew 1984). Though much information is lacking in this respect in the present survey, it can well be hypothesized that plants adapted and regenerated in the floodplains could be tolerant and could achieve wider land coverage over time, with biomass that could be available for grazing. The deposition of soil nutrients towards the inundated plain areas might have further additive effects on the dominance of such herbage species, whilst grasses are known to respond quickly and competitively to the soil fertility gradient. However, the data set is not sufficient to link these observations to the effect of climate change. The present survey results might open questions about the physiological mechanisms of adaptation of pasture species to flooding and inundation.
Grazing and the ecological consequences
The mixed mosaics of short and tall grasses and herbs at the riverbanks and forest openings are the principal food sources for domestic ruminants (Lehmkuhl et al. 1988; Ghimire et al. 2019; Sharma and Shaw 1993), with variable nutrient concentrations (Thakur et al. 2014; Thapa et al. 2021). When the grass is too short and dry in winter, the farmers are forced to feed crop by-products, whilst cultivated fodder production remains a newly introduced practice, still in the technology adoption process. However, the present survey results lack information on the dynamics of nutrient flow in the herbage across the pasture growing period.
The grasslands in the study sites were grazed by domestic animals without control for more than half the year, and we found almost no management practices adopted. Grazing is a principal cause of the spatial heterogeneity of vegetation, which modifies ecosystem processes and biodiversity (Adler et al. 2001) and modifies plant diversity itself (Milchunas & Lauenroth 1993). A high intensity of heterogeneity could be expected in the survey sites, where both domestic and wild ungulates and other small animals (wild hares and wild pigs), many kinds of deer, rhinos and elephants graze on the same piece of grassland. Thus, changes in spatial heterogeneity cause changes in habitat diversity and thereby influence the grazing habits of other animals (Dennis et al. 1998). Whilst grazing reduces the quantity of available forage, in many systems it increases forage quality, typically measured as nitrogen or crude protein content (McNaughton 1984; Jefferies et al. 1994), although other essential minerals such as sodium may be unchanged (McNaughton et al. 1997). The possible mechanisms for the increases in nutrient concentrations following grazing may include a reduction in senescent material, maintenance of leaves in an early phenological state (Hobbs et al. 1994), or increases in belowground available nitrogen (Holland & Detling 1990), and these might allow graziers to graze continuously for up to 7 months, as perceived from the responses of local respondents in the survey. However, the respondents' statements that goats are the most detrimental species might be due to their different vegetation preferences, browsing habits and greater disturbance (Animut and Goetsch 2008; Garcia et al. 2012). Several studies have shown that changes in management intensity could affect the sward structure, plant species diversity, productivity and the nutritive value of the forage (Hofmann et al. 2001; Marriott et al. 2005; Pavlů et al. 2006), which, however, could not be well illustrated from the farmers' responses except for the comparison of the grazing behaviour of goats and buffaloes in the present survey. Buffalo is a grazer-type animal whilst the goat is an intermediate browser (NRC 2007). Grazers can adapt well to short-stature, low-biomass forage, whilst goats tend to select tall and browse species in general (Soest 1994), being agile and competitive in browse selection (Sanon et al. 2007), and can neutralize plant secondary compounds such as tannins, which serve as a defence mechanism in plants (Robbins 1995). Such grazing habits would cause treading, tethering and urine damage to vegetation, and goats might thus be expected to be the more detrimental species compared to buffalo.
Ecological consequences on the composition and nutritive values of feedstuffs
The domination of tall grasses as abundant fodder species has been perceived by farmers and has been well documented in other studies. Most of the farmers could identify the severity of grassland dynamics and the colonization of species in the inundated areas. The indicators of the farmers' perception in this regard are supported by other studies in South Asia where flooding is a determining factor (Adhakari 2013; Mirza 2011) of the grassland systems (Van Eck et al. 2006). However, other sources of grassland deterioration due to wildlife are not addressed in the present survey, nor are the changes in the composition and nutritive value of the pasture species throughout the grazing season. The problem of flooding is mainly derived from the higher rainfall in summer and inundation of the lower-elevation banks, which promotes the translocation of the same vegetation towards the lower elevations.
Scientific investigations have further proved that grazing alters plant diversity in permanent grasslands through the stocking rates (e.g. Diaz et al. 2001, 2007), the seasonality (e.g. Sternberg et al. 2000) and the livestock species grazed (e.g. Huntly 1991). Thus, the severity of grazing pressure, including the wild and domestic animals and their interaction, would be expected in the present study. These results suggest the necessity to consider not only taxonomic indices but also plant functional ecology to evaluate the effects of farming practices such as domestic animals grazing and interaction of wildlife and domestic animals in further studies.
Nitrogen (N) in open grasslands has a detrimental effect on grassland diversity (Jacquemyn et al. 2003). The alluvial deposition of fertilizer nitrogen and phosphorus (P) due to flooding might have reduced the appearance of species other than S. spontaneum, S. bengalense and I. cylindrica in the grasslands and might have hindered the colonization of other species. Though not examined in the present survey, such phenomena in the grasslands need to be further verified through pasture ecological research in the future.
The present study suggests some research and grassland development strategies for the survey sites. In the survey sites, most of the community development activities in the past were directed towards nature and wildlife protection, but lacked sufficient strategies to address livelihood opportunities. It is repeatedly reported that the dependencies of local people on natural grasslands and forest for their livelihoods are high in the villages around the Chitwan National Park (Stræde and Treue 2006; Dangol 2015), and livestock grazing is one of them. There is a lack of scientific investigation on the assessment of changes in vegetation dynamics and the feed resources situation in Nepal in general, and in the study area particularly, due to the CNP's priority on a protectionist rather than a productionist role.
In the perception of the local farmers in the present survey sites, the grassland degradation and feed resource usages confirm the need for the participation of stakeholders, with a possibly higher rate of adoption. The data further show broader areas of potential intervention, i.e. grassland management, promotion of alternative feed resources and livestock ownership.
There is a need to address the local farmers' livelihood priorities, where the dependencies on natural resources for grazing and winter feed management are high. Farmers' awareness of the problem of resources to feed livestock is derived from the farming system adopted and the economic, social and ecological uses of livestock in the study sites, which, however, will be considered in detail in future studies. The severity of grassland degradation in the grazing season is not demonstrated well in the present study because the livestock grazing sites were in the close vicinity of the CNP and the surrounding buffer zones, which are always open to wild and domestic animals. Moreover, farmers nevertheless used various grazing management practices, although the commonly suggested long-term solutions to protect the grasslands are, for example, fodder production and riverbank protection. The benefits of multispecies grazing have been postulated (Walker 1997); however, these apply best to species with similar grazing behaviour, whereas in the present study the differing grazing habits of buffaloes and goats on the same grasslands were stated to be detrimental.
All these factors suggest the need to undertake a scientific investigation to determine the processes and the demands of farmers in addition to the survey on local knowledge (Celio et al. 2014; Assefa & Hans-Rudolf 2016). The use of local knowledge could prove to be an important aspect of technology dissemination (Kelly et al. 2015) and of increasing the rate of adoption by farmers (Füsun Tatlıdil et al. 2009).
This study provides collective evidence on the biotic and abiotic factors of grassland management for sustainable uses in the sub-tropical Terai of Nepal. The resources identified for improvement could build on the farmers' perceptions, derived largely from traditional farming. The locals were aware of vegetation changes in the grasslands and of the need for supply and management of alternative forage resources. The factors that need to be considered are pasture ecology (soil and herbage nutrient dynamics as a result of flooding), herbage biomass production and regrowth, grazing animal species and their interactions, and the intensity of natural disasters, as most of these risks are prevalent in the commonly available resources. Therefore, the management of these available grasslands and alternative feed resources necessarily depends on effective social institutions (stakeholders), which requires careful attention to maintaining the quality of these resources and the forms of sustainable livestock grazing. A detailed study is needed to further explore grazing systems and other social-ecological functions of grasslands in the Magui Khola basin.
The results of this study can further be used to select herbage species for the regeneration of flooded and inundated lands to improve land cover, and to advise farmers on the importance of herd composition during grazing. Most of the farmers have some knowledge of identifying the severity of grassland degradation, as flooding was stated as the main response factor. In addition, farmers were aware of the seriousness of multispecies grazing and herbage utilization, which affect livestock raising because buffaloes and goats have different grazing behaviour patterns. It is anticipated that farmers' decisions on the use and management of the grassland depend on their perception of its deterioration, and that technology disseminated by stakeholders for improving grassland and feed resource use would then have a higher rate of adoption among farmers.
All data generated or analysed during this study are included in this manuscript.
Contingency coefficient
Chi.sq.: Pearson chi-square coefficient
P: Probability value
PCI: Problem Confrontation Index
Adhakari, B.R. 2013. Flooding and inundation in Nepal Terai: Issues and concerns. Hydro Nepal: Journal of Water, Energy and Environment 12: 59–65.
Adler, P., D. Raff, and W. Lauenroth. 2001. The effect of grazing on the spatial heterogeneity of vegetation. Oecologia 128 (4): 465–479. https://doi.org/10.1007/s004420100737.
Alcorn, J. B. 1989. Process as resource: The traditional agricultural ideology of Bora and Huastec resource management and its implications for research. Advances in Economic Botany, 63–77. http://www.jstor.org/stable/43927545.
Angassa, A., and F. Beyene. 2003. Current range condition in southern Ethiopia in relation to traditional management strategies: The perceptions of Borana pastoralists. Tropical Grasslands 37 (1): 53–59.
Animut, G., and A.L. Goetsch. 2008. Co-grazing of sheep and goats: Benefits and constraints. Small Ruminant Research 77 (2–3): 127–145. https://doi.org/10.1016/j.smallrumres.2008.03.012.
Assefa, E., and B. Hans-Rudolf. 2016. Farmers' perception of land degradation and traditional knowledge in Southern Ethiopia—Resilience and stability. Land Degradation & Development 27 (6): 1552–1561. https://doi.org/10.1002/ldr.2364.
Bardgett, R.D., J.M. Bullock, S. Lavorel, P. Manning, U. Schaffner, N. Ostle, M. Chomel, G. Durigam, E.L. Fry, D. Johnson, J.M. Lavallee, G.L. Provost, S. Luo, K. Png, M. Sankaran, X. Hou, H. Zhou, L. Ma, W. Ren, X. Li, Y. Ding, Y. Li, and H. Shi. 2021. Combatting global grassland degradation. Nature Reviews Earth & Environment 2: 720–735. https://doi.org/10.1038/s43017-021-00207-2.
Baruch, Z. 1994. Responses to drought and flooding in tropical forage grasses. Plant and Soil 164 (1): 87–96. https://doi.org/10.1007/BF00010114.
Berkes, F. 1999. Role and significance of "tradition" in indigenous knowledge. Indigenous Knowledge and Development Monitor 7: 19.
Berkes, F., J. Colding, and C. Folke. 2000. Rediscovery of traditional ecological knowledge as adaptive management. Ecological Applications 10 (5): 1251–1262. https://doi.org/10.1890/1051-0761(2000)010[1251:ROTEKA]2.0.CO;2.
Brown, K. 1997. Plain tales from the grasslands: Extraction, value and utilization of biomass in Royal Bardia National Park Nepal. Biodiversity and Conservation 6 (1): 59–74. https://doi.org/10.1023/A:1018323631889.
Burdick, D.M., and I.A. Mendelssohn. 1990. Relationship between anatomical and metabolic responses to soil waterlogging in the coastal grass Spartina patens. Journal of Experimental Botany 41: 223-228.
CBD. 2022. Aichi Biodiversity Targets. Convention on Biological Diversity. https://www.cbd.int/sp/targets/ Accessed on 14 Feb 2022.
Celio, E., C.G. Flint, P. Schoch, and A. Grêt-Regamey. 2014. Farmers' perception of their decision-making in relation to policy schemes: A comparison of case studies from Switzerland and the United States. Land Use Policy 41: 163–171. https://doi.org/10.1016/j.landusepol.2014.04.005.
CNP. 2016. Grassland habitat mapping in Chitwan National Park. Chitwan: Chitwan National Park (CNP).
Crawford, R.M. 1982. Physiological response to flooding. In Physiological Plant Ecology. II. Encyclopaedia of Plant Physiology. New Series Vol. 12B, ed. O.L. Lange, C.B. Osmond, and H. Ziegler, 453–477. Berlin: Springer-Vedag.
Dangol, D.R., and S.B. Gurung. 1991. Ethnobotany of the Tharu tribe of Chitwan district Nepal. International Journal of Pharmacognosy 29 (3): 203–209. https://doi.org/10.3109/13880209109082879.
Dangol, D.R. 2015. Plant communities and local uses: observations from Chitwan National Park. In Biodiversity Conservation Efforts in Nepal (Special Issue), 85–93. Kathmandu: Department of National Parks and Wildlife Conservation.
Dennis, P., M.R. Young, and I.J. Gordon. 1998. Distribution and abundance of small insects and arachnids in relation to structural heterogeneity of grazed, indigenous grasslands. Ecological Entomology 23 (3): 253–264. https://doi.org/10.1046/j.1365-2311.1998.00135.x.
Díaz, S., I. Noy-Meir, and M. Cabido. 2001. Can grazing response of herbaceous plants be predicted from simple vegetative traits? Journal of Applied Ecology 38 (3): 497–508. https://doi.org/10.1046/j.1365-2664.2001.00635.x.
Diaz, S., S. Lavorel, S.U.E. McIntyre, V. Falczuk, F. Casanoves, D.G. Milchunas, and B.D. Campbell. 2007. Plant trait responses to grazing–a global synthesis. Global Change Biology 13 (2): 313–341. https://doi.org/10.1111/j.1365-2486.2006.01288.x.
Dinerstein, E., and C. Loucks. 2002. Asia: Bhutan, India, and Nepal. In Tropical and subtropical grasslands, savannas and shrublands. WWF, Washington DC, USA. https://www.worldwildlife.org/ecoregions/im0701, Accessed on 12 Feb 2022.
Füsun Tatlıdil, F., I. Boz, and H. Tatlidil. 2009. Farmers' perception of sustainable agriculture and its determinants: A case study in Kahramanmaras province of Turkey. Environment, Development and Sustainability 11 (6): 1091–1106. https://doi.org/10.1007/s10668-008-9168-x.
García, R.R., R. Celaya, U. García, and K. Osoro. 2012. Goat grazing, its interactions with other herbivores and biodiversity conservation issues. Small Ruminant Research 107 (2–3): 49–64. https://doi.org/10.1016/j.smallrumres.2012.03.021.
Ghimire, S.K., M.K. Dhamala, B.R. Lamichhane, R. Ranabhat, K.B. KC, and S. Poudel. 2019. Identification of suitable habitat for swamp deer Rucervus duvaucelii duvaucelii (Mammalia: Artiodactyla: Cervidae) in Chitwan National Park Nepal. Journal of Threatened Taxa 11 (6): 13644–13653. https://doi.org/10.11609/jot.4129.11.6.13644-13653.
Henry, C.P., C. Amoros, and G. Bornette. 1996. Species traits and recolonization processes after flood disturbances in riverine macrophytes. Vegetatio 122 (1): 13–27.
Henzell, E.F., and L. 't Mannetje. 1980. Grassland and forage research in tropical and subtropical climates. In Perspectives in world agriculture, 485–532. Farnham Royal: Commonwealth Agricultural Bureaux, England.
Hobbs, T.J., A.D. Sparrow, and J.J. Landsberg. 1994. A model of soil moisture balance and herbage growth in the arid rangelands of central Australia. Journal of Arid Environments 28 (4): 281–298.
Hofmann, M., N. Kowarsch, S. Bonn, and J. Isselstein. 2001. Management for biodiversity and consequences for grassland productivity. Grassland Science in Europe 6: 113–116.
Holland, E.A., and J.K. Detling. 1990. Plant response to herbivory and belowground nitrogen cycling. Ecology 71 (3): 1040–1049. https://doi.org/10.2307/1937372.
Hossain, M.S., and M.A.M. Miah. 2011. Poor farmers' problem confrontation in using manure towards integrated plant nutrition system. Bangladesh Journal of Extension Education 23 (1&2): 139–147.
Huntly, N. 1991. Herbivores and the dynamics of communities and ecosystems. Annual Review of Ecology and Systematics 22 (1): 477–503.
Jackson, M.B., and M.C. Drew. 1984. Effects of flooding on growth and metabolism of herbaceous plants. In Flooding and Plant Growth, ed. T.T. Kozlowski, 47–128. New York: Academic Press.
Jacquemyn, H., R. Brys, and M. Hermy. 2003. Short-term effects of different management regimes on the response of calcareous grassland vegetation to increased nitrogen. Biological Conservation 111 (2): 137–147. https://doi.org/10.1016/S0006-3207(02)00256-2.
Jefferies, R.L., D.R. Klein, and G.R. Shaver. 1994. Vertebrate herbivores and northern plant communities: Reciprocal influences and responses. Oikos 71 (2): 193–206. https://doi.org/10.2307/3546267.
Kelly, E., K. Heanue, C. Buckley, and C. O'Gorman. 2015. Proven science versus farmer perception. In 2015 Conference International Association of Agricultural Economists, Milan, Italy, 9–14 August 2015.pp 1–34.
Kemp, D.R., H. Guodong, H. Xiangyang, D.L. Michalk, H. Fujiang, W. Jianping, and Z. Yingjun. 2013. Innovative grassland management systems for environmental and livelihood benefits. PNAS 110 (21): 8369–8374. https://doi.org/10.1073/pnas.1208063110.
Lehmkuhl, J.F., R.K. Upreti, and U.R. Sharma. 1988. National parks and local development: Grasses and people in Royal Chitwan National Park Nepal. Environmental Conservation 15 (2): 143–148. https://doi.org/10.1017/S0376892900028952.
Lehmkuhl, J.F. 1999. The organisation and human use of Terai riverine grasslands in the Royal Chitwan National Park, Nepal. In Grassland ecology and management in protected areas of Nepal. Proceedings of a Workshop, Royal Bardia National Park, Thakurdwara, Bardia, Nepal, 15–19 March, 1999. Volume 2: Terai protected areas, 37–49. Kathmandu: International Centre for Integrated Mountain Development.
Marriott, C.A., G.R. Bolton, J.M. Fisher, and K. Hood. 2005. Short-term changes in soil nutrients and vegetation biomass and nutrient content following the introduction of extensive management in upland sown swards in Scotland, UK. Agriculture, Ecosystems & Environment 106 (4): 331–344. https://doi.org/10.1016/j.agee.2004.09.004.
McNaughton, S.J. 1984. Grazing lawns: Animals in herds, plant form, and coevolution. The American Naturalist 124 (6): 863–886. https://doi.org/10.1086/284321.
McNaughton, S.J., F.F. Banyikwa, and M.M. McNaughton. 1997. Promotion of the cycling of diet-enhancing nutrients by African grazers. Science 278 (5344): 1798–1800. https://doi.org/10.1126/science.278.5344.1798.
Milchunas, D.G., and W.K. Lauenroth. 1993. Quantitative effects of grazing on vegetation and soils over a global range of environments: Ecological Archives M063–001. Ecological Monographs 63 (4): 327–366 (https://dx.doi.org/10.6084).
Mirza, M.M.Q. 2011. Climate change, flooding in South Asia and implications. Regional Environmental Change 11 (1): 95–107. https://doi.org/10.1007/s10113-010-0184-7.
MoFE. 2018. Nepal's sixth national report to the convention on biological diversity. Kathmandu: Ministry of Forests and Environment (MoFE).
Newing, H., C. Eagle, R. Puri, and C. Watson. 2011. Conducting research in conservation: A social science perspective. Abingdon: Routledge.
NRC. 2007. Nutrient requirements of small ruminants. Sheep, goats, cervids and New World camelids. Washington, DC: National Academy Press.
Pavlů, V., M. Hejcman, L. Pavlů, J. Gaisler, P. Hejcmanová-Nežerková, and L. Meneses. 2006. Changes in plant densities in a mesic species-rich grassland after imposing different grazing management treatments. Grass & Forage Science 61 (1): 42–51.
Peet, N.B., A.R. Watkinson, D.J. Bell, and U.R. Sharma. 1999. The conservation management of Imperata cylindrica grassland in Nepal with fire and cutting: An experimental approach. Journal of Applied Ecology 36 (3): 374–387. https://doi.org/10.1046/j.1365-2664.1999.00405.x.
Pokharel, S.K. 1993. Floristic composition, biomass production and biomass harvest in the grassland of the Royal Bardia National Park, Nepal. M.Sc. thesis. Norway: Agricultural University of Norway.
Poudel, S., S. Funakawa, H. Shinjo, and B. Mishra. 2020. Understanding households' livelihood vulnerability to climate change in the Lamjung district of Nepal. Environment, Development and Sustainability 22 (8): 8159–8182. https://doi.org/10.1111/j.1365-2494.2006.00506.x.
Robbins, C.T., D.E. Spalinger, and W. van Hoven. 1995. Adaptation of ruminants to browse and grass diets: are anatomical-based browser-grazer interpretations valid? Oecologia 103 (2): 208–213.
Sætre, D.V. 1993. People and grasses: A case study from Royal Bardia National Park, Nepal. M.Sc. thesis. Norway: Agricultural University of Norway.
Saha, S.M., S.A. Pranty, M.J. Rana, M.J. Islam, and M.E. Hossain. 2022. Teaching during a pandemic: Do university teachers prefer online teaching? Heliyon 8 (1): e08663. https://doi.org/10.1016/j.heliyon.2021.e08663.
Sanon, H.O., C. Kaboré-Zoungrana, and I. Ledin. 2007. Behaviour of goats, sheep and cattle and their selection of browse species on natural pasture in a Sahelian area. Small Ruminant Research 67 (1): 64–74. https://doi.org/10.1016/j.smallrumres.2005.09.025.
Schmink, M., K.H. Redford, and C. Padoch. 1992. Traditional peoples and the biosphere: Framing the issues and defining the terms. In Conservation of neotropical forests: Working from traditional resource use, ed. K.H. Redford and C. Padoch, 3–13. New York: Columbia University Press.
Sharma, U.R., and W.W. Shaw. 1993. Role of Nepal's Royal Chitwan National Park in meeting the grazing and fodder needs of local people. Environmental Conservation 20 (2): 139–142.
Van Soest, P.J. 1994. Nutritional ecology of the ruminant, 2nd ed. Ithaca: Cornell University Press. 476 pp.
Sousa, W.P. 1984. The role of disturbance in natural communities. Annual Review of Ecology and Systematics 15 (1): 353–391.
Sternberg, M., M. Gutman, A. Perevolotsky, E.D. Ungar, and J. Kigel. 2000. Vegetation response to grazing management in a Mediterranean herbaceous community: A functional group approach. Journal of Applied Ecology 37 (2): 224–237. https://doi.org/10.1046/j.1365-2664.2000.00491.x.
Stræde, S., and T. Treue. 2006. Beyond buffer zone protection: A comparative study of park and buffer zone products' importance to villagers living inside Royal Chitwan National Park and to villagers living in its buffer zone. Journal of Environmental Management 78 (3): 251–267. https://doi.org/10.1016/j.jenvman.2005.03.017.
Thakur, S., C.R. Upreti, and K. Jha. 2014. Nutrient analysis of grass species consumed by greater one-horned Rhinoceros (Rhinoceros unicornis) in Chitwan National Park Nepal. International Journal of Applied Sciences and Biotechnology 2 (4): 402–408. https://doi.org/10.3126/ijasbt.v2i4.11119.
Thapa, S.K., J.F. de Jong, N. Subedi, A.R. Hof, G. Corradini, S. Basnet, and H.H.T. Prins. 2021. Forage quality in grazing lawns and tall grasslands in the subtropical region of Nepal and implications for wild herbivores. Global Ecology and Conservation 30: e01747. https://doi.org/10.1016/j.gecco.2021.e01747.
Van Eck, W.H.J.M., J.P.M. Lenssen, H.M. Van de Steeg, C.W.P.M. Blom, and H. De Kroon. 2006. Seasonal dependent effects of flooding on plant species survival and zonation: A comparative study of 10 terrestrial grassland species. Hydrobiologia 565 (1): 59–69. https://doi.org/10.1007/s10750-005-1905-7.
Walker, J.W. 1997. Multispecies grazing: The ecological advantage. Proceedings of the American Society of Animal Science Western Section 48 (New Mexico State University): 7–10.
Yamane, T. 1964. Statistics: An introductory analysis, 2nd ed. New York: Harper and Row.
The Directorate of Research and Extension of Agriculture and Forestry and the mayor of the Madi Municipality are acknowledged for their support during the survey period.
The research was funded by the University Grants Commission, Nepal (Project No. CRG-73/74-AG&F-02).
Department of Animal Nutrition and Fodder Production, Agriculture and Forestry University, Bharatpur Metropolitan City-15, Rampur, Chitwan, Nepal
Shanker Raj Barsila
International Economic Development Program, Graduate School of Humanities and Social Sciences, Hiroshima University, Higashihiroshima, Hiroshima, Japan
Niraj Prakash Joshi
Multi-Dimensional Action for Development-Nepal (MADE-Nepal), Bharatpur, Chitwan, Nepal
Tuk Narayan Poudel & Badrika Devkota
Gandaki University, Pokhara Metropolitan City, Kaski, Nepal
Naba Raj Devkota
Reef Catchments Limited, Suite 1, 85 Gordon Street, Mackay, QLD, 4740, Australia
Dev Raj Chalise
Tuk Narayan Poudel
Badrika Devkota
SRB and DRC designed the field survey and data analysis. NPJ and NRD helped to design the questionnaires and their pretesting, whilst TNP and BD carried out the household survey. All the authors were involved in the revision of the manuscript. The author(s) read and approved the final manuscript.
Correspondence to Shanker Raj Barsila.
We affirm that the study does not involve the use of any animal or human data or tissue. Informed verbal consent was acquired from each respondent at the time of the household survey.
Barsila, S.R., Joshi, N.P., Poudel, T.N. et al. Farmers' perceptions of grassland management in Magui Khola basin of Madi Chitwan, Nepal. Pastoralism 12, 40 (2022). https://doi.org/10.1186/s13570-022-00243-7
Biophysical factors
Seasonal feedstuffs
Crop by-products
South Asia Collection
Finding the Jordan Canonical Form of a Classical Adjoint of a Jordan Block
Let $A$ be a size $n$ Jordan matrix with $0$ on its diagonal, that is $$A = J_n(0) = [a_{ij}], \qquad a_{ij} = \begin{cases} 1, &j=i+1\\ 0, &\text{elsewhere.} \end{cases}$$
What is the Jordan Canonical Form of the classical adjoint of A, $\text{adj} A$?
Can we start with the fact that $A$ is singular and $A (\text{adj} A) = 0_n?$
linear-algebra matrices jordan-normal-form adjoint-operators
darij grinberg
$\begingroup$ If the matrix is in Jordan form and is 0s on its diagonal, the last row of the matrix is all 0 and the matrix is singular, shouldn't it be? $\endgroup$ – RGS Nov 14 '16 at 9:35
$\begingroup$ Are you talking about the adjoint? $\endgroup$ – user198504 Nov 14 '16 at 9:38
$\begingroup$ I am talking about A. A has a row that is only 0, the bottom one. Isn't it? $\endgroup$ – RGS Nov 14 '16 at 9:38
$\begingroup$ The first column is zero $\endgroup$ – user198504 Nov 14 '16 at 9:39
$\begingroup$ Yup, and the last row as well. Thus $A $ is not non-singular. $A $ is singular $\endgroup$ – RGS Nov 14 '16 at 9:40
If you just start computing the classical adjoint for $n=2,3,4...$ you should notice a pattern as to what they look like.
$$adj\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & - 1\\ 0 & 0\end{pmatrix}$$
$$adj\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
$$ adj \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0\end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & -1\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{pmatrix}$$
Once you prove that this pattern holds, the Jordan Form is straightforward to compute.
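If you want to check the pattern mechanically before proving it, a small SymPy sketch (an editorial aside, not part of the original answer) does the job; it assumes SymPy is installed:

```python
# Numerically check that adj(J_n(0)) has its only nonzero entry, (-1)^(n+1),
# in the top-right corner. (Editorial sketch; requires SymPy.)
import sympy as sp

def jordan_block_zero(n):
    """n x n Jordan block with eigenvalue 0: ones on the superdiagonal, zeros elsewhere."""
    return sp.Matrix(n, n, lambda i, j: 1 if j == i + 1 else 0)

for n in range(2, 6):
    A = jordan_block_zero(n)
    adjA = A.adjugate()
    others_zero = all(adjA[i, j] == 0
                      for i in range(n) for j in range(n) if (i, j) != (0, n - 1))
    print(f"n = {n}: adj(A)[0, {n-1}] = {adjA[0, n-1]}, all other entries zero: {others_zero}")
```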
Ken Duna
$\begingroup$ Oh cool. But don't you think that the adjoint should alternate between 1 and -1 as n varies? Also, I think the only nonzero entry will be in the top-right corner, not in the bottom-left corner. $\endgroup$ – user198504 Nov 16 '16 at 8:44
$\begingroup$ You are right about the sign. The bottom left corner is definitely correct though. $\endgroup$ – Ken Duna Nov 16 '16 at 14:08
$\begingroup$ Oh wait, I forgot to take the transpose! You were right on both counts! $\endgroup$ – Ken Duna Nov 16 '16 at 14:14
Finding the Jordan canonical form of this upper triangular $3\times3$ matrix
Jordan Canonical Form of matrix
Jordan Canonical Form transition matrix
ordered partition, block matrix given by $r_j \times r_j$ nilpotent Jordan blocks is nilpotent, rational canonical form, jordan canonical form
Jordan Canonical form question
Finding the characteristic and minimal polynomials of this block matrix.
Finding the Jordan Canonical Form of a Matrix
Jordan Canonical Form matrices
Finding Jordan canonical form of a matrix given the characteristic polynomial
Jordan Canonical form with zero eigenvalue?
On the logarithm of the minimizing integrand for certain variational problems in two dimensions
Akman, Murat; Lewis, John; Vogel, Andrew
Analysis and Mathematical Physics
Volume 2 (1) – Jan 24, 2012
Springer Journals
Copyright © 2012 by Springer Basel AG
Mathematics; Analysis; Mathematical Methods in Physics
Let $f$ be a smooth convex homogeneous function of degree $p$, $1 < p < \infty$, on $\mathbb{C} \setminus \{0\}$. We show that if $u$ is a minimizer for the functional whose integrand is $f(\nabla v)$, $v$ in a certain subclass of the Sobolev space $W^{1,p}(\Omega)$, and $\nabla u \not= 0$ at $z \in \Omega$, then in a neighborhood of $z$, $\log f(\nabla u)$ is a subsolution, supersolution, or solution (depending on whether $p > 2$, $p < 2$, or $p = 2$) to $L$, where $$L \zeta=\sum_{k,j=1}^{2}\frac{\partial}{\partial x_k}\left( f_{\eta_k \eta_j}(\nabla u(z)) \frac{\partial \zeta }{ \partial x_j }\right).$$ We then indicate the importance of this fact in previous work of the authors when $f(\eta) = |\eta|^p$ and indicate possible future generalizations of this work in which this fact will play a fundamental role.
Analysis and Mathematical Physics – Springer Journals
Akman, M., Lewis, J., & Vogel, A. (2012). On the logarithm of the minimizing integrand for certain variational problems in two dimensions. Analysis and Mathematical Physics, 2(1), 79-88.
Akman, Murat, John Lewis, and Andrew Vogel. "On the logarithm of the minimizing integrand for certain variational problems in two dimensions." Analysis and Mathematical Physics 2.1 (2012): 79-88.
Calculate the mass (g) of each product formed whe…
Calculate the mass (g) of each product formed when 174 $\mathrm{g}$ of silver sulfide reacts with excess hydrochloric acid:
\mathrm{Ag}_{2}\mathrm{S}(s)+\mathrm{HCl}(aq) \longrightarrow \mathrm{AgCl}(s)+\mathrm{H}_{2}\mathrm{S}(g)\quad[\text{unbalanced}]
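A minimal worked check for this first problem (an editorial sketch, not part of the original problem set): balancing gives Ag2S + 2 HCl → 2 AgCl + H2S, and the product masses then follow from the mole ratios and approximate molar masses.

```python
# Worked check for: 174 g Ag2S + excess HCl -> AgCl + H2S (balanced: Ag2S + 2 HCl -> 2 AgCl + H2S)
# Molar masses (g/mol) are approximate standard atomic weights.
M = {"Ag": 107.87, "S": 32.06, "H": 1.008, "Cl": 35.45}

m_Ag2S = 174.0                     # given mass of silver sulfide (g)
M_Ag2S = 2 * M["Ag"] + M["S"]      # ~247.80 g/mol
M_AgCl = M["Ag"] + M["Cl"]         # ~143.32 g/mol
M_H2S = 2 * M["H"] + M["S"]        # ~34.08 g/mol

n_Ag2S = m_Ag2S / M_Ag2S           # moles of Ag2S
m_AgCl = 2 * n_Ag2S * M_AgCl       # 2 mol AgCl per mol Ag2S
m_H2S = n_Ag2S * M_H2S             # 1 mol H2S per mol Ag2S

print(f"AgCl: {m_AgCl:.1f} g, H2S: {m_H2S:.1f} g")  # roughly 201 g AgCl and 24 g H2S
```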
Elemental phosphorus occurs as tetratomic molecules, $\mathrm{P}_{4}$. What mass (g) of chlorine gas is needed to react completely with 455 g of phosphorus to form phosphorus pentachloride?
Elemental sulfur occurs as octatomic molecules, $\mathrm{S}_{8}$. What mass (g) of fluorine gas is needed to react completely with 17.8 g of sulfur to form sulfur hexafluoride?
Solid iodine trichloride is prepared in two steps: first, a reaction between solid iodine and gaseous chlorine to form solid iodine monochloride; then, treatment with more chlorine.
(a) Write a balanced equation for each step.
(b) Write a balanced equation for the overall reaction.
(c) How many grams of iodine are needed to prepare 2.45 kg of final product?
Lead can be prepared from galena [lead(II) sulfide] by first roasting the galena in oxygen gas to form lead(II) oxide and sulfur dioxide. Heating the metal oxide with more galena forms the molten metal and more sulfur dioxide.
(b) Write an overall balanced equation for the process.
(c) How many metric tons of sulfur dioxide form for every metric ton of lead obtained?
Sisi G.
Calculate the mass (g) of each product formed when 43.82 $\mathrm{g}$ of diborane $\left(\mathrm{B}_{2} \mathrm{H}_{6}\right)$ reacts with excess water:
\mathrm{B}_{2}\mathrm{H}_{6}(g)+\mathrm{H}_{2}\mathrm{O}(l) \longrightarrow \mathrm{H}_{3}\mathrm{BO}_{3}(s)+\mathrm{H}_{2}(g)\quad[\text{unbalanced}]
Use standard enthalpies of formation to calculate $\Delta H_{\mathrm{rxn}}^{\circ}$ for each reaction.
\begin{array}{l}{\text { a. } C_{2} \mathrm{H}_{4}(g)+\mathrm{H}_{2}(g) \longrightarrow \mathrm{C}_{2} \mathrm{H}_{6}(g)} \\ {\text { b. } \mathrm{CO}(g)+\mathrm{H}_{2} \mathrm{O}(g) \longrightarrow \mathrm{H}_{2}(g)+\mathrm{CO}_{2}(g)} \\ {\text { c. } 3 \mathrm{NO}_{2}(g)+\mathrm{H}_{2} \mathrm{O}(l) \longrightarrow 2 \mathrm{HNO}_{3}(a q)+\mathrm{NO}(g)} \\ {\text { d. } \mathrm{Cr}_{2} \mathrm{O}_{3}(s)+3 \mathrm{CO}(g) \longrightarrow 2 \mathrm{Cr}(s)+3 \mathrm{CO}_{2}(g)}\end{array}
Balance the following equations and indicate whether they are combination, decomposition, or combustion reactions:
\begin{array}{l}{\text { (a) } \mathrm{C}_{3} \mathrm{H}_{6}(g)+\mathrm{O}_{2}(g) \longrightarrow \mathrm{CO}_{2}(g)+\mathrm{H}_{2} \mathrm{O}(g)} \\ {\text { (b) } \mathrm{NH}_{4} \mathrm{NO}_{3}(s) \longrightarrow \mathrm{N}_{2} \mathrm{O}(g)+\mathrm{H}_{2} \mathrm{O}(g)} \\ {\text { (c) } \mathrm{C}_{5} \mathrm{H}_{6} \mathrm{O}(l)+\mathrm{O}_{2}(g) \longrightarrow \mathrm{CO}_{2}(g)+\mathrm{H}_{2} \mathrm{O}(g)} \\ {\text { (d) } \mathrm{N}_{2}(g)+\mathrm{H}_{2}(g) \longrightarrow \mathrm{NH}_{3}(g)} \\ {\text { (e) } \mathrm{K}_{2} \mathrm{O}(s)+\mathrm{H}_{2} \mathrm{O}(l) \longrightarrow \mathrm{KOH}(a q)}\end{array}
Extraction of bioactive compounds from Psidium guajava and their application in dentistry
Shaik Shaheena1,2,
Anjani Devi Chintagunta1,3,
Vijaya Ramu Dirisala1 &
N. S. Sampath Kumar ORCID: orcid.org/0000-0002-8577-21431
Guava is considered the poor man's apple; it is rich in phytochemicals of medicinal value and hence is widely consumed. Gas chromatography–mass spectroscopy (GC–MS) analysis of guava leaf extract revealed the presence of various bioactive compounds with antimicrobial, antioxidant, anticancer, and antitumor properties. Hence, in the present study it was used in toothpaste formulations along with other ingredients such as Acacia arabica gum powder, stevia herb powder, sea salt, extra virgin coconut oil and peppermint oil. Three formulations, F1, F2 and F3, were made by varying the concentrations of these ingredients, and the prepared formulations were studied for their antimicrobial activity and physico-chemical parameters such as pH, abrasiveness, foaming activity, and spreading and cleaning ability. Among these, F3 showed significant antioxidant and antimicrobial properties, minimal cytotoxicity, maximum spreadability and very high cleaning ability. This study surmises that the herbal toothpaste formulation is greener, rich in medicinal value and imparts oral hygiene.
Psidium guajava (guava) is an evergreen tree belonging to the family Myrtaceae that grows in tropical and subtropical regions, preferably in dry climates. It originated in Mexico or Central America and, owing to its health benefits, is grown abundantly in various countries including Brazil, Bangladesh, China, Indonesia, India, Nigeria, Mexico, Pakistan, Thailand and the Philippines (Uzzaman et al. 2018). It is commercially cultivated in almost all the states of India. In the year 2016–2017, the total estimated area under guava cultivation was 261,700 hectares (ha) with a production of 3,648,200 metric tons (MT) (Horticultural Statistics at a Glance 2017). Guava is considered a multipurpose medicinal tree, similar to Mangifera indica and Azadirachta indica, because of the myriad medicinal values of its various parts, viz. leaf, roots, bark and fruit (Sravani et al. 2015; Naidu et al. 2016; Raju et al. 2019). The leaf extract of guava has pharmacological activity (Uzzaman et al. 2018) due to the presence of bioactive compounds that treat dysentery, diarrhoea, flatulence and gastric problems and regulate blood glucose levels. The guava leaves contain essential oils rich in cineol, triterpenes, tannins, eugenol and kaempferol, and other compounds such as flavonoids, malic acid, gallic acid, chlorophyll and mineral salts (Kumar et al. 2019). To impart the beneficial aspects of the guava leaf extract to daily used products, an attempt was made in the present work to formulate a toothpaste with guava leaf extract as a major ingredient.
Inconsistent eating habits and high sugar consumption encourage the growth of bacteria, leading to various oral diseases. Approximately 600 bacterial species are estimated to exist in the human oral microbiome (Bora et al. 2014), among which some are involved in protecting the mouth while the rest are responsible for causing oral diseases. The bacteria ferment sugars and starch into acid, which dissolves the minerals in the tooth enamel and leads to decalcification and the formation of tooth decay/cavities. In order to maintain oral health, bacterial growth should be prevented by including bioactive compounds with antimicrobial properties in the toothpaste formulation (Vijaya et al. 2017).
In general, commercially available toothpastes contain ingredients that provide antimicrobial and antioxidant properties and aesthetic appeal, along with surfactants, thickening agents to modify rheological properties, and preservatives and binders to provide consistency and stability to the formulations (Das et al. 2013). Unfortunately, they also contain fluorides, strong abrasives, sodium lauryl sulphate, colouring dyes and other agents such as triclosan that have a negative impact on maintaining healthy gums and teeth. Moreover, a large number of chemicals can damage the enamel and gums. Therefore, in this paper we have formulated a herbal toothpaste with guava leaf extract, Acacia arabica powder, sea salt and stevia herb extract powder, which have been scientifically shown to be harmless, carcinogen-free natural sources with high therapeutic value.
Fresh green leaves of the guava (Psidium guajava) tree were collected from the premises of Vignan's Foundation for Science, Technology and Research, Vadlamudi, Guntur, Andhra Pradesh. The leaves were gently rinsed with water, sun-dried to remove moisture and powdered using a blender. The powder was then passed through a 1 mm aluminium sieve to obtain a uniform particle size. The guava leaf powder was stored in an airtight container for further studies.
Guava leaf extraction procedure
The guava leaf powder (25 g) was suspended in ethyl acetate (100 mL) and stirred for 24 h under sterile conditions (Seo et al. 2014). The extract was filtered using Whatman no. 1 filter paper and the filtrate was used for identification of various phytochemicals/bioactive compounds based upon the retention time and mass spectra of the library retrieved from National Institute of Standards and Technology (NIST).
Gas chromatography–mass spectroscopy (GC–MS) method
GC–MS analysis was carried out in Agilent Technologies, Gas Chromatograph 7890 and Mass Spectrometer 5975. DB-5HT nonpolar capillary column (30 m × 0.25 mm × 0.1 μm) manufactured from (5%-phenyl)-methylpolysiloxane was used for the identification of phytochemicals. Helium gas was used as carrier gas with a consistent flow rate of 1.2 mL/min, sample injection volume was 0.5 μL and the ion-source temperature was 230 °C. The oven temperature was programmed from 80 °C (isothermal for 1 min), with an increase of 10 °C/min, to 200 °C, then 5 °C/min to 300 °C. Mass spectra were taken at 70 eV; a scan interval of 10 spectra/s and fragments from 50 to 800 Da. The relative percentage of each component can be calculated by comparing its average peak area to the total areas. The spectrum of the unknown component can be compared with the spectrum of the known components stored in the NIST library.
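As a side note (editorial arithmetic, not stated in the method), the oven programme above implies a total run time of roughly 33 min:

```python
# Total oven-programme time implied by the stated GC method
# (editorial arithmetic; the run time is not given explicitly in the text).
hold_initial = 1.0           # isothermal hold at 80 degC, min
ramp1 = (200 - 80) / 10      # 10 degC/min from 80 to 200 degC -> 12 min
ramp2 = (300 - 200) / 5      # 5 degC/min from 200 to 300 degC -> 20 min
print(f"Total oven programme: approx. {hold_initial + ramp1 + ramp2:.0f} min")
```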
Ingredients for paste preparation
Sea salt, Acacia arabica gum powder, stevia herb extract powder (procured), extra virgin coconut oil and peppermint oil were used as ingredients for formulation of toothpaste.
Preparation of ingredients
Extra virgin coconut oil
Fresh coconuts were collected and grated. A small quantity of water was added to the pressed and mashed coconut and the mixture was left for 30 min for extraction of the coconut milk. The milk was filtered through cheesecloth and the filtrate was left overnight at 25 °C for separation of the coconut cream (top layer) and extra virgin oil (bottom layer). The oil was separated and stored in the refrigerator.
Peppermint oil
Fresh leaves of mentha were collected, pressed slightly to release the oil and blended with almond oil for 24 h. The residual leaves were removed and fresh leaves were added at regular intervals of 24 h; the process was continued for a week to obtain the peppermint essential oil. The oil was stored in an airtight container away from light.
Acacia arabica gum powder
Acacia gum was purchased, air-dried, ground to a fine powder using a mechanical mixer and stored in an airtight container.
Formulation of toothpaste
Three formulations of toothpaste were prepared by varying the concentrations (%) of the ingredients, viz. guava leaf powder, Acacia arabica gum powder, sea salt, stevia herb extract powder and peppermint oil (Table 1). Acacia arabica gum powder was mixed with a small quantity of distilled water using a dropper to make a smooth paste. Subsequently, sea salt, guava leaf powder and stevia herb extract powder were added and mixed well to make the paste uniform. Extra virgin coconut oil and peppermint oil were then added and mixed well until the toothpaste attained the desired consistency. The F1, F2 and F3 pastes were packed and stored in plastic jars.
Table 1 Composition ratio of ingredients used in different tooth paste formulations
Physico-chemical evaluation of toothpaste
To assess the toothpaste formulations, the physico-chemical properties of the pastes were estimated. All assays were conducted in triplicate and the data were represented as mean ± standard deviation. Statistical analysis was performed using SPSS 10.0 software. Significant differences were determined at the 95% confidence interval (P < 0.05).
Determination of pH
Toothpaste solution (2%, w/v) was prepared and the pH was determined at room temperature using a calibrated pH meter.
Determination of abrasiveness
A pea-sized dab of toothpaste was placed on a clean plastic slide and few drops of distilled water were added to it. Then the toothpaste sample was rubbed in back and forth motion for 25 times within a distance of 1 cm using a fresh cotton swab. Then the slide was carefully rinsed, dried with tissue paper and examined under a microscope to determine the number of scratches on the surface of the slide. The degree of scratches was rated from 0 (no scratches) to 5 (a high degree of scratches).
Determination of foaming activity
Toothpaste (1 g) was mixed with distilled water (15 mL) in a measuring cylinder, shaken vigorously for 1 min, placed on the table to measure the height of the foam above the water level (Das et al. 2013). The foaming ability of the toothpaste was determined using the following equation.
$$\text{Foaming ability}(\%) = \frac{\text{Height of the foam above water}}{\text{Total height}} \times 100$$
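For illustration only (the numbers below are hypothetical, not the study's measurements), the calculation is simply:

```python
# Foaming ability (%) = height of foam above the water / total height * 100.
# The example values are hypothetical and only illustrate the calculation.
def foaming_ability(foam_height_cm, total_height_cm):
    return foam_height_cm / total_height_cm * 100

print(f"{foaming_ability(2.5, 15.0):.1f} %")  # e.g. 2.5 cm of foam on a 15 cm column -> ~16.7 %
```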
Spreading ability test
Toothpaste (1 g) was laid at the centre of a glass slide and covered with another glass slide. A known weight (1 kg) was placed carefully on these slides for 10 min to allow the paste to spread and then the diameter of the paste was measured (Mangilal and Ravikumar 2016).
Cleaning ability test
The composition of the eggshell is similar to that of tooth enamel, with calcium as the major component. For this reason, boiled eggs were used for testing the cleaning ability of the formulated toothpastes, as reported by Das et al. (2013) with necessary modifications. Vinegar and a few drops of red food colour were added to boiling water. After cooling, the boiled eggs were immersed and allowed to stain for 5 min at 25 °C. The eggs were then removed from the food colouring solution and placed on a paper towel to remove excess water. The eggshell was first washed with a wet toothbrush without losing the colour of the stain, followed by washing with a known quantity of toothpaste. As required, 5–10 brush strokes with the F1, F2 and F3 toothpastes were given on the eggshell for colour removal. The same pressure and motion were used in the brushing procedure for all three formulated toothpastes. The cleaning ability of the three toothpaste formulations was observed and the results were interpreted as '+++' very high cleaning ability, '++' high cleaning ability, '+' moderate cleaning ability and '−' bad cleaning ability.
Determination of antimicrobial activity
Antimicrobial assay
The antibacterial activity of the three toothpaste formulations was evaluated against five strains of microorganisms: Bacillus subtilis (MTCC 1305), Proteus vulgaris (MTCC 744), Staphylococcus aureus (MTCC 9760), Streptococcus mutans (MTCC 890) and Streptococcus oralis (MTCC 2696) using the well diffusion method. The inoculum of each bacterial strain was prepared in LB medium and incubated at 37 °C for 24 h. Nutrient agar (LB) plates were inoculated with each test microorganism (1 mL of the broth culture) and dried for 1 h. Ampicillin (50 mg/mL) was used as a positive control. Solutions (2% w/v) of the three toothpaste formulations (F1, F2 and F3) were prepared and 60 μL of each formulation was poured into the designated well. The plates were kept for 2 h in the refrigerator for diffusion of the samples and then incubated at 37 °C for 24 h.
Calculation of zone of inhibition
After incubation, the zone of inhibition appears as a clear and circular halo around the wells. The diameter of the circular halo was measured both vertically and horizontally and their average was considered (cm).
In vitro cytotoxicity test
Vero cells were procured from the National Centre for Cell Sciences, Pune, India and maintained in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with foetal bovine serum (FBS, 10%), l-glutamine (2 mM), penicillin G sodium (100 U/mL) and streptomycin sulphate (100 mg/mL). Cells were seeded (1 × 10^4 cells/mL) in the aforementioned medium in a 96-well plate and incubated at 37 °C in 5% CO2. After the cells reached confluence, the cytotoxicity of various concentrations of the formulated toothpaste was tested on them. The medium was discarded after 24 h of incubation, the adherent cells were washed with phosphate-buffered saline (PBS), 30 µL of 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-tetrazolium bromide (MTT; 10 mg/mL in PBS) was added, and the plate was incubated for 6 h. Dimethyl sulfoxide (DMSO, 70 µL) was added to solubilise the formazan crystals produced by viable cells. The absorbance was measured at 540 nm using a UV–Vis spectrophotometer (Shimadzu, Japan; Model: UV-1800). Cell viability (%) in the presence of the toothpaste was measured and expressed as follows:
$$\%\,\text{Proliferation} = \frac{OD_{\text{sample}} - OD_{\text{control}}}{OD_{\text{control}}} \times 100$$
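A small sketch of this calculation (the OD540 readings below are made up purely for illustration; they are not the study's data):

```python
# Percent proliferation relative to the untreated control, from MTT OD540 readings.
# All OD values here are invented for illustration; they are not the study's data.
def percent_proliferation(od_sample, od_control):
    return (od_sample - od_control) * 100 / od_control

od_control = 0.82                                   # untreated Vero cells (hypothetical)
for conc_ug_per_ml, od in [(40, 0.81), (320, 0.79), (640, 0.70)]:
    print(f"{conc_ug_per_ml} ug/mL: {percent_proliferation(od, od_control):+.1f} %")
```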
In order to study the stability of the formulated toothpaste, its physico-chemical properties were studied at a regular interval of 4 months for a period of 12 months.
Extraction of guava leaf extract and identification of bioactive compounds
In the present study, the extraction of bioactive compounds from the guava leaf was carried out using ethyl acetate. The ethyl acetate extract was subjected to GC–MS analysis which has manifested the presence of sesquiterpenes and fatty acids predominantly (Fig. 1, Table 2). Caryophyllene, α-copaene, cis-muurola-3,5-diene, humulene, cyclosativene, bicyclo[5.3.0]decane, 2-methylene-5-(1-methylvinyl)-8-methyl, 1H-benzocycloheptene, 2,4a,5,6,7,8,9,9a-octahydro-3,5,5-trimethyl-9-methylene-, (4aS-cis), 1H-cyclopropa[a]naphthalene, 1a,2,3,5,6,7,7a,7b-octahydro-1,1,7,7a-tetramethyl-, [1aR-(1aà,7à,7aà,7bà)], naphthalene, 1,2,3,5,6,8a-hexahydro-4,7-dimethyl-1-(1-methylethyl)-, (1S-cis), α-cadinol, α-bisabolol etc. are some of the sesquiterpenes identified in the guava leaf extract. These compounds are well known for their antimicrobial, anti-inflammatory, antioxidant, antiproliferative, anticancer, antitumors and anaesthetic properties (Zahin et al. 2017).
GC–MS chromatogram of bioactive compounds in guava leaf extract
Table 2 Identification of phytochemicals in guava leaf extract by GC–MS
Evaluation of physical–chemical properties of tooth paste formulations
The ingredients involved in the toothpaste formulation include guava leaf powder, Acacia arabica gum powder, sea salt, stevia herb extract powder and peppermint oil. Three formulations of herbal toothpaste (F1, F2 and F3) were prepared by varying the concentrations of these ingredients, and their properties were studied to identify the best formulation (Table 1). The physical and chemical characteristics of the different toothpaste formulations showed significant variations (Table 3). The pH of all the toothpaste formulations was in the alkaline range of 8–11, with F1 showing the highest pH of 11.8 ± 0.01 and the remaining two lying close to pH 9. Similar patterns were observed for the abrasiveness and foaming ability tests. Rubbing F1 against the glass slides created more scratches than F2 and F3. Regarding foaming ability, the F2 toothpaste showed the lowest value (15.1%) and F1 exhibited the highest value of 16.6%. Even though all three formulations showed low foaming, they showed very good spreading ability. The F3 formulation showed the highest spreading area (8.1 ± 0.03 cm), followed by F2 and F1 with 7.8 ± 0.2 cm and 6.0 ± 0.5 cm, respectively. Based on the colour change on the pigmented eggs, F3 showed the best stain-cleaning ability (+++), while the remaining two formulations (++) showed comparatively less change in colour.
Table 3 Physico-chemical properties of tooth paste formulations
Antibacterial activity
The in vitro antibacterial activity of the formulated toothpastes (F1, F2 and F3) was evaluated against Bacillus subtilis, Proteus vulgaris, Staphylococcus aureus, Streptococcus mutans and Streptococcus oralis strains, as shown in Table 4. The F3 formulation was very effective against the tested bacteria, followed by F2, with the least activity shown by F1. As tabulated, for the F3 paste the largest inhibition zones were observed against Proteus vulgaris (1.1 cm) and Bacillus subtilis (0.8 cm) and the smallest zone of inhibition was observed against Staphylococcus aureus (0.5 cm).
Table 4 Anti-microbial activity of tooth paste formulations
In vitro cell viability assay
The cytotoxicity of the formulated toothpastes towards Vero cells was determined at concentrations of 0, 10, 20, 40, 80, 160, 320 and 640 μg/mL using the MTT assay. The results confirmed that the tested toothpastes have no significant effect on the reduction of Vero cell viability (P < 0.05) up to a concentration of 320 μg/mL (Fig. 2). However, cell viability decreased slightly at a concentration of 640 μg/mL of the formulated toothpastes.
Cytotoxicity of formulated toothpaste against Vero cells
Stability test
Upon comparing the physical, chemical and biological activities of the three formulations, F3 was found to be the most suitable and was selected for the stability test. The test was conducted over a period of 12 months, at intervals of 4 months, to observe changes in F3 during its storage period. As shown in Table 5, no significant change was observed in the pH, foaming ability, spreadability or cleaning ability of F3, indicating that it is the best formulation for human application. Table 6 depicts the organoleptic evaluation of toothpaste formulation F3. The pale green colour and pleasant taste of the paste are mostly imparted by the guava leaf extract.
Table 5 Stability test for tooth paste formulation (F3)
Table 6 Organoleptic analysis of tooth paste formulation (F3)
Guava leaf is rich in bioactive compounds, and in order to utilize these compounds in the preparation of value-added products they were extracted using ethyl acetate. Seo et al. (2014) reported the extraction of essential compounds of guava using water, ethanol and methanol, and found the highest content of phenolic compounds in the water extract. The ethyl acetate extract was found to be rich in sesquiterpenes and fatty acids. Sesquiterpenes such as cubenol, 6S-2,3,8,8-tetramethyltricyclo[5.2.2.0(1,6)]undec-2-ene, benzene, (1,3,3-trimethylnonyl) and 1,6,10-dodecatrien-3-ol, 3,7,11-trimethyl-, (E) act as flavouring and fragrance agents. Apart from these, cis-α-bisabolene, an intermediate in the biosynthesis of the natural sweetener hernandulcin, was also identified in the extract (Christianson 2017). Other major compounds identified in the extract were fatty acid esters, which are used as fragrance ingredients. Thus, the GC–MS analysis of guava leaf extract identified various beneficial compounds of enormous medicinal importance. Hence, an attempt was made in the present study to exploit these beneficial properties of guava leaf in the formulation of a herbal toothpaste.
Apart from guava leaf extract, the active ingredients in the toothpaste formulation include Acacia arabica gum powder, sea salt, guava leaf powder, stevia herb extract powder and peppermint oil (Table 1). Acacia arabica is recognized worldwide as a multipurpose tree and has been effectively utilized for treating cough, diarrhoea, diabetes, dysentery, eczema, skin diseases, wound healing and burning sensation, and as an astringent, demulcent and anti-asthmatic (Farag et al. 2015). In particular, its gum contains four aldobiouronic acids, viz. 6-o-(β-glucopyranosyluronic acid)-d-galactose; 6-o-(4-o-methyl-β-d-glucopyranosyluronic acid)-d-galactose; 4-o-(α-d-glucopyranosyluronic acid)-d-galactose; and 4-o-(4-o-methyl-α-d-glucopyranosyluronic acid)-d-galactose (Rajendran et al. 2010). The gum was included in the formulation as a bio-adhesive and binder to mix all the ingredients and keep the preparation intact. Moreover, the presence of aldobiouronic acids in the gum gives a cooling effect after using the toothpaste. On the other hand, stevia was used as an alternative to synthetic or powerful sweeteners; it also reduces the growth of oral bacteria and species-specific odour. It has been shown to contain sweet diterpenes and acylated glycosides (Karp et al. 2017), which give it its sweetening property. This natural sweetener helps maintain glucose levels in diabetic patients and has been scientifically proven to be nontoxic (Karp et al. 2017).
The physical and chemical characteristics of different toothpaste formulations such as pH, abrasiveness, foaming ability, spreadability and cleaning ability were studied. pH of the three tooth paste formulations is in alkaline range. pH value of toothpaste plays a crucial role in evaluating its properties as it gives an indication of the constituents. It was reported that ideal toothpaste should always have a pH between 5.5 and 10.5 (Price et al. 2000) which was exactly found in F2 and F3. Das et al. (2013) reported the stimulation of bacterial growth in mouth due to lower pH that leads to dental carries. Thus, an alkaline pH helps in neutralizing acid biofilm, kill germs and reduce unpleasant odours (Bouassida et al. 2017). Besides, guava leaf powder added in the formulation provides sufficient abrasiveness for maximum cleaning with minimum wear on enamel surface.
Another desirable characteristic preferred by consumers is foam formation, as it helps the toothpaste spread all over the oral cavity during mechanical brushing. To attract more consumers, sodium lauryl/laureth sulfate (SLS) is commonly used as a surfactant to produce foam. However, SLS has been found to have a degenerative effect on cell membranes because of its protein-denaturing properties and potentially carcinogenic nature. Even though no significant correlation has been found so far, long-term use of such chemicals can still affect the consumer's health. Considering all these factors, we did not use any surfactant in our formulation, because of which the current formulations did not produce much foam and are thus categorized as non-foaming toothpastes.
As a matter of fact, stain-removing ability plays a more significant role than foaming. Teeth are exposed to various confectionery products, beverages, food colours, tobacco products, etc., which attach strongly and create stains. Moreover, the demand for products that enhance whitening of the teeth has increased significantly. The current formulation appears to be effective in removing stains from the teeth and improving whiteness, as the guava powder present in the toothpaste acts as an abrasive. The guava extract not only improves the abrasiveness of the formulated toothpaste but also enhances its antimicrobial properties.
The toothpaste formulations were evaluated for their antibacterial activity against Bacillus subtilis, Proteus vulgaris, Staphylococcus aureus, Streptococcus mutans and Streptococcus oralis, and formulation F3 was found to be very effective in comparison to the other formulations. Nisha et al. (2011) reported that the essential oil of P. guajava is efficient in inhibiting the growth of both Gram-positive and Gram-negative bacteria at higher concentrations. Oluwasina et al. (2019) studied the effect of toothpaste formulated from extracts of Syzygium aromaticum, Jatropha curcas latex and Dennettia tripetala against E. coli, Bacillus sp., S. aureus, S. epidermidis, etc. Bora et al. (2014) reported that the key factor in choosing a dentifrice is its antibacterial efficacy, as opportunistic microorganisms will otherwise proliferate and produce a harsh environment that leads to the destruction of enamel. The formulated toothpaste showed a clear inhibition zone against all the test bacteria, which indicates the antimicrobial activity of the toothpaste.
The cytotoxicity of the toothpaste formulations was tested, and it was confirmed that the formulations have no significant effect on the reduction of Vero cell viability (P < 0.05) up to a concentration of 320 μg/mL. Moreover, the toothpaste formulation F3 was found to be stable for up to 12 months without change in its physico-chemical properties. These findings establish F3 as a more efficient toothpaste formulation than the others.
As an outcome of the present work, a polyherbal toothpaste was prepared with guava leaf powder, which possesses antimicrobial and antioxidant properties, as a major ingredient. The other ingredients used in the toothpaste formulation were successful in removing stains from the teeth, cleansing the oral cavity and acting as carriers of various therapeutic compounds. The uniqueness of the present study lies in formulating the toothpaste with natural herbs, absolutely devoid of chemicals, in contrast to many commercially available toothpastes which are made of chemicals that can act as potential carcinogens. Moreover, the toothpaste formulation F3 was found to exhibit stability, abrasiveness with minimum effect on the enamel, and negligible toxicity, substantiating the suitability of the formulation for human application.
The data related to current study are available from the corresponding author on reasonable request.
Bora A, Goswami A, Kundu GK, Ghosh B (2014) Antimicrobial efficacy of few commercially available herbal and non-herbal toothpastes against clinically isolated human cariogenic pathogens. JNDA 14:35–40
Bouassida M, Fourati N, Krichen F, Zouari R, Ellouz-Chaabouni S, Ghribi D (2017) Potential application of Bacillus subtilis SPB1 lipopeptides in toothpaste formulation. J Adv Res 8:425–433
Christianson DW (2017) Structural and chemical biology of terpenoid cyclases. Chem Rev 117: 11570–11648
Das I, Roy S, Chandni S, Karthik L, Kumar G, Rao KVB (2013) Biosurfactant from marine actinobacteria and its application in cosmetic formulation of toothpaste. Der Pharmacia Lettre 5:1–6
Farag MA, Al-Mahdy DA, Salah El Dine R, Fahmy S, Yassin A, Porzel A, Brandt W (2015) Structure activity relationships of antimicrobial gallic acid derivatives from pomegranate and Acacia fruit extracts against potato bacterial wilt pathogen. Chem Biodivers 12:955–962
Horticultural Statistics at a Glance (2017) http://nhb.gov.in/statistics/Publication/Horticulture%20At%20a%20Glance%202017%20for%20net%20uplod%20(2).pdf. Accessed 3 June 2019
Karp S, Wyrwisz J, Kurek MA, Wierzbicka A (2017) Combined use of cocoa dietary fibre and steviol glycosides in low-calorie muffins production. Int J Food Sci Technol 52:944–953
Kumar A, Agarwal DK, Kumar S, Reddy YM, Chintagunta AD, Saritha KV, Pal Govind, Jeevan Kumar SP (2019) Nutraceuticals derived from seed storage proteins: implications for health wellness. Biocatal Agric Biotechnol 17:710–719
Mangilal T, Ravikumar M (2016) Preparation and evaluation of herbal toothpaste and compared with commercial herbal toothpastes: an in vitro study. IJAHM 6:2266–2273
Naidu NK, Vijaya Ramu V, Sampath Kumar NS (2016) Anti-inflammatory and anti-helminthic activity of ethanolic extract of Azadirachta indica leaves. IJGP 10:S1–S4
Nisha K, Darshana M, Madhu G, Bhupendra MK (2011) GC-MS analysis and anti-microbial activity of Psidium guajava (leaves) grown in Malva region of India. Int J Drug Dev Res 3:237–245
Oluwasina OO, Ezenwosu IV, Ogidi CO, Oyetayo VO (2019) Antimicrobial potential of toothpaste formulated from extracts of Syzygium aromaticum, Dennettia tripetala and Jatropha curcas latex against some oral pathogenic microorganisms. AMB Express 9:20
Price BT, Sedarous M, Hiltz GS (2000) The pH of tooth-whitening products. J Can Dent Assoc 66:421–426
Rajendran A, Priyadarshini M, Sukumar D (2010) Phytochemical studies and pharmacological investigations on the flowers of Acacia arabica. Afr J Pure Appl Chem 4:240–242
Raju NV, Sukumar K, Babul Reddy G, Pankaj PK, Muralitharan G, Annapareddy S, Sai DT, Chintagunta AD (2019) In-vitro studies on antitumour and antimicrobial activities of methanolic kernel extract of Mangifera indica L. cultivar Banganapalli. Biomed Pharmacol J 12:357–362
Seo J, Lee S, Elam ML, Johnson SA, Kang J, Arjmandi BH (2014) Study to find the best extraction solvent for use with guava leaves (Psidium guajava L.) for high antioxidant efficacy. Food Sci Nutr 2:174–180
Sravani D, Aarathi K, Sampath Kumar NS, Krupanidhi S, VijayaRamu D, Venkateswarlu TC (2015) In vitro anti-inflammatory activity of Mangifera indica and Manilkara zapota leaf extract. RJPT 8:1477–1480
Uzzaman S, Akanda KM, Mehjabin S, Parvez GMM (2018) A short review on a nutritional fruit: guava. Opn Acc Tox Res 1:1–8
Vijaya RD, Nair RR, Krupanidhi S, Reddy PN, Sambasiva Rao KRS, Sampath Kumar NS, Parvatam G (2017) Recombinant pharmaceutical protein production in plants: unraveling the therapeutic potential of molecular pharming. Acta Physiol Plant 39:1–9
Zahin M, Ahmad I, Aqil F (2017) Antioxidant and antimutagenic potential of Psidium guajava leaf extracts. Drug Chem Toxicol 40:146–153
The authors acknowledge Vignan's Foundation for Science, Technology and Research (VFSTR) for extending their help by providing the necessary facilities to carry out this work.
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors received a waiver for the article-processing charge from AMB Express.
Department of Biotechnology, Vignan's Foundation for Science, Technology and Research, Guntur, Andhra Pradesh, 522213, India
Shaik Shaheena, Anjani Devi Chintagunta, Vijaya Ramu Dirisala & N. S. Sampath Kumar
Department of Molecular Biosciences, Faculty of Life Sciences, Kyoto Sangyo University, Kyoto, 603-8555, Japan
Advanced Technology Development Centre, Indian Institute of Technology, Kharagpur, West Bengal, 721302, India
Anjani Devi Chintagunta
NSSK have conceived the idea of polyherbal tooth paste preparation and guided the work carried out in the present article and drafted the manuscript. ADC was involved in designing the work and manuscript drafting. SS has performed the experimental work. VRD has edited the manuscript meticulously. All authors read and approved the final manuscript.
Correspondence to N. S. Sampath Kumar.
This article does not contain any studies with human or animal subjects.
All authors gave their consent for publication.
Shaheena, S., Chintagunta, A.D., Dirisala, V.R. et al. Extraction of bioactive compounds from Psidium guajava and their application in dentistry. AMB Expr 9, 208 (2019). https://doi.org/10.1186/s13568-019-0935-x
| CommonCrawl
How can I avoid problems that arise from rolling ability scores?
Rolling ability scores is a time-honored tradition across many editions of D&D. However, it can sometimes cause problems for players and/or the DM. For example, one player character may end up much weaker or much stronger than the rest of the party, which can result in a poor experience for some of the players. In other cases, a player may have their characters repeatedly commit suicide-by-monster so they can try to reroll for higher stats, which can be quite frustrating for the GM and other players.
A lot of character power is dependent on ability scores. How much time a player gets in the spotlight can be heavily impacted by how powerful their character is. Having one character be much more/less powerful than the rest of the party can result in imbalanced time for the player of that character in the spotlight, which results in players having less fun. Giving all PCs fairly equivalent ability scores can help avoid that problem.
What approaches are available to mitigate these problems?
Note: Answers should ideally be able to prevent both the "Joe rolled all 7s, and his character is useless" problem and the "Karen rolled all 18s and her character makes everyone else's character useless" problem. That is, an answer that only avoids very low average/total scores is not as good as one that avoids both very low and very high average/total scores.
I intended this to be a canonical question for all editions. I don't mind edition-specific answers as long as they're not also character-specific.
gm-techniques character-creation dungeons-and-dragons ability-scores
Akixkisu
Oblivious Sage
\$\begingroup\$ Is this a hypothetical problem or one you actually experienced, I have rarely seen it have any effect. \$\endgroup\$ – John Jan 12 at 16:10
Option One: Non-Random Ability Score Generation
Rather than giving up control of how powerful player characters are to the whims of fate, you can instead use systems that attempt to consistently produce ability scores at an established level of quality. The two most popular schemes for doing so are standard arrays and point-buy systems.
Standard arrays present players with between one and three pre-generated sets of ability scores. Players select a set and assign the scores within to their abilities however they want. For example, players may be given two arrays: 16 14 14 13 10 8 and 16 14 13 12 11 10. Carol wants to make a fighter, and chooses the second array. She then picks the 16 for her strength, the 14 for her constitution, the 13 for her dexterity, the 12 for her wisdom, the 11 for her intelligence, and the 10 for her charisma. Bob wants to make a wizard, so he chooses the first array, and assigns the 16 to his Int, the 14s to his Dex and Con, the 13 to his Cha, the 10 to his Wis, and the 8 to his Str.
Point-buy systems instead give the player some sort of base ability scores and then a pool of points they can spend to increase ability scores. In some systems each point you spend increases an ability score by 1, while in others getting a single high score costs more than getting two medium scores. The Pathfinder point-buy scheme is an example of the latter. Point-buy systems give the players more control over their characters' ability scores, which generally makes them happier, but it can also increase the potential/temptation of heavily min-maxing characters (e.g. a fighter with 18s in Str/Dex/Con and 6s in Int/Wis/Cha), which can still result in some frustration for the DM.
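For readers who want to sanity-check a point-buy array quickly, a minimal Python sketch follows (an illustration added here, not part of the original answer). The cost table is the assumed D&D 5e one; swap in your own system's table (Pathfinder's, for instance) if it differs.

COSTS = {8: 0, 9: 1, 10: 2, 11: 3, 12: 4, 13: 5, 14: 7, 15: 9}   # assumed 5e cost table

def point_buy_cost(array):
    # Total cost of a proposed array; assumes every score is in the buyable 8-15 range.
    return sum(COSTS[score] for score in array)

def is_legal(array, budget=27):
    return all(score in COSTS for score in array) and point_buy_cost(array) <= budget

print(is_legal([15, 14, 13, 12, 10, 8]))   # True: costs exactly 27
print(is_legal([15, 15, 15, 15, 8, 8]))    # False: 36 points, over budget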
Option Two: Collective Ability Score Generation
Most problems that arise from rolling ability scores ultimately center around large differences between player characters. We can resolve this issue, and still roll ability scores, by stealing the idea of having a set of arrays from the first option. Instead of each player rolling an array of ability scores for their own character, each player rolls an array of ability scores for the party and then chooses any of the rolled arrays to use for their character. Once generated, these arrays should be saved and used again for any characters created later, rather than rolling an additional array.
For example, Alex, Betty, and Charles are making characters for Dana's new campaign. Alex rolls 6 11 8 13 9 10, Betty rolls 18 7 12 11 15 10, and Charles rolls 16 12 14 13 15 9. Alex and Charles decide to use the array that Betty rolled, so they can put the 18 in their main ability score, while Betty opts to use the array Charles rolled so she can make a more MAD (Multiple Attribute Dependent) character. When Alex's character dies a few levels in, he doesn't roll a new set of ability scores for his new character. Instead, he goes back to the three arrays generated when the campaign started and chooses one of them. If Eric joins them a few levels after that, he also would use one of the same arrays everyone else picked from when they created a character.
This approach means that if a single player rolls poorly, they're not stuck with a weaker character. If a single player rolls well, everyone can make characters who are just as strong. While this approach tends to slightly increase average party strength, having all the characters on an even playing field makes it easier for the DM to adjust encounters appropriately.
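To make the mechanics concrete, here is a small Python sketch of the collective method (my illustration, not the answerer's): it rolls one array per player with the assumed 4d6-drop-lowest method, and every player would then pick whichever array they like.

import random

def roll_score():
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])                     # 4d6, drop the lowest die

def roll_array():
    return sorted((roll_score() for _ in range(6)), reverse=True)

def shared_arrays(num_players=4):
    # One array per player; every player may then pick any array in the list.
    return [roll_array() for _ in range(num_players)]

for array in shared_arrays():
    print(array, "total modifier:", sum((s - 10) // 2 for s in array))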
\$\begingroup\$ I wish this also included some discussion of in-play ways to make "uneven ability scores" =/= "a problem for the table." It seems to assume that assuring parity is the way to avoid problems, but in my experience that's not the only way. \$\endgroup\$ – nitsua60 Oct 10 '18 at 19:49
\$\begingroup\$ @nitsua60 Assuring parity is a broadly applicable way of avoiding the problem. There are certainly other ways to deal with the resulting problems, but there's a big difference between, "We're playing 4e and Joe's barbarian's highest stat is a 13, what can we do," and, "We're playing 3.5e and Karen's druid's lowest stat is a 16, what can we do," when you're discussing those alternate approaches. Assuring parity in rolls ahead of time is a solution that can work for almost everyone. \$\endgroup\$ – Oblivious Sage Oct 10 '18 at 20:04
\$\begingroup\$ @enkryptor "Doctor, it hurts when I do this" "Don't do it then". Why wouldn't it be a valid answer? \$\endgroup\$ – JollyJoker Oct 11 '18 at 12:20
\$\begingroup\$ @enkryptor It's important to differentiate between people who truly want to stick with rolling ability scores (for whom the 2nd option I presented should work nicely) and people who have been rolling ability scores since the 80s and genuinely didn't know there were other ways to do it. \$\endgroup\$ – Oblivious Sage Oct 11 '18 at 13:03
\$\begingroup\$ @Beofett MAD = Multiple Attribute Dependent. For example, a 3.5 monk who needs Str for attack & damage bonuses, Dex for AC, Con for hit points, and Wis for their abilities. \$\endgroup\$ – Oblivious Sage Oct 12 '18 at 17:29
Draft Ability Scores
One method I've used in the past is to create a pool of ability scores from all the players' rolls. Then each player (order decided by dice roll) chooses an Ability Score from the pool. This means that each player will get some high numbers and some low numbers.
The draft order is important as the first roller will get higher on average with just a rotation so I have the last player choose their first and second scores consecutively and then each even numbered score has an inverted pick order.
Bob rolls 8, 12, 12, 13, 14, 14
Mark rolls 12, 14, 15, 16, 16, 17
Anne rolls 9, 11, 12, 15, 15, 16
These all go in a pool and the first pick order is Mark, Anne, Bob. As stated, first players choose in order, then in reverse order, then in forward order again and so on.
1st pick: Mark 17, Anne 16, Bob 16
2nd pick: Bob 16, Anne 15, Mark 15
3rd pick: Mark 15, Anne 14, Bob 14
4th pick: Bob 14, Anne 13, Mark 12
5th pick: Mark 12, Anne 12, Bob 12
6th pick: Bob: 11, Anne 9, Mark 8
The end result is:
Mark: 17, 15, 15, 12, 12, 8
Anne: 16, 15, 14, 13, 12, 9
Bob: 16, 16, 14, 14, 12, 11
The last pick ends up with the most middling result but they are all fairly balanced to each other.
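If you want to try this on other pools, the snake draft is easy to simulate; the Python sketch below is my own illustration and assumes 4d6-drop-lowest rolls and a greedy pick of the best remaining score, whereas players at the table may of course choose differently.

import random

def roll_score():
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])                     # 4d6 drop lowest

def snake_draft(players):
    pool = sorted((roll_score() for _ in players for _ in range(6)), reverse=True)
    picks = {name: [] for name in players}
    for rnd in range(6):
        order = players if rnd % 2 == 0 else list(reversed(players))
        for name in order:
            picks[name].append(pool.pop(0))  # greedy: take the best remaining score
    return picks

print(snake_draft(["Mark", "Anne", "Bob"]))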
David Coffron
\$\begingroup\$ I can attest to this being very effective. Came up with a system just like this over the summer and its only drawback is that if someone dies (or something) you have to figure that out in some other fashion. \$\endgroup\$ – blurry Oct 11 '18 at 14:08
\$\begingroup\$ This goes some way to solving OP's problems, but introduces a new problem that the first person to pick has an advantage. Related question: rpg.stackexchange.com/q/120334/36002 \$\endgroup\$ – Richard Oct 11 '18 at 14:15
\$\begingroup\$ For future reference: this is called a "snake" draft. \$\endgroup\$ – GalacticCowboy Oct 11 '18 at 19:44
Tarot Ability Scores
My group successfully used Tarot ability score generation; despite the name, it doesn't actually require a tarot deck. This involves taking a deck of cards (we just used standard playing cards) and using them to simulate dice rolls.
The first step has you preparing your drawing deck. This will consist of 18 cards, all of face values 1-6. The amount of each valued card in the drawing deck is selected by the DM; for example, if they want a low-powered game, there should be fewer 5s and 6s. The array we used (for a D&D 5e game) was 6,6,6,5,5,5,5,4,4,4,4,3,3,3,3,2,2,2; this has a slight bias toward medium scores in the 12-15 range.
Once the drawing deck is prepared, the player draws six cards and lays them out. Next, they draw another card and stack it on top of an existing one (but not more than one new card per stack) until they have drawn six more cards. They then repeat the last step with the last six cards, leaving them with six stacks of three cards.
Total up each stack, and the value is one of your attributes just as if you had rolled it.
This method can be done with cards face-up or face-down. Face-up cards allow the player to calibrate their abilities toward their desired result, while retaining some randomness and organic feel. Face-down cards leave choices out of the player's hands, but does ensure an average ability spread that's more fair than simple rolling.
We found the face-up version to be the best option we have tried yet.
(Method source and additional suggestions here)
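For anyone curious how a given deck composition behaves before printing cards, here is a short Python sketch (my addition, not from the linked source) that simulates the face-down variant with the 18-card example deck above; the face-up version adds player choice that a simulation like this does not capture.

import random

DECK = [6, 6, 6, 5, 5, 5, 5, 4, 4, 4, 4, 3, 3, 3, 3, 2, 2, 2]   # the example deck above

def draw_stats(deck=DECK):
    cards = list(deck)
    random.shuffle(cards)
    stacks = [[] for _ in range(6)]
    for _ in range(3):                  # three passes, one new card per stack per pass
        for stack in stacks:
            stack.append(cards.pop())
    return sorted((sum(stack) for stack in stacks), reverse=True)

print(draw_stats())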
edited Jan 6 at 16:09
Oblivious Sage
Sarah
\$\begingroup\$ You could do all sorts of variants. First card is face down, the rest are face up, for example. \$\endgroup\$ – Xavon_Wrentaile May 26 '19 at 18:19
Consider placing high/low caps on the total stats rolled.
One method that you can try is setting a maximum and/or minimum value for the stat total rolled by the players, and have them reroll if their total is above the maximum or below the minimum. This allows some of the variation and randomness that dice rolling offers while preventing too much disparity between party members or abuse by players.
For example, if you decide to set the minimum cap at 70 and the maximum cap at 80, one player might roll: 18 6 11 9 14 16, for a total of 74. Another might roll 14 16 13 12 10 14, for a total of 79. If your third player rolled exceptionally well: 18 16 15 11 13 12 (total 85), and your fourth player rolled poorly: 10 13 9 11 14 10 (total 67), both would need to reroll until they landed a total between 70 and 80.
You might need to experiment a little to find the right limits for your group. You could increase the difference between your caps if you wanted to allow greater disparity between players, only set a minimum cap if you're only worried that some characters will be too weak, or only set a maximum cap if you're only concerned with players killing their characters for a chance to reroll a better one.
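A quick Python sketch of this reroll-until-in-range rule (my illustration; the 70-80 window and 4d6-drop-lowest rolling are taken from the example above):

import random

def roll_score():
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])                     # 4d6 drop lowest

def roll_capped_array(low=70, high=80):
    while True:
        array = [roll_score() for _ in range(6)]
        if low <= sum(array) <= high:
            return array

print(roll_capped_array())

With 4d6-drop-lowest the expected total is roughly 73, so a 70-80 window sits near the middle of the distribution; a straight 3d6 game would want a lower window.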
One thing I always suggest is that everyone try to put aside their preconceived notions about the importance of ability scores for a good amount of time and just see what happens. Depending on your play-style and the edition, ability scores may have a much smaller effect in practice.
To give a few examples:
Generally, the older the edition, the less rules you'll find that specifically reference ability scores, and the modifiers tend to be smaller.
How often the DM calls for rolls and what difficulty factors they choose will also change how much ability scores affect the game.
Some players play in a style that tries to minimize the number of rolls they have to make and seek circumstances to maximize the odds when they do roll. They always look for creative and low risk solutions. This tends to minimize the effect of ability scores.
But I would encourage you to simply try it and see how it works with your edition and your group rather than analyze such factors.
Also, it can help to try to think more in terms of the group than the individual. Yes, sometimes one PC is consistently more effective than the others, but that doesn't mean that the others were ineffective or that they didn't contribute in important ways. Would that PC alone have fared as well without their allies?
If you have given it a fair shake, and if it isn't working for your group, then the other answers provide good solutions.
Robert Fisher
\$\begingroup\$ I agree with this, but it would be nice to have more specific examples since this is aiming to be a canonical question... Which editions are more/less dependent on ability scores, and what changes can you make to your play style to be less dependent on ability scores? \$\endgroup\$ – user3067860 Oct 11 '18 at 13:43
Dice pools for stats
(This one is totally unique here, as far as I see.)
Rather than having players assign points to stats, give each player a pool of dice that they can divide up between all six stats. Then they roll each stat's pool and whatever they get is the value they have for that stat. I found that this works best with the parameters being:
24d6 as the pool to divide (this works out to an average of 4d6 per stat)
Minimum two dice assigned to a stat (which should be rolled as 2d6+1), no maximum.
After each pool is rolled, take the highest 3d6 (or the highest 4d6 if that isn't over 18, otherwise reroll if you want to do it that way), and that's that character's stat.
This gives some of the customization of point-buy, allowing players to choose which stats they care about and which they're willing to let go. However, it's still random enough to generate the kinds of interesting characters that rolling is famous for. In my experience, it tends to generate characters with a 16+ in the stats they really care about, but with at least one surprisingly poor stat, and often one that's unexpectedly good. And by all coming out of one pool, it ensures that there will be around the same number of good stats vs bad stats in the party, without mandating it.
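Here is a rough Python sketch of the pool idea (mine, not the answerer's exact procedure): it takes a chosen allocation of the 24d6, rolls each stat's pool, and keeps the best three dice, with the 2d6+1 rule for a minimum allocation; the optional best-four-if-not-over-18 variant is deliberately left out to keep the sketch short.

import random

def roll_pool_stat(num_dice):
    # Two dice is the minimum allocation and is rolled as 2d6+1, as described above.
    if num_dice == 2:
        return random.randint(1, 6) + random.randint(1, 6) + 1
    dice = sorted((random.randint(1, 6) for _ in range(num_dice)), reverse=True)
    return sum(dice[:3])                # keep the best three dice

def roll_pool_character(allocation=(6, 5, 4, 3, 3, 3)):
    assert sum(allocation) == 24 and min(allocation) >= 2, "divide 24d6, at least 2 per stat"
    return [roll_pool_stat(n) for n in allocation]

print(roll_pool_character())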
Use filtered group-wide arrays
(This is very similar to some other answers, but this variant worked very successfully for me.)
Have everyone in the group roll up one (or two, depending on number of players) sets of 6 stats, using whatever dice rolling method you prefer (3d6, 4d6-drop-lowest, 4d6-cap-18, etc.). Calculate the total modifier for each array. Then, throw away the highest set and the lowest set (or the two highest and two lowest), and let the players choose between the remaining. Then, they can assign the stats as they choose.
This has the advantage of pushing the available arrays towards the median distribution, so there isn't one "obviously best" array that everyone takes.
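A minimal Python sketch of the filtering step (my own, assuming one array per roller, 4d6-drop-lowest, and the usual D&D modifier formula): it sorts the arrays by total modifier and discards the extremes.

import random

def roll_score():
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])                     # 4d6 drop lowest (use whatever method you prefer)

def total_modifier(array):
    return sum((score - 10) // 2 for score in array)

def filtered_arrays(num_arrays=5, drop_each_end=1):
    arrays = [[roll_score() for _ in range(6)] for _ in range(num_arrays)]
    arrays.sort(key=total_modifier)
    return arrays[drop_each_end:len(arrays) - drop_each_end]   # discard best and worst

for array in filtered_arrays():
    print(sorted(array, reverse=True), "modifier total:", total_modifier(array))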
Generate a lot of stats and let players pick
(Again, this is somewhat similar to other answers, but I've used a variant successfully)
Another option is to have everyone roll up three sets of stats in order (no swapping stats around). Then, put all the triplets together and let each player choose one triplet to claim, from which they can use one stat array. This should generate enough rolls that no triplet will be a total disaster, and lets players find a set of stats that matches the character they want - all without making it so loose that it's equivalent to rolling until you get the right arrays.
Bobson
Disclaimer: I am only familiar with D&D 3.x; I hope that this answer remains applicable to a broader set of editions, but cannot guarantee it.
TL;DR: Skip to the last non-italic section, named "Tailored Random Ability Score Generation", to see the proposed method.
Random Ability Score Generation is only partially the problem.
You have noted two different problems.
In other cases, a player may have their characters repeatedly commit suicide-by-monster so they can try to reroll for higher stats, which can be quite frustrating for the GM and other players.
The player is not satisfied by their character itself.
For example, one player character may end up much weaker or much stronger than the rest of the party, which can result in a poor experience for some of the players.
The players' characters do not function well together.
The Ability Score Generation plays a role in each, but is not the sole contribution factor. I still remember a terrible experience playing a sneaky character, where inexplicably the monsters would always immediately locate and attack my hidden character at the beginning of combat, like any other character. I felt cheated by the DM, who in turn considered that it was only fair that my character be targeted. In the end, I switched to another character and play style. My character, in isolation, was not the issue; my DM handling of the character was not to my satisfaction.
This answer will focus on Ability Score Generation, just remember it may only be part of the issue.
Why yet another answer?
There are already many suggestions in this thread of random ability score generation which give the player some agency. I particularly like the Collective Generation from Oblivious Sage which allows choosing from multiple arrays and the Deck of Cards from Elial which allows control during the generation of the array.
I believe they are good, yet I believe they lack a critical component: defining, more precisely, the problem they are attempting to solve, so as to obtain a set of goals by which solutions can be measured.
This answer will therefore endeavor to first establish such goals; and then only propose a solution which satisfies them all.
If your players are anything like me, then they invest a lot of time in creating a character. They'll start from an idea, refine it, research material, refine it further, drop some concepts, add some others, etc... Finally, after hours upon hours of labor, they'll end up with a pretty good idea of their character's strengths and weaknesses, background and future development, etc.
After so many hours spent refining and enriching the character, if they obtain an array of ability which either mechanically does not allow the character or thematically does not fit the character, then of course they'll be extremely disappointed.
Party Balance
A balanced Ability Score Generation, whether random or not, will not however solve party balance issues.
At one extreme, 8/8/8/8/18/8 before racial adjustment is a strong array for a Druid, but utterly crippling for a Monk or Paladin. At another extreme, 16/16/16/16/16/16 is excellent for Monk and Paladin, but is actually worse than the previous array for a Druid.
The point I am trying to make here is that Equality of Opportunities does not imply Equality of Outcomes. Some characters are inherently more dependent on their ability scores than others, and not all characters favor the same distribution (spiky vs flat).
Of course, a good Ability Score Generation is NOT sufficient to achieve a good Party Balance, however please do note that a bad Ability Score Generation can definitely sabotage it.
It is thus in the interest of the Dungeon Master to work with their players to identify the needs of their respective characters and ensure that no player feels left out by whichever Ability Score Generation method is selected.
For example, taking the Deck of Cards proposal, the composition of the deck can favor either extremes (more 6s and 2s) or medians (more 5s, 4s and 3s), with the former benefiting SAD1 characters and the latter benefiting MAD1 ones.
1 SAD: Single Ability Dependent; MAD: Multiple Abilities Dependent.
Tailored Spot Light
Where I make a slight digression about Party Balance and Session 0.
From experience, Party Balance is not as much a matter of capabilities, and more a matter of Spot Light Time. What matters is that in each session, each player should have roughly the same amount of time to "shine", where the scare quotes are used because different player/character pairs will shine differently.
I encourage the DM and players, in the session 0, to establish each character's roles in the party. For example, a Sneak could be:
Primary Debuffer/Scout: their main role, where they should outshine anyone else.
Secondary Damage Dealer/Party Face: their secondary role, where they can efficiently support others.
It is fine if multiple players share a Primary role, or if a role is left mostly unaddressed: what matters is coordinating expectations.
This also gives information to the DM as to what each player is coming for in the subsequent sessions. If the DM was planning a social game and one player starts explaining they've got this really cool Pyromaniac idea, it also gives time to address the discrepancy before any party has sunk too much time.
And finally, it should help the DM tailor their approach with each player. A player cannot create a cool concept character without the DM's approval and assistance. Approval to ensure that the character fits the narrative and assistance because the character's background and evolution will have to be woven into the narrative.
Ideal Random Ability Score Generation Requirements
Ideally, the Random Ability Score Generation should:
Feel random: players picked it for the thrill, let them experience it.
Feel fair: no player should feel cheated.
Feel empowering: players should feel they have some control over the experience.
Be exciting: no player should apprehend the step.
This is, obviously, very subjective, as it is all about the feeling of the players and not about any mathematical outcome.
There are, however, guidelines which can be extracted to inform the process:
Feel random: some random process, such as dice or cards, should be included in the method.
Feel fair: sharing should avoid envy and jealousy from creeping in.
Feel empowering: the players should be driving the process, such as rolling the dice, drawing the cards.
Be exciting (tailor): the players should have some degree of agency to tailor the outcome to their particular needs.
Be exciting (fast): the players should not have time to get bored.
Be exciting (simple): even players with rudimentary mathematical skills should feel at home.
Tailored Random Ability Score Generation
How to randomly generate Ability Scores for Fun and Profit.
Generating Ability Scores is laying the first stone of the campaign to come, it should be an exciting shared moment to kick the campaign off in style. Follow this quick guide to start the party!
Each player creates a base array of 6 numbers, where each number is generated by rolling 3d6 and dropping the lowest. The base arrays are placed at the center of the table.
Each player decides on a base array. It is perfectly acceptable for multiple players to opt for the same base array.
Each player receives a pool containing the numbers [6, 5, 4, 3, 2, 1].
Each player assigns the sum of one number from the base array and one number from the pool to each individual Ability Score of their character. No number from the base array or the pool may be used twice.
First of all, the proposed method should satisfy all criteria above: the players are driving the process (empowering), rolling dice (random) and sharing the base arrays and pools (fair). The whole thing is accomplished quickly (exciting (fast)) and does not overtax one's mathematics capabilities (exciting (simple)). Furthermore, the players are given sufficient agency to adjust the resulting array to their needs (exciting (tailor)): either uniforming the scores, or skewing them further; choosing to keep a very low ability, or shoring it up.
Secondly, statistically speaking, the method produces relatively random results, yet with a sufficient degree of customization that different needs are catered for.
Using AnyDice, statistics for [highest 2 of 3d6]:
Average: 8.46.
Median: 9.
Minimum (0.46 %): 2.
Maximum (7.41 %): 12.
From a player's perspective, considering 6 abilities are generated in this fashion using:
ABILITIES: 6 d [highest 2 of 3d6]
loop P over {1..6} {
output (P @ ABILITIES + (7 - P)) named "Ability [P]"
}
See output here.
This works well for a caster (SAD):
36.98 % of achieving 18 in the top score.
73.60 % of achieving at least 17 in the top score.
This also works well for a Barbarian (Str > Con):
6.73 % of achieving 17 in the second top score.
34.24 % of achieving 16 in the second top score.
This also works well for a Monk (Str > Dex > Con > Wis) or Paladin (Str > Con > Wis > Cha):
12.46 % of having 4 10+ in the base array: 13+ in fourth top score.
38.81 % of having 4 9+ in the base array: 12+ in fourth top score.
Note: I am not quite sure how to compute the chances of a 13+ or 14+ in the fourth top score assuming the player assign +3 to the top, +4 to the second, +5 to the third and +6 to the fourth (aiming for uniformity). I'd appreciate help from an AnyDice guru.
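Where AnyDice gets awkward, a quick Monte Carlo in Python is a workable fallback (my suggestion, not part of the original method). The sketch below tabulates the distribution of the fourth-highest final score under the uniforming assignment described in the note; where the leftover +2 and +1 go is my own assumption, so adjust it to taste.

import random
from collections import Counter

def base_number():
    dice = sorted(random.randint(1, 6) for _ in range(3))
    return dice[1] + dice[2]                 # highest 2 of 3d6

def fourth_top_distribution(assignment=(3, 4, 5, 6, 2, 1), trials=100000):
    # assignment[i] is the pool bonus added to the i-th highest base number; the
    # +3/+4/+5/+6 on the top four is the "uniforming" choice from the note, and
    # putting the leftover +2/+1 on the bottom two is my own assumption.
    counts = Counter()
    for _ in range(trials):
        base = sorted((base_number() for _ in range(6)), reverse=True)
        final = sorted((b + bonus for b, bonus in zip(base, assignment)), reverse=True)
        counts[final[3]] += 1                # fourth-highest final score
    return {score: count / trials for score, count in sorted(counts.items())}

print(fourth_top_distribution())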
Thanks to Oblivious Sage for their Collective Generation method; their answer was the first time I encountered the concept, and I believe sharing the results of the dice rolls is an excellent way to start a shared story.
Matthieu M.
Don't roll for effectiveness. Roll for distribution.
Greg Stolze's One Roll Engine game Reign takes this rough approach to random character creation. You pitch a bunch of d10s for character creation and each set of different numbers gives you benefits flavored after the number you rolled that get more powerful as you get more numbers in the set. But each individual die has the same total contribution to character creation - that is, if you roll a whole bunch of singletons and somebody else rolls two matched sets, you'll have a lot of skill bonuses spread out widely, and some extra starting gear or treasure, and they'll have a few good core stats, but not a lot of skills outside of them. In the end you'll have "spent the same number of points" at character creation.
So how do you adapt that philosophy to D&D?
Split 25.
Roll 4d6 drop lowest three times. If any of them came up less than 7, treat it as a 7. Subtract each of those three numbers from 25. There's your stat array.
So if you roll 15, 14, and 13, then you also have a 10, an 11, and a 12. If you roll 18, 17, and 16, you also have a 7, an 8, and a 9. When you get a good roll, you also get a bad roll. When you roll average, you get another average roll. You'll always have three odds and three evens.
Ultimately this isn't going to be perfect. Some characters are better with unbalanced stats than others, usually casters who can use magic to make up for physical deficiencies. But it works at making the rolls as a whole feel less unfair.
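For anyone who wants to see typical Split 25 arrays, here is a short Python sketch of the procedure as described (my addition); the only interpretation made is clamping each of the three rolls to a minimum of 7 before mirroring it off 25.

import random

def roll_score():
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])                     # 4d6 drop lowest

def split_25():
    rolled = [max(7, roll_score()) for _ in range(3)]   # anything under 7 becomes a 7
    mirrored = [25 - score for score in rolled]         # each roll brings its complement
    return sorted(rolled + mirrored, reverse=True)

print(split_25())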
Glazius
Have each player roll (a fixed number of) multiple characters and choose one to play, one (or more) as backups, and turn the rest over to the GM for use as NPCs.
This is the default character generation method for Adventurer, Conqueror, King (aka ACKS), a recent D&D B/X clone. Players roll five characters, play one with a second as a backup, and the remaining three become NPCs.
Note that, because you're choosing from complete sets of stats, this method works with "rolled in order" character generation, unlike some other methods which implicitly require "arrange to suit" assignment of stat values.
Dave Sherohman
As DM, you choose what matters.
There are two sides to avoiding problems related to the utility of a character's ability score: the player/character side and the DM side.
On the character side, the most straight-forward approach is to use a point-buy system for abilities and that has been an option in the rule books since 3rd edition. The method of rolling 6 sets of 3d6 and choosing one, or of rolling 3d6 twice and selecting one probably have the strongest effect, of those offered in the rules, for avoiding a "dismal" set of scores while not likely creating an overpowered set and still allowing for random generation.
More powerful than the character side of avoiding this problem is the DM's side, however. As DM, you have control over how much impact high and low scores have. If toe-to-toe combat of rolling d20 and adding ability bonuses is not going to produce desirable results in your game, don't do it. The same goes for skill/ability checks. Don't call on the whole table for a listen check, for example, instead, while all the muscle-bound and brainy types were sharpening blades and reading spellbooks around the campfire, the wimpy character was collecting firewood at the edge of camp so he's the one who heard the approaching orcs. Let him get an extra shot off at the beginning of the combat.
Create situations that reward the player with the crappy character by playing to what they're good at: solving puzzles, social interaction, or whatever. Then judge the results by what the player says the character does or says instead of an intelligence check or diplomacy roll.
Two final points: 1. This is a cooperative game, not a competitive one, and players and their characters should be working as a team. When they don't, the game world should punish them. 2. If you let your players act as a kind of guardian angel for their characters, you can make playing the game more about their fun, and less about their game pieces.
Tuorg
One of my friends came up with this method, which I find quite entertaining. Only roll 5 times. Add up the modifiers of those 5 rolls, and then your last stat is determined so that everyone has the same total modifier.
So for a moderately powered campaign, the DM might say your total modifiers should add up to +8. If your first five rolls are really great, your sixth stat will be pretty terrible, and conversely if you roll terrible for your first five rolls, your six stat will be awesome. As a DM you might want to disallow really extreme sixth stats... Then again you may not want to as long as your player(s) are okay with it. Anecdotal but I played a Con 1 bard in one campaign and it was great ... had the campaign been a bit more combat/dungeon focused and less urban-intrigue it probably wouldn't have worked so you'll definitely want to consider how much to allow.
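A small Python sketch of this idea follows (my own reading of it, not the friend's exact procedure): roll five scores, then back out the sixth so the six modifiers hit the campaign's target. The +8 target and 4d6-drop-lowest are assumptions, and I take the even score for the required modifier.

import random

def roll_score():
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])                     # 4d6 drop lowest (an assumption)

def modifier(score):
    return (score - 10) // 2

def fixed_total_array(target_total=8):
    rolled = [roll_score() for _ in range(5)]
    needed = target_total - sum(modifier(s) for s in rolled)
    sixth = 10 + 2 * needed                  # the even score with exactly that modifier;
                                             # it can come out extreme, as the answer notes
    return rolled + [sixth]

array = fixed_total_array()
print(array, "modifiers sum to", sum(modifier(s) for s in array))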
aslum
What I have tried with great success was simple:
Shared rolls
Each of 6 players rolled once, standard 4d6 drop lowest. That was their stat array. Everyone was free to assign them as they wish, and racial bonuses made characters more varied than it looks like.
If there was less than 6 players, then depending on the mood and difficulty of campaign I simply gave one 16 / 17 / 18 to the pool. Or one 8. Or both.
This worked best when I was DMing for teenagers. I wasn't much older either. It saved quite a lot of drama about 2 points total difference I had to deal with.
Other thing I tried, the way we play now is:
High and low stat given by DM
In the campaign I'm DMing, and DMed before, each character has one 18 and one 8 before modifiers to begin with. Other stats are rolled in a way1 that can't generate 18, and can't generate lower than 8. That way each character is guaranteed to have a stat that shines and a stat that's a weak spot.
It is especially fun when players are playing weak spots. Imagine mage tower, and rogue with 18 dex and 8 wis...
This worked like a charm when playing with adults that just want their characters to be good at something, don't mind playing flaws, and are not envious about minor differences in points total.
1 that's campaign specific and probably copyrighted
Mołot
Allow limited rerolls
Sometimes players roll something unplayable. I had a guy roll something between 10 and 13 across the board. I told him to reroll the entire thing.
I had another player roll 3 stats at 15 and above and 3 others below 7. While this is playable, it wouldn't be very fun, so I allowed him to reroll his two lowest. He rolled average on the rerolls and decided to play a barbarian with 3 int (-2 from racial penalty).
I had somebody roll a slightly above average array: 18, 14, 14, 14, 7, 5 (or thereabouts). He wanted to play a 3.x Paladin, and while that's almost a good 3.x Paladin array having 2 stats at -2 before racial penalties is not viable. I denied a reroll on this because that was a very playable array, even if it wasn't Paladin viable. If a player's best 3 ability scores are decent and they didn't roll too badly on the other 3, I rarely permit rerolls.
Aside from that, there's no hard and fast rule. I consider the type of character a player wants to make, but if they roll a good or decent array for a SAD class (Single Ability Dependent, ex. Wizard) I almost never allow it. Sometimes I allow the player to roll off a low stat, sometimes an average stat, and sometimes a mix of high, mid or low. I rarely need to offer rerolls at all.
Rerolls are a privilege, not a right, and my players know this. Everything about the process is subject to GM fiat based on whatever I feel like I need to do to let my players have a chance at having fun and being successful in-game. If they want to play a heavily MAD class (Multiple Ability Dependent, ex. Paladin in D&D3.5) and don't get the stats for it, no dice. After all, it's not like you'll have a chance at running a decent Paladin or Monk in point buy.
VHS
\$\begingroup\$ The problem with this method as described is that it seems very subjective. It is hard to implement your suggestion - essentially it is "look at the stats and make a decision", with a couple of example decisions that you made. \$\endgroup\$ – Neil Slater Oct 11 '18 at 14:58
\$\begingroup\$ Neil - It is subjective, but not impossible to implement. The GM and the players need to have a good enough understanding of the game to know what they can do with certain arrays and what arrays are hard to work with in general. You can't make a 1-size fits-all approach with this, so it's better to leave it up to GM discretion. If the players think that it's an unworkable array, they'll let you know. \$\endgroup\$ – VHS Oct 11 '18 at 18:43
At some point you need to ask yourself why you're rolling for stats. Do the players want the chance of having dramatically good or bad characters? Do they just want a chance of getting a good or bad stat somewhere they wouldn't normally put it to mix things up? I've seen people using rolled stats but then layering on so many mins and maxes and rerolls and swaps and calculations that it removes most of the randomness and adds a lot of effort.
If you have the kind of players who are overly concerned with balancing (or optimising) stats a point buy is probably going to be the least likely route to cause problems.
That said here are some suggestions I haven't seen here yet:
Change the dice you use to roll:
Instead of 3D6 try different combinations of dice that either narrow the range to keep the minimum and maximum closer together or create a steeper bell curve to make really high or really low rolls less likely.
Here are some examples: https://anydice.com/program/11d08
Random point buy:
Use a point buy system but find a way to randomly distribute the points so you have randomness in what is high and what is low but everyone is still roughly equal. Not sure if there's a good way to do that though without having to use something like http://stuff.nathan.net.nz/dnd
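One simple way to do that random distribution, sketched below in Python (my own illustration, not an existing tool): start every stat at 8 and repeatedly spend the budget on a randomly chosen stat, using an assumed 5e cost table and capping stats at 15.

import random

COST = {8: 0, 9: 1, 10: 2, 11: 3, 12: 4, 13: 5, 14: 7, 15: 9}   # assumed 5e cost table

def step_cost(score):
    return COST[score + 1] - COST[score]     # cost to raise this stat by one point

def random_point_buy(budget=27):
    scores = [8] * 6
    while True:
        raisable = [i for i, s in enumerate(scores)
                    if s < 15 and step_cost(s) <= budget]
        if not raisable:
            return scores                    # a point or two may be left unspent
        i = random.choice(raisable)
        budget -= step_cost(scores[i])
        scores[i] += 1

print(sorted(random_point_buy(), reverse=True))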
Mysterious Alpaca
The Random Elite system
Goal: to give all players roughly equivalent Attribute scores (total plus value), without removing the fun and randomness of rolling dice.
My solution is the Random Elite.
Each player starts with an Elite Array/Grid. For example 13, 11, 11, 9, 9, 7. These values are assigned to the six attributes like a normal Elite Array system.
Once the initial values are assigned, each player rolls a certain number of d6. Each die result adds 2 to a given attribute: 1 - STR, 2 - INT, 3 - DEX, 4 - WIS, 5 - CON, 6 - CHA.
Optionally, if the roll would result in an attribute being higher than a certain value (17 or 19, usually), the player must assign that bonus to a different attribute of their choice (so long as that would not bring the second attribute over the limit).
So for example, Harry wants to play a Barbarian with mighty thews. His GM Will is playing a semi-low power level campaign using the array above, plus five bonuses. Harry assigns his array, giving him starting values of:
STR - 13
DEX - 11
CON - 11
INT - 7
WIS - 9
CHA - 9
Harry then rolls his 5d6. He gets 1, 2, 6, 2, and 3. Referring to the chart he adds +2 bonus to the indicated attributes, giving him a result of:
STR - 15
DEX - 13
CON - 11
INT - 11
WIS - 9
CHA - 11
Not exactly the dumb, ugly, mighty Barbarian he was hoping for but gives him the option of hidden depths.
The Random Elite system can be adapted to the level of power and randomness you and you players desire. Weaker grid with more dice (all 9s for starting values plus 14d6), stronger grid with fewer dice (15, 15, 13, 11, 11, 9 with only 2 dice), or even a strong grid with more dice for that superhero power level campaign. My general preferred level when I DM is 13, 13, 11, 11, 9, 7 + 7d6 (redistribute over 19).
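Here is a Python sketch of the Random Elite roll (my illustration): the grid, the number of bonus dice, and the redistribute-over-19 cap are parameters, defaulting to the 13/13/11/11/9/7 + 7d6 setup mentioned above; note that where the sketch redistributes randomly, the actual rule lets the player choose.

import random

ORDER = ["STR", "INT", "DEX", "WIS", "CON", "CHA"]   # die faces 1-6 map to these

def random_elite(grid=(13, 13, 11, 11, 9, 7), bonus_dice=7, cap=19):
    # The grid values would normally be assigned by the player; here they are simply
    # dealt out in ORDER as a neutral illustration.
    scores = dict(zip(ORDER, grid))
    for _ in range(bonus_dice):
        stat = ORDER[random.randint(1, 6) - 1]
        if scores[stat] + 2 > cap:
            legal = [s for s in ORDER if scores[s] + 2 <= cap]
            if not legal:
                continue
            stat = random.choice(legal)      # random stand-in for the player's choice
        scores[stat] += 2
    return scores

print(random_elite())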
Xavon_Wrentaile
A Method To Use For Rolling Dice
My nephew (D&D 5e campaign) had us roll up characters using 4d6 drop 1 (the default method in the PHB) arranged to fit abilities as desired. His boundaries were:
"If your total ability bonus score total is +10 or greater, either re-roll or modify a roll down to get to +10"
"If your total ability bonus score is +3 or less, re-roll"
"If you do not have at least one score of 16 (or higher) after rolling, you may roll again if you wish, providing 1 and 2 are complied with, but you are not required to."
\$ \begin{array}{|c|l|} \hline \text{Score} & \text{Modifier} \\ \hline 2\text{–}3 & −4 \\ 4\text{–}5 & −3 \\ 6\text{–}7 & −2 \\ 8\text{–}9 & −1 \\ 10\text{–}11 & +0 \\ 12\text{–}13 & +1 \\ 14\text{–}15 & +2 \\ 16\text{–}17 & +3 \\ 18\text{–}19 & +4 \\ \hline \end{array} \$
I ended up with +12, due to a 16, 16, 15, 12, 13, 14 (I was HOT!) I complied with Rule 1 and dropped the second 16 to a 15, and the 14 to a 13. That left me with a +10. (DM okayed this).
My brother (his dad) was ice cold. He barely got the +4, and had one score of 16. He kept his scores. We played and had fun. No worries.
With this method, you can set the +bonus range to anything you'd like, and perhaps make it narrower than what my nephew allowed. (such as +8 to +4, or whatever).
What does this do?
It keeps the randomness of die rolling in the mix, but it also truncates the results to keep the variation from going too far afield.
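The same house rule is easy to automate if you want to preview arrays; the Python sketch below is my own illustration, computing the total bonus with the standard modifier formula and rerolling outside the chosen window, while the option of manually trimming a too-hot array down to the cap is left to the humans.

import random

def roll_score():
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])                     # 4d6 drop lowest, the PHB default

def total_bonus(array):
    return sum((score - 10) // 2 for score in array)

def roll_bounded(min_bonus=4, max_bonus=10, require_16=True):
    while True:
        array = [roll_score() for _ in range(6)]
        if not (min_bonus <= total_bonus(array) <= max_bonus):
            continue                         # rules 1 and 2, treated as straight rerolls
        if require_16 and max(array) < 16:
            continue                         # rule 3 is optional at the table; forced here
        return array

print(roll_bounded())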
About point buy
If using the 27-point buy (page 8. Basic Rules, D&D 5e) one can arrive at +6 or +3 (before racial adjustments) in a few different ways (and a variety of points in between).
Buy 15, 15, 15, 8, 8, 8 for 3x (+2) and 3x (-1): aggregate +3.
Buy 13, 13, 13, 12, 12, 12, for aggregate +6,
Buy 14, 14, 14, 10, 10, 10, for aggregate +6.
And just because we can ...
Buy 14, 12, 12, 12, 12, 12, for aggregate +7 (@ShadowRanger, thank you!)
KorvinStarmast
\$\begingroup\$ On point buy: Unless I'm forgetting some point buy restriction, +7 is possible, if mostly useless (maybe usable for jack of all trades who want to be decent at everything given bounded accuracy, but suboptimal for class abilities and most stuff in combat): 14, 12, 12, 12, 12, 12. \$\endgroup\$ – ShadowRanger Nov 4 '19 at 21:53
\$\begingroup\$ @ShadowRanger Right you are, I'll add that in as an example. \$\endgroup\$ – KorvinStarmast Nov 4 '19 at 22:13
Have players roll an ability score array for the entire party
I've used this method as both a player and a DM, with great success:
The party collectively rolls stat values for 1 plus the number of stats (e.g. for D&D 5E, you'd roll 7 sets of (4d6 drop lowest)). These can be divided up however you wish - with a party of 6, you might have each player and the DM each roll one stat value.
The party and the DM collectively decide which score to drop from the array.
Each player decides how to allocate the remainder of the stat array for their individual character.
This method allows for the unpredictability of random rolls, while also keeping the characters balanced by having them use the same array. Additionally, the fact that the entire party decides which value to drop allows them to decide as a group how powerful they want the party to be, within the bounds set by the rolled stats.
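A short Python sketch of this variant (my addition): roll seven scores with 4d6-drop-lowest, then drop one; the group decides at the table which score goes, so the helper below just removes whichever score you name.

import random

def roll_score():
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])                     # 4d6 drop lowest

def roll_party_pool(num_scores=7):
    return sorted((roll_score() for _ in range(num_scores)), reverse=True)

def drop_score(pool, unwanted):
    remaining = list(pool)
    remaining.remove(unwanted)               # the group decides which score goes
    return remaining

pool = roll_party_pool()
print("rolled:", pool)
print("after dropping the lowest:", drop_score(pool, pool[-1]))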
OK, well I think the real reason anyone wants to roll ability scores is so that they can start with at least one high stat. In my games I use point buy; however, the first "15" that you buy is an 18 instead. It makes for a slightly more powerful party, but everyone feels more heroic that way. And it also maintains balance.
Sam Lacrumb
Try the character first
I see this more often as a fear than a reality, and it only becomes a hindrance if you let it be one. D&D is not a video game; stats don't matter much unless you want them to.
Focus more on roleplay.
If one player having high stats makes other characters useless, then you may be putting too much emphasis on stats. Low stats should not hinder the spotlight.
First, high stats do not alter access to skills or abilities. No matter how high the warrior's stats, he still can't cast fly or cure wounds, and he probably doesn't know anything about stealth or acting.
Smart players can also use this. In a recent game I rolled very high (16, 18, 15, 13, 17, 6), so I decided to play a subclass everyone says is underpowered, a Monk of the Four Elements; other players had more normal scores but more "optimized" subclasses. We all had a ball, and high stats did not stop my character from getting downed all the time (she was the closest thing to a tank we had). She was a lizardfolk shaman who could not read, barely spoke Common, and was perfectly happy to cook and eat humanoids (just not reptilian ones). One bad score became far more character-defining than all the good ones. She also tried diplomacy all the time, to the utter horror of the other characters. The other players had a ball. Other characters would often "jump on the grenade first" to prevent her trying diplomacy. She once offered to cook a mourning feast out of a dead child for the family and threatened to bite the faces off annoyances all the time.
I also played a dwarven wizard whose highest score was a 14 (12, 7, 12, 14, 6, 7), and again had a ball. I played up the fact that he was a wizard from a race naturally resistant to magic (back in older editions dwarves had resistance to magic): he spent all his time trying to get better at magic, but kept shorting out his own magic. He was absent-minded, talked in technobabble (arcanobabble?) and needed glasses to see. I leaned into it and we gave him the wild magic feature. He rode around on a Bulette that started life as a mule. He killed a dragon with an accidental wall of force, changed the race of two of his party members (and turned another one purple), became immune to force spells, and set 3 towns on fire. He was destruction on wheels and often blissfully unaware of the chaos left in his wake.
A bad character can actually be a great thing to play: the farmer who lost their home and had to turn to adventuring, the exiled noble, the kid. There are great ways to twist low scores into opportunities; think about why your character might have a low score and lean into it. The person who thinks they are a great negotiator when they are not, the barbarian that is dumb as a box of rocks, the "thief" that can't pick a lock, the wizard who is perpetually ill: all defined by low stats and wonderful to play. I often encourage players to add major flaws to a character for this reason. If I don't have a character concept for a game I always roll and let the roll inspire the character.
Advice from older editions.
Older editions in which straight rolling was more common often explicitly offered advice not to make players play characters they did not want to play, but to encourage them to try; 2nd edition offers some great advice for this. A common reason was the unplayable-character worry. As an old DM put it, "I'm not going to make you play a character that is clearly unfit to leave his house": if you don't roll a single score above a 5, just let them reroll. But the takeaway is to try out the character first; characters are defined by flaws more than anything else, and unique characters can be way more fun to play than generic fighter #7.
5th edition is great for this simply because stats are not that important in the game. The impact of stats is quite small and swamped by other features like skill selection, backstory, and class features.
Inspired by @DavidCoffron 's answer:
Ranked - Choice rolling
There are three parts, and you can do the first two in either order depending on how much foreknowledge you want the players to have.
Part A: Rank Stats
Everyone ranks the 6 stats in order of importance to themselves.
Part B: Roll stats
Everyone rolls for the six stats in order. All of the rolls for strength are pooled, and ordered. The same for the other five stats.
Part C: Apportion Stats
Everyone whose first pick was not tied gets that stat and removes it from the pool.
If there are ties for a first pick, check those players' 2nd picks to see if those are tied with anyone; if not, each of them gets that next pick instead.
Once everyone has one stat, cross that choice out of their ranking and renumber the choices so they are 1-5, and then repeat this process. Once you've gone through this 5 times, everyone gets their last stat from the remaining choice in its pool.
ST DX CN IN WS CA
Anne 12 17 11 12 8 13
Bob 9 14 16 12 13 13
Chris 13 9 12 15 12 9
Anne wants to play a fighter and so chooses ST, CN, DX, WS, IN, CA.
Bob wants to play a cleric and so chooses WS, CN, ST, CA, DX, IN.
Chris wants to play a rogue and so chooses DX, IN, CA, CN, WS, ST.
Everyone gets the best stat for their first choice:
Anne 13
Bob 13
Chris 17
Anne & Bob are tied for their second choice so Chris gets his 2nd and we look at A&B's 3rd. Bob gets the #2 St, and Anne gets the #2 DX.
Anne 13 14
Bob 12 13
Chris 17 15
Anne's remaining choices: CN, WS, IN, CA
Bob's remaining choices: CN, CA, DX, IN
Chris's remaining choices: CA, CN, WS, ST
A&B are still tied for CN so we put their (now) 2nd choice first. This makes B&C tie, and we end up with Anne getting WS, Bob DX and Chris getting ST
Anne 13 14 16 12 12 9
Bob 12 9 12 12 13 13
Chris 9 17 11 15 8 13
The above is the final result; however, if the rolls are made before ranking choices, people will likely make different choices as to which stats to prioritize... Also, the example rolls were actually not very great.
I'm not sure this a great idea, but I'm going to post it anyways ... I can always delete it later if I've overlooked something.
\$\begingroup\$ I'm not sure I like this system, but it might represent an interesting compromise between other collective/drafting ability score systems and the really old-school "3d6 in order, pick a class based on what you got". If nobody rolled a good strength then by golly the party is going to be full of weaklings. \$\endgroup\$ – Oblivious Sage Jan 12 at 22:31
\$\begingroup\$ If you have not used this in play, but are instead suggesting it as a solution based on an untested idea, that makes for a much less useful answer (or opinion) than if you have used this with a play group and found it to be successful. \$\endgroup\$ – KorvinStarmast Jan 12 at 22:46
Set minimums on the number of well rolled stats and maximums on the number of poorly rolled stats
No one likes playing a poorly statted PC. It takes away from the fun of the game. One of the primary draws of D&D is that the players will be playing characters who are exceptional (if the characters aren't exceptional, why isn't everyone adventuring?!).
To achieve these aims, while also allowing for some randomness, it can be useful to set minimums and maximums for a set of rolled stats.
The method I use when my players choose to roll:
Rolling happens while I'm present
Players roll 4d6 and take the highest three. They do this 6 times to generate a stat array.
Out of the 6 rolls at least two must be 14+ (before any modifiers are applied)
Out of the 6 rolls no more than two of the rolls should be less than 10 (before any modifiers are applied)
Once a player has rolled a set of stats that satisfy all of these criteria they can assign their array to the abilities in any order (using a roll no more than once).
You could take this one step further and make the stats be rolled in order (Str, Dex, Con, Int, Wis, Cha).
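Here is a compact Python sketch of these criteria (my illustration): keep rolling 4d6-drop-lowest arrays until at least two scores are 14+ and no more than two are below 10.

import random

def roll_score():
    dice = sorted(random.randint(1, 6) for _ in range(4))
    return sum(dice[1:])                     # 4d6, keep the highest three

def roll_until_acceptable():
    while True:
        array = [roll_score() for _ in range(6)]
        high_enough = sum(1 for s in array if s >= 14) >= 2
        few_low = sum(1 for s in array if s < 10) <= 2
        if high_enough and few_low:
            return array

print(roll_until_acceptable())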
illustro
There are quite a lot of methods to not have random results or limit the randomness of results of 'rolling ability scores'. However, applying those methods doesn't particularly mitigate any of the problems of rolling scores - it simply makes you roll them less, or not really roll them at all. (There are quite a lot of clever ways to basically remove the randomness from the process while seeming like it's still there).
This is actually a spotlight time/screen time question.
Let's set aside that classes or feats or whatnot can have a far greater effect on character mechanical power than ability scores in many games (or editions of DnD). Someone with greater numbers on their sheet than other players will succeed more often than other players at in-game tasks, barring GM intervention. This will lead to them being called on to complete more tasks (as players and characters will naturally want to succeed more often), and gain more approbation for success. Humans are wired this way.
Some people will find that unfair or resent their lack of screen time/success in comparison to others, especially if it is determined by a single set of die rolls at game start. There are quite a lot of ways to give people more screen time, or help people to enjoy the unfolding of the story or other aspects of the game without feeling that they either need as much or more screen time/success/whatever as other people.
This can take the form of the GM artificially granting screen time to some characters or making tasks those characters perform less mechanically demanding, stronger characters having reasons to value or call on weaker ones (whether roleplaying or mechanically) and thus give them screen time and status, altering the mechanical stats of characters to better reflect the party dynamic, so on. There's reams of advice on the internet about this in many places.
However the first step to incorporating any of this into a situation where rolled ability scores cause a mechanical power discrepancy amongst characters is to recognize the nature of the problem. This is about screen time, and it is not fundamentally different to a situation where you have a player who gets a lot of screen time due to social skills and personality and another that has weaker social skills and is less outgoing, leading to feeling like they are not contributing to the game.
You can solve both by largely the same methods.
I just have them keep rerolling bad scores until everyone's about even. It ain't rocket science, just keep it simple.
I tell Josh to reroll the 8 or Bruce to roll a d6 and put it wherever he wants up to 18. If Tom rolls especially good then he just has the best stats but we can get everyone else in the ballpark with a few rolls if it's important.
SevenSidedDie
Robert J Grippe
\$\begingroup\$ Comments are not for extended discussion; this conversation has been moved to chat. \$\endgroup\$ – SevenSidedDie Oct 17 '18 at 16:07
\$\begingroup\$ Robert, it would be worth detailing how this approach, that you advocate, works at your table. Specifically, how your players feel about it, what sort of impacts you get in your game, and also how you define or assess 'about even'. \$\endgroup\$ – KorvinStarmast Oct 17 '18 at 16:12
| CommonCrawl
High Energy Physics - Experiment
arXiv:1902.00558 (hep-ex)
[Submitted on 1 Feb 2019 (v1), last revised 9 Jul 2020 (this version, v4)]
Title:Measurement of Neutrino-Induced Neutral-Current Coherent $π^0$ Production in the NOvA Near Detector
Authors:M. A. Acero, P. Adamson, L. Aliaga, T. Alion, V. Allakhverdian, N. Anfimov, A. Antoshkin, E. Arrieta-Diaz, A. Aurisano, A. Back, C. Backhouse, M. Baird, N. Balashov, P. Baldi, B. A. Bambah, S. Basher, K. Bays, B. Behera, S. Bending, R. Bernstein, V. Bhatnagar, B. Bhuyan, J. Bian, J. Blair, A.C. Booth, A. Bolshakova, P. Bour, C. Bromberg, N. Buchanan, A. Butkevich, M. Campbell, T. J. Carroll, E. Catano-Mur, S. Childress, B. C. Choudhary, B. Chowdhury, T. E. Coan, M. Colo, L. Corwin, L. Cremonesi, D. Cronin-Hennessy, G. S. Davies, P. F. Derwent, P. Ding, Z. Djurcic, D. Doyle, E. C. Dukes, P. Dung, H. Duyang, S. Edayath, R. Ehrlich, G. J. Feldman, W. Flanagan, M. J. Frank, H. R. Gallagher, R. Gandrajula, F. Gao, S. Germani, A. Giri, R. A. Gomes, M. C. Goodman, V. Grichine, M. Groh, R. Group, B. Guo, A. Habig, F. Hakl, J. Hartnell, R. Hatcher, A. Hatzikoutelis, K. Heller, A. Himmel, A. Holin, B. Howard, J. Huang, J. Hylen, F. Jediny, C. Johnson, M. Judah, I. Kakorin, D. Kalra, D.M. Kaplan, R. Keloth, O. Klimov, L.W. Koerner, L. Kolupaeva, S. Kotelnikov, A. Kreymer, Ch. Kulenberg, A. Kumar, C. D. Kuruppu, V. Kus, T. Lackey, K. Lang, S. Lin, M. Lokajicek, J. Lozier, S. Luchuk, K. Maan, S. Magill
, W. A. Mann, M. L. Marshak, V. Matveev, D. P. Méndez, M. D. Messier, H. Meyer, T. Miao, W. H. Miller, S. R. Mishra, A. Mislivec, R. Mohanta, A. Moren, L. Mualem, M. Muether, K. Mulder, S. Mufson, R. Murphy, J. Musser, D. Naples, N. Nayak, J. K. Nelson, R. Nichol, E. Niner, A. Norman, T. Nosek, Y. Oksuzian, A. Olshevskiy, T. Olson, J. Paley, R. B. Patterson, G. Pawloski, D. Pershey, O. Petrova, R. Petti, R. K. Plunkett, B. Potukuchi, C. Principato, F. Psihas, V. Raj, A. Radovic, R. A. Rameika, B. Rebel, P. Rojas, V. Ryabov, K. Sachdev, O. Samoylov, M. C. Sanchez, I. S. Seong, P. Shanahan, A. Sheshukov, P. Singh, V. Singh, E. Smith, J. Smolik, P. Snopok, N. Solomey, E. Song, A. Sousa, K. Soustruznik, M. Strait, L. Suter, R. L. Talaga, P. Tas, R. B. Thayyullathil, J. Thomas, E. Tiras, D. Torbunov, J. Tripathi, A. Tsaris, Y. Torun, J. Urheim, P. Vahle, J. Vasel, L. Vinton, P. Vokac, T. Vrba, B. Wang, T. K. Warburton, M. Wetstein, M. While, D. Whittington, S. G. Wojcicki, J. Wolcott, N. Yadav, A. Yallappa Dombara, S. Yang, K. Yonehara, S. Yu, J. Zalesak, B. Zamorano, R. Zwaska (NOvA Collaboration)
et al. (91 additional authors not shown)
Abstract: The cross section of neutrino-induced neutral-current coherent $\pi^0$ production on a carbon-dominated target is measured in the NOvA near detector. This measurement uses a narrow-band neutrino beam with an average neutrino energy of 2.7\,GeV, which is of interest to ongoing and future long-baseline neutrino oscillation experiments. The measured, flux-averaged cross section is $\sigma = 13.8\pm0.9 (\text{stat})\pm2.3 (\text{syst}) \times 10^{-40}\,\text{cm}^2/\text{nucleus}$, consistent with model prediction. This result is the most precise measurement of neutral-current coherent $\pi^0$ production in the few-GeV neutrino energy region.
Subjects: High Energy Physics - Experiment (hep-ex)
Report number: FERMILAB-PUB-19-047-ND
Cite as: arXiv:1902.00558 [hep-ex]
(or arXiv:1902.00558v4 [hep-ex] for this version)
From: Hongyue Duyang [view email]
[v1] Fri, 1 Feb 2019 20:49:29 UTC (118 KB)
[v2] Thu, 21 Mar 2019 18:30:43 UTC (118 KB)
[v3] Tue, 28 May 2019 06:30:05 UTC (158 KB)
[v4] Thu, 9 Jul 2020 20:13:14 UTC (176 KB)
hep-ex | CommonCrawl |
How should we treat questions that implicitly ask for making a question clear?
Motivated by:
Curve resemblance to Closest Shape - Question on Hold - Advice for clarification needed
The question has two parts - "is there a way to define closeness of geometric objects?" - which is the kind of question that mathematicians ask and answer all the time - and "using that definition, how do we find the closest shape?".
There are many cases of this, and it's a contentious issue; these questions often get closed, and I don't think it is crazy to do so, because what they are literally asking is unclear. However, implicitly they are asking "how do I make this question precise, and what is the answer to that precise question?".
How should we deal with such questions?
$\begingroup$ Tag it with the (soft-question) tag? $\endgroup$
– Theo Bendit
$\begingroup$ The number of my postings to m.s.e. is so large that in some circumstances it seems appropriate that I would take some pride in it, although that's something I don't normally think about. But the way m.s.e. has treated this question makes me want to say m.s.e. is not good enough for such a question. Partly that may be a manifestation of the bigoted contempt for applications outside of mathematics that has long afflicted the mathematical community. I think that particular preposterous lunacy has begun to clear up and may change substantially in the coming decades. Maybe$\,\ldots\qquad$ $\endgroup$
– Michael Hardy
$\begingroup$ $\ldots\,$history will record that applied mathematics is a field of endeavor that began in about the year 2100. Perhaps the part of m.s.e. that is truly not good enough for this is the process by which closure and deletion is done. It is run in such a way that participaiton is intolerable when one is sober. $\endgroup$
$\begingroup$ "[part of m.s.e.] is run in such a way that participaiton [sic] is intolerable when one is sober." Well, that would explain a lot of what I see on m.s.e. $\endgroup$
– Gerry Myerson
$\begingroup$ @GerryMyerson : $\quad\uparrow\quad$ Specifically, the review queue for closing questions. $\endgroup$
$\begingroup$ But it's well-known that one should not drink and derive. $\endgroup$
Split them into two questions.
The first question asks "How do I make this problem/question precise?". (It's not clear whether that's going to work well in the Stack Exchange format, since if they're not clear what they want, it's hard to know how we would know, either -- but if they can provide a clear description of the phenomenon/problem, and are asking us for how to formulate it in a mathematical way, that could work.) Hopefully, answers to that question will give them some options on how they could formulate it mathematically, or at least some ideas on how to do that.
With that in hand, they can then ask a second question which is "What is the answer to this precise mathematical question?", where they formulate a precise mathematical question based on what they learned from the answers to their first question.
Splitting it into two keeps it narrowly focused, makes it work better in the Stack Exchange format (where having multiple different questions in a single post often doesn't work out so well), and makes it more likely that at least one of these questions/answers will be useful to someone else in the future.
D.W.
I've asked the moderators to migrate this question to stats (dot) stackexchange (dot) com. Apparently the usual mechanisms for migration cannot be used when a question has been deleted.
Michael Hardy
$\begingroup$ I think when Michael writes "this question", he is referring to this question: math.stackexchange.com/questions/2708226/… $\endgroup$
How should we treat subjective titles?
Should we ask for Question Quotas like those that have been available for the big three?
Proposal: ban verbatim homework questions which have no accompanying text
Do we have or should we have a guide for making good questions?
How, in my opinion, we should treat new askers
How Should I ask Questions that I just need Hints on?
How to deal with just-google-it questions?
Should we discourage full solutions to questions that explicitly ask for a hint? | CommonCrawl |
Ubertone's profilers make accurate velocity and echo-amplitude profile measurements at high spatial and temporal resolution.
Our technology is inspired by medical imaging and oceanographic sonar; it is also known as UVP (Ultrasonic Velocity Profiler) or UDV (Ultrasonic Doppler Velocimeter).
Measurement with this technique requires:
hardware for the emission and reception of ultrasonic pulses and for the signal processing, and
a probe, called a transducer, which transmits the acoustic pulses into the medium and returns the echoes to the hardware as an electrical signal.
Backscattered echo
Doppler SNR
Geometrical conventions
Two different reference frames may be used: the transducer reference frame and the flow reference frame.
r : distance to the transducer along the beam axis (included in the (x,y) plane),
x : main flow direction,
y : orthogonal to the main flow direction (axis in the flow section).
By convention, the velocity is positive whenever the liquid is flowing towards the transducer.
Measurement cell
The transducer emits a burst of a specified duration into the liquid. This ultrasonic pulse consists of several periods at the emission frequency (carrier frequency) within the bandwidth of the selected transducer.
The acoustic beam starts with a cylindrical shape with the same diameter as the transducer.
Then, after roughly 1.6 times the near field distance (\(r_{nf}\)), the beam turns into a conical shape (of half-angle spread \(\alpha\)) :
$$r_{nf} = {{D^2⋅f_0} \over {4⋅c}}$$
$$\alpha = {{1.22⋅c} \over {D⋅f_0}}$$
The measurement cell is defined as a slice of the beam with a thickness set by the emission pulse duration, which thus defines the spatial resolution. The diameter of the cells changes with depth and can be obtained from the diameter of the beam. The thickness of the cells is constant and given by:
$$r_{em} = {{c⋅n_{em}} \over {2⋅f_0}}$$
and: $$y_{em} = r_{em}⋅\sin{\beta}$$
\(r_{em}\): cell thickness [m] along the beam axis,
\(c\): sound speed in the medium [m/s] (around 1480m/s in water),
\(n_{em}\): number of periods at the carrier frequency inside the emitted burst,
\(f_0\): emission frequency [Hz].
During the setup, the user defines the value of the projection of the thickness along the axis in the flow section, \(y_{em}\).
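To make these geometric relations concrete, here is a minimal Python sketch of the near-field distance, beam divergence and cell-thickness formulas above. The transducer diameter, carrier frequency, number of periods and angle are hypothetical example values, not Ubertone specifications.

```python
import math

c = 1480.0               # sound speed in water [m/s]
f0 = 1.0e6               # emission (carrier) frequency [Hz] -- example value
D = 0.010                # active transducer diameter [m] -- example value
n_em = 4                 # periods per emitted burst -- example value
beta = math.radians(70)  # angle between beam axis and flow axis -- example value

# Near-field distance, start of the conical zone, and half-angle of divergence
r_nf = D**2 * f0 / (4 * c)            # [m]
r_cone = 1.6 * r_nf                   # beam becomes conical roughly here [m]
alpha = 1.22 * c / (D * f0)           # half-angle spread [rad]

# Cell thickness along the beam axis and its projection on the flow-section axis
r_em = c * n_em / (2 * f0)            # [m]
y_em = r_em * math.sin(beta)          # [m]

print(f"near-field distance : {r_nf*1e3:.1f} mm (conical beyond ~{r_cone*1e3:.1f} mm)")
print(f"beam half-angle     : {math.degrees(alpha):.1f} deg")
print(f"cell thickness      : {r_em*1e3:.2f} mm (projection {y_em*1e3:.2f} mm)")
```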
The intensity of the acoustic beam decreases with distance and thus sets a physical limit on the exploration depth. Indeed, while propagating, the ultrasonic waves transfer part of their energy to the medium through scattering and absorption.
Profile measurement
The measurement of a profile by means of an ultrasonic wave requires the presence of acoustic scatterers in suspension in the fluid. The acoustic scatterers may be particles, micro-bubbles, vesicles or any local variation of the acoustic impedance.
After emitting the ultrasonic pulse, the system switches to receive mode. The acoustic wave propagates along the beam axis, and each scatterer that crosses the beam scatters an echo back toward the transducer. The echo of a given particle reaches the transducer after a time of flight (back and forth) equal to \(2r \over c\), with:
\(r\) : distance between the transducer and the scatterer [m],
\(c\) : sound speed in the medium [m/s].
The scatterers dispersed in the flow produce a continuous backscattered signal from the medium. This acoustic signal is composed, at a given time, of the sum of the echoes of the scatterers located, at the corresponding distance, in the measurement cell.
The regular sampling and analysis of the backscattered echo yields a so-called profile. A profile can be considered as a vector of a physical quantity distributed along an axis.
The attached illustration shows the measurement window, the position of the first measurement cell and the inter-cell distance. The position of each cell is defined by its centre.
The inter-cell distance is the distance between the centre of two successive cells.
As for the cell thickness, the user defines, in the software interface, the value of the projection of the inter-cell distance over the flow section axis (vertical).
One instantaneous profile is obtained by sending a series of \(n_{ech}\) pulses and analysing the corresponding backscattered signal.
The pulses are sent at a frequency called \(PRF\) (Pulse Repetition Frequency). The \(PRF\) is chosen so that all the echoes from the medium have been received before sending the next pulse.
The time delay needed to obtain one instantaneous profile is equal to \(n_{ech} \over PRF\). Choosing a high number of samples with a low \(PRF\) therefore slows down the measurement.
Instantaneous profiles are sampled continuously inside one block. A block is composed of \(n_{profile}\) instantaneous profiles. Thus, the time delay needed to obtain one averaged profile is equal to \(n_{ech}n_{profile} \over PRF\).
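A short sketch of these timing relations; the PRF, number of pulses and number of profiles below are example configuration values, not fixed device settings.

```python
PRF = 2000.0      # pulse repetition frequency [Hz] -- example value
n_ech = 64        # pulses per instantaneous profile -- example value
n_profile = 50    # instantaneous profiles per block -- example value

t_instantaneous = n_ech / PRF        # time for one instantaneous profile [s]
t_block = n_ech * n_profile / PRF    # time for one averaged profile (block) [s]

print(f"one instantaneous profile every {t_instantaneous*1e3:.1f} ms")
print(f"one averaged profile every {t_block:.2f} s")
```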
The acoustic scatterers, moving with the liquid flow with a velocity \(\overrightarrow{V}\), will induce a frequency shift in the backscattered signal: the Doppler shift.
In each cell, the information obtained from the \(n_{ech}\) pulses is used by the instrument to estimate the projection of the velocity vector over the beam axis. The measured velocity can be expressed as:
$$V_p = {{c⋅f_D} \over {2⋅f_0}}$$
\(V_p\): velocity projection [m/s],
\(c\): sound speed in the medium [m/s],
\(f_D\): Doppler frequency [Hz],
When the angle \(\beta\) (between the measurement axis and the flow axis) is known, the flowing velocity \(V\) (see Illustration) along the horizontal axis may be calculated from:
$$V = {V_p \over \cos{\beta}}$$
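The Doppler relation and the angle correction above translate directly into code; the frequency, angle and medium values below are illustrative only.

```python
import math

c = 1480.0               # sound speed in water [m/s]
f0 = 1.0e6               # emission frequency [Hz] -- example value
f_D = 500.0              # measured Doppler frequency [Hz] -- example value
beta = math.radians(70)  # angle between beam axis and flow axis -- example value

V_p = c * f_D / (2 * f0)      # velocity projected on the beam axis [m/s]
V = V_p / math.cos(beta)      # velocity along the flow (horizontal) axis [m/s]

print(f"projected velocity V_p = {V_p:.3f} m/s, flow velocity V = {V:.3f} m/s")
```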
See this FAQ to understand why it is interesting to use a second transducer for monostatic measurements.
Range-Velocity Limit
The velocity measurement by the coherent pulsed Doppler technique gives an excellent spatial resolution combined with a very good accuracy, but within a given velocity range limit.
The repetition of the ultrasonic pulses gives a precise measurement of the Doppler phase in a small cell. Nevertheless, this approach limits the velocity range for a given exploration depth. The pulse repetition period defines, on the one hand, the exploration depth (all the echoes must return from the medium before the next pulse is sent) and, on the other hand, the interval for the determination of the Doppler shift (between -π and +π, proportional to the velocity). When the interval between two successive pulses is too long, the measurement suffers from a phase jump, inducing an ambiguity. From a frequency point of view, this is equivalent to exceeding the limit given by the Nyquist-Shannon theorem.
This phase jump results in a velocity jump that can be observed on a profile with a velocity gradient. The figure below shows the measurement on a profile with velocity increasing with depth. The red lines show the velocity limits of the range; each velocity outside this range suffers from a jump that brings it back inside the interval. Thus the instantaneous velocities (in green) are wrapped to negative values when they exceed the upper limit of the range.
The velocity range for the projected velocities (as measured by the device) is given by:
$$R_{vp} = {{c⋅PRF} \over {2⋅f_0}}$$
\(PRF\): the Pulse Repetition Frequency [Hz],
\(c\): the sound speed (about 1480m/s in water) [m/s],
\(f_0\): the emission frequency [Hz].
Finally, it is the pulse repetition frequency (PRF) that determines, on the one hand, the maximum exploration depth and, on the other hand, the maximum velocity. This limit is expressed by:
$$R_v⋅R_y = {{c^2⋅\tan{\beta}} \over {4⋅f_0}}$$
\(R_v\): the velocity range along the flow axis (equal to 2.6 m/s in the example above) [m/s].
\(R_y\): the exploration depth (orthogonal to the main flow direction, the pipe diameter for example) [m].
\(\beta\): the angle between the beam axis and the velocity vector.
The product of the velocity range and the exploration depth is thus fixed for a given setup.
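The following sketch illustrates this range-velocity trade-off: given a required exploration depth and a beam geometry, it derives the largest usable PRF and the resulting velocity range. All numeric inputs are example values, and no aliasing safety margin is included.

```python
import math

c = 1480.0               # sound speed in water [m/s]
f0 = 1.0e6               # emission frequency [Hz] -- example value
beta = math.radians(70)  # angle between beam axis and flow axis -- example value
R_y = 0.30               # required exploration depth on the flow-section axis [m] -- example value

# All echoes from depth R_y (beam range R_y / sin(beta)) must return before the next pulse
PRF_max = c * math.sin(beta) / (2 * R_y)   # [Hz]

R_vp = c * PRF_max / (2 * f0)              # projected velocity range [m/s]
R_v = R_vp / math.cos(beta)                # velocity range along the flow axis [m/s]

# The product is fixed by the setup, independently of the PRF actually chosen
product = c**2 * math.tan(beta) / (4 * f0)

print(f"PRF_max = {PRF_max:.0f} Hz, R_v = {R_v:.2f} m/s")
print(f"R_v * R_y = {R_v * R_y:.4f} m^2/s  (theory: {product:.4f} m^2/s)")
```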
The backscattered amplitude measurement is done simultaneously with the velocity measurement.
After each emitted pulse, the transducer receives the echoes from the particles crossing the acoustic beam. The received acoustic wave thus corresponds to the sum of the echoes of the particles randomly distributed along the beam; this signal is therefore stochastic.
For each cell, the device computes the voltage amplitude (RMS value, expressed in Volts) received by the transducer.
The usual shape of the amplitude profile is decreasing, due to the diffusion of the ultrasound by the particles in all directions and to the wave attenuation by absorption. In order to compensate for this decrease, the electronics controls the gain as a function of depth. This gain is expressed as:
$$G_{dB}(r) = a_0 + a_1⋅r$$
\(G_{dB}\): the amplification gain in dB
\(a_0\): the equivalent gain for a distance equal to zero (transducers surface),
\(a_1\): the gain slope in dB/m,
\(r\): the depth along the transducers axis.
The gain can be set in the range 20 to 68 dB for the UB-Lab and for the UB-Flow.
If the amplification gain is too high, the signal risks saturating. If the gain is too low, the signal-to-noise ratio may be poor.
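A minimal sketch of the depth-dependent gain law, with the gain clipped to the 20-68 dB range mentioned above; the a0 and a1 values are example settings, not recommended ones.

```python
def amplification_gain_db(r, a0=25.0, a1=40.0, g_min=20.0, g_max=68.0):
    """Depth-dependent amplification gain G_dB(r) = a0 + a1*r, clipped to the device range.

    r  : depth along the transducer axis [m]
    a0 : equivalent gain at the transducer surface [dB] -- example setting
    a1 : gain slope [dB/m]                              -- example setting
    """
    return min(g_max, max(g_min, a0 + a1 * r))

for depth in (0.0, 0.25, 0.5, 1.0):
    print(f"r = {depth:.2f} m -> G = {amplification_gain_db(depth):.1f} dB")
```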
The system presents a blind zone, just in front of the transducer, going from 2 to 20 mm depending on the transducer, the frequency and the concentration of particles. In this zone, the signal-to-noise ratio is low due to the ultrasonic emission: the transducer is blinded by the ultrasonic burst that has just been emitted.
Our devices are equipped with an automatic gain control algorithm that optimizes the gain over the full observed window with a logarithmic law. It is thus recommended to limit the number of cells to the area of interest so that the gain is well adapted to this area (and not influenced by the echoes after an interface for example).
Level measuring principle
The user can evaluate the water level (or the position of any other interface) by observing a strong variation of the backscattered-amplitude gradient.
A script is available for this detection on our post-processing interface.
The usual decreasing shape of the amplitude profile is due to several factors:
The wave is attenuated by absorption due to viscous friction and relaxation, which depends on the medium and on the sound frequency, and is described by the law below (a short numeric sketch follows this list):
$$I_{att}(z) = I_0⋅e^{- 2⋅α⋅z}$$
In water:
\(α = K⋅f^2\)
\(K = (2.4⋅10^{-3}⋅(T - 38)^2 + 1.5)⋅10^{-14}\)
The ultrasound is scattered by the particles in any direction, also called diffusion. Crossing of medium interfaces also participates in diffusion through reflection and refraction.
The beam is diverging. See the tutorial "What does the acoustic beam look like? How does it impact the measuring cell's geometry?"
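Here is the numeric sketch of the absorption term announced in the first item of the list above. The temperature, frequency and depth are example values, and since the source does not state the units of α they are assumed here to be Np/m with f in Hz.

```python
import math

def water_attenuation_coeff(f, T):
    """Absorption coefficient alpha = K * f^2 in water, with the temperature-dependent K
    given above (units assumed: Np/m, with f in Hz and T in deg C)."""
    K = (2.4e-3 * (T - 38.0)**2 + 1.5) * 1e-14
    return K * f**2

f = 1.0e6    # sound frequency [Hz] -- example value
T = 20.0     # water temperature [deg C] -- example value
z = 1.0      # propagation distance [m] -- example value

alpha = water_attenuation_coeff(f, T)
ratio = math.exp(-2.0 * alpha * z)   # I_att(z) / I_0, using the attenuation law given above

print(f"alpha = {alpha:.4f} (assumed Np/m)")
print(f"intensity ratio after {z:.1f} m: {ratio:.3f} ({10*math.log10(ratio):.2f} dB)")
```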
The Doppler Signal-to-Noise Ratio (SNR) is given in dB. It gives information about the quality of the velocity estimation.
It is the ratio between the energy of the Doppler signal, which is coherent with the last emission, and the energy of the noise, i.e. what remains in the signal and is not coherent with this emission.
$$SNR_{Doppler} = 10⋅log({E_s \over E_n})$$
\(E_s\): the signal energy,
\(E_n\): the noise energy
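Expressed in code, with arbitrary example energies (relative units):

```python
import math

def doppler_snr_db(E_s, E_n):
    """Doppler signal-to-noise ratio in dB from signal and noise energies."""
    return 10.0 * math.log10(E_s / E_n)

# Arbitrary example energies -- not measured values
print(f"SNR = {doppler_snr_db(E_s=4.0, E_n=0.5):.1f} dB")
```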
This is why we use pulse coding: to make the echoes from previous emissions non-coherent and identifiable (see our FAQ on pulse coding). | CommonCrawl |
Displaying 1 - 17 of 17.
Project acronym 3DBrainStrom
Project Brain metastases: Deciphering tumor-stroma interactions in three dimensions for the rational design of nanomedicines
Researcher (PI) Ronit Satchi Fainaro
Host Institution (HI) TEL AVIV UNIVERSITY
Call Details Advanced Grant (AdG), LS7, ERC-2018-ADG
Summary Brain metastases represent a major therapeutic challenge. Despite significant breakthroughs in targeted therapies, survival rates of patients with brain metastases remain poor. Nowadays, discovery, development and evaluation of new therapies are performed on human cancer cells grown in 2D on rigid plastic plates followed by in vivo testing in immunodeficient mice. These experimental settings are lacking and constitute a fundamental hurdle for the translation of preclinical discoveries into clinical practice. We propose to establish 3D-printed models of brain metastases (Aim 1), which include brain extracellular matrix, stroma and serum containing immune cells flowing in functional tumor vessels. Our unique models better capture the clinical physio-mechanical tissue properties, signaling pathways, hemodynamics and drug responsiveness. Using our 3D-printed models, we aim to develop two new fronts for identifying novel clinically-relevant molecular drivers (Aim 2) followed by the development of precision nanomedicines (Aim 3). We will exploit our vast experience in anticancer nanomedicines to design three therapeutic approaches that target various cellular compartments involved in brain metastases: 1) Prevention of brain metastatic colonization using targeted nano-vaccines, which elicit antitumor immune response; 2) Intervention of tumor-brain stroma cells crosstalk when brain micrometastases establish; 3) Regression of macrometastatic disease by selectively targeting tumor cells. These approaches will materialize using our libraries of polymeric nanocarriers that selectively accumulate in tumors. This project will result in a paradigm shift by generating new preclinical cancer models that will bridge the translational gap in cancer therapeutics. The insights and tumor-stroma-targeted nanomedicines developed here will pave the way for prediction of patient outcome, revolutionizing our perception of tumor modelling and consequently the way we prevent and treat cancer.
Max ERC Funding
Start date: 2019-04-01, End date: 2024-03-31
Project acronym ANYONIC
Project Statistics of Exotic Fractional Hall States
Researcher (PI) Mordehai HEIBLUM
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE3, ERC-2018-ADG
Summary Since their discovery, Quantum Hall Effects have unfolded intriguing avenues of research, exhibiting a multitude of unexpected exotic states: accurate quantized conductance states; particle-like and hole-conjugate fractional states; counter-propagating charge and neutral edge modes; and fractionally charged quasiparticles - abelian and (predicted) non-abelian. Since the sought-after anyonic statistics of fractional states is yet to be verified, I propose to launch a thorough search for it employing new means. I believe that our studies will serve the expanding field of the emerging family of topological materials. Our on-going attempts to observe quasiparticles (qp's) interference, in order to uncover their exchange statistics (under ERC), taught us that spontaneous, non-topological, 'neutral edge modes' are the main culprit responsible for qp's dephasing. In an effort to quench the neutral modes, we plan to develop a new class of micro-size interferometers, based on synthetically engineered fractional modes. Flowing away from the fixed physical edge, their local environment can be controlled, making it less hospitable for the neutral modes. Having at hand our synthetized helical-type fractional modes, it is highly tempting to employ them to form localize para-fermions, which will extend the family of exotic states. This can be done by proximitizing them to a superconductor, or gapping them via inter-mode coupling. The less familiar thermal conductance measurements, which we recently developed (under ERC), will be applied throughout our work to identify 'topological orders' of exotic states; namely, distinguishing between abelian and non-abelian fractional states. The proposal is based on an intensive and continuous MBE effort, aimed at developing extremely high purity, GaAs based, structures. Among them, structures that support our new synthetic modes that are amenable to manipulation, and others that host rare exotic states, such as v=5/2, 12/5, 19/8, and 35/16.
Project acronym CUSTOMER
Project Customizable Embedded Real-Time Systems: Challenges and Key Techniques
Researcher (PI) Yi WANG
Host Institution (HI) UPPSALA UNIVERSITET
Summary Today, many industrial products are defined by software and therefore customizable: their functionalities implemented by software can be modified and extended by dynamic software updates on demand. This trend towards customizable products is rapidly expanding into all domains of IT, including Embedded Real-Time Systems (ERTS) deployed in Cyber-Physical Systems such as cars, medical devices etc. However, the current state-of-practice in safety-critical systems allows hardly any modifications once they are put in operation. The lack of techniques to preserve crucial safety conditions for customizable systems severely restricts the benefits of advances in software-defined systems engineering. CUSTOMER is to provide the missing paradigm and technology for building and updating ERTS after deployment – subject to stringent timing constraints, dynamic workloads, and limited resources on complex platforms. CUSTOMER explores research areas crossing two fields: Real-Time Computing and Formal Verification to develop the key techniques enabling (1) dynamic updates of ERTS in the field, (2) incremental updates over the products life time and (3) safe updates by verification to avoid updates that may compromise system safety. CUSTOMER will develop a unified model-based framework supported with tools for the design, modelling, verification, deployment and update of ERTS, aiming at advancing the research fields by establishing the missing scientific foundation for multiprocessor real-time computing and providing the next generation of design tools with significantly enhanced capability and scalability increased by orders of magnitude compared with state-of-the-art tools e.g. UPPAAL.
Project acronym DEVOCEAN
Project Impact of diatom evolution on the oceans
Researcher (PI) Daniel CONLEY
Host Institution (HI) LUNDS UNIVERSITET
Call Details Advanced Grant (AdG), PE10, ERC-2018-ADG
Summary Motivated by a series of recent discoveries, DEVOCEAN will provide the first comprehensive evaluation of the emergence of diatoms and their impact on the global biogeochemical cycle of silica, carbon and other nutrients that regulate ocean productivity and ultimately climate. I propose that the proliferation of phytoplankton that occurred after the Permian-Triassic extinction, in particular the diatoms, fundamentally influenced oceanic environments through the enhancement of carbon export to depth as part of the biological pump. Although molecular clocks suggest that diatoms evolved over 200 Ma ago, this result has been largely ignored because of the lack of diatoms in the geologic fossil record with most studies therefore focused on diversification during the Cenozoic where abundant diatom fossils are found. Much of the older fossil evidence has likely been destroyed by dissolution during diagenesis, subducted or is concealed deep within the Earth under many layers of rock. DEVOCEAN will provide evidence on diatom evolution and speciation in the geological record by examining formations representing locations in which diatoms are likely to have accumulated in ocean sediments. We will generate robust estimates of the timing and magnitude of dissolved Si drawdown following the origin of diatoms using the isotopic silicon composition of fossil sponge spicules and radiolarians. The project will also provide fundamental new insights into the timing of dissolved Si drawdown and other key events, which reorganized the distribution of carbon and nutrients in seawater, changing energy flows and productivity in the biological communities of the ancient oceans.
Project acronym e-NeuroPharma
Project Electronic Neuropharmacology
Researcher (PI) Rolf Magnus BERGGREN
Host Institution (HI) LINKOPINGS UNIVERSITET
Summary As the population ages, neurodegenerative diseases (ND) will have a devastating impact on individuals and society. Despite enormous research efforts there is still no cure for these diseases, only care! The origin of ND is hugely complex, spanning from the molecular level to systemic processes, causing malfunctioning of signalling in the central nervous system (CNS). This signalling includes the coupled processing of biochemical and electrical signals, however current approaches for symptomatic- and disease modifying treatments are all based on biochemical approaches, alone. Organic bioelectronics has arisen as a promising technology providing signal translation, as sensors and modulators, across the biology-technology interface; especially, it has proven unique in neuronal applications. There is great opportunity with organic bioelectronics since it can complement biochemical pharmacology to enable a twinned electric-biochemical therapy for ND and neurological disorders. However, this technology is traditionally manufactured on stand-alone substrates. Even though organic bioelectronics has been manufactured on flexible and soft carriers in the past, current technology consume space and volume, that when applied to CNS, rule out close proximity and amalgamation between the bioelectronics technology and CNS components – features that are needed in order to reach high therapeutic efficacy. e-NeuroPharma includes development of innovative organic bioelectronics, that can be in-vivo-manufactured within the brain. The overall aim is to evaluate and develop electrodes, delivery devices and sensors that enable a twinned biochemical-electric therapy approach to combat ND and other neurological disorders. e-NeuroPharma will focus on the development of materials that can cross the blood-brain-barrier, that self-organize and -polymerize along CNS components, and that record and regulate relevant electrical, electrochemical and physical parameters relevant to ND and disorders
Project acronym EMERGE
Project Reconstructing the emergence of the Milky Way's stellar population with Gaia, SDSS-V and JWST
Researcher (PI) Dan Maoz
Summary Understanding how the Milky Way arrived at its present state requires a large volume of precision measurements of our Galaxy's current makeup, as well as an empirically based understanding of the main processes involved in the Galaxy's evolution. Such data are now about to arrive in the flood of quality information from Gaia and SDSS-V. The demography of the stars and of the compact stellar remnants in our Galaxy, in terms of phase-space location, mass, age, metallicity, and multiplicity are data products that will come directly from these surveys. I propose to integrate this information into a comprehensive picture of the Milky Way's present state. In parallel, I will build a Galactic chemical evolution model, with input parameters that are as empirically based as possible, that will reproduce and explain the observations. To get those input parameters, I will measure the rates of supernovae (SNe) in nearby galaxies (using data from past and ongoing surveys) and in high-redshift proto-clusters (by conducting a SN search with JWST), to bring into sharp focus the element yields of SNe and the distribution of delay times (the DTD) between star formation and SN explosion. These empirically determined SN metal-production parameters will be used to find the observationally based reconstruction of the Galaxy's stellar formation history and chemical evolution that reproduces the observed present-day Milky Way stellar population. The population census of stellar multiplicity with Gaia+SDSS-V, and particularly of short-orbit compact-object binaries, will hark back to the rates and the element yields of the various types of SNe, revealing the connections between various progenitor systems, their explosions, and their rates. The plan, while ambitious, is feasible, thanks to the data from these truly game-changing observational projects. My team will perform all steps of the analysis and will combine the results to obtain the clearest picture of how our Galaxy came to be.
Project acronym EYELETS
Project A regenerative medicine approach in diabetes.
Researcher (PI) Per-Olof BERGGREN
Host Institution (HI) KAROLINSKA INSTITUTET
Summary Pancreatic islet transplantation is essential for diabetes treatment. Outcome varies due to transplantation site, quality of islets and the fact that transplanted islets are affected by the same challenges as in situ islets. Tailor-making islets for transplantation by tissue engineering combined with a more favorable transplantation site that allows for both monitoring and local modulation of islet cells is thus instrumental. We have established the anterior chamber of the eye (ACE) as a favorable environment for long term survival of islet grafts and the cornea as a natural body window for non-invasive, longitudinal optical monitoring of islet function. ACE engrafted islets are able to maintain blood glucose homeostasis in diabetic animals. In addition to studies in non-human primates we are performing human clinical trials, the first patient already being transplanted. Tissue engineering of native islets is technically difficult. We will therefore apply genetically engineered islet organoids. This allows us to generate i) standardized material optimized for transplantation, function and survival, as well as ii) islet organoids suitable for monitoring (sensor islet organoids) and treating (metabolic islet organoids) insulin-dependent diabetes. We hypothesize that genetically engineered islet organoids transplanted to the ACE are superior to native pancreatic islets to monitor and treat insulin-dependent diabetes. Our overall aim is to create a platform allowing monitoring and treatment of insulin-dependent diabetes in mice that can be transferred to large animals for validation. The objective is to combine tissue engineering of islet cell organoids, transplantation to the ACE, synthetic biology, local pharmacological treatment strategies and the development of novel micro electronic/micro optical readout systems for islet cells. This regenerative medicine approach will follow our clinical trial programs and be transferred into the clinic to combat diabetes.
Project acronym HealthierWomen
Project A woman's reproductive experience: Long-term implications for chronic disease and death
Researcher (PI) Rolv SKJAERVEN
Host Institution (HI) UNIVERSITETET I BERGEN
Summary Pregnancy complications such as preeclampsia and preterm birth are known to affect infant health, but their influence on mothers' long-term health is not well understood. Most previous studies are seriously limited by their reliance on information from the first pregnancy. Often they lack the data to study women's complete reproductive histories. Without a complete reproductive history, the relationship between pregnancy complications and women's long-term health cannot be reliably studied. The Medical Birth Registry of Norway, covering all births from 1967-, includes information on more than 3 million births and 1.5 million sibships. Linking this to population based death and cancer registries provides a worldwide unique source of population-based data which can be analysed to identify heterogeneities in risk by lifetime parity and the cumulative experience of pregnancy complications. Having worked in this field of research for many years, I see many erroneous conclusions in studies based on insufficient data. For instance, both after preeclampsia and after a stillbirth, the high risk of heart disease observed in one-child mothers is strongly attenuated in women with subsequent pregnancies. I will study different patterns of pregnancy complications that occur alone or in combination across pregnancies, and analyse their associations with cause specific maternal mortality. Using this unique methodology, I will challenge the idea that placental dysfunction is the origin of preeclampsia and test the hypothesis that pregnancy complications may cause direct long-term effects on maternal health. The findings of this research have the potential to advance our understanding of how pregnancy complications affect the long-term maternal health and help to develop more effective chronic disease prevention strategies.
Project acronym HomDyn
Project Homogenous dynamics, arithmetic and equidistribution
Researcher (PI) Elon Lindenstrauss
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Summary We consider the dynamics of actions on homogeneous spaces of algebraic groups, and propose to tackle a wide range of problems in the area, including the central open problems. One main focus in our proposal is the study of the intriguing and somewhat subtle rigidity properties of higher rank diagonal actions. We plan to develop new tools to study invariant measures for such actions, including the zero entropy case, and in particular Furstenberg's Conjecture about $\times 2,\times 3$-invariant measures on $\R / \Z$. A second main focus is on obtaining quantitative and effective equidistribution and density results for unipotent flows, with emphasis on obtaining results with a polynomial error term. One important ingredient in our study of both diagonalizable and unipotent actions is arithmetic combinatorics. Interconnections between these subjects and arithmetic equidistribution properties, Diophantine approximations and automorphic forms will be pursued.
Project acronym NanoProt-ID
Project Proteome profiling using plasmonic nanopore sensors
Researcher (PI) Amit MELLER
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Summary To date, antibody-free protein identification methods have not reached single-molecule precision. Instead, they rely on averaging from many cells, obscuring the details of important biological processes. The ability to identify each individual protein from within a single cell would transform proteomics research and biomedicine. However, single protein identification (ID) presents a major challenge, necessitating a breakthrough in single-molecule sensing technologies. We propose to develop a method for proteome-level analysis, with single protein resolution. Bioinformatics studies show that >99% of human proteins can be uniquely identified by the order in which only three amino-acids, Lysine, Cysteine, and Methionine (K, C and M, respectively), appear along the proteins' chain. By specifically labelling K, C and M residues with three distinct fluorophores, and threading them, one by one, through solid-state nanopores equipped with custom plasmonic amplifiers, we hypothesize that we can obtain multi-color fluorescence time-trace fingerprints uniquely representing most proteins in the human proteome. The feasibility of our method will be established by attaining 4 main aims: i) in vitro K,C,M protein labelling, ii) development of a machine learning classifier to uniquely ID proteins based on their optical fingerprints, iii) fabrication of state-of-the-art plasmonic nanopores for high-resolution optical sensing of proteins, and iv) devising methods for regulating the translocation speed to enhance the signal to noise ratio. Next, we will scale up our platform to enable the analysis of thousands of different proteins in minutes, and apply it to sense blood-secreted proteins, as well as whole proteomes in pre- and post-metastatic cancer cells. NanoProt-ID constitutes the first and most challenging step towards the proteomic analysis of individual cells, opening vast research directions and applications in biomedicine and systems biology.
Project acronym NeuroCompSkill
Project A neuro-computational account of success and failure in acquiring communication skills
Researcher (PI) Merav Ahissar
Call Details Advanced Grant (AdG), SH4, ERC-2018-ADG
Summary Why do most people acquire expertise with practice whereas others fail to master the same tasks? NeuroCompSkill offers a neuro-computational framework that explains failure in acquiring verbal and non-verbal communication skills. It focuses on the individual ability to use task-relevant regularities, postulating that efficient use of such regularities is crucial for acquiring expertise. Specifically, it proposes that using stable temporal regularities, acquired across long time windows (> 3 sec to days), is crucial for the formation of linguistic (phonological, morphological and orthographic) skills. In contrast, fast updating of recent events (within ~0.3-3 sec) is crucial for the formation of predictions in interactive, social communication. Based on this, I propose that individuals with difficulties in retaining regularities will have difficulties in verbal communication, whereas individuals with difficulties in fast updating will have difficulties in social non-verbal communication. Five inter-related work packages (WP) will test the predictions that: (WP1) behaviourally – individuals with language and reading difficulties will have impoverished categorical representations, whereas individuals with non-verbal difficulties will be slow in adapting to changed statistics. (WP2) developmentally – poor detection of relevant regularities will be an early marker of related difficulties. (WP3) computationally – profiles of impaired inference will match the predicted time window. (WP4) neuronally – dynamics of neural adaptation will match the dynamics of behavioural inference. (WP5) structurally – different brain structures will be associated with the different time windows of inference. NeuroCompSkill is ground-breaking in proposing a unifying, theory-based, testable principle, which explains core difficulties in two prevalent developmental communication disorders. Its 5 WPs will lay the foundations of a comprehensive approach to failure in skill acquisition.
Project acronym PCPABF
Project Challenging Computational Infeasibility: PCP and Boolean functions
Researcher (PI) Shmuel Avraham Safra
Summary Computer Science, in particular, Analysis of Algorithms and Computational-Complexity theory, classifies algorithmic problems into feasible ones and those that cannot be efficiently solved. Many fundamental problems were shown to be NP-hard; therefore, unless P=NP, they are infeasible. Consequently, research efforts shifted towards approximation algorithms, which find close-to-optimal solutions for NP-hard optimization problems. The PCP Theorem and its application to infeasibility of approximation establish that, unless P=NP, there are no efficient approximation algorithms for numerous classical problems; research that won the authors --the PI included-- the 2001 Gödel Prize. To show infeasibility of approximation of some fundamental problems, however, a stronger PCP was postulated in 2002, namely, Khot's Unique-Games Conjecture. It has transformed our understanding of optimization problems, provoked new tools intended to refute it, and motivated new sophisticated techniques aimed at proving it. Recently Khot, Minzer (a student of the PI) and the PI proved a related conjecture: the 2-to-2-Games conjecture (our paper just won the Best Paper award at FOCS'18). In light of that progress, recognized by the community as half the distance towards the Unique-Games conjecture, resolving the Unique-Games conjecture seems much more likely. A field that plays a crucial role in this progress is Analysis of Boolean-functions. For the recent breakthrough we had to dive deep into expansion properties of the Grassmann-graph. The insight was subsequently applied to achieve much-awaited progress on fundamental properties of the Johnson-graph. With the emergence of cloud-computing, cryptocurrency, public-ledger and Blockchain technologies, the PCP methodology has found new and exciting applications. This framework governs SNARKs, which is a new, emerging technology, and the ZCASH technology on top of Blockchain. This is a thriving research area, but also an extremely vibrant High-Tech sector.
Project acronym RegRNA
Project Mechanistic principles of regulation by small RNAs
Researcher (PI) Hanah Margalit
Summary Small RNAs (sRNAs) are major regulators of gene expression in bacteria, exerting their regulation in trans by base pairing with target RNAs. Traditionally, sRNAs were considered post-transcriptional regulators, mainly regulating translation by blocking or exposing the ribosome binding site. However, accumulating evidence suggests that sRNAs can exploit the base pairing to manipulate their targets in different ways, assisting or interfering with various molecular processes involving the target RNA. Currently there are a few examples of these alternative regulation modes, but their extent and implications in the cellular circuitry have not been assessed. Here we propose to take advantage of the power of RNA-seq-based technologies to develop innovative approaches to address these challenges transcriptome-wide. These approaches will enable us to map the regulatory mechanism an sRNA employs per target through its effect on a certain molecular process. For feasibility we propose studying three processes: RNA cleavage by RNase E, premature Rho-dependent transcription termination, and transcription elongation pausing. Finding targets regulated by sRNA manipulation of the latter two processes would be especially intriguing, as it would suggest that sRNAs can function as gene-specific transcription regulators (alluded to by our preliminary results). As a basis of our research we will use the network of ~2400 sRNA-target pairs in Escherichia coli, deciphered by RIL-seq (a method we recently developed for global in vivo detection of sRNA targets). Revealing the regulatory mechanism(s) employed per target will shed light on the principles underlying the integration of distinct sRNA regulation modes in specific regulatory circuits and cellular contexts, with direct implications for synthetic biology and pathogenic bacteria. Our study may change the way sRNAs are perceived, from post-transcriptional to versatile regulators that apply different regulation modes to different targets.
Project acronym ScalableControl
Project Scalable Control of Interconnected Systems
Researcher (PI) Anders RANTZER
Summary Modern society is critically dependent on large-scale networks for services such as energy supply, transportation and communications. The design and control of such networks is becoming increasingly complex, due to their growing size, heterogeneity and autonomy. A systematic theory and methodology for control of large-scale interconnected systems is therefore needed. In an ambitious effort towards this goal, this project will develop rigorous tools for control synthesis, adaptation and verification. Many large-scale systems exhibit properties that have not yet been systematically exploited by the control community. One such property is positive (or monotone) system dynamics. This corresponds to the property that all states of a network respond in the same direction when the demand or supply is perturbed in some node. Scalable methods for control of positive systems are starting to be developed, but several fundamental questions remain: How can existing results be extended to scalable synthesis of dynamic controllers? Can results for linear positive systems be extended to nonlinear monotone ones? How about systems with resonances? The second focus area, adaptation, takes advantage of recent progress in machine learning, such as statistical concentration bounds and approximate dynamic programming. Adaptation is of fundamental importance for scalability, since high-fidelity models are very expensive to generate manually and hard to maintain. Thirdly, since systematic procedures for control synthesis generally rely on simplified models and idealized assumptions, we will also develop scalable methods to bound the effect of imperfections, such as nonlinearities, time-variations and parameter uncertainty that are not taken into account in the original design. The research will be carried out in interaction with industry studying a new concept for district heating networks. This collaboration will give access to experimental data from a full-scale demonstration plant.
Project acronym SensStabComp
Project Sensitivity, Stability, and Computation
Researcher (PI) Gil KALAI
Host Institution (HI) INTERDISCIPLINARY CENTER (IDC) HERZLIYA
Summary Noise sensitivity and noise stability of Boolean functions, percolation, and other models were introduced in a paper by Benjamini, Kalai, and Schramm (1999) and were extensively studied in the last two decades. We propose to extend this study to various stochastic and combinatorial models, and to explore connections with computer science, quantum information, voting methods and other areas. The first goal of our proposed project is to push the mathematical theory of noise stability and noise sensitivity forward for various models in probabilistic combinatorics and statistical physics. A main mathematical tool, going back to Kahn, Kalai, and Linial (1988), is applications of (high-dimensional) Fourier methods, and our second goal is to extend and develop these discrete Fourier methods. Our third goal is to find applications toward central long-standing problems in combinatorics, probability and the theory of computing. The fourth goal of our project is to further develop the "argument against quantum computers", which is based on the insight that noisy intermediate scale quantum computing is noise stable. This follows the work of Kalai and Kindler (2014) for the case of noisy non-interacting bosons. The fifth goal of our proposal is to enrich our mathematical understanding and to apply it, by studying connections of the theory with various areas of theoretical computer science, and with the theory of social choice.
Project acronym SynProAtCell
Project Delivery and On-Demand Activation of Chemically Synthesized and Uniquely Modified Proteins in Living Cells
Researcher (PI) Ashraf BRIK
Summary While advanced molecular biology approaches provide insight into the role of proteins in cellular processes, their ability to freely modify proteins and control their functions when desired is limited, hindering the achievement of a detailed understanding of the cellular functions of numerous proteins. At the same time, chemical synthesis of proteins allows for unlimited protein design, enabling the preparation of unique protein analogues that are otherwise difficult or impossible to obtain. However, effective methods to introduce these designed proteins into cells are for the most part limited to simple systems. To monitor proteins' cellular functions and fates in real time, and in order to answer currently unanswerable fundamental questions about the cellular roles of proteins, the fields of protein synthesis and cellular protein manipulation must be bridged by significant advances in methods for protein delivery and real-time activation. Here, we propose to develop a general approach for enabling considerably more detailed in-cell study of uniquely modified proteins by preparing proteins having the following features: 1) traceless cell delivery unit(s), 2) an activation unit for on-demand activation of protein function in the cell, and 3) a fluorescence probe for monitoring the state and the fate of the protein. We will adopt this approach to shed light on the processes of ubiquitination and deubiquitination, which are critical cellular signals for many biological processes. We will employ our approach to study: 1) the effect of inhibition of deubiquitinases in cancer; 2) the effect of phosphorylation on proteasomal degradation and on ubiquitin chain elongation; and 3) the effect of covalent attachment of a known ligase ligand to a target protein on its degradation, which could moreover trigger the development of new methods to modify desired proteins in cells by selective chemistries and so rationally promote their degradation.
Project acronym TOPSPIN
Project Topotronic multi-dimensional spin Hall nano-oscillator networks
Researcher (PI) Johan Åkerman
Host Institution (HI) GOETEBORGS UNIVERSITET
Summary TOPSPIN will focus on spin Hall nano-oscillators (SHNOs), which are nano-sized, ultra-tunable, and CMOS compatible spin wave based microwave oscillators. TOPSPIN will push the boundaries of SHNO lithography, frequency, speed, and power consumption by combining topological insulators, having record high spin Hall efficiencies, with materials having ultra-high spin wave frequencies. TOPSPIN will reduce the required current densities 1-2 orders of magnitude compared to state-of-the-art, making SHNO operating currents approach 1 uA, and increase the SHNO operating frequencies an order of magnitude to as high as 300 GHz. TOPSPIN will use mutually synchronized SHNOs to achieve orders of magnitude higher signal coherence and achieve novel functionality such as pattern matching and neuromorphic computing. TOPSPIN will demonstrate mutual synchronization of up to 1,000 SHNOs in chains, and as many as 1,000,000 SHNOs in very large-scale two-dimensional arrays. Using dipolar coupling between SHNOs fabricated on top of each other, three-dimensional mutual synchronization will also be demonstrated. As the signal coherence increases linearly with the number of mutually synchronized SHNOs the oscillator quality factor will improve by many orders of magnitude. TOPSPIN will also develop such arrays using magnetic tunnel junction stacks thus combining ultra-high coherence with the highest possible microwave output power. TOPSPIN will demonstrate ultrafast pattern matching and neuromorphic computing using its SHNO networks. It will functionalize SHNOs to exhibit ultra-fast individual voltage controlled tuning and non-volatile tuning of both the SHNO frequency and the inter-SHNO coupling. TOPSPIN will characterize its SHNOs using novel methods and techniques such as multichannel electrical measurements, time- and phase-resolved Brillouin Light Scattering microscopy, time-resolved Scanning Transmission X-ray Microscopy, and ultrafast pump-probe Transmission Electron Microscopy.
https://doi.org/10.1364/OE.425930
Intensity-corrected 4D light-in-flight imaging
Imogen Morland,1 Feng Zhu,1 Germán Mora Martín,2 Istvan Gyongy,2 and Jonathan Leach1,*
1Institute of Photonics and Quantum Sciences, Heriot-Watt University, David Brewster Building, Edinburgh EH14 4AS, UK
2Institute for Integrated Micro and Nano Systems, The University of Edinburgh, Edinburgh EH9 3JL, UK
*Corresponding author: [email protected]
Imogen Morland, Feng Zhu, Germán Mora Martín, Istvan Gyongy, and Jonathan Leach, "Intensity-corrected 4D light-in-flight imaging," Opt. Express 29, 22504-22516 (2021)
Topics: Imaging Systems, Microscopy, and Displays; Fluorescence lifetime imaging; Imaging techniques; Laser arrays; Rayleigh scattering; Scattering media; Streak cameras
Original Manuscript: March 26, 2021
Revised Manuscript: May 14, 2021
Manuscript Accepted: May 18, 2021
Published: July 1, 2021
Supplementary Material: 6 visualizations
Light-in-flight (LIF) imaging is the measurement and reconstruction of light's path as it moves and interacts with objects. It is well known that relativistic effects can result in apparent velocities that differ significantly from the speed of light. However, less well known is that Rayleigh scattering and the effects of imaging optics can lead to observed intensities changing by several orders of magnitude along light's path. We develop a model that enables us to correct for all of these effects, thus we can accurately invert the observed data and reconstruct the true intensity-corrected optical path of a laser pulse as it travels in air. We demonstrate the validity of our model by observing the photon arrival time and intensity distribution obtained from single-photon avalanche detector (SPAD) array data for a laser pulse propagating towards and away from the camera. We can then reconstruct the true intensity-corrected path of the light in four dimensions (three spatial dimensions and time).
Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
1. Introduction
As light travels and interacts with objects, photons are scattered in all directions. Light-in-flight (LIF) imaging is the process of capturing scattered photons using detectors with high temporal resolution such that light's path can be reconstructed. Three-dimensional LIF imaging was first captured using a holographic plate to record the spherical wavefronts of pulses reflected by mirrors; the technique involved no mechanical processes and achieved a temporal resolution of 800 ps [1]. This demonstrated real-time imaging of light undergoing dynamic processes, whereas previous imaging was static and time averaged [2,3]. Further work proposed a mechanism for correcting distortion effects when imaging light [4]. Recent 3D LIF holography techniques use a scattering medium and achieve higher temporal resolutions [5,6].
The field of LIF imaging was recently revolutionised by Velten et al. [7], who imaged femtosecond laser pulses propagating through a scattering medium using a streak camera. This new LIF method allowed light-scattering dynamics to be observed at unprecedented temporal and spatial resolutions. However, this method requires a scanning mechanism to build a 2D image, resulting in an acquisition time of one hour. Other methods for capturing 3D LIF involve transient imaging using a photonic mixer device (PMD), achieving a nanosecond temporal resolution with a one-minute acquisition time [8]. In addition, other 3D LIF imaging methods include time encoded amplified imaging and computer tomography, which achieve nanosecond and picosecond temporal resolutions respectively [9,10].
The type of scattering that is observed is dependent on the medium that the light propagates through. For example, light has been captured propagating through fibre optics [11] and heated rubidium vapor [12]. When light travels through air, Rayleigh scattering is the dominant effect, and this was captured by Gariepy et al. who demonstrated three-dimensional LIF imaging using a single-photon avalanche detector (SPAD) array camera [13]. In this work, the light propagated in one plane that was perpendicular to the axis normal to the detector. Following this, it was recognised that relativistic effects, where the apparent velocity of light would deviate from $c$, could be observed with LIF [14,15] and these principles have allowed four-dimensional LIF reconstruction to be demonstrated for multiple paths of light [16]. This was generalised, using a megapixel camera and machine learning techniques, to capture 4D LIF imaging of multiple pulses following arbitrary straight-line paths in space [17].
These technologies have already been used in fluorescence lifetime imaging [18], light detection and ranging (LIDAR) imaging through scattering media [19] and imaging around corners [20–22]. Ultimately, the ability to accurately capture the full scattering dynamics of light could lead to new approaches when imaging deep inside the human body, see Ref. [23] for an overview of LIF research. Recent research tackling the problem of imaging in highly scattering media has shown computational imaging approaches can provide images in two [24] and three dimensions [25].
Our work builds on the recent LIF research by developing a model to compensate for distortions in the recorded intensity, as well as relativistic effects previously observed, and reconstruct the 4D path of laser pulses. The scope of this work is to provide a mechanism for the most accurate reconstruction of LIF measurements, relevant for understanding scattering in a range of scenarios. Understanding the intensity effects which occur in LIF imaging could have future applications in medical imaging, where the intensity of scattered photons gives information on the scattering source and its interaction with objects. To do this, it is necessary to understand the underlying physics of light scattering in air and the relationship to imaging optics. This is illustrated in Fig. 1, where light scattered at a time $t_1$ from an object at a location ($x_1, y_1, z_1$) propagates a distance $R_1$ to a camera. The remaining light continues to propagate to position ($x_2, y_2, z_2$) where another scattering event occurs at time $t_2$, and the scattered light travels a distance $R_2$ to the camera. The total time taken for the pulse to travel between the two scattering events is $t_2 - t_1 = \Delta t$. Whereas, the two scattering events are recorded by the camera at times $t_3$ and $t_4$ respectively, and the difference in arrival time recorded by the camera is $t_4 - t_3 = \Delta t + (R_2-R_1)/c$. This means the arrival time data recorded by the camera is different to the true propagation times of light and is ultimately dependent on the propagation angle. For the case of a camera, there is a mapping of an event occurring in three spatial dimensions and time to a camera with two spatial dimensions and time. The third spatial ($z$) dimension is collapsed and contained within the temporal data of the camera.
Fig. 1. (a) Light in "real space" is scattered at ($x_1, y_1, z_1, t_1$) in all directions. A proportion of this scattered light travels to the camera, and is recorded in "camera space" as a signal at ($x_3, y_3, t_3$), where $x_3$ and $y_3$ are pixel positions. (b) The remaining light travels across the field-of-view, and is scattered at position and time ($x_2, y_2, z_2, t_2$). This event is recorded at ($x_4, y_4, t_4$). (c) Bird's-eye view of the two scattering events, where $R_1$ and $R_2$ are the distances between the camera and the first and second scattering events respectively, and $\alpha _1$ and $\alpha _2$ are the scattering angles. The time difference for the two events in "camera space" is $t_4 - t_3 = \Delta t + (R_2-R_1)/c$, whereas the time difference in "real space" is $t_2 - t_1 = \Delta t$. Rayleigh scattering effects observed by the camera are dependent on $\alpha _1, \alpha _2, R_1$ and $R_2$, which differ significantly for the two scattering events shown. Focusing effects from the imaging optics also contribute to the intensity signal. An image rendered from the perspective of the camera shows that the right side of the beam, which is closer to the camera, appears larger than the left side of the beam. This corresponds to a lower energy density on the right-hand side, and therefore a brighter image on the left.
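To make the mapping between "real space" and "camera space" concrete, the short Python sketch below (our own illustration, not part of the original work) computes the recorded arrival times $t_3$ and $t_4$ for two scattering events; the camera position and event coordinates are arbitrary example values.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s); the small correction for air is neglected here

def camera_arrival_time(event_xyz, event_time, camera_xyz):
    """Time at which light scattered at `event_xyz` (at `event_time`) reaches the camera."""
    R = np.linalg.norm(np.asarray(event_xyz) - np.asarray(camera_xyz))
    return event_time + R / C

# Hypothetical geometry (metres): two scattering events along a pulse moving towards the camera.
camera = (0.0, 0.0, 0.0)
p1, t1 = (-0.25, 0.0, 1.20), 0.0                     # first scattering event
p2 = (0.25, 0.0, 0.80)                               # second event, closer to the camera
t2 = t1 + np.linalg.norm(np.subtract(p2, p1)) / C    # real-space propagation time

t3 = camera_arrival_time(p1, t1, camera)
t4 = camera_arrival_time(p2, t2, camera)
print(f"real-space interval   t2 - t1 = {(t2 - t1)*1e9:.3f} ns")
print(f"camera-space interval t4 - t3 = {(t4 - t3)*1e9:.3f} ns")  # shorter, since R2 < R1
```

Because the second event is closer to the camera, the camera-space interval is shorter than the real-space one, which is the origin of the apparent superluminal motion discussed below.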
Rayleigh scattering effects are also observed by the camera and are dependent on the scattering angles ($\alpha _1$ and $\alpha _2$) and propagation distances ($R_1$ and $R_2$). These variables vary along light's path and so the intensity contribution is dependent on the position along the path. Furthermore, focusing effects contribute to the recorded intensity profile and are dependent on the perpendicular distances between the scattering event and camera. This is shown in the camera view image in Fig. 1(c) which depicts an integrated image of the laser pulse's path across the field-of-view of the camera used to render the image. The depth of field is increased such that the whole path is in focus. The pulse, travelling towards the camera from left to right, is further away from the camera on the left-hand side and is therefore focused to a smaller size than the right-hand side of the pulse. This results in the intensity of the pulse increasing as the distance between the camera and pulse increases.
In this work, we are able to measure and subsequently correct all of the effects mentioned above. That is to say, we can correct both the temporal distortion, arising from relativistic effects, and the intensity distortions, resulting from Rayleigh scattering and the imaging optics. We demonstrate intensity-corrected LIF imaging using a SPAD array, recording data for a laser pulse propagating at large and small angles with respect to the observation axis of the camera. The relativistic effects result in apparent speed of light velocities that span several orders of magnitude, and the intensity effects lead to observed intensities changing by at least a factor of two along the pulse's path.
2. Theory
Consider a pulse of light that travels in three dimensions and is imaged using a camera with high temporal resolution. To develop the theoretical framework, we introduce the concept of the "camera space" to indicate where the data is recorded and the "real space" to indicate the three dimensional space in which the light pulse travels. It is the goal for the work to convert the camera space data to the real space as accurately as possible. The inversion of the camera space data to the real space path enables the true intensity-corrected light path to be reconstructed.
Light-in-flight data is subject to intensity and relativistic effects observed in the camera space. The intensity of scattered photons along the beam in camera space is derived by considering the intensity contribution from one segment of the beam on one pixel. The intensity of a segment of the beam is calculated using the schematic in Fig. 2 where laser pulses travelling across the field-of-view, at propagation angle $\theta$ with respect to the observation axis, are imaged using a SPAD array. A proportion of the photons within each pulse are scattered by air molecules and travel through the imaging lens aperture. Different segments of the beam are imaged by different pixels within the SPAD array and the intensity contribution from each segment is dependent on focusing effects from the imaging optics ($I_{f}$), Rayleigh scattering ($I_{r}$), and integrated path length ($I_{s}$).
Fig. 2. Theory Schematic: The intensity of a segment of the pulse is dependent on several factors: Rayleigh scattering, focusing effects and the integrated pulse length. These intensity contributions are derived using the variables shown above, where $\theta$ is the propagation angle of the pulse relative to the optical axis, $d$ is the distance between the centre of the pulse and the imaging lens, $f$ is the focal length of the imaging lens, $r$ is the distance between the imaging lens and the nearest edge of the segment, $\theta _{1}$ is given by Eq. (2), $\theta _{2}$ is given by Eq. (3), $x$ is the distance along the SPAD array to a given pixel and $\Delta$ is the active pixel width. Combining these effects, the intensity of scattered photons along the beam in camera space $(I(x; ~\theta , ~A, ~f, ~\Delta ))$ is derived and given in Eq. (7). The relativistic effects are explained using the same variables by Eq. (9).
The intensity of the beam in camera space $(I(\theta _{1}, ~\theta _{2}; ~\theta ))$ is given by
(1)$$I(\theta_{1}, ~\theta_{2}; ~\theta) = B I_{f}(\theta_{1};~r,~f)I_{r}(\theta_{1};~\theta,~r)I_{s}(\theta_{1},~\theta_{2};~r),$$
where $B$ is a normalisation constant dependent on integration time and laser power, $x$ is the distance along the SPAD array to a given pixel, $A$ is the sensor width, $f$ is the focal length of the lens, $\Delta$ is the active pixel width, $\theta$ is the propagation angle relative to the observation axis and $r$ is the distance between the imaging lens and the nearest edge of the segment, $\theta _{1}$ satisfies
(2)$$\theta_{1}(x;~A,~f) = \tan^{-1} \Big( \frac{2x - A}{2f} \Big),$$
and $\theta _{2}$ satisfies
(3)$$\theta_{2}(x;~A,~f,~\Delta) = \tan^{-1} \Big( \frac{2x - A +2\Delta}{2f} \Big).$$
The first contribution to the intensity of a segment of the beam is from focusing effects in the imaging optics of the system and is given by
(4)$$I_{f}(\theta_{1};~r,~f) = \frac{r\cos\theta_{1}}{f}.$$
This contribution is a result of parts of the beam which are further away from the lens focusing to a smaller point on the SPAD array with higher energy density.
The second contribution is from photons undergoing Rayleigh scattering with air molecules and is given by
(5)$$I_{r}(\theta_{1};~\theta,~r) = \frac{I_{0} \pi^{4}(n^{2}-1)^2 d_{r}^{6}}{8 \lambda^{4} (n^{2}+2)^2 } \frac{1+\cos^2(\theta - \theta_{1})}{r^2},$$
where $I_{0}$ is the intensity constant, $n$ is the refractive index, $d_{r}$ is the scattering particle diameter and $\lambda$ is the wavelength of scattered light. Rayleigh scattering is dependent on the scattering angle and distance between the SPAD array and the pulse, which both change along the beam.
The final contribution to the intensity of one pixel is from the integrated path length, which is the segment length imaged by each pixel, given by
(6)$$I_{s}(\theta_{1},~\theta_{2};~r) = \frac{r\sin(\theta_{2}-\theta_{1})}{\sin(\theta - \theta_{1})}.$$
This results in pixels at the edge of the SPAD array seeing a larger length of pulse than pixels in the middle of the SPAD array.
By combining these effects and substituting Eqs. (2)-(6) into Eq. (1) the intensity of scattered photons along the beam recorded by the SPAD array ($I(x; ~\theta , ~A, ~f, ~\Delta )$) is found to be
(7)$$I(x; ~\theta, ~A, ~f, ~\Delta) = C\frac{1 + \cos^2 (\theta- \tan^{-1} (\frac{2x-A}{2f}))}{f \sin\theta - \frac{2x-A}{2}\cos\theta} \Big( \tan^{-1}\Big(\frac{2x-A+ 2\Delta}{2f}\Big) - \tan^{-1}\Big(\frac{2x-A}{2f}\Big) \Big),$$
where $C$ is a normalisation constant which includes the Rayleigh scattering constants, the integration time of the SPAD array and the optical power of the laser. Equation (7) assumes $\sin (\theta _{2}-\theta _{1}) \approx \theta _{2}-\theta _{1}$.
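For illustration only, the following Python sketch evaluates the camera-space intensity profile of Eq. (7) across one row of pixels; the sensor width, focal length and active pixel width are taken to be representative of the system described in Section 3, and the normalisation constant is set to unity.

```python
import numpy as np

def intensity_profile(x, theta, A, f, delta, C=1.0):
    """Camera-space intensity along one row of the sensor, Eq. (7).
    x, A, f and delta share the same length unit; theta is in radians."""
    u = (2 * x - A) / (2 * f)
    rayleigh = 1.0 + np.cos(theta - np.arctan(u)) ** 2
    geometry = f * np.sin(theta) - 0.5 * (2 * x - A) * np.cos(theta)
    segment = np.arctan((2 * x - A + 2 * delta) / (2 * f)) - np.arctan(u)
    return C * rayleigh / geometry * segment

# Assumed values: 8 mm lens, 32 pixels of 50 um pitch (A = 1.6 mm), 6.95 um active width.
f, A, delta = 8.0e-3, 32 * 50e-6, 6.95e-6
x = np.arange(32) * 50e-6                       # left edge of each pixel
for theta_deg in (13.9, 90.0, 165.5):
    I = intensity_profile(x, np.radians(theta_deg), A, f, delta)
    print(f"theta = {theta_deg:5.1f} deg: I(first pixel)/I(last pixel) = {I[0] / I[-1]:.2f}")
```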
Finally, the Rayleigh effect is shown by measuring the central pixel intensity $(I_{c}(\theta ;~f, ~\Delta ))$ for different values of $\theta$. This intensity is independent of focusing effects as $d$ is constant for all $\theta$ and is given by
(8)$$I_{c}(\theta;~f, ~\Delta) = I(x=\frac{A}{2}; ~\theta, ~A, ~f, ~\Delta) =\frac{C(1 + \cos^2 \theta)}{f\sin \theta}\tan^{-1}\left(\frac{\Delta}{f} \right) \propto \frac{1 + \cos^2 \theta}{\sin \theta},$$
which is derived by substituting $x = A/2$ into Eq. (7). This equation is a modified version of the Rayleigh scattering effect and introduces a normalisation factor that takes into account the length of pulse imaged by the central pixel.
Relativistic effects seen in the camera space result in the pulse appearing to travel at apparent velocities different to the speed of light. The arrival time in camera space is dependent on $\theta$ and $d$ as shown in Fig. 2(a). The arrival time difference between the central pixel and an arbitrary pixel ($\Delta t(x; ~\theta , ~A, ~f)$) is given by
(9)$$\Delta t(x; ~\theta, ~A, ~f) = \frac{d\Big( \sqrt{\big(\frac{2x-A}{2f}\big)^{2}+1}\,\sin\theta - \frac{2x-A}{2f} \Big)}{c\big(\sin\theta + \frac{2x-A}{2f}\cos\theta\big) } -\frac{d}{c},$$
where $c$ is the speed of light in air. From the above equations, the relativistic and intensity effects observed in the camera space can be modelled and compared to experimental data.
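A corresponding sketch for Eq. (9) is given below (again illustrative, with the nominal values $d = 25.8$ cm and $f = 8$ mm assumed); it prints the camera-space time taken for the pulse image to cross the sensor, which comes out at the level of tens of picoseconds for propagation towards the camera and of nanoseconds for propagation away from it, of the same order as the values reported in Section 4.

```python
import numpy as np

C = 299_792_458.0  # m/s

def dt_camera(x, theta, d, A, f):
    """Arrival-time difference between pixel position x and the central pixel, Eq. (9)."""
    u = (2 * x - A) / (2 * f)
    return (d * (np.sqrt(u**2 + 1) * np.sin(theta) - u)
            / (C * (np.sin(theta) + u * np.cos(theta))) - d / C)

d, f, A = 0.258, 8.0e-3, 32 * 50e-6   # metres: pulse-to-camera distance, focal length, sensor width
x = np.array([0.0, A])                # first and last pixel positions
for theta_deg in (165.5, 13.9):
    t = dt_camera(x, np.radians(theta_deg), d, A, f)
    print(f"theta = {theta_deg:5.1f} deg: camera-space crossing time = {abs(t[1] - t[0])*1e9:.3f} ns")
```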
3. Experimental setup
The relativistic and intensity effects of LIF imaging are investigated using the experimental set-up shown in Fig. 3. The system includes a SPAD array camera, a 532 nm short pulsed laser (Teem Photonics STG-03E-1x0), and an optical constant fraction discriminator used as a trigger. The impact of intensity and relativistic effects is more pronounced when the light travels at large or small angles with respect to the optical axis of the camera, which corresponds to light travelling towards and away from the camera; see Figs. 3(b) and 3(c) respectively.
Laser pulses, which have a pulse width of $\approx$ 500 ps, are expanded to a beam waist of $\approx 5~$mm and collimated via two lenses of focal length 100 mm and 400 mm respectively, resulting in a Rayleigh range of 150 m. This ensures there are no intensity effects due to the beam diverging as it travels across the field-of-view of the sensor. The laser pulses are directed to a constant fraction discriminator acting as a trigger with 200 ps jitter, which sends 4 kHz transistor–transistor logic (TTL) pulses to the SPAD array. The TTL pulse starts the timer for each of the 32 $\times$ 32 pixels operated in Time-Correlated Single Photon Counting (TCSPC) mode. Histograms of photon counts are recorded for every pixel over 1024 time bins, each with a width of 55 ps. The pixel area and active area are $50~\mu$m $\times$ $50~\mu$m and $6.95~\mu$m $\times$ $6.95~\mu$m respectively, giving a fill factor of 1.9%.
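Conceptually, the raw output of such a measurement is a 32 × 32 × 1024 histogram cube of photon counts. The toy example below (illustrative only, using synthetic data) shows how time-bin indices map to arrival times and how a simple per-pixel peak arrival time can be extracted.

```python
import numpy as np

BIN_WIDTH = 55e-12        # s
N_PIX, N_BINS = 32, 1024

# Synthetic stand-in for a TCSPC data cube: counts[row, column, time bin].
rng = np.random.default_rng(0)
counts = rng.poisson(0.2, size=(N_PIX, N_PIX, N_BINS))      # uniform background
counts[17, :, 40] += rng.poisson(50, size=N_PIX)            # a fake pulse signal in row 17, bin 40

peak_bin = counts.argmax(axis=2)          # most populated time bin per pixel
peak_time = peak_bin * BIN_WIDTH          # simple arrival-time estimate (s)
print(f"row 17, pixel 0: estimated arrival time = {peak_time[17, 0] * 1e9:.2f} ns")
```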
Fig. 3. (a) 532 nm laser pulses are collimated by a series of lenses, increasing the beam diameter by four times to $\approx 5~$mm, and travel towards the SPAD array which records temporal and intensity data of scattered photons. From this information, $\theta$ and the intensity distribution of the beam $(I(x; ~\theta , ~A, ~f, ~\Delta ))$ in camera space are calculated. (b) Bird's-eye view of pulses travelling towards the SPAD array where $d = 25.8$ cm, $f_{1}=100$ mm and $f_{2}=400$ mm. (c) Bird's-eye view of pulses travelling away from the SPAD array.
An 8 mm focal length C-mount lens is used to image the beam onto the sensor. The aperture of the lens can be stopped down to extend the depth of field, and this is essential to reduce blurring and ensure the entire path of the beam is in focus on the camera. The mirrors used to direct the beam towards the SPAD array are placed outside the field-of-view so only photons scattered by air molecules are collected by the imaging lens, thus avoiding saturation effects and allowing Rayleigh scattering to be observed. Finally, to observe the Rayleigh scattering effects, the laser and trigger were placed on a rotation stage system which allows $\theta$ to be easily varied.
When measuring $I(x; ~\theta , ~A, ~f, ~\Delta )$, it is important for the whole of the beam to be in focus. This is demonstrated in Figs. 4(a) and 4(b), which show electron multiplying charge-coupled device (EMCCD) intensity images of the beam travelling from right to left away from the SPAD array with an open and closed aperture respectively. When the lens aperture is open, out-of-focus light contributes to the intensity image, resulting in part of the beam being out of focus and less intense than predicted by Eq. (7). When the lens aperture is closed, only in-focus light is incident on the detector and the intensity effects predicted are observed. This condition requires longer acquisition times to collect sufficient photon counts to build an intensity image.
Fig. 4. The effect of stopping down the aperture on the camera lens as measured with an EMCCD camera. The light is travelling away from the sensor from right to left in the images. (a) EMCCD intensity image of the beam with the aperture fully open. (b) EMCCD intensity image of the beam with the aperture closed. In (b) the entire beam is in focus and the intensity effects described by Eq. (7) are observed. Our theoretical model assumes that we stop down the aperture, as seen in (b).
This experimental configuration allows for the imaging of temporally correlated laser pulses and can be generalised for multiple pulses travelling across the field-of-view. In order to image temporally uncorrelated laser pulses, a triggering signal from each source must be provided to the SPAD array.
4. Results
In order to achieve intensity-corrected 4D LIF imaging, it is important to remove the noise present in the SPAD array data. This has been achieved by fitting Gaussian functions to each pixel and setting the pixel intensity to zero if the standard deviation of the Gaussian is outside an acceptable range. Furthermore, for noisy pixels within the beam path, interpolation is performed.
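A possible implementation of this per-pixel cleaning step is sketched below (our own illustration); the acceptance range for the fitted standard deviation is an assumption of ours, since the exact thresholds are not stated.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, t0, sigma, b):
    return a * np.exp(-0.5 * ((t - t0) / sigma) ** 2) + b

def clean_pixel(hist, bin_width=55e-12, sigma_range=(100e-12, 1e-9)):
    """Fit a Gaussian to one pixel's TCSPC histogram and zero the pixel if the fitted
    standard deviation is implausible (the acceptance range here is assumed)."""
    t = np.arange(hist.size) * bin_width
    p0 = [hist.max(), t[hist.argmax()], 300e-12, np.median(hist)]
    try:
        popt, _ = curve_fit(gaussian, t, hist, p0=p0, maxfev=2000)
    except RuntimeError:
        return np.zeros_like(hist)                      # fit failed: treat the pixel as noise
    if not (sigma_range[0] < abs(popt[2]) < sigma_range[1]):
        return np.zeros_like(hist)
    return hist

# Example on a synthetic histogram with a 400 ps wide peak at 2.5 ns.
rng = np.random.default_rng(3)
t_bins = np.arange(1024) * 55e-12
fake = gaussian(t_bins, 80.0, 2.5e-9, 400e-12, 1.0) + rng.poisson(1.0, 1024)
print(clean_pixel(fake).sum() > 0)    # True: the pixel is kept
```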
The theoretical model was then created using Eqs. (7) and (9). The only input parameters that the model requires are the distance from the camera to the centre of the pulse along the optical axis $d$, the focal length of the lens $f$, the physical sensor size $A$ and the active pixel width $\Delta$. The propagation angle, pulse standard deviation, peak amplitude, and the average background noise are free parameters that the model fits to using chi-squared minimisation. If necessary, $d$ can also be set as a free parameter, although we chose to measure this to increase the accuracy of the $\theta$ measurement.
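As a reduced, self-contained illustration of this fitting step, the sketch below fits the arrival-time model of Eq. (9) (with $\theta$ and a global time offset as free parameters) to synthetic per-pixel peak times by least-squares minimisation; the full analysis additionally fits the intensity model of Eq. (7), the pulse width and the background, which is omitted here for brevity.

```python
import numpy as np
from scipy.optimize import curve_fit

C = 299_792_458.0
d, f, A = 0.258, 8.0e-3, 32 * 50e-6          # assumed geometry (m), as in Section 3

def dt_model(x, theta, t_offset):
    """Camera-space arrival time versus pixel position, Eq. (9), plus a global offset."""
    u = (2 * x - A) / (2 * f)
    dt = (d * (np.sqrt(u**2 + 1) * np.sin(theta) - u)
          / (C * (np.sin(theta) + u * np.cos(theta))) - d / C)
    return dt + t_offset

# Synthetic "measured" peak arrival times for one row: true angle 165.5 degrees, 2 ps jitter.
x = np.arange(32) * 50e-6
rng = np.random.default_rng(1)
t_meas = dt_model(x, np.radians(165.5), 2.5e-9) + rng.normal(0.0, 2e-12, x.size)

popt, pcov = curve_fit(dt_model, x, t_meas, p0=[np.radians(150.0), 2.0e-9])
theta_fit, theta_err = np.degrees(popt[0]), np.degrees(np.sqrt(pcov[0, 0]))
print(f"fitted theta = {theta_fit:.1f} +/- {theta_err:.1f} degrees")
```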
The input parameters used to model the relativistic effects are similar to those used by Laurenzis et al. [14]; however, the method for measuring the apparent velocity of the pulse differs. Laurenzis et al. measured the relative velocity of the pulse using the true propagation distance and the measured photon arrival time, whereas we use the propagation distance measured by the camera.
Camera space results for laser pulses travelling from left to right towards the SPAD array at $\theta =167.0^{\circ }\pm 0.5^{\circ }$ are shown in Fig. 5. Three frames, each with time duration 55 ps, of laser pulses travelling across row 17 of the SPAD array are shown at 2.1 ns, 2.5 ns and 2.8 ns in Figs. 5(a) to 5(c). The time taken for the pulse to travel across the SPAD array is less than the time bin width, resulting in the pulse being present for all pixels of row 17 in a single frame.
Fig. 5. Results Camera Space: $\theta =167.0^{\circ }\pm 0.5^{\circ }$: laser pulses travelling towards the SPAD array using the set-up shown in Fig. 3(a). (a)-(c) Three frames of laser pulses travelling from left to right across row 17 are shown at 2.1 ns, 2.5 ns and 2.8 ns, where the colour bar represents the number of photon counts and the distance axis is the horizontal field-of-view in real space. (d) and (e) Data and fitted model for row 17 of the SPAD camera. This shows photon counts as a function of position and time. The theoretical fit to the data calculates $\theta$ as $165.5^{\circ }\pm 0.1^{\circ }$. (f) The apparent velocity of the pulse varies from 7.0 c to 6.8 c and the timescale is the camera space time, $\textrm {t}'$. This is significantly shorter than the real space time, leading to the apparent superluminal velocities (see Visualization 1).
The data and fitted model for row 17 as a function of position and time is given in Figs. 5(d) and 5(e). The photon intensity decreases as pixel number increases in both the data and the fitted model. This is because the left hand side of the beam is further away from the SPAD array and so focuses to a smaller area on the detector with a higher energy density.
The total time taken for the pulse to travel across the SPAD array was measured to be 21.2 ps, and using Eqs. (7) and (9) to fit a 2D Gaussian function to the data, $\theta$ was calculated to be $165.5^{\circ }\pm 0.1^{\circ }$. The error in $\theta$ was calculated by numerically creating 10 statistically identical data sets and fitting to these. These additional data sets were created by sampling from a Poisson distribution with a mean and variance determined by the initial experimental data. Finally, the apparent velocity of the pulse as a function of camera space time is shown in Fig. 5(f) and varies from 7.0 c to 6.8 c. This superluminal apparent velocity is entirely due to the pulse travelling toward the camera.
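The quoted uncertainty can be reproduced with a parametric bootstrap of this kind: each histogram bin is resampled from a Poisson distribution whose mean is the measured count, and the fit is repeated. In the sketch below, `fit_theta` is a placeholder for the full chi-squared fit of Eqs. (7) and (9); a trivial stand-in estimator is used purely to make the example runnable.

```python
import numpy as np

def bootstrap_theta_error(counts, fit_theta, n_boot=10, seed=0):
    """Parametric bootstrap: resample every bin from Poisson(mean = measured count),
    refit, and return the spread of the fitted values. `fit_theta` stands in for the
    full chi-squared fit of Eqs. (7) and (9)."""
    rng = np.random.default_rng(seed)
    fits = [fit_theta(rng.poisson(counts)) for _ in range(n_boot)]
    return np.std(fits)

# Stand-in example: the "fit" simply returns the histogram's centre of mass.
counts = np.random.default_rng(1).poisson(5.0, size=1024)
centroid = lambda c: (np.arange(c.size) * c).sum() / c.sum()
print(f"bootstrap spread of the stand-in estimate: {bootstrap_theta_error(counts, centroid):.2f} bins")
```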
Using the data obtained by the SPAD array, the camera space data is converted to real space data. Figures 6(a)–6(c) show three frames of the pulse travelling across the field-of-view before intensity and relativistic corrections have been applied. The pulse appears to travel at superluminal speeds with a total propagation time of 21.2 ps and the intensity of the pulse appears to decrease as the pulse propagates due to the intensity effects described in Section 2. This corresponds to the camera space data without correction. The arrows indicate the propagation direction is left to right across the camera. Next, the intensity and relativistic effects are corrected by fitting the model to the experimental data and optimising the input parameters. Relativistic effects are corrected by calculating the pulse's path from $\theta$ and $d$, and the intensity correction is applied by normalising the raw intensity data by the fitted intensity data. Three frames from the real space movie of the pulse travelling towards the SPAD array are shown in Figs. 6(d)–6(f) at 0.0 ns, 0.4 ns and 0.7 ns. The beam diameter used for the real space reconstruction was $5~$mm and the pulse width was 15 cm. These values were taken from measurements and known values of the pulse.
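A minimal sketch of the correction step itself is given below (variable names and example values are ours): the measured per-pixel counts are divided by the fitted camera-space intensity of Eq. (7), and the photon flight time $R/c$ from each reconstructed beam segment to the camera is subtracted from the recorded arrival times.

```python
import numpy as np

C = 299_792_458.0

def correct_row(raw_counts, fitted_intensity, arrival_time, R):
    """Intensity and relativistic correction for one row of the sensor.

    raw_counts       : measured photon counts per pixel
    fitted_intensity : Eq. (7) evaluated at the fitted parameters (camera-space model)
    arrival_time     : recorded (camera-space) arrival time per pixel
    R                : camera-to-segment distance per pixel, reconstructed from theta and d
    """
    corrected_counts = raw_counts / fitted_intensity   # flat along the true path if the model holds
    true_time = arrival_time - R / C                   # remove the photon flight time to the camera
    return corrected_counts, true_time

# Example with made-up numbers for three pixels:
raw = np.array([120.0, 90.0, 60.0])
model = np.array([2.0, 1.5, 1.0])
t_rec = np.array([2.10e-9, 2.11e-9, 2.12e-9])
R = np.array([1.30, 1.10, 0.90])
print(correct_row(raw, model, t_rec, R))
```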
Fig. 6. Results Real Space $\theta =167.0^{\circ }\pm 0.5^{\circ }$: (a)-(c): Three frames of the pulse travelling across the field-of-view before intensity and relativistic corrections have been applied, where the arrows indicate the direction of propagation and $\textrm {t}'$ indicates camera space time. The pulse appears to travel at superluminal speeds and decrease in intensity as it propagates due to the relativistic and focusing effects described in Section 2 (see Visualization 2). (d)-(f) Three frames from the real space movie of the pulse travelling at $165.5^{\circ }\pm 0.1^{\circ }$ where $t$ indicates real space time. Both intensity and relativistic corrections have now been applied; the pulse intensity is approximately constant in all frames. Note that the timescale is now real space time as the pulse of light travels at c (see Visualization 3).
Camera space data was also recorded for laser pulses travelling from left to right away from the SPAD array at $\theta =13.0^{\circ }\pm 0.5^{\circ }$ using the set-up shown in Fig. 3(b). Three frames of laser pulses travelling across row 17 are shown at 1.7 ns, 2.5 ns and 3.3 ns in Figs. 7(a)–7(c). The pulse length present in each frame is shorter for light travelling away from the SPAD array, indicating lower apparent velocities.
Fig. 7. Results Camera Space: $\theta =13.0^{\circ }\pm 0.5^{\circ }$: laser pulses travelling away from the SPAD array using the set-up shown in Fig. 3(b). The pulse direction has been reversed for clarity. (a)-(c) Three frames of laser pulses travelling across row 17 of the SPAD array are shown at 1.7 ns, 2.5 ns and 3.3 ns. (d)-(e) Data and fitted model for row 17 of the SPAD camera. This shows photon counts as a function of position and time. The theoretical fit to the data calculates $\theta$ as $13.9^{\circ }\pm 0.1^{\circ }$. (f) The apparent velocity of the pulse varies from 0.19 c to 0.05 c, indicating a decelerating pulse travelling away from the SPAD array. Note that the timescale is the camera space time, $\textrm {t}'$. This is longer than the real space time, leading to the apparent subluminal velocities (see Visualization 4).
The data and model used to calculate $\theta$ are shown in Fig. 7(d) and 7(e) respectively. The total time taken for light to travel in camera space is 1.6 ns, and the curvature of the fitted function indicates the pulses appear to decelerate as they travel away from the SPAD array. Using Eqs. (7) and (9) to fit to the data, $\theta$ was estimated to be $13.9^{\circ }\pm 0.1^{\circ }$.
Finally, the apparent velocity of the pulse as a function of camera space time is shown in Fig. 7(f) and varies from 0.19 c to 0.05 c. Combined with the superluminal apparent velocities measured for pulses travelling towards the array, this gives a ratio of the fastest to slowest apparent velocities of 156. This is the largest ratio of super to subluminal apparent velocities in 4D LIF imaging; the previous highest ratio was 17, reported in Ref. [17].
Using the camera space data in Fig. 7 and the same method as described above, we can then recreate the real space data of the pulse travelling away from the SPAD array. Figures 8(a) to 8(c) show three frames from a movie of the pulse before intensity and relativistic corrections have been applied. The pulse appears to travel at subluminal speeds with a total propagation time of 1.6 ns and to decelerate as it travels across the field-of-view. Following the intensity and relativistic corrections, three frames of the real space movie are shown in Figs. 8(d) to 8(f) at times of 0.0 ns, 0.4 ns and 0.7 ns.
Fig. 8. Results Real Space $\theta =13.0^{\circ }\pm 0.5^{\circ }$: (a)-(c): Three frames of the pulse travelling across the field-of-view before intensity and relativistic corrections have been applied, where $\textrm {t}'$ indicates camera space time. The pulse appears to travel at subluminal speeds, decelerate, and increase in intensity as it propagates due to the relativistic and focusing effects described in Section 2 (see Visualization 5). (d)-(f) Three frames from the real space movie of the pulse travelling at $13.9^{\circ }\pm 0.1^{\circ }$, where $t$ indicates real space time. Both intensity and relativistic corrections have now been applied, and the pulse intensity is approximately constant in all frames. Note that the timescale is now real space time as the pulse of light travels at c (see Visualization 6).
Our final experiment demonstrates the angle dependence of scattering in air for LIF imaging, i.e., Rayleigh scattering. This is achieved by placing the pulsed laser on a rotation stage, see Fig. 9, allowing $\theta$ to be easily altered, and recording the intensity of the central pixel of the camera. The central pixel intensity is only dependent on $\theta$ as the distance between the centre of the rotation stage and SPAD array is constant for all values of $\theta$. This removes the effects of focusing and the inverse square dependence, which are both present in the first experiment. Figure 9(b) shows the observed experimental data in good agreement with the predictions of Rayleigh scattering, see Eq. (8). It should be noted that the effects of Rayleigh scattering were present in the previous experimental results but were harder to isolate.
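For reference, the angular dependence predicted by Eq. (8) can be tabulated directly. The sketch below uses the same 5° sampling as the experiment; the normalisation at 90° is an illustrative choice, not the exact processing used for Fig. 9(b).

```python
# Sketch: central-pixel intensity predicted by Eq. (8), I_c ∝ (1 + cos^2θ)/sinθ,
# normalised to its value at θ = 90° over the measured 25°-150° range.
import numpy as np

theta_deg = np.arange(25, 155, 5)               # 5° steps, as in the experiment
theta = np.deg2rad(theta_deg)
i_c = (1 + np.cos(theta) ** 2) / np.sin(theta)  # angular dependence of Eq. (8)
i_c_norm = i_c / i_c[theta_deg == 90][0]        # normalisation choice is illustrative

for ang, val in zip(theta_deg, i_c_norm):
    print(f"theta = {ang:3d} deg -> predicted relative intensity {val:.2f}")
```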
Fig. 9. Experimental setup and results for Rayleigh scattering. (a) Laser pulses travel across the SPAD array field-of-view at an angle $\theta$ set by the rotation stage. Temporal and intensity data are recorded in $5^{\circ }$ intervals between $25^{\circ }$ and $150^{\circ }$. (b) The normalised central pixel intensity ($I_{c}(\theta ;~f, ~\Delta )$) versus $\theta$, where the errors are given by the square root of the total number of photons recorded, $\sqrt n$.
Relativistic effects, focusing, and Rayleigh scattering all play a significant role in the observed signal for LIF imaging. By modelling these effects we have been able to invert SPAD array data and reconstruct the true 4D path of laser pulses, showing a strong agreement between experiment and theory. We demonstrate the validity of our model by fitting to data obtained for light travelling towards and away from a SPAD array and comparing the temporal and intensity distributions to the model. The ratio of the apparent velocity of the pulses travelling towards and away is over two orders of magnitude and is the highest ratio observed for LIF imaging.
Science and Technology Facilities Council (ST/S505407/1); Engineering and Physical Sciences Research Council (EP/S001638/1, EP/T00097X/1).
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
1. N. Abramson, "Light-in-flight recording by holography," Opt. Lett. 3(4), 121–123 (1978). [CrossRef]
2. N. Abramson, "Light-in-flight recording: high-speed holographic motion pictures of ultrafast phenomena," Appl. Opt. 22(2), 215–232 (1983). [CrossRef]
3. N. H. Abramson and K. G. Spears, "Single pulse light-in-flight recording by holography," Appl. Opt. 28(10), 1834–1841 (1989). [CrossRef]
4. N. Abramson, "Light-in-flight recording. 3: Compensation for optical relativistic effects," Appl. Opt. 23(22), 4007–4014 (1984). [CrossRef]
5. G. Häusler, J. Herrmann, R. Kummer, and M. Lindner, "Observation of light propagation in volume scatterers with 10 11-fold slow motion," Opt. Lett. 21(14), 1087–1089 (1996). [CrossRef]
6. T. Kubota, K. Komai, M. Yamagiwa, and Y. Awatsuji, "Moving picture recording and observation of three-dimensional image of femtosecond light pulse propagation," Opt. Express 15(22), 14348–14354 (2007). [CrossRef]
7. A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, "Femto-photography: capturing and visualizing the propagation of light," ACM Trans. Graph. 32(4), 1–8 (2013). [CrossRef]
8. F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, "Low-budget transient imaging using photonic mixer devices," ACM Trans. Graph. 32, 1–10 (2013).
9. K. Goda, K. Tsia, and B. Jalali, "Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena," Nature 458(7242), 1145–1149 (2009). [CrossRef]
10. Z. Li, R. Zgadzaj, X. Wang, Y.-Y. Chang, and M. C. Downer, "Single-shot tomographic movies of evolving light-velocity objects," Nat. Commun. 5(1), 3085 (2014). [CrossRef]
11. R. Warburton, C. Aniculaesei, M. Clerici, Y. Altmann, G. Gariepy, R. McCracken, D. Reid, S. McLaughlin, M. Petrovich, J. Hayes, R. Henderson, D. Faccio, and J. Leach, "Observation of laser pulse propagation in optical fibers with a spad camera," Sci. Rep. 7(1), 43302 (2017). [CrossRef]
12. K. Wilson, B. Little, G. Gariepy, R. Henderson, J. Howell, and D. Faccio, "Slow light in flight imaging," Phys. Rev. A 95(2), 023830 (2017). [CrossRef]
13. G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, "Single-photon sensitive light-in-fight imaging," Nat. Commun. 6(1), 6021 (2015). [CrossRef]
14. M. Laurenzis, J. Klein, and E. Bacher, "Relativistic effects in imaging of light in flight with arbitrary paths," Opt. Lett. 41(9), 2001–2004 (2016). [CrossRef]
15. M. Clerici, G. C. Spalding, R. Warburton, A. Lyons, C. Aniculaesei, J. M. Richards, J. Leach, R. Henderson, and D. Faccio, "Observation of image pair creation and annihilation from superluminal scattering sources," Sci. Adv. 2(4), e1501691 (2016). [CrossRef]
16. Y. Zheng, M.-J. Sun, Z.-G. Wang, and D. Faccio, "Computational 4d imaging of light-in-flight with relativistic effects," Photonics Res. 8(7), 1072–1078 (2020). [CrossRef]
17. K. Morimoto, M.-L. Wu, A. Ardelean, and E. Charbon, "Superluminal motion-assisted four-dimensional light-in-flight imaging," Phys. Rev. X 11(1), 011005 (2021). [CrossRef]
18. D.-U. Li, J. Arlt, J. Richardson, R. Walker, A. Buts, D. Stoppa, E. Charbon, and R. Henderson, "Real-time fluorescence lifetime imaging system with a 32× 32 0.13 µm cmos low dark-count single-photon avalanche diode array," Opt. Express 18(10), 10257–10269 (2010). [CrossRef]
19. D. M. Kocak, F. R. Dalgleish, F. M. Caimi, and Y. Y. Schechner, "A focus on recent developments and trends in underwater imaging," Mar. Technol. Soc. J. 42(1), 52–67 (2008). [CrossRef]
20. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, "Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging," Nat. Commun. 3(1), 745–748 (2012). [CrossRef]
21. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, "Detection and tracking of moving objects hidden from view," Nat. Photonics 10(1), 23–26 (2016). [CrossRef]
22. S. Chan, R. E. Warburton, G. Gariepy, J. Leach, and D. Faccio, "Non-line-of-sight tracking of people at long range," Opt. Express 25(9), 10109–10117 (2017). [CrossRef]
23. D. Faccio and A. Velten, "A trillion frames per second: the techniques and applications of light-in-flight photography," Rep. Prog. Phys. 81(10), 105901 (2018). [CrossRef]
24. A. Lyons, F. Tonolini, A. Boccolini, A. Repetti, R. Henderson, Y. Wiaux, and D. Faccio, "Computational time-of-flight diffuse optical tomography," Nat. Photonics 13(8), 575–579 (2019). [CrossRef]
25. D. B. Lindell and G. Wetzstein, "Three-dimensional imaging through scattering media based on confocal diffuse tomography," Nat. Commun. 11(1), 4517 (2020). [CrossRef]
N. Abramson, "Light-in-flight recording by holography," Opt. Lett. 3(4), 121–123 (1978).
N. Abramson, "Light-in-flight recording: high-speed holographic motion pictures of ultrafast phenomena," Appl. Opt. 22(2), 215–232 (1983).
N. H. Abramson and K. G. Spears, "Single pulse light-in-flight recording by holography," Appl. Opt. 28(10), 1834–1841 (1989).
N. Abramson, "Light-in-flight recording. 3: Compensation for optical relativistic effects," Appl. Opt. 23(22), 4007–4014 (1984).
G. Häusler, J. Herrmann, R. Kummer, and M. Lindner, "Observation of light propagation in volume scatterers with 10 11-fold slow motion," Opt. Lett. 21(14), 1087–1089 (1996).
T. Kubota, K. Komai, M. Yamagiwa, and Y. Awatsuji, "Moving picture recording and observation of three-dimensional image of femtosecond light pulse propagation," Opt. Express 15(22), 14348–14354 (2007).
A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, "Femto-photography: capturing and visualizing the propagation of light," ACM Trans. Graph. 32(4), 1–8 (2013).
F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, "Low-budget transient imaging using photonic mixer devices," ACM Trans. Graph. 32, 1–10 (2013).
K. Goda, K. Tsia, and B. Jalali, "Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena," Nature 458(7242), 1145–1149 (2009).
Z. Li, R. Zgadzaj, X. Wang, Y.-Y. Chang, and M. C. Downer, "Single-shot tomographic movies of evolving light-velocity objects," Nat. Commun. 5(1), 3085 (2014).
R. Warburton, C. Aniculaesei, M. Clerici, Y. Altmann, G. Gariepy, R. McCracken, D. Reid, S. McLaughlin, M. Petrovich, J. Hayes, R. Henderson, D. Faccio, and J. Leach, "Observation of laser pulse propagation in optical fibers with a spad camera," Sci. Rep. 7(1), 43302 (2017).
K. Wilson, B. Little, G. Gariepy, R. Henderson, J. Howell, and D. Faccio, "Slow light in flight imaging," Phys. Rev. A 95(2), 023830 (2017).
G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, "Single-photon sensitive light-in-fight imaging," Nat. Commun. 6(1), 6021 (2015).
M. Laurenzis, J. Klein, and E. Bacher, "Relativistic effects in imaging of light in flight with arbitrary paths," Opt. Lett. 41(9), 2001–2004 (2016).
M. Clerici, G. C. Spalding, R. Warburton, A. Lyons, C. Aniculaesei, J. M. Richards, J. Leach, R. Henderson, and D. Faccio, "Observation of image pair creation and annihilation from superluminal scattering sources," Sci. Adv. 2(4), e1501691 (2016).
Y. Zheng, M.-J. Sun, Z.-G. Wang, and D. Faccio, "Computational 4d imaging of light-in-flight with relativistic effects," Photonics Res. 8(7), 1072–1078 (2020).
K. Morimoto, M.-L. Wu, A. Ardelean, and E. Charbon, "Superluminal motion-assisted four-dimensional light-in-flight imaging," Phys. Rev. X 11(1), 011005 (2021).
D.-U. Li, J. Arlt, J. Richardson, R. Walker, A. Buts, D. Stoppa, E. Charbon, and R. Henderson, "Real-time fluorescence lifetime imaging system with a 32× 32 0.13 µm cmos low dark-count single-photon avalanche diode array," Opt. Express 18(10), 10257–10269 (2010).
D. M. Kocak, F. R. Dalgleish, F. M. Caimi, and Y. Y. Schechner, "A focus on recent developments and trends in underwater imaging," Mar. Technol. Soc. J. 42(1), 52–67 (2008).
A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, "Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging," Nat. Commun. 3(1), 745–748 (2012).
G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, "Detection and tracking of moving objects hidden from view," Nat. Photonics 10(1), 23–26 (2016).
S. Chan, R. E. Warburton, G. Gariepy, J. Leach, and D. Faccio, "Non-line-of-sight tracking of people at long range," Opt. Express 25(9), 10109–10117 (2017).
D. Faccio and A. Velten, "A trillion frames per second: the techniques and applications of light-in-flight photography," Rep. Prog. Phys. 81(10), 105901 (2018).
A. Lyons, F. Tonolini, A. Boccolini, A. Repetti, R. Henderson, Y. Wiaux, and D. Faccio, "Computational time-of-flight diffuse optical tomography," Nat. Photonics 13(8), 575–579 (2019).
D. B. Lindell and G. Wetzstein, "Three-dimensional imaging through scattering media based on confocal diffuse tomography," Nat. Commun. 11(1), 4517 (2020).
Abramson, N.
Abramson, N. H.
Altmann, Y.
Aniculaesei, C.
Ardelean, A.
Arlt, J.
Awatsuji, Y.
Bacher, E.
Barsi, C.
Bawendi, M.
Bawendi, M. G.
Boccolini, A.
Buller, G. S.
Buts, A.
Caimi, F. M.
Chan, S.
Chang, Y.-Y.
Charbon, E.
Clerici, M.
Dalgleish, F. R.
Downer, M. C.
Faccio, D.
Gariepy, G.
Goda, K.
Gregson, J.
Gupta, O.
Gutierrez, D.
Häusler, G.
Hayes, J.
Heide, F.
Heidrich, W.
Henderson, R.
Herrmann, J.
Heshmat, B.
Howell, J.
Hullin, M. B.
Jalali, B.
Jarabo, A.
Joshi, C.
Klein, J.
Kocak, D. M.
Komai, K.
Krstajic, N.
Kubota, T.
Kummer, R.
Laurenzis, M.
Lawson, E.
Leach, J.
Li, D.-U.
Li, Z.
Lindell, D. B.
Lindner, M.
Little, B.
Lyons, A.
Masia, B.
McCracken, R.
McLaughlin, S.
Morimoto, K.
Petrovich, M.
Raskar, R.
Reid, D.
Repetti, A.
Richards, J. M.
Richardson, J.
Schechner, Y. Y.
Spalding, G. C.
Spears, K. G.
Stoppa, D.
Sun, M.-J.
Thomson, R. R.
Tonolini, F.
Tsia, K.
Veeraraghavan, A.
Velten, A.
Walker, R.
Wang, X.
Wang, Z.-G.
Warburton, R.
Warburton, R. E.
Wetzstein, G.
Wiaux, Y.
Willwacher, T.
Wilson, K.
Wu, D.
Wu, M.-L.
Yamagiwa, M.
Zgadzaj, R.
ACM Trans. Graph. (2)
Appl. Opt. (3)
Mar. Technol. Soc. J. (1)
Nat. Commun. (4)
Nat. Photonics (2)
Opt. Lett. (3)
Photonics Res. (1)
Phys. Rev. A (1)
Phys. Rev. X (1)
Rep. Prog. Phys. (1)
Sci. Adv. (1)
Sci. Rep. (1)
Supplementary Material (6)
Visualization 1 Camera space movie of laser pulses travelling towards a SPAD array. Three frames from this movie are shown in figure 5.
Visualization 2 Camera space movie of laser pulses travelling towards a SPAD array shown in the lab environment. Three frames from this movie are shown in figure 6 (a) to (c).
Visualization 3 Real space movie of laser pulses travelling towards a SPAD array. Three frames from this movie are shown in figure 6.
Visualization 4 Camera space movie of laser pulses travelling away from a SPAD array. Three frames from this movie are shown in figure 7.
Visualization 5 Camera space movie of laser pulses travelling away from a SPAD array shown in the lab environment. Three frames from this movie are shown in figure 8 (a) to (c).
Visualization 6 Real space movie of laser pulses travelling away from a SPAD array. Three frames from this movie are shown in figure 8.
(1) $$I(\theta_{1},\theta_{2};\theta) = B\, I_{f}(\theta_{1}; r, f)\, I_{r}(\theta_{1}; \theta, r)\, I_{s}(\theta_{1},\theta_{2}; r)$$
(2) $$\theta_{1}(x; A, f) = \tan^{-1}\left(\frac{2x-A}{2f}\right)$$
(3) $$\theta_{2}(x; A, f, \Delta) = \tan^{-1}\left(\frac{2x-A+2\Delta}{2f}\right)$$
(4) $$I_{f}(\theta_{1}; r, f) = \frac{r\cos\theta_{1}}{f}$$
(5) $$I_{r}(\theta_{1}; \theta, r) = I_{0}\,\frac{\pi^{4}(n^{2}-1)^{2} d_{r}^{6}}{8\lambda^{4}(n^{2}+2)^{2}}\,\frac{1+\cos^{2}(\theta-\theta_{1})}{r^{2}}$$
(6) $$I_{s}(\theta_{1},\theta_{2}; r) = \frac{r\sin(\theta_{2}-\theta_{1})}{\sin(\theta-\theta_{1})}$$
(7) $$I(x; \theta, A, f, \Delta) = C\,\frac{1+\cos^{2}\left(\theta-\tan^{-1}\left(\frac{2x-A}{2f}\right)\right)}{f\sin\theta-\frac{2x-A}{2}\cos\theta}\left(\tan^{-1}\left(\frac{2x-A+2\Delta}{2f}\right)-\tan^{-1}\left(\frac{2x-A}{2f}\right)\right)$$
(8) $$I_{c}(\theta; f, \Delta) = I\left(x=\frac{A}{2}; \theta, A, f, \Delta\right) = C\,\frac{1+\cos^{2}\theta}{f\sin\theta}\tan^{-1}\left(\frac{\Delta}{f}\right) \propto \frac{1+\cos^{2}\theta}{\sin\theta}$$
(9) $$\Delta t(x; \theta, A, f) = \frac{d\left(\left(\left(\frac{2x-A}{2f}\right)^{2}+1\right)\sin\theta-\frac{2x-A}{2f}\right)}{c\left(\sin\theta+\frac{2x-A}{2f}\cos\theta\right)}-\frac{d}{c}$$
$68,078 in 1953 is worth $264,917.76 in 1984
$68,078 in 1953 has the same purchasing power as $264,917.76 in 1984. Over the 31 years this is a change of $196,839.76.
The average inflation rate of the dollar between 1953 and 1984 was 4.42% per year. The cumulative price increase of the dollar over this time was 289.14%.
So what does this data mean? It means that prices in 1984 are 289.14% higher than the average prices since 1953. A dollar in 1984 can buy 25.70% of what it could buy in 1953.
The inflation rate for 1953 was 0.75%, while the inflation rate for 1984 was 4.32%. The 1984 inflation rate is higher than the average inflation rate of 3.50% per year between 1984 and 2021.
We can look at the buying power equivalent for $68,078 in 1953 to see how much you would need to adjust for in order to beat inflation. For 1953 to 1984, if you started with $68,078 in 1953, you would need to have $264,917.76 in 1984 to keep up with inflation rates.
So if we are saying that $68,078 is equivalent to $264,917.76 over time, you can see the core concept of inflation in action. The "real value" of a single dollar decreases over time. It will pay for fewer items at the store than it did previously.
If you're interested to see the effect of inflation on various 1953 amounts, the table below shows how much each amount would be worth in 1984 based on the price increase of 289.14%.
$5.00 in 1953 $19.46 in 1984
$50.00 in 1953 $194.57 in 1984
$500.00 in 1953 $1,945.69 in 1984
$5,000.00 in 1953 $19,456.93 in 1984
$50,000.00 in 1953 $194,569.29 in 1984
$500,000.00 in 1953 $1,945,692.88 in 1984
We then replace the variables with the historical CPI values. The CPI in 1953 was 26.7 and 103.9 in 1984.
$$\dfrac{ \$68,078 \times 103.9 }{ 26.7 } = \text{ \$264,917.76 } $$
$68,078 in 1953 has the same purchasing power as $264,917.76 in 1984.
$$ \dfrac{\text{ 103.9 } - \text{ 26.7 } }{\text{ 26.7 }} \times 100 = \text{ 289.14\% } $$
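The same calculation is straightforward to script. The sketch below applies the CPI ratio used above; the CPI values are the ones quoted on this page.

```python
# Adjust a 1953 dollar amount to its 1984 equivalent using the CPI values above.
def adjust_for_inflation(amount, cpi_start, cpi_end):
    """Return the equivalent amount and the cumulative price increase in percent."""
    equivalent = amount * cpi_end / cpi_start
    cumulative_pct = (cpi_end - cpi_start) / cpi_start * 100
    return equivalent, cumulative_pct


value_1984, change_pct = adjust_for_inflation(68_078, cpi_start=26.7, cpi_end=103.9)
print(f"$68,078 in 1953 is worth ${value_1984:,.2f} in 1984 "
      f"({change_pct:.2f}% cumulative price increase)")
```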
<a href="https://studyfinance.com/inflation/us/1953/68078/1984/">$68,078 in 1953 is worth $264,917.76 in 1984</a>
"$68,078 in 1953 is worth $264,917.76 in 1984". StudyFinance.com. Accessed on January 19, 2022. https://studyfinance.com/inflation/us/1953/68078/1984/.
"$68,078 in 1953 is worth $264,917.76 in 1984". StudyFinance.com, https://studyfinance.com/inflation/us/1953/68078/1984/. Accessed 19 January, 2022
$68,078 in 1953 is worth $264,917.76 in 1984. StudyFinance.com. Retrieved from https://studyfinance.com/inflation/us/1953/68078/1984/. | CommonCrawl |
New Cosmography
<p style="text-align: left;">It is easy to see that \textbf{$\dot{H}=-H'H/a$ }where\textbf{ $H'=dH/dx$. }Then
<p style="text-align: left;">It is easy to see that $\dot{H}=-H'H/a$ where $H'=dH/dx$. Then
\[q=-\frac{\dot{H}}{H^{2} } -1=\frac{H'}{H} x-1\]
Calculating $j$, making use of $a'=-a^{2} $, we obtain
<div id=""></div>
<div id="cs-18"></div>
<div style="border: 1px solid #AAA; padding:5px;">
'''Problem 18'''
<p style= "color: #999;font-size: 11px">problem id: </p>
<p style= "color: #999;font-size: 11px">problem id: cs-18</p>
Can $dH^{n} /dz^{n} $ generally be expressed in terms of the cosmographic parameters?
Show that $$q(z)=\frac{1}{2} \frac{d\ln H^2}{d\ln(1+z)}-1$$
Show that the time derivatives of the Hubble's parameter can be expressed through the cosmographic parameters as follows:
$$\dot{H}=-H^2(1+q);$$
$$\ddot{H}=H^3(j+3q+2)$$
$$\dddot{H}=H^4\left(s-4j-3q(q+4)-6\right)$$
$$\ddddot{H}=H^5\left(l-5s+10(q+2)j+30(q+2)+24\right)$$
Show that \[j=\frac{\ddot{H}}{H^{3} } +3\frac{\dot{H}}{H^{2} } +1\]
<div class="NavFrame collapsed">
Express total pressure in flat Universe through the cosmographic parameters.
'''Problem '''
Express time derivatives $dp/dt,d^{2} p/dt^{2} ,d^{3} p/dt^{3} ,d^{4} p/dt^{4} $ through the cosmographic parameters.
<p style="text-align: left;"></p>
<p style="text-align: left;">Consequent differentiation w.r.t. time the expression $p=-H^{2} \left(1-2q\right)$ obtained in the previous problem using the expressions for the time derivatives of the cosmographic parameters
leads to
$$P=-H^2(1-2q)$$
$$\frac{dP}{dt}=-2H^3(j-1)$$
$$\frac{d^2P}{dt^2}=-2H^4(s-j+3q+3)$$
$$\frac{d^3P}{dt^3}=-2H^5\left(l-j(1+q)-3q(7+2q)-2(6+s)\right)$$</p>
$$\frac{d^4P}{dt^4}=-2H^6\left(m-3l+j^2+12j(2+q)+3\left(20+s+q(48+q(27+2q)+s)\right)\right)$$</p>
Express time derivatives $d\rho /dt,d^{2} \rho /dt^{2} ,d^{3} \rho /dt^{3} ,d^{4} \rho /dt^{4} $ through the cosmographic parameters.
<p style="text-align: left;">Consequently differentiate the Friedman equation $\rho =3H^{2} $ and use expressions for the time derivatives of the cosmographic parameters to find[[File:image8.jpg|500px]]</p>
Show that the accelerated growth of expansion rate $\dot{H}>0$ takes place under the condition $q<-1$.
<p style="text-align: left;">\[\dot{H}=-H^{2} (1+q),\quad \dot{H}>0\to q<-1\]</p>
Consider the case of a spatially flat Universe and express the scalar (Ricci) curvature and its time derivatives in terms of the cosmographic parameters.
<p style="text-align: left;">Using the expression $R=-6\left(\frac{\ddot{a}}{a} +H^{2} \right)$ and definition of the deceleration parameter $q=-\frac{\ddot{a}}{aH^{2} } $ , one finds
\[R=-6H^{2} \left(1-q\right)\]
Using the expressions
\[\begin{array}{l} {\dot{H}=-H^{2} (1+q),} \\ {\dot{q}=-H\left(j-2q^{2} -q\right)} \end{array}\]
one obtains
\[\dot{R}=6H^{3} \left(2-j+q\right)\]</p>
Following [O. Luongo and H. Quevedo, Self-accelerated universe induced by repulsive effects as an alternative to dark energy and modified gravities, arXiv:1507.06446], introduce the parameter $\lambda \equiv -\frac{\ddot{a}}{a} =qH^{2}$, so that $\lambda <0$ when the Universe is accelerating, whereas for $\lambda >0$ the Universe decelerates. Luongo and Quevedo showed that the parameter $\lambda$ can be considered as an eigenvalue of the curvature tensor defined in a special way. In particular, for the FLRW metric the curvature tensor $R$ can be expressed as a $(6\times 6)$-matrix
\[R=diag\left(\lambda ,\lambda ,\lambda ,\tau ,\tau ,\tau \right),\quad \tau \equiv \frac{1}{3} \rho \]
The curvature eigenvalues reflect the behavior of the gravitational interaction, and if gravity becomes repulsive in some regions, the eigenvalues must change accordingly; for instance, if repulsive gravity becomes dominant at a particular point, one would expect at that point a change in the sign of at least one eigenvalue. Moreover, if the gravitational field does not diverge at infinity, the eigenvalue must have an extremal at some point before it changes its sign. This means that the extremal of the eigenvalue can be interpreted as the onset of repulsion. Obtain the onset of repulsion condition in terms of cosmographic parameters.
<p style="text-align: left;">As mentioned above, the onset of repulsion is determined by an extremal of the eigenvalue, i.e.$\dot{\lambda }=0,$
\[\lambda =qH^{2} \quad \to \quad \dot{\lambda }=\dot{q}H^{2} +2qH\dot{H}=0\]
Using the result of the previous problem for $\dot{q}$ and $\dot{H}$ we find that the repulsion onset condition $\dot{\lambda }=0$reduces to
\[j=-q.\]</p>
Represent results of the previous problem in terms of the Hubble parameter and its time derivatives.
<p style="text-align: left;">Solving the system of equations
\[\begin{array}{l} {\dot{H}=-H^{2} (1+q),\quad } \\ {\ddot{H}=H^{3} \left(j+3q+2\right)} \end{array}\]
w.r.t. the variables $q$ and $j$ one finds, that the condition $j=-q$ transforms into
\[\frac{\ddot{H}}{H} =-2\dot{H}\]</p>
Obtain the following integral relation between the Hubble's parameter and the deceleration parameter
Express the derivatives $d^{2} H/dz^{2} $ , $d^{3} H/dz^{3} $ and $d^{4} H/dz^{4} $ in terms of the cosmographic parameters.
Using results of the two previous problems, one finds \[\begin{array}{l} {\frac{d^{2} H}{dz^{2}} =\frac{j-q^{2}}{(1+z)^{2}} H,} \\ {\frac{d^{3} H}{dz^{3}} =\frac{H}{(1+z)^{3}} \left(3q^{2} +3q^{3} -4qj-3j-s\right)} \\ {\frac{d^{4} H}{dz^{4}} =\frac{H}{(1+z)^{4}} \left(-12q^{2} -24q^{3} -15q^{4} +32qj+25q^{2} j+7qs+12j-4j^{2} +8s+l\right)} \end{array}\]
Find decomposition of the inverse Hubble parameter $1/H$ in powers of the red shift $z$.
\[\begin{array}{l} {\frac{d}{dz} \left(\frac{1}{H} \right)=-\frac{1}{H^{2} } \frac{dH}{dz} =-\frac{1+q}{1+z} \frac{1}{H} ;} \\ {\frac{d^{2} }{dz^{2} } \left(\frac{1}{H} \right)=2\left(\frac{1+q}{1+z} \right)^{2} \frac{1}{H} -\frac{j-q^{2} }{\left(1+z\right)^{2} } \frac{1}{H} =\frac{2+4q+3q^{2} -j}{(1+z)^{2} } \frac{1}{H} ;} \\ {\frac{1}{H(z)} =\frac{1}{H_{0} } \left[1-\left(1+q_{0} \right)z+\frac{2+4q_{0} +3q_{0}^{2} -j_{0} }{6} z^{2} +\ldots \right]} \end{array}\]
Obtain relations for transition from the time derivatives to that w.r.t. the red shift.
\[\begin{array}{l} {\frac{d^{2} }{dt^{2} } =(1+z)H\left[H+(1+z)\frac{dH}{dz} \right]\frac{d}{dz} +(1+z)^{2} H^{2} \frac{d^{2} }{dz^{2} } ,} \\ {\frac{d^{3} }{dt^{3} } =-(1+z)H\left\{H^{2} +(1+z)^{2} \left(\frac{dH}{dz} \right)^{2} +(1+z)H\left[4\frac{dH}{dz} +(1+z)\frac{d^{2} H}{dz^{2} } \right]\right\}\frac{d}{dz} -3(1+z)^{2} H^{2} } \\ {\times \left[H+(1+z)\frac{dH}{dz} \right]\frac{d^{2} }{dz^{2} } -(1+z)^{3} H^{3} \frac{d^{3} }{dz^{3} } ,} \\ {\frac{d^{4} }{dt^{4} } =(1+z)H\left[H^{2} +11(1+z)H^{2} \frac{dH}{dz} +11(1+z)H\frac{dH}{dz} +(1+z)^{3} \left(\frac{dH}{dz} \right)^{3} +7(1+z)^{2} H\frac{d^{2} H}{dz^{2} } \right. } \\ {+\left. 4(1+z)^{3} H\frac{dH}{dz} \frac{d^{2} H}{d^{2} z} +(1+z)^{3} H^{2} \frac{d^{3} H}{d^{3} z} \right]\frac{d}{dz} +(1+z)^{2} H^{2} \left[7H^{2} +22H\frac{dH}{dz} +7(1+z)^{2} \left(\frac{dH}{dz} \right)^{2} \right. } \\ {+\left. 4H\frac{d^{2} H}{dz^{2} } \right]\frac{d^{2} }{dz^{2} } +6(1+z)^{3} H^{3} \left[H+(1+z)\frac{dH}{dz} \right]\frac{d^{3} }{dz^{3} } +(1+z)^{4} H^{4} \frac{d^{4} }{dz^{4} } +(1+z)^{4} H^{4} \frac{d^{4} }{dz^{4} } .} \end{array}\]
Using results of the previous problem, we find \[j=\frac{\ddot{H}}{H^{3} } -3q-2\] Substituting \[q=-\frac{\dot{H}}{H^{2} } -1\] one finally obtains \[j=\frac{\ddot{H}}{H^{3} } +3\frac{\dot{H}}{H^{2} } +1\]
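These identities can also be verified symbolically. The sketch below checks, for an arbitrary scale factor $a(t)$, that $\dot{H}=-H^{2}(1+q)$ and $j=\ddot{H}/H^{3}+3\dot{H}/H^{2}+1$; it is an illustrative check, not part of the original problem set.

```python
# Symbolic check of two cosmographic identities for an arbitrary scale factor a(t):
#   dH/dt = -H^2 (1 + q)   and   j = H''/H^3 + 3 H'/H^2 + 1
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a')(t)

H = sp.diff(a, t) / a                           # Hubble parameter
q = -sp.diff(a, t, 2) * a / sp.diff(a, t)**2    # deceleration parameter
j = sp.diff(a, t, 3) * a**2 / sp.diff(a, t)**3  # jerk parameter

check1 = sp.simplify(sp.diff(H, t) + H**2 * (1 + q))                                 # expect 0
check2 = sp.simplify(j - (sp.diff(H, t, 2) / H**3 + 3 * sp.diff(H, t) / H**2 + 1))   # expect 0
print(check1, check2)
```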
Excluding the density $\rho $ from the Friedman equations \[\begin{array}{l} {H=\frac{1}{3} \rho ,} \\ {\frac{\ddot{a}}{a} =H^{2} +\dot{H}=-\frac{1}{6} \left(\rho +3p\right)} \end{array}\] one finds \[p=-\left(3H^{2} +2\dot{H}\right)\] Using the above obtained expression $\dot{H}=-H^{2} (1+q)$ , we obtain \[p=-H^{2} \left(1-2q\right)\]
A multi-component flood risk assessment in the Maresme coast (NW Mediterranean)
Caridad Ballesteros, José A. Jiménez & Christophe Viavattene
Natural Hazards volume 90, pages 265–292 (2018)
Coastal regions are the areas most threatened by natural hazards, with floods being the most frequent and significant threat in terms of their induced impacts, and therefore, any management scheme requires their evaluation. In coastal areas, flooding is a hazard associated with various processes acting at different scales: coastal storms, flash floods, and sea level rise (SLR). In order to address the problem as a whole, this study presents a methodology to undertake a preliminary integrated risk assessment that determines the magnitude of the different flood processes (flash flood, marine storm, SLR) and their associated consequences, taking into account their temporal and spatial scales. The risk is quantified using specific indicators to assess the magnitude of the hazard (for each component) and the consequences in a common scale. This allows for a robust comparison of the spatial risk distribution along the coast in order to identify both the areas at greatest risk and the risk components that have the greatest impact. This methodology is applied on the Maresme coast (NW Mediterranean, Spain), which can be considered representative of developed areas of the Spanish Mediterranean coast. The results obtained characterise this coastline as an area of relatively low overall risk, although some hot spots have been identified with high-risk values, with flash flooding being the principal risk process.
Coastal regions are the areas most threatened by natural hazards (EEA 2006; Kron 2013). These areas contain a large number of receptors (natural, physical, and socio-economic) (EEA 2013a), making them particularly vulnerable. Floods, in particular, are considered to be one of the most harmful phenomena, causing 69% of the overall natural catastrophic losses in Europe (CEA 2007; Llasat 2009). In Spain, the Consorcio de Compensacion de Seguros (CCS), a public corporation which provides insurance to cover "extraordinary" risks, states that 61% of its resources are required to mitigate damages incurred as a result of flood events (Insurance Compensation Consortium 2016). The greatest number of casualties and material damages have occurred in the Spanish Mediterranean (Barnolas and Llasat 2007; Camarasa-Belmonte and Soriano-García 2012). Moreover, in the absence of additional adaptation, the risk from coastal flooding is predicted to rise in the future as a result of two primary factors. First, climate change and rising sea levels are expected to increase the frequency and severity of flood events (EEA 2013b), and second, the number of potentially exposed receptors (infrastructure, socio-economic assets, population) is increasing in floodplains and/or near the sea (e.g. Hallegatte et al. 2013).
Flood risk can be defined as the product of the probability of flooding and the associated negative consequences or damages (UNISDR 2009). In order to reduce the negative consequences of flooding, it is necessary to consider the hazard, the exposure, and vulnerability values potentially affected. Traditionally, flood risk in coastal areas has been managed with the use of physical structures to protect against floods (e.g. see Saurí-Pujol et al. 2001). However, it is recognised that absolute protection is both unachievable and unsustainable due to the high costs and inherent uncertainties (Schanze 2006). As a result, there has been a shift in environmental policy in the European Union from emphasis on flood protection to flood risk management. The European Directive 2007/60/EC (EC 2007) urges flood risk analysis and flood risk management at the community level, based on local circumstances and the types of floods (river floods, flash floods, urban floods, and flooding from the sea in coastal areas) which may be present.
To correctly define coastal management policies for successful flood risk management, given the spatial and temporal nature of flood risk, broad-scale integrated assessments are essential (Dawson et al. 2009; de Moel et al. 2015). Thus, in order to manage the coastal zone at a regional scale, a holistic approach is required where, in addition to the factors determining flood risk (hazards and consequences), the various flood processes acting at different temporal scales should be considered.
In Mediterranean coastal regions, floods can be present as a result of forcing from multiple origins acting at different timescales. Flooding from a marine origin (related to changing sea level) can be the result of a marine storm associated with a short-term scale (Benavente et al. 2006; Anselme et al. 2011; Bosom and Jiménez 2011). Regarding flooding from the same source, but associated with a long-term scale, the effect of climate change can cause a permanent inundation due to SLR (Nicholls et al. 1999). Finally, regarding flooding of a terrestrial source, and caused by short-term convective rainfall at the mouth of stream systems, floods in the Mediterranean coast can be present in the form of flash floods (Llasat et al. 2010a, b; Tarolli et al. 2012).
In order to manage coastal flood risk and to develop measures for effective and long-term disaster risk reduction, it is therefore necessary to know not only the magnitude of each of the different flood components (flash flood, marine storm, SLR) and their associated consequences, but also their relative importance in relation to one another. This input is essential when analyses at a regional scale are taken into account (Bryan et al. 2001; De Pippo et al. 2008; Vinchon et al. 2009), as it allows coastal managers to identify and detect the most critical areas at risk as a result of the different flood components. This analysis then enables a more detailed assessment to be undertaken and for resources to be focused in these specific locations.
Although established approaches exist to carry out a comprehensive analysis and assessment of flood risk for each individual flood component, few studies address all components combined together (Kappes et al. 2012). Doing so presents particular challenges due to the difficulty of analysing multiple components (processes) acting at different spatial and temporal scales. In order to tackle this problem, different methodologies have been developed by means of indicators (e.g. Gornitz 1990; McLaughlin et al. 2002; Birkmann 2007; Wang et al. 2011; Balica et al. 2012; Creach et al. 2015). Through the use of indicators, it is possible to integrate risk components with homogenous units and to integrate multiple-flood hazards into one flood risk assessment. One advantage of this approach is that it allows an evaluation of all components and their associated risks using methods that do not require extensive data or a high degree of model accuracy.
Here, a methodology framed within the Source–Pathway–Receptor–Consequence (SPRC) model is presented in order to determine the potential flood risk as a result of different flood components. The methodology uses representative indicators that are suitable when comparing flood risk between different locations and also between flood types. Within this context, the main aim of this work is to introduce a framework to analyse coastal flood risk as a result of multiple components (flash flood, marine flooding, SLR) at the regional scale. The practical objective is to identify the most sensitive areas to flooding and to verify the most relevant flood component in terms of magnitude and potential for damage. With this information, coastal managers can prioritise their efforts in areas where risk management is needed the most.
This approach is applied in the Maresme (NW Mediterranean, Spain) as a paradigm of a developed coast where significant settlement and infrastructure development, coupled with intensive tourism, make the impacts of natural hazards very high (Barnolas and Llasat 2007; Llasat et al. 2010a, b; Jiménez et al. 2012).
Study area and data
On the Catalan coast (NW Mediterranean), north of the city of Barcelona, the Maresme region comprises 45 km of long, straight beaches (Fig. 1). Along the coastline, the existence of five marinas, combined with a net longshore sediment transport pattern directed southwards, has led to a disruption of sediment movement, increasing beach volume up-drift and starving down-drift beaches (Jiménez et al. 2012). In the Maresme coastal zone, another relevant geomorphological feature is the presence of ephemeral dry streams. These are characterised by a short and steep slope, which, after an intensive rainfall typical of the Mediterranean regions, may cause immediate high-energy water run-off towards the sea. This is mainly due to the complex orography with the Littoral range parallel to the coast which plays an important role in rainfall and flood production (Barnolas and Llasat 2007). The precipitation regime is characterised by a yearly distribution, with two maximum peaks in autumn and spring (Barnolas and Llasat 2007). However, high rainfall precipitation produced by convective events shows only one peak between the end of summer and autumn (Llasat 2009).
From an administrative standpoint, the coast is comprised of 16 coastal municipalities, which represent the most densely populated areas of the region. Seventy five percentage of the population of the Maresme region, around 331,000 inhabitants, are concentrated in coastal municipalities which represent 31% of the total territory (IDESCAT 2014). The socio-economic development has been based mainly on the service sector, although the proximity to different urban areas and transport routes has led to the distinction of sub-regions that reflect different territorial dynamics. Thus, the southern municipalities near Barcelona have focused on residential development, while the northern municipalities have based their economies on tourism.
The strong urban and infrastructure development, coupled with economic activities which are dependent on the coastal zone, make the region particularly vulnerable to the direct effects of flood events (Barnolas and Llasat 2007; Llasat et al. 2010a, b; Jiménez et al. 2012) and also to the indirect impacts of economic activities such as tourism. Indeed, some roads and the coastal railway have been built within the normally dry, river basins. Furthermore, due to the proximity of the railway to the sea, rail services have been affected on many occasions by wave overtopping.
In order to determinate risk, information is required for both the forcing-induced hazards and exposure values (receptors).
To characterise marine flooding, wave and water levels were taken from the hindcast SIMAR-44 database, which was generated from high-resolution modelling of the atmosphere, sea level, and waves carried out by Puertos del Estado within the HIPOCAS project (Guedes-Soares et al. 2002; Ratsimandresy et al. 2008). Data used cover the period from 1 January 1958 to 31 December 2001 as time series of meteorological tide levels, deepwater significant wave height Hm0, mean period T m, peak period T p, and the mean wave direction every 3 h. Following previous work by Bosom and Jiménez (2011), who analysed the spatial homogeneity of the wave climate in the study area, wave conditions were selected from one single location for the entire region.
In order to characterise flash floods, the annual maximum daily precipitation for a return period of 10 years (INM 2007) was selected in this work as representative of an extreme precipitation. This information yields 2.5 × 2.5 km-sized cells for all of Catalonia.
Finally, to characterise the sea level rise due to climate change, two climatic scenarios have been considered: scenario RCP 8.5 as presented within the last AR5 report (IPCC 2015), and the high-end scenario (a rise of 2 m) as the worst case and relevant for coastal management (e.g. Hinkel et al. 2015).
To measure the surface topography, and thus coastal slope, a digital elevation model of 5 × 5 m cell size (ICGC 2015) was used. Moreover, to determine physical–geomorphological features in the assessment of flash flood risk, information on the Maximum Green Vegetation Fraction by the USGS Land Cover Institute (Broxton et al. 2014) is used to describe the abundance of vegetation, and the Soil Texture developed by the European Soil Data Centre (EC 2015) with an spatial resolution of 1 × 1 km cell size has been used.
To assess the consequences and to accurately identify receptor exposure, a detailed land-use map in vector format developed by the Ecological and Forestry Applications Research Centre (CREAF) was used (Ibàñez and Burriel 2010). This map was obtained by means of a photo-interpretation analysis of aerial photographs with a scale of 1:2500 and 0.25 m pixel resolution. In addition to the land use, a large number of socio-economic factors have been considered with information provided by the Statistical Institute of Catalonia (IDESCAT 2014).
SPRC model
The basic conceptual framework used to gather all of the different components involved in coastal flood risk is the well-established Source–Pathway–Receptor–Consequence model (Fig. 2). This model was first used in natural science to describe the movement of a pollutant from a source though a conducting pathway to a potential receptor (Holdgate 1979). Later, this conceptual model was adapted for the modelling of coastal flooding (Evans et al. 2004; Gouldby and Samuels 2005) as it can easily describe a floodplain in terms of the process of flood risk propagation (Narayan et al. 2014).
SPRC model for coastal flood risk assessment
In the present application of the SPRC, coastal flood risk is presented as the result of different forcings (sources) that cause flood processes at different spatial and temporal scales (pathways) with an associated impact for the exposure values and consequences (Fig. 2).
Three main flood processes are here considered as: flash floods, marine floods, and inundation by SLR. Flash floods and marine floods are characterised as episodic events associated with hydro-meteorological, acute, and ephemeral phenomena (the inundation is transient) that are expressed in probabilistic terms. In contrast, SLR is characterised as a long-term process which causes a permanent inundation of the affected area. In this case, the forcing is characterised as the evolution over time of the sea level for different scenarios.
To characterise the receptor (the coast) and the associated flood consequences, a major number of socio-economic coastal values are considered. Hence, the consequences are the resulting value of the integration of the following five components: land use, social vulnerability, transport system, critical infrastructures, and economic activity.
Flood risk assessment
In order to assess the flood risk associated with each component, an indicator-based approach has been adopted in which the different flood components are evaluated in terms of hazard and exposure indicators across the territory. The components are combined into an absolute flood risk R abs, which is given by:
$$R_{\text{abs}} = \sum\limits_{j}^{n} {\left( {{\text{HI}}_{j} * E_{j} } \right)^{1/2} \, * \, S_{j} }$$
where HI is the hazard intensity indicator, E is the indicator measuring exposure values, S is the affected area by the analysed flood component, and j is each of the n areas in which the coast is segmented for the assessment.
Since variables contributing to hazard and exposure are measured in different units, they have been standardised to facilitate their mathematical combination. This standardisation does not affect the main objective of the analysis, i.e. to identify the most sensitive areas along the coast and to compare the contribution of each component. Following previous works (Gornitz 1990; McLaughlin and Cooper 2010; Viavattene et al. 2015), a 1–5 scale has been selected with 1 indicating the lowest contribution to risk (hazard and exposure).
To analyse the risk along the coast at regional scale from the management standpoint, the flood risk indicator is integrated in each municipality, which is the lowest administrative entity. In addition, a municipality-averaged risk value was obtained to characterise the relative importance of the risk along the coast without considering the extension of the flooding area within the municipality. The average risk, R aver, is given by:
$$R_{\text{aver}} = \frac{{\sum\nolimits_{j}^{n} {\left( {{\text{HI}}_{j} * E_{j} } \right)^{1/2} \,* \, S_{j} } }}{{\sum\nolimits_{j}^{n} {S_{j} } }}$$
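A minimal numerical sketch of the two indicators defined above ($R_{\text{abs}}$ and $R_{\text{aver}}$) for the flooded areas of a single municipality is given below; the hazard, exposure, and area values are illustrative placeholders only.

```python
# Sketch: absolute and municipality-averaged flood risk indicators (R_abs and
# R_aver as defined above) for the n flooded areas of one municipality.
# HI and E are ranked on the common 1-5 scale and S is the affected area;
# the numbers below are illustrative placeholders only.
import numpy as np

hi = np.array([4, 3, 5, 2])         # hazard intensity indicator per flooded area
e = np.array([3, 2, 4, 1])          # aggregated exposure indicator per flooded area
s = np.array([0.8, 1.2, 0.3, 2.0])  # affected area per flooded area (e.g. km^2)

r_abs = np.sum(np.sqrt(hi * e) * s)  # absolute risk, R_abs
r_aver = r_abs / np.sum(s)           # municipality-averaged risk, R_aver

print(f"R_abs = {r_abs:.2f}, R_aver = {r_aver:.2f}")
```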
In the following sections, the rationale for the ranking of each component is summarised.
Flood hazard assessment
Hazard assessment can be defined as the process which enables an understanding of the characteristics, nature, and magnitude of the considered threat. In the simplest case, a flood hazard can be characterised as a land surface covered by water. However, as was presented previously, the different flood hazards that were considered differ widely in their characteristics in relation to their physical processes and temporal and spatial scales. Thus, the flooded area associated with episodic components (storms) is temporarily inundated, being a quasi-instantaneous process (the duration of the event), whilst in the case of the long-term component, the affected area is permanently flooded, being a very slow and continuous process.
Flood hazard components associated with stochastic processes are characterised through their extreme probability distributions. Once the probability distribution of the hazard is known, a probability of occurrence is selected which should depend on the objective of the analysis. In this study, following the indications of the EU Floods Directive (EC 2007), a return period (T r) of 100 years was selected which can be considered representative of medium-probability events.
In the following sections, the assessment procedure carried out for each component is presented.
Flash flood
Flash floods are defined as extreme flood events associated with short, high-intensity rainfalls, mainly of convective origin, that occur locally (Marchi et al. 2010). Extreme events, being greater in magnitude and with a strong seasonality, occur in Mediterranean regions (Sala 2003; Gaume et al. 2009; Llasat et al. 2010a, b; Camarasa-Belmonte and Soriano-García 2012). It is in the coastal areas where these phenomena pose a considerable risk due to the high vulnerability of urban development and an increase in population and tourism during the summer season (Llasat et al. 2010a, b; Camarasa-Belmonte et al. 2011).
To carry out a flash flood assessment, a two-step approach has been developed. The first step is an analysis of the most susceptible sub-basins affected. Once identified, the second step involves a detailed hazard assessment.
To identify the most susceptible sub-basins, a modified version of the flash flood potential index (FFPI) developed by Smith (2003) has been used. This index combines different physiographic characteristics, which have a strong influence on the hydrologic response of the catchments, and therefore, the potential for flash flooding. The index includes information about the terrain slope (M), land use (L), soil type (S), and vegetation (V). Here, a modified version has been obtained by adding a new factor with information about climatology of extreme precipitation by annual maximum daily rainfall statistics (R) to account for the potential influence of local climatology (Jiménez et al. 2015). Therefore, a territory is not only sensitive to flash flooding due to its geomorphology, but also because it is subjected to a given rainfall regime that may induce such a hazard. The final modified FFPI' index is calculated as follows:
$$FFPI^{\prime} = \frac{M + L + S + V + R}{5}$$
To combine these factors, the associated raster data were ranked on the same scale from 1 to 10, using their hydrologic response as the criterion, as established by Ceru (2012). The index is calculated on raster data so that the territory is completely divided into cells, each holding the combined information previously mentioned. In order to identify the highly susceptible sub-basins, this information is then integrated by averaging the cell values at the catchment level (Fig. 3). The resulting values are classified into five categories, which allows for the identification of the areas most susceptible to the effects of flash flooding.
Flash flood potential index (FFPI′) in the Maresme region
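As a simple illustration of how the modified index defined above can be computed cell by cell and then aggregated per sub-basin, consider the sketch below; the ranked factor grids and the sub-basin labels are small illustrative arrays, not real data.

```python
# Sketch: modified flash flood potential index FFPI' = (M + L + S + V + R) / 5,
# computed cell by cell and then averaged per sub-basin. The 1-10 ranked factor
# grids and the sub-basin labels are small illustrative arrays, not real data.
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)
M, L, S, V, R = (rng.integers(1, 11, shape) for _ in range(5))  # ranked factors
basin_id = np.array([[1, 1, 2, 2],
                     [1, 1, 2, 2],
                     [3, 3, 2, 2],
                     [3, 3, 3, 3]])

ffpi = (M + L + S + V + R) / 5.0  # modified index, per cell

# Average the index over the cells of each sub-basin
for basin in np.unique(basin_id):
    print(f"sub-basin {basin}: mean FFPI' = {ffpi[basin_id == basin].mean():.2f}")
```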
Once susceptible flash flooding areas are identified, a second and more detailed hazard assessment is carried out. To do this, a standard fluvial flood analysis has been conducted. Thus, for a given return period (T r = 100 years), the flooded area and the flood depth are assessed. In this sense, flood depth is considered a good variable in flood assessment because it is relatively straightforward to link it to direct damages by means of the depth–damage curves.
For the study area, the Catalan Water Agency (ACA) provides information regarding flood depth associated with three return periods, in accordance with the European Floods Directive (2007/60/EC) recommendations. These data have been obtained by means of a hydrologic analysis using the HEC-HMS model and a hydraulic analysis made using the Guad2D model with a detailed digital elevation model (Generalitat de Catalunya 2015).
As mentioned, the flood depth variable can be used to estimate a damage value through the use of depth–damage curves. To establish the hazard intensity scale in five categories, curves proposed by Velasco et al. (2015) have been used which were obtained for the city of Barcelona. From a practical viewpoint, each flooded area, with a given depth interval, is assigned a corresponding hazard intensity value (see Table 1).
Table 1 Flood hazards intensity scale
Marine flood
This component assesses the temporary coastal flood under the influence of marine storms. In this case, the forcing is the temporary increase in mean sea level induced by low atmospheric pressure and onshore winds during the storm resulting in both wave runup and overtopping. The methodology used here has been developed within the RISC-KIT project (see Viavattene et al. 2015; Ferreira et al. 2016).
The hazard intensity along the coast has been evaluated by estimating the extreme water level climate and the extension of the area to be inundated. This was calculated for a total of 46 sectors of 1 km in length along the coast, with the most representative beach profile defined for each one. The runup, Ru, being the main water level contributor in the study area (Mendoza and Jiménez 2009), was calculated using the Stockdon et al. (2006) model on beaches and the Pullen et al. (2007) model where the coastline is formed by breakwaters. The resulting Ru time series calculated for each profile were then fitted by means of a generalized Pareto distribution (GPD), obtaining a probability distribution for representative beach slopes of the study area.
Given the characteristics of the beach profiles typified by the monotonous increase in elevation landward, and in order to calculate the extension of the area to be inundated, a bathtub approach was applied, assuming that those areas hydraulically connected to the sea and below a certain height were flooded (Poulter and Halpin 2008; Gallien et al. 2011).
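The bathtub criterion (cells below the water level that are hydraulically connected to the sea) can be sketched with a connected-component step, as below; the small elevation grid and the assumption that the sea lies along the first column are purely illustrative.

```python
# Sketch of a bathtub flood-extent calculation: a cell is flooded if its elevation
# is below the total water level AND it is hydraulically connected to the sea.
# The small DEM and the choice of the first column as the "sea" side are illustrative.
import numpy as np
from scipy import ndimage

dem = np.array([[0.2, 0.8, 1.5, 2.0],
                [0.1, 0.6, 0.4, 1.8],
                [0.3, 0.9, 1.1, 2.5]])  # elevations in metres
water_level = 1.0                       # runup / SLR water level in metres

below = dem < water_level               # candidate cells below the water level
labels, _ = ndimage.label(below)        # connected components of low-lying cells
sea_labels = np.unique(labels[:, 0])    # components touching the seaward boundary
flooded = below & np.isin(labels, sea_labels[sea_labels > 0])
print(flooded.astype(int))
```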
Subsequently, flooded areas were classified on a hazard intensity scale based on the reach of the flood extension, considering the characteristics of the beaches in the study area (see Table 1).
Inundation by sea level rise
This component assesses coastal flooding due to an increase in sea level in the long term, generally associated with climate change. In contrast to other flood hazards in which the area affected by flooding returns to pre-event conditions following a recovery time, the one caused by SLR is characterised to be a permanent inundation which implies an irreversible land loss.
In this case, the flood component is given by a water level at a given time as a function of a sea level projection. As previously mentioned, two scenarios of sea level projections are used in this study, the IPCC AR5 RCP 8.5 to represent the most likely scenario and a high-end one to represent the high-risk management perspective as a worst-case scenario (see Hinkel et al. 2015). To calculate the inundated area, a GIS-based bathtub approach was adopted in which those areas hydraulically connected to the sea at an altitude lower than a given sea level will be inundated (e.g. Poulter and Halpin 2008; Gallien et al. 2011).
For this component, the criteria to define the hazard intensity have been established based on time. Thus, it is considered that those areas submerged by water for the longest duration will be the most damaged, whereas those submerged by water during a shorter duration will incur less damage. With this assumption, those areas affected first (more time submerged) might not have time (or they will have a shorter time) for adaptation, so damages may be greater, whilst the areas which require more time to be covered by water (less time submerged) will have time to adapt to the changing territory and therefore, future damages could be smaller.
To define the corresponding hazard intensity, a continuous rank is established every 20 years from the present time (2020) to the future (2100). Thus, the flooding area affected first (2020) is assigned a value of five and so on (see Table 1). In this case, as the variable considered is time, the hazard intensity will change as different scenarios are considered. As an example, the area below +0.5 m above the mean sea level will have a larger hazard intensity associated under the high-end scenario than under the RCP8.5 one because it will be inundated in a shorter period of time.
In the assessment of the consequences, flood damages are usually divided into those caused by direct contact with the receptor and indirect damages triggered by secondary effects, principally related to the disruption of physical and economic linkages. At the same time, the methods for calculating damages vary depending on whether the damages are tangible, i.e. can be assigned a monetary value, or intangible, i.e. not traded in a market (Messner et al. 2007; Green et al. 2011; Meyer et al. 2013; Penning-Rowsell et al. 2013). As the type of receptor varies (properties, people, ecosystems, etc.), the unit of measurement changes, and therefore, many evaluation methods for assessing the consequences exist.
Within the framework of this work, it is assumed that the consequences can be represented by a set of socio-economic indicators. Following the characterisation of the study area, and bearing in mind the potential direct and indirect consequences of coastal floods, these indicators cover the following categories: land use, the social vulnerability of the population, transport systems, critical infrastructure, and business settings. These indicators are evaluated and classified in homogeneous units (Table 2) and then combined into a single exposure value (e T) by aggregating the indicator scores as shown below (Viavattene et al. 2015):
$$e_{T} = \left( e_{\text{LU}} \cdot e_{\text{SV}} \cdot e_{\text{TS}} \cdot e_{\text{CI}} \cdot e_{\text{Bs}} \right)^{1/5}$$
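In other words, e T is the geometric mean of the five indicator scores, each on a one-to-five scale. A minimal sketch, with purely illustrative scores:

```python
# Sketch: aggregate the five exposure indicators as a geometric mean.
from math import prod

def total_exposure(e_lu, e_sv, e_ts, e_ci, e_bs):
    scores = (e_lu, e_sv, e_ts, e_ci, e_bs)        # each assumed to be on a 1-5 scale
    return prod(scores) ** (1 / len(scores))

print(round(total_exposure(4, 3, 2, 5, 3), 2))     # geometric mean of the five scores
```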
Table 2 Scale of exposure values
To calculate this aggregate value, the exposure values have been characterised differently for each type of flood component. In the case of flash flooding and SLR-induced inundation, the indicators determined by their spatial distribution (i.e. land use, transport system and critical infrastructure) have been evaluated within the flood area, as the spatial extent of the flood is known. Since the hazard intensity of these two components is also spatially represented in the territory, the hazard and exposure layers (both in vector format) are intersected, providing the hazard associated with each indicator within the flooded areas. The remaining indicators, social and population vulnerability and business settings, are calculated from statistical data available for Catalonia at the municipality level, this being the minimum available scale.
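A schematic version of this vector intersection step is sketched below using geopandas; the toy polygons, attribute names and scores are placeholders and do not reproduce the actual layers used in the study.

```python
# Sketch: intersect a hazard layer with an exposure layer so each exposed
# polygon carries the hazard intensity of the flooded area it falls in.
import geopandas as gpd
from shapely.geometry import box

hazard = gpd.GeoDataFrame(
    {"hazard_intensity": [2, 4]},
    geometry=[box(0, 0, 2, 2), box(2, 0, 4, 2)],
)
landuse = gpd.GeoDataFrame(
    {"e_lu": [3, 5]},
    geometry=[box(1, 0, 3, 1), box(0, 1, 1, 2)],
)

# Keep only the parts of the land-use polygons that lie inside the flooded area
exposed = gpd.overlay(landuse, hazard, how="intersection")
print(exposed[["e_lu", "hazard_intensity", "geometry"]])
```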
To evaluate exposure values for the marine flooding component, a buffer area of 100 m along the coast is considered. Since no map of the flooded area is available, the buffer is taken as the maximum expected extent of the flood landwards of the beach. This buffer width was selected given the characteristics of this process on the Maresme coast (also applicable to most of the Mediterranean coast), so it should be adapted to the area being analysed. In areas where marine flooding can extend over large, low-lying zones, as in typical North Sea flood events (see e.g. McRobie et al. 2005; Dawson et al. 2009), the buffer can be substituted by the actual flood extent.
It should be noted here that vulnerability associated with the exposure values is not considered in this analysis. Hence, only the presence of a set of different socio-economic aspects is taken into account.
In what follows, the methodology used to evaluate and classify each of these indicators is presented.
Land use (e LU)
The land-use exposure indicator measures the different types of land use in the flood area. It is assessed using the land cover map of Catalonia (Ibàñez and Burriel 2010), which provides detailed information in vector format; land uses have been reclassified into ten classes covering the most representative uses in the study area (Table 2). Each class is assigned a value from one to five according to its relative importance. The criteria used to establish these values depend on the orientation of the analysis and the purposes of coastal management. In this study, an anthropocentric perspective has been adopted, so higher values were assigned to land uses where flood damages affect economic activities (see Table 2). This indicator does not consider the physical vulnerability of the different land uses; it reflects the exposed area and the importance value associated with each land use.
Population and social vulnerability (e SV)
In order to measure intangible impacts on the flood-affected population, a social vulnerability index (SVI) has been applied. This index represents the relative vulnerability of various communities to long-term health impacts and to financial recovery from a flood event (Viavattene et al. 2015). As there are no previous studies for the area describing how the population copes with flood events, the characteristics and variables suggested by Tapsell et al. (2002) have been considered. The variables selected, listed below, represent the socio-economic characteristics of the study area. Amongst the social variables, the long-term sick (a), single parents (b) and the elderly (c) were taken into account. Financial deprivation variables were represented by unemployment (d), overcrowded households (f), non-car ownership (g), and non-home ownership (h). To create the social vulnerability index, each variable has to be standardised following different transformation methods to minimise the skewness and kurtosis of its distribution (see Tapsell et al. 2002). As these authors suggest, the aggregation method adopted gives more weight to the social variables than to the financial deprivation variables. Equation 5 presents the aggregation method used:
$$e_{\text{SV}} = a + b + c + 0.25\,(d + f + g + h)$$
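A direct transcription of this aggregation is sketched below; the input values are illustrative and are assumed to have already been standardised as described above.

```python
# Sketch: social vulnerability index with full weight on the social
# variables (a-c) and a 0.25 weight on the financial deprivation variables (d-h).
def social_vulnerability(a, b, c, d, f, g, h):
    return a + b + c + 0.25 * (d + f + g + h)

print(social_vulnerability(a=0.8, b=0.3, c=1.1, d=0.5, f=0.2, g=0.4, h=0.6))
```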
An important consideration when applying the SVI is the level of data aggregation. For the Maresme case study, due to small municipality dimensions in terms of settlement and built-up areas, and the fact that this is a regional study, the most appropriate available data (IDESCAT 2014) are at the municipality level. This can be considered the minimum scale for obtaining SVI values. However, if data are available at smaller scales (e.g. census level), it should be implemented at this level as the extension of the flood plain is often narrow and short.
Since this study represents a regional assessment, the social vulnerability indicator (e SV) value obtained for each municipality has been reclassified on a relative basis into five classes using the natural breaks method. This is considered an adequate method to identify groups with similar values, whilst maximising the differences between classes (see Table 2).
Transport system (e Ts)
Another key element when assessing the consequences of flood events is the disruption of the transport system. To obtain a representative indicator of the direct impact of flood events on this infrastructure, the total linear metres of railway and motorway have been considered for the components where the flooded area is known, i.e. flash flood and SLR-induced inundation. The total value is ranked into five classes, considering that damages will be higher when a greater length of the transport system is exposed at the regional scale. For the marine flooding component, where the total flooded area is not known, the indicator is built from the presence or absence of different transport systems within the buffer area. This is ranked into five classes, taking into account the relative importance of each transport infrastructure to the overall system and the probable systemic impacts resulting from its disruption (see Table 2).
Critical infrastructure (e CI)
This component assesses the presence of utilities providing essential services whose interruption or cessation of operation due to flooding would have serious consequences for the community, both inside and outside the affected area. The presence of critical infrastructure in the flood area has been identified from the information provided in the land-use map. Once identified, these infrastructures were classified on a scale from one to five according to their relative importance to the community at different spatial levels (see Table 2).
Business settings (e Bs)
To assess the impact of coastal floods on business activity, two indices were selected in this case. For marine processes (such as marine flood and SLR-induced inundation) on the coastal fringe, tourism is the most representative economic sector involved. To obtain a representative value of this activity, a tourist index developed by the Spanish bank La Caixa (2013) was used. This is a relative index based on the tax rate (Business Activities Tax), which takes into account the number of rooms and the annual occupancy and category of tourist establishments. The index value is the percentage share of each municipality relative to the entire nation, which can be expressed as:
$$\text{Tourist index} = \frac{\text{Municipality tax rate}}{\text{Total tax rates in Spain}} \times 100,000$$
Damages caused by flash floods can also occur inland, where other types of business may be located. An industrial index, also developed by La Caixa (2013), was therefore used to assess the consequences of flash floods for business activity. This index is based on tax revenues corresponding to industrial activities and reflects the relative weight of industry in each municipality with respect to all of Spain, as follows:
$$\text{Industrial index} = \frac{\text{Municipality tax rate}}{\text{Total tax rates in Spain}} \times 100,000$$
Both indices have been ranked on a relative basis into five categories by applying an equal intervals method.
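The two indices and the equal-interval ranking can be sketched as follows; the tax figures and the national total are placeholder numbers, not data from La Caixa (2013).

```python
# Sketch: business-activity index as a share of the national tax base
# (scaled by 100,000), then ranked into five equal-interval classes.
import numpy as np

def activity_index(municipal_tax: float, national_total: float) -> float:
    return municipal_tax / national_total * 100_000

indices = np.array([activity_index(t, 5.0e9) for t in (1.2e5, 3.4e5, 8.1e5, 2.0e6)])

# Equal-interval ranking into classes 1-5
edges = np.linspace(indices.min(), indices.max(), 6)
classes = np.clip(np.digitize(indices, edges[1:-1]) + 1, 1, 5)
print(indices.round(2), classes)
```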
Figure 3 shows the FFPI' along the Maresme coast, integrated at the sub-basin level. These results indicate that, within the same region, there are differences in the level of susceptibility to flash flooding, mainly due to geomorphological characteristics such as slope and soil type. This first step has therefore allowed the most hazardous areas along the coast to be identified.
In addition, the potential flooding areas identified by the Catalan Water Agency (Generalitat de Catalunya 2015) in order to implement the EU Floods Directive (Directive 2007/60/EC) are also presented in Fig. 3. These areas were identified by combining geomorphological studies based on visual analyses (topography and morphology), flooding studies, aerial photographs, and field visits. This information suggests that there is a strong correlation between the largest flood areas and medium and high FFPI' levels, which allows for a validation of this index as a first approach for identifying areas prone to flash floods.
In the second step, the sub-basins likely to be affected by flash flooding, i.e. those with medium and high FFPI' levels, were chosen for a detailed flash flood assessment. Data on the sub-basins prone to flash flooding were obtained from the Catalan Water Agency (Generalitat de Catalunya 2015), which uses hydrologic and hydraulic studies to determine the flood area and variables such as flood depth or flood velocity for three return periods. In this case, the flood depth information for a return period of 100 years has been used. Figure 4 shows the extent of the flash floods along the study area, ranked into a flash flood hazard indicator. If areas are classified in terms of the average hazard intensity (flood depth values), the most hazardous area is located in Sant Pol de Mar (12), followed by the Argentona stream, with the highest values in Mataró (6). When the hazard is evaluated in absolute terms, i.e. taking into account the total affected area, the most hazardous locations are Santa Susanna (15) and Pineda de Mar (14).
Flash flood hazard for the most susceptible sub-basins in the Maresme region
The exposure indicators obtained for these areas show medium values (Fig. 5), with Mataró (6) presenting the highest ones. This is due to the number of assets of high local and regional relevance that are affected (e.g. railway, road, factories and a water treatment plant), as well as to the high social and population vulnerability to floods.
Flash flood exposure and risk for the most susceptible sub-basin in the Maresme region
Combining the hazard and exposure values, the municipalities of Mataró (6) and Sant Pol de Mar (12) show the highest average risk values, whereas Santa Susanna (15) and Pineda de Mar (14) show the highest absolute risk.
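As an illustration only, one plausible way of combining the indicators into "average" and "absolute" risk values is sketched below; the product form and the area weighting are assumptions made for this sketch, not necessarily the exact formulas used in the study.

```python
# Illustrative sketch (assumed formulas): risk as the product of hazard
# intensity and exposure, with "absolute" risk additionally weighted by
# the flooded area.
def average_risk(hazard_intensity: float, exposure: float) -> float:
    return hazard_intensity * exposure                       # both on 1-5 scales

def absolute_risk(hazard_intensity: float, exposure: float, flooded_area_km2: float) -> float:
    return average_risk(hazard_intensity, exposure) * flooded_area_km2

print(average_risk(4, 3.2), absolute_risk(4, 3.2, 0.75))
```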
Marine flood hazard intensity is presented in Fig. 6 for the 46 sectors of about 1 km in length along the coastline, within the buffer area considered. The results show that the study area can generally be considered to have a low level of marine flood hazard. The exceptions are a few sectors where the hazard intensity is high due to the combination of large runup values and low topographic levels; these sectors are the areas most susceptible to marine storm-induced runup. However, since the goal of this study is to undertake a comparative analysis amongst floods, the results at the sector level have been integrated at the municipality level. It should be noted that, due to the short length of these sectors and the local scope of marine flooding, important hot spots can be hidden at this level.
Marine flood hazard intensity, exposure, and risk along the Maresme coast
In Fig. 6, exposure and risk indicators at the municipality level are also shown. In order to obtain a detailed representation of the various exposure values considered, Fig. 7 shows each individual exposure indicator together with the total exposure indicator. The results indicate that, although the total values of the exposure index are quite homogeneous across the municipalities, differences can be observed when exposure values are evaluated individually, as is the case for the social vulnerability indicator (e SV) and the business settings indicator (e Bs).
Exposure values for marine flooding (within 100 m buffer) and total exposure index for the Maresme coastal municipalities
Risk values indicate a low risk of marine flooding. Nevertheless, Cabrera de Mar (5), Mataró (6) and Malgrat de Mar (16) are the municipalities at greatest risk within the study area.
Figure 8 presents the SLR hazard within the study area for the RCP 8.5 and high-end scenarios. The extent of inundation is very small: it is only appreciable in the north, where the low-lying area of the Tordera Delta (16) is located, and under the high-end scenario. With the exception of the Tordera delta, these results indicate the relatively low importance of SLR in terms of inundation, as the flood-prone areas are small and restricted to beaches.
SLR hazard intensity along the Maresme coast. RCP 8.5 (top) and H.E. scenario (bottom)
The largest increase in inundated area from the RCP 8.5 to the high-end scenario occurs in the higher parts of the territory. These require a longer time to be inundated and, as a consequence, given the hazard intensity classification adopted, the newly affected areas are essentially of low risk (Fig. 9).
a Permanent inundated area at 2100 in the Maresme coastal municipalities due to different SLR scenarios. b Permanent incremental inundated area due to different SLR scenarios and the associated hazard intensity for the entire Maresme
This reflects the geomorphological characteristics of the coastline, which is typified by steep profile slopes (the exception being the low-lying area of the Tordera delta). Figure 9 presents the total area affected over time for each scenario, together with the corresponding hazard intensity, as well as the total flooded area by 2100 for each municipality and scenario. The results emphasise the importance of the SLR component in the north of the region, in the municipality of Malgrat de Mar (16).
Integrated risk
Figure 10 shows an integrated representation of flood risk along the Maresme coast. The average risk magnitudes associated with each component are fairly similar amongst the municipalities, with no single component standing out apart from local singularities. This homogeneous pattern is due to the spatial distribution of both the exposure values and the hazard intensity.
Integrated flood risk components along the Maresme coastal municipalities (dimensionless). a Average flood risk (classified between 1 and 5). b Absolute flood risk
Nevertheless, when the flooded area is considered, specific locations of interest can be identified along the territory for each component. For instance, although the average risk associated with flash floods tends to be similar to that of the other components, in terms of the affected flood area it represents an increased risk, especially within the municipalities of Pineda de Mar (14) and Santa Susanna (15). Moreover, in the municipalities identified as susceptible to flash floods, the risk associated with this component is higher than in the other municipalities.
Regarding SLR, the risk can be considered low, except under the high-end scenario in the municipality of Malgrat de Mar (16). This municipality is a singularity due to the flood plain of the Tordera Delta, and it is here that SLR represents the most important flood component. However, this can only be appreciated when the absolute risk is considered, as no differences are found between scenarios when the average risk is taken into account. This is because, although the exposure index is higher in the high-end scenario (the flood area is significantly larger), the relative hazard intensity, taken over the total flood area affected, is much lower, which results in a similar average risk for both scenarios.
Finally, regarding marine flood risk, the values obtained indicate a uniform pattern along the coast, with low absolute values and no particular municipality standing out. However, it is important to bear in mind that, as the values have been aggregated at the municipal level, small-scale areas can be found locally where the risk associated with marine storms is significantly higher (Fig. 6).
Discussion and conclusions
In this work, a coastal flood risk assessment at a regional scale has been presented, using indicators that allow the characterisation of different coastal flood hazards and their consequences. The adopted approach estimates the risk by combining the hazard and the values at exposure; consequently, it is equivalent to the maximum potential damage that may occur if an area is inundated (Messner and Meyer 2006). This approach is acceptable at the mesoscale (regional) level and when the objective of the analysis is to identify the areas most sensitive to the hazards considered. A similar approach to assessing flood risk at the regional level in Emilia-Romagna (Italy) can be seen in Perini et al. (2016). In the absence of depth–damage curves, they assessed the damage using land-use maps, assigning a vulnerability to each land-use type as a function of its characteristics. Thus, they associated the highest vulnerability (and damage) with infrastructure such as urban areas and industrial zones, whereas areas without human occupation, such as beaches, were considered less exposed and given the lowest value. Exposed values in inundated urban areas are considered to be fully affected (damaged), without taking the depth of inundation into account. This approach is also used by Rizzi et al. (2017) to develop a regional risk assessment.
On the other hand, if the analysis is carried out at the local scale, or if final decisions on specific protection and adaptation measures have to be taken, the damage potential has to be converted into expected damages by introducing the susceptibility of the elements at risk. This is done using relative damage functions, which give the expected degree of damage, usually as a function of the inundation depth (e.g. Merz et al. 2007). In any case, both approaches can be considered complementary, i.e. starting with a regional-scale analysis such as the one presented here and, in the locations identified as high-risk areas, launching a standard flood risk assessment with specific depth–damage curves. An example of this two-step approach is the Coastal Risk Assessment Framework adopted in the RISC-KIT project (Van Dongeren et al. 2014; Viavattene et al. 2015).
The proposed methodology analyses the different flood contributions in an uncoupled manner, i.e. by assessing the importance of each component individually. Within a regional flood risk analysis framework, this makes it possible to identify their relative intensity along the territory, i.e. which municipalities are the most sensitive to each flood component. When comparing the magnitude of the individual components, it has to be borne in mind that the associated potential damage is implicitly assumed to be independent of the flood source. However, the comparison is made between transient (marine and flash flood) and permanent (sea level rise) inundation components, and the way they induce damage is different: in transient flood events velocity is an important variable, whereas final inundation depth is the only variable to be considered in permanent flooding. In spite of this, the approach is valid for obtaining a first estimate of the relative potential importance of each component. In this sense, this type of comparison can also be useful for a multi-component risk perception analysis (e.g. Harvatt et al. 2011).
The results obtained characterise the Maresme coast as an area with a relatively low risk of being affected by floods. However, some hot spots are found along its coastline, where risk levels increase significantly with respect to adjacent areas. These hot spots are located in different areas depending on the flood component considered. The area has a very low sensitivity to SLR due to its topography, characterised by high and steep beaches that protect the hinterland against long-term permanent inundation. As a consequence, the SLR-induced flood risk is very low in the entire area with the exception of the Tordera delta, which is the only low-lying coastal area along the Maresme. This concentration of at-risk SLR areas in a few locations associated with river deltas and coastal plains is typical of the entire Catalan coast (Oltra et al. 2011) as well as of most Mediterranean countries (e.g. Bondesan et al. 1995; Paskoff 2004; Snoussi et al. 2009). Even in this case, the flood risk is only significant under the high-end scenario because, despite being the lowest area in the region, the Tordera delta is higher than other low-lying plains in Catalonia, which are formed by finer sediments and have lower elevations (e.g. Alvarado-Aguilar et al. 2012).
With regard to marine floods, the spatial distribution of the associated integrated risk is quite homogeneous along the region, reflecting both the general beach morphology and the nearly uniform alongshore distribution of values at exposure. In this case, hot spots are essentially controlled at the small spatial scale. As in most of the Mediterranean coast, the water level during coastal storms is mainly controlled by wave-induced runup (Mendoza and Jiménez 2009; Armaroli et al. 2012; Gervais et al. 2012). Because of this, spatial variations in flood occurrence are mainly controlled by variations in beach slope, which affects the runup magnitude, and in beach/dune elevation, which determines how much floodwater volume can enter the hinterland. It has to be highlighted that, although the storm-induced water level is much higher than SLR, the extent of the flood-prone area is much smaller, because this is a transient flood event with a limited floodwater volume. This difference means that using the bathtub approach to delineate the storm-induced flood-prone area would likely overestimate the magnitude of the flood event, especially in low-lying areas. To overcome this, a coastal buffer has been defined behind the beach, indicating the area usually exposed to this kind of event (see also Ferreira et al. 2016).
Finally, when present, flash floods have been identified as the flood component inducing the highest risk within the region. This is consistent with the analysis of Barredo (2007) of major flood disasters in Europe, where "major" means that the number of registered casualties is greater than 70 and/or the direct damage is larger than 0.005% of EU GDP in the year of the disaster. In that study, all the events classified as major disasters occurring in Spain were flash floods. Moreover, the Maresme has been identified as one of the areas along the Catalan coast most affected by this type of flood (Llasat et al. 2010a, b). The reason for the high risk associated with flash floods is that, in addition to their probability of occurrence, they occur in ephemeral river courses that usually cross urban areas, i.e. the main villages of each municipality. Because of this, the values at exposure are extremely high and, as a consequence, the corresponding risk is also very high (e.g. Vinet 2008). This is also common to most of the Mediterranean coastal zone (Ruin et al. 2008; Llasat et al. 2010a, b; Faccini et al. 2015).
In spite of the dominance of flash floods in the locations where they are present, it has to be stressed that this component and the marine flood component are determined in terms of an event with a given probability of occurrence. Some exceptional situations may therefore occur, usually associated with very low probability events, in which this rule is inverted. An example is the impact of storm Xynthia in France, during which a much larger than expected number of casualties occurred and the damage incurred exceeded that induced by very large flash flood events (e.g. Vinet et al. 2012).
From the results obtained, it can be concluded that the proposed multi-component flood risk assessment methodology is able to identify the management units (municipalities) sensitive to each flood component at the regional scale. Although the identified at-risk locations need further detailed small-scale risk assessment, the results indicate the need for differentiated flood risk management along the Maresme coast.
Alvarado-Aguilar D, Jiménez JA, Nicholls RJ (2012) Flood hazard and damage assessment in the Ebro Delta (NW Mediterranean) to relative sea level rise. Nat Hazards 62:1301–1321. doi:10.1007/s11069-012-0149-x
Anselme B, Durand P, Thomas YF, Nicolae-Lerma A (2011) Storm extreme levels and coastal flood hazards: a parametric approach on the French coast of Languedoc (district of Leucate). C R Geosci 343:677–690
Armaroli C, Ciavola P, Perini L, Calabrese L, Lorito S, Valentini A, Masina M (2012) Critical storm thresholds for significant morphological changes and damage along the Emilia-Romagna coastline, Italy. Geomorphology 143:34–51. doi:10.1016/j.geomorph.2011.09.006
Balica SF, Wright NG, van der Meulen F (2012) A flood vulnerability index for coastal cities and its use in assessing climate change impacts. Nat Hazards 64:73–105. doi:10.1007/s11069-012-0234-1
Barnolas M, Llasat MC (2007) A flood geodatabase and its climatological applications: the case of Catalonia for the last century. Nat Hazards Earth Syst 7:271–281. doi:10.5194/nhess-7-271-2007
Barredo JI (2007) Major flood disasters in Europe: 1950–2005. Nat Hazards 42:125–148. doi:10.1007/s11069-006-9065-2
Benavente J, Del Rio L, Gracia FJ, Martinez-Del-Pozo JA (2006) Coastal flooding hazard related to storms and coastal evolution in Valdelagrana spit (Cadiz Bay Natural Park, SW Spain). Cont Shelf Res 26:1061–1076. doi:10.1016/j.csr.2005.12.015
Birkmann J (2007) Risk and vulnerability indicators at different scales: applicability, usefulness and policy implications. Environ Hazards 7:20–31. doi:10.1016/j.envhaz.2007.04.002
Bondesan M, Castiglioni GB, Elmis C, Gabbianellis G, Marocco R, Pirazzoli PA, Tomasin A (1995) Coastal areas at risk from storm surges and sea-level rise in northeastern Italy. J Coast Res 11:1354–1379
Bosom E, Jiménez JA (2011) Probabilistic coastal vulnerability assessment to storms at regional scale—application to Catalan beaches (NW Mediterranean). Nat Hazard Earth Syst 11:475–484. doi:10.5194/nhess-11-475-2011
Broxton PD, Zeng X, Scheftic W, Troch PA (2014) A MODIS-based 1 km Maximum green vegetation fraction dataset. J Appl Meteorol Clim 53:1996–2004. doi:10.1175/JAMC-D-13-0356.1
Bryan B, Harvey N, Belperio T, Bourman B (2001) Distributed process modeling for regional assessment of coastal vulnerability to sea-level rise. Environ Model Assess 6:57–65. doi:10.1023/A:1011515213106
Camarasa-Belmonte AM, Soriano-García J (2012) Flood risk assessment and mapping in peri-urban Mediterranean environments using hydrogeomorphology. Application to ephemeral streams in the Valencia region (Eastern Spain). Landsc Urban Plan 104:189–200. doi:10.1016/j.landurbplan.2011.10.009
Camarasa-Belmonte AM, López-García MJ, Soriano-García J (2011) Mapping temporally-variable exposure to flooding in small Mediterranean basins using land-use indicators. Appl Geogr 31:136–145. doi:10.1016/j.apgeog.2010.03.003
Ceru J (2012) The flash flood potential index for Pennsylvania. In: 2012 ESRI federal GIS conference. http://proceedings.esri.com/library/userconf/feduc12/papers/user/JoeCeru.pdf
Creach A, Pardo S, Guillotreau P, Mercier D (2015) The use of a micro-scale index to identify potential death risk areas due to coastal flood surges: lessons from storm Xynthia on the French Atlantic coast. Nat Hazards 77:1679–1710. doi:10.1007/s11069-015-1669-y
Dawson RJ, Dickson ME, Nicholls RJ, Hall JW, Walkden MJA, Stansby PK, Mokrech M, Richards J, Zhou J, Milligan J, Jordan A, Pearson S, Rees J, Bates PD, Koukoulas S, Watkinson AR (2009) Integrated analysis of risks of coastal flooding and cliff erosion under scenarios of long term change. Clim Change 95:249–288. doi:10.1007/s10584-008-9532-8
de Moel H, Jongman B, Kreibich H, Merz B, Penning-Rowsell E, Ward PJ (2015) Flood risk assessments at different spatial scales. Mitig Adapt Strateg Glob Change 20:865–890. doi:10.1007/s11027-015-9654-z
De Pippo T, Donadio C, Penneta M, Petrosino C, Terlizzi F, Valente A (2008) Coastal hazard assessment and mapping in Northern Campania, Italy. Geomorphology 97:451–466. doi:10.1016/j.geomorph.2007.08.015
European Commission (EC) (2007) Directive 2007/60/EC of the European Parliament and of the Council of 23 October 2007 on the assessment and management of flood risks. Off J L 288:27–34
European Commission (EC) (2015) European Soil Data Centre, ESDAC. https://esdac.jrc.ec.europa.eu/resource-type/datasets. Accessed 10 May 2015
European Environment Agency (EEA) (2006) The changing faces of Europe's coastal areas. Report No 6, Copenhagen
European Environment Agency (EEA) (2013a) Balancing the future of Europe's coasts-knowledge base for integrated management. Report No 12. Copenhagen. doi:10.2800/99116
European Environment Agency (EEA) (2013b) Late lessons from early warnings: science, precaution, innovation. Floods: lessons about early warning systems. Report No 1, Copenhagen, pp 347–368. doi:10.2800/73322
European Insurance and Reinsurance Federation (CEA) (2007) Reducing the social and economic impact of climate change and natural catastrophes insurance solutions and public–private partnerships. CEA, Brussels
Evans E, Ashley R, Hall JW, Penning-Rowsell E, Sayers P, Thorne C, Watkinson A (2004) Foresight future flooding: scientific summary: volume I—future risks and their drivers. Office of Science and Technology, London
Faccini F, Luino F, Sacchini A, Turconi L (2015) Flash flood events and urban development in Genoa (Italy): lost in translation. In: Lollino G, Manconi A, Guzzetti F, Culshaw M, Bobrowsky P, Luino F (eds) Engineering geology for society and territory, vol 5. Springer, pp 797–801
Ferreira O, Viavattene C, Jiménez JA, Bole A, Plomaritis T, Costas S, Smets S (2016) CRAF phase 1, a framework to identify coastal hotspots to storm impacts. In: FLOODrisk 2016—3rd European conference on flood risk management, E3S web of conferences 7, 11008. doi:10.1051/e3sconf/20160711008
Gallien TW, Schubert JE, Sanders BF (2011) Predicting tidal flooding of urbanized embayments: a modeling framework and data requirements. Coast Eng 58:567–577. doi:10.1016/j.coastaleng.2011.01.011
Gaume E, Bain V, Bernardara P, Newinger O, Barbuc M, Bateman A, Blaskovicova L, Blöschl G, Borga M, Dumitrescu A, Daliakopoulos I, Garcia J, Irimescu A, Kohnova S, Koutroulis A, Marchi L, Matreata S, Medina V, Preciso E, Sempere-Torres D, Stancalie G, Szolgay J, Tsanis I, Velasco D, Viglione A (2009) A compilation of data on European flash floods. J Hydrol 367:70–78. doi:10.1016/j.jhydrol.2008.12.028
Generalitat de Catalunya (2015) Agència Catalana de l'Aigua (ACA). http://aca-web.gencat.cat/aca/appmanager/aca/aca/ Accessed 10 Jan 2015
Gervais M, Balouin Y, Belon R (2012) Morphological response and coastal dynamics associated with major storm events along the Gulf of Lions coastline, France. Geomorphology 143:69–80. doi:10.1016/j.geomorph.2011.07.035
Gornitz VM (1990) Vulnerability of the East Coast, U.S.A. to future sea level rise. J Coast Res 9:201–237
Gouldby B, Samuels P (2005) Language of risk, project definitions, FLOODsite project report T32-04-01, EU GOCE-CT- 2004-505420. http://www.floodsite.net/html/partner_area/project_docs/FLOODsite_Language_of_Risk_v4_0_P1.pdf
Green PC, Viavattene C, Thompson P, Green C (2011) Guidance for assessing flood losses CONHAZ report (September), pp 1–86
Guedes-Soares C, Weisse R, Carretero JC, Alvarez E (2002) A 40 years hindcast of wind, sea level and waves in european waters. In: Proceedings of the 21st international conference on offshore mechanics and arctic engineering, pp 669–675
Hallegatte S, Green C, Nicholls RJ, Corfee-Morlot J (2013) Future flood losses in major coastal cities. Nat Clim Change 3:802–806
Harvatt J, Petts J, Chilvers J (2011) Understanding householder responses to natural hazards: flooding and sea-level rise comparisons. J Risk Res 14:63–83. doi:10.1080/13669877.2010.503935
Hinkel J, Jaeger C, Nicholls RJ, Lowe J, Renn O, Peijun S (2015) Sea-level rise scenarios and coastal risk management. Nat Clim Change 5:188–190. doi:10.1038/nclimate2505
Holdgate MW (1979) A perspective of environmental pollution. Cambridge University Press, Cambridge
Ibàñez JJ, Burriel JA (2010) Mapa de cubiertas del suelo de Cataluña: características de la tercera edición y relación con SIOSE. Tecnol de La Inf Geogr La Inf Geogr Al Serv de Los Ciudad 3:179–198
ICGC (2015) Institut Cartografic i Geològic de Catalunya. Generalitat de Catalunya. www.icc.cat. Accessed 9 Feb 2015
IDESCAT (2014) Anuari Estadístic de Catalunya. Institut d'Estadística de Catalunya. Generalitat de Catalunya. www.idescat.cat. Accessed 15 Dec 2014
Instituto Nacional de Meteorología (INM) (2007) Estudio sobre precipitaciones máximas diarias y periodos de retorno para un conjunto de estaciones pluviométricas seleccionadas de España. CD
Insurance Compensation Consortium (2016). http://www.consorseguros.es/web/inicio Accessed 30 July 2016
IPCC (2015) Climate Change 2014: Synthesis report. Contribution of working group I,II and III to the fifth assessment report of the intergovernmental panel on climate change. IPCC, Geneva, Switzerland
Jiménez JA, Sancho-García A, Bosom E, Valdemoro HI, Guillén J (2012) Storm-induced damages along the Catalan coast (NW Mediterranean) during the period 1958–2008. Geomorphology 143–144:24–33. doi:10.1016/j.geomorph.2011.07.034
Jiménez JA, Armaroli C, Berenguer M, Bosom E, Ciavola P, Ferreira O, Plomaritis H, Roelvink D, Sanuy M, Sempere D (2015) Coastal hazard assessment module. RISC-KIT deliverable. D2.1. http://www.risckit.eu/np4/file/23/RISCKIT_D.2.1_Coastal_Hazard_Asssessment.pdf
Kappes MS, Keiler M, von Elverfeldt K, Glade T (2012) Challenges of analyzing multi-hazard risk: a review. Nat Hazards 64:1925–1958. doi:10.1007/s11069-012-0294-2
Kron W (2013) Coasts: the high-risk areas of the world. Nat Hazards 66:1363–1382. doi:10.1007/s11069-012-0215-4
La Caixa (2013) Anuario Económico de España, Caja de Ahorros y Pensiones de Barcelona, Barcelona. www.anuarioeco.lacaixa.comunicacions.com/java/X?cgi=caixa.anuari99.util.ChangeLanguageandlang=esp. Accessed 2 Oct 2015
Llasat MC (2009) A press database on natural risks and its application in the study of floods in Northeastern Spain. Nat Hazards Earth Syst 2000:2049–2061
Llasat MC, Llasat-Botija M, Prat MA, Porcu F, Price C, Mugnai A, Lagouvardos K, Kotroni V, Katsanos D, Michaelides S, Yair Y (2010a) High-impact floods and flash floods in Mediterranean countries: the FLASH preliminary database. Adv Geosci 23:47–55
Llasat MC, Llasat-Botija M, Rodriguez A, Lindbergh S (2010b) Flash floods in Catalonia: a recurrent situation. Adv Geosci 26:105–111. doi:10.5194/adgeo-26-105-2010
Marchi L, Borga M, Preciso E, Gaume E (2010) Characterisation of selected extreme flash floods in Europe and implications for flood risk management. J Hydrol 394:118–133. doi:10.1016/j.jhydrol.2010.07.017
McLaughlin S, Cooper JAG (2010) A multi-scale coastal vulnerability index: a tool for coastal managers? Environ Hazards 9:233–248. doi:10.3763/ehaz.2010.0052
McLaughlin S, McKenna J, Cooper JAG (2002) Coastal Socio-economic data in coastal vulnerability indices: constraints and opportunities. J Coast Res 497:487–497
McRobie A, Spencer T, Gerritsen H (2005) The big flood: North sea storm surge. Philos Trans R Soc A 363:1263–1270. doi:10.1098/rsta.2005.1567
Mendoza ET, Jiménez J (2009) Regional vulnerability analysis of Catalan beaches to storms. Proc Inst Civil Eng Marit Eng 162:127–135
Merz B, Thieken A, Gocht M (2007) Flood risk mapping at the local scale: concepts and challenges. In: Begum S, Stive MJF, Hall JW (eds) Flood risk management in Europe. Advances in natural and technological hazards research, vol 25. Springer, Dordrecht, pp 231–251
Messner F, Meyer V (2006) Flood damage, vulnerability and risk perception–challenges for flood damage research. In: Schanze J, Zeman E, Marsalek J (eds) Flood risk management: hazards, vulnerability and mitigation measures, Springer, pp 149–167
Messner F, Meyer V, Penning-Rowsell EC, Green C, Tunstall S, van der Veen A (2007) Evaluating flood damages: guidance and recommendations on principles and methods, FLOODsite project deliverable D9.1. Wallingford, FloodSite Consortium
Meyer V, Becker N, Markantonis V, Schwarze R, van den Bergh JCJM, Bouwer LM, Bubeck P, Ciavola P, Genovese E, Green C, Hallegatte S, Kreibich H, Lequeux Q, Logar I, Papyrakis E, Pfurtscheller C, Poussin J, Przylusky V, Thieken AH, Viavattene C (2013) Review article: assessing the cost of natural hazards-state of the art and knowledge gaps. Nat Hazard Earth Syst 13:1351–1373. doi:10.5194/nhess-13-1351-2013
Narayan S, Nicholls RJ, Clarke D, Hanson S, Reeve D, Horrillo-Caraballo J, le Cozannet G, Hissel F, Kowalska B, Parda R, Willems P, Ohle N, Zanuttigh B, Losada I, Ge J, Trifonova E, Penning-Rowsell E, Vanderlinden JP (2014) The SPR systems model as a conceptual foundation for rapid integrated risk appraisals: lessons from Europe. Coast Eng 87:15–31. doi:10.1016/j.coastaleng.2013.10.021
Nicholls RJ, Hoozemans FMJ, Marchand M (1999) Increasing flood risk and wetland losses due to global sea-level rise: regional and global analyses. Glob Environ Change 9:S69–S87
Oltra A, Del Río L, Jiménez JA (2011) Sea level rise flood hazard mapping in the Catalan coast (NW Mediterranean). In: Proceedings of the CoastGIS 2011 Conference, vol. 4. pp 120–126
Paskoff RP (2004) Potential implications of sea-level rise for France. J Coast Res 20:424–434
Penning-Rowsell EC, Priest S, Parker D, Morris J, Tunstall S, Viavattene C, Chatterton J, Owen D (2013) Flood and coastal erosion risk management: a manual for economic appraisal. Routledge, London
Perini L, Calabrese L, Salerno G, Ciavola P, Armaroli C (2016) Evaluation of coastal vulnerability to flooding: comparison of two different methodologies adopted by the Emilia-Romagna region (Italy). Nat Hazards Earth Syst 16:181–194. doi:10.5194/nhess-16-181-2016
Poulter B, Halpin PN (2008) Raster modelling of coastal flooding from sea-level rise. Int J Geogr Inf Syst 22:167–182. doi:10.1080/13658810701371858
Pullen T, Allsop NWH, Bruce T, Kortenhaus A, Schüttrumpf H, van der Meer JW (2007) EurOtop. Wave overtopping of sea defences and related structures: assessment manual. http://www.overtoppingmanual.com/manual.html. Accessed 12 June 2015
Ratsimandresy AW, Sotillo MG, Carretero Albiach JC, Álvarez Fanjul E, Hajji H (2008) A 44-year high-resolution ocean and atmospheric hindcast for the Mediterranean Basin developed within the HIPOCAS Project. Coast Eng 55:827–842
Rizzi J, Torresan S, Zabeo A, Critto A, Tosoni A, Tomasin A, Marcomini A (2017) Assessing storm surge risk under future sea-level rise scenarios: a case study in the North Adriatic coast. J Coast Conserv (in press). doi:10.1007/s11852-017-0517-5
Ruin I, Creutin JD, Anquetin S, Lutoff C (2008) Human exposure to flash floods–Relation between flood parameters and human vulnerability during a storm of September 2002 in Southern France. J Hydrol 361:199–213
Sala M (2003) Floods triggered by natural conditions and by human activities in a Mediterranean coastal environment. Geogr Ann Ser A Phys Geogr 85:301–312. doi:10.1111/j.0435-3676.2003.00207.x
Saurí-Pujol D, Roset-Pagès D, Ribas-Palom A, Pujol-Caussa P (2001) The "escalator effect" in flood policy: the case of the Costa Brava, Catalonia, Spain. Appl Geogr 21:127–143
Schanze J (2006) Flood risk management—a basic framework. In: Schanze J, Zeman E, Marsalek J (eds) Flood risk management: hazards, vulnerability and mitigation measures. NATO Science Series, vol 67. Springer, Dordrecht, pp 1–20
Smith G (2003) Flash flood potential: determining the hydrologic response of FFMP basins to heavy rain by analyzing their physiographic characteristics. A white paper available from the NWS Colorado Basin River Forecast Center. http://www.cbrfc.noaa.gov/papers/ffp_wpap.pdf
Snoussi M, Ouchani T, Khouakhi A, Niang-Diop I (2009) Impacts of sea-level rise on the Moroccan coastal zone: quantifying coastal erosion and flooding in the Tangier Bay. Geomorphology 107:32–40. doi:10.1016/j.geomorph.2006.07.043
Stockdon HF, Holman RA, Howd PA, Sallenger AH (2006) Empirical parameterization of setup, swash, and runup. Coast Eng 53:573–588. doi:10.1016/j.coastaleng.2005.12.005
Tapsell SM, Penning-Rowsell EC, Tunstall SM, Wilson TL (2002) Vulnerability to flooding: health and social dimensions. Philos Trans R Soc Lond A 360:1511–1525. doi:10.1098/rsta.2002.1013
Tarolli P, Borga M, Morin E, Delrieu G (2012) Analysis of flash flood regimes in the North-Western and South-Eastern Mediterranean regions. Nat Hazards Earth Syst 12:1255–1265. doi:10.5194/nhess-12-1255-2012
UNISDR United Nations Office for Disaster Risk Reduction (2009) UNISDR terminology on disaster risk reduction. United Nations International Strategy for Disaster Reduction (UNISDR), Geneva. http://www.unisdr.org/we/inform/terminology
Van Dongeren A, Ciavola P, Viavattene C, De Kleermaeker S, Martinez G, Ferreira O, Costa C, McCall R (2014) RISC-KIT: resilience-increasing strategies for coasts-toolKIT. J Coast Res SI 70:366–371
Velasco M, Cabello A, Russo B (2015) Flood damage assessment in urban areas. Application to the Raval district of Barcelona using synthetic depth damage curves. Urban Water J. doi:10.1080/1573062X.2014.994005
Viavattene C, Jiménez JA, Owen DJ, Priest S, Parker DJ, Micou AP, Ly S (2015) Coastal risk assessment framework tool: guidance document. RISC-KIT deliverable. D2.3. http://www.risckit.eu/np4/file/23/RISC_KIT_D2.3_CRAF_Guidance_.pdf
Vinchon C, Aubie S, Balouin Y, Closset L, Garcin M, Idier D, Mallet C (2009) Anticipate response of climate change on coastal risks at regional scale in Aquitaine and Languedoc Roussillon (France). Ocean Coast Manag 52:47–56. doi:10.1016/j.ocecoaman.2008.09.011
Vinet F (2008) Geographical analysis of damage due to flash floods in southern France: the cases of 12–13 November 1999 and 8–9 September 2002. Appl Geogr 28:323–336
Vinet F, Lumbroso D, Defossez S, Boissier L (2012) A comparative analysis of the loss of life during two recent floods in France: the sea surge caused by the storm Xynthia and the flash flood in Var. Nat Hazards 61:1179–1201
Wang Y, Li Z, Tang Z, Zeng G (2011) A GIS-based spatial multi-criteria approach for flood risk assessment in the Dongting Lake region, Hunan, Central China. Water Resour Manag 25:3465–3484
This work has been undertaken in the framework of the PaiRisClima and RISC-KIT research projects, funded by the Spanish Ministry of Economy and Competitiveness (CGL2014-55387-R) and the European Union (Grant No. 603458), respectively. The lead author was supported by a Ph.D. grant from the Ministry of Economy and Competitiveness of the Government of Spain. The authors would like to give additional thanks to Puertos del Estado of the Spanish Ministry of Public Works for supplying wave data.
Laboratori d'Enginyeria Marítima, Universitat Politècnica de Catalunya Barcelona Tech, c/Jordi Girona 1-3, Campus Nord ed D1, 08034, Barcelona, Spain
Caridad Ballesteros & José A. Jiménez
Flood Hazard Research Centre, Middlesex University, The Burroughs, Hendon, London, NW4 4BT, UK
Christophe Viavattene
Caridad Ballesteros
José A. Jiménez
Correspondence to Caridad Ballesteros.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Ballesteros, C., Jiménez, J.A. & Viavattene, C. A multi-component flood risk assessment in the Maresme coast (NW Mediterranean). Nat Hazards 90, 265–292 (2018). https://doi.org/10.1007/s11069-017-3042-9
domain and range notation
5 grudnia 2020 Newsy
See Figure \(\PageIndex{21}\). Domain and Range of Exponential and Logarithmic Functions Recall that the domain of a function is the set of input or x -values for which the function is defined, while the range is the set of all the output or y -values that the function takes. How To: Given a function written in equation form including an even root, find the domain. Exclude from the domain any input values that result in division by zero. Find the Domain and Range y=1/x Set the denominator in equal to to find where the expression is undefined. Figure \(\PageIndex{13}\): Identity function f(x)=x. For the domain and the range, we approximate the smallest and largest values since they do not fall exactly on the grid lines. View Domain & Range Practice (1). In interval notation, the domain is \([1973, 2008]\), and the range is about \([180, 2010]\). We can also use inequalities, or other statements that might define sets of values or data, to describe the behavior of the variable in set-builder notation. And knowing the values that can come out (such as always positive) can also help So we need to say all the values that can go into and come out ofa function. For many functions, the domain and range can be determined from a graph. Here is a simple example of set-builder notation: ex: Express each set of numbers in set notation. We can use a symbol known as the union, \(\cup\),to combine the two sets. For example, the domain and range of the cube root function are both the set of all real numbers. 1 The Numbers: Where Data and the Movie Business Meet. For example, the function \(f(x)=-\dfrac{1}{\sqrt{x}}\) has the set of all positive real numbers as its domain but the set of all negative real numbers as its range. If there is a denominator in the function's formula, set the denominator equal to zero and solve for x . Both the domain and range are the set of all real numbers. Example 2: Find the domain and range of the radical function. The graph may continue to the left and right beyond what is viewed, but based on the portion of the graph that is visible, we can determine the domain as [latex]1973\le t\le 2008[/latex] and the range as approximately [latex]180\le b\le 2010[/latex]. To find the cost of using 4 gigabytes of data, C(4), we see that our input of 4 is greater than 2, so we use the second formula. Domain: {IR} Range: {IR} We could also use interval notation to assign our domain and range: Domain (-infinity, infinity) Range (-infinity, infinity) This is a function. Yes. Given Figure \(\PageIndex{11}\), identify the domain and range using interval notation. The order in which you list the values does not matter. Example \(\PageIndex{1}\): Finding the Domain of a Function as a Set of Ordered Pairs. To … Each value corresponds to one equation in a piecewise formula. To describe the values, \(x\), included in the intervals shown, we would say, "\(x\) is a real number greater than or equal to 1 and less than or equal to 3, or a real number greater than 5.". There are no restrictions on the domain, as any real number may be cubed and then subtracted from the result. Identify any restrictions on the input. It is much easier, in general, to look at the equation of a function and figure out its domain than it is to figure out its range. However, because absolute value is defined as a distance from 0, the output can only be greater than or equal to 0. It is also normal to show what type of number x is, like this: 1. By using this website, you agree to our Cookie Policy. 
Write the domain in interval form, if possible. Now, we will exclude any number greater than 7 from the domain. Find domain and range from a graph, and an equation. Any real number may be squared and then be lowered by one, so there are no restrictions on the domain of this function. A piecewise function is described by more than one formula. The solution(s) are the domain of the function. For the reciprocal function \(f(x)=\dfrac{1}{x}\), we cannot divide by 0, so we must exclude 0 from the domain. Example \(\PageIndex{7B}\): Finding the Domain and Range. In the previous examples, we used inequalities and lists to describe the domain of functions. Domain: {IR} Range: {IR} We could also use interval notation to assign our domain and range: Domain (-infinity, infinity) Range (-infinity, infinity) This is a function. Figure \(\PageIndex{19}\) represents the function \(f\). We cannot take the square root of a negative number, so the value inside the radical must be nonnegative. Write the Domain and Range | Relation - Mapping. A cell phone company uses the function below to determine the cost, C, in dollars for g gigabytes of data transfer. For the cube root function \(f(x)=\sqrt[3]{x}\), the domain and range include all real numbers. Have questions or comments? In this post, we use function notation, domain and range, independent and dependent variables to understand and use interval notation as a way of representing domain and range, for example eg [4, \infty) , as a part of the Prelim Maths Advanced course under the topic Working with Functions and sub-part Introduction to Functions. Describe the intervals of values shown in Figure \(\PageIndex{5}\) using inequality notation, set-builder notation, and interval notation. Range = {y | y ≥ -0.25} To have better understanding on domain and range of a quadratic function, let us look at the graph of the quadratic function y = x 2 + 5x + 6. For sets with a finite number of elements like these, the elements do not have to be listed in ascending order of numerical value. Though not as compact as interval notation, it is a way that mathematicians use to convey two important pieces of information: what types of numbers are included in the set (real numbers, integers, etc. The domain of \(f(x)\) is \([−4,\infty)\). Note that there is no problem taking a cube root, or any odd-integer root, of a negative number, and the resulting output is negative (it is an odd function). Both formats with answer keys are included. Finding square root using long division. Given Figure \(\PageIndex{6}\), specify the graphed set in. The function f(x) = x2 has a domain of all real numbers (x can be anything) and a range that is greater than or equal to zero. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. L.C.M method to solve time and work problems The range of a function is the values of f (the "output") that could occur; Some functions can never take certain values, regardless of the value of x . What's a Function (Intro to Domain and Range) 5 - Cool Math has free online cool math lessons, cool math games and fun math activities. In interval form, the domain of f is \((−\infty,\infty)\). The range is the set of possible output values, which are shown on the y-axis. The domain of a function includes all real input values that would not cause us to attempt an undefined mathematical operation, such as dividing by zero or taking the square root of a negative number. 
Converting repeating decimals in to fractions. To find the cost of using 1.5 gigabytes of data, \(C(1.5)\), we first look to see which part of the domain our input falls in. Find the domain and range of the function f whose graph is shown in Figure 1.2.8. When using set notation, inequality symbols such as ≥ are used to describe the domain and range. Each formula has its own domain, and the domain of the function is the union of all these smaller domains. In interval notation, the domain is [1973, 2008], and the range is about [180, 2010]. \[ \begin{align*} 7−x&≥0 \\[4pt] −x&≥−7\\[4pt] x&≤7 \end{align*}\]. See Figure \(\PageIndex{9}\). \[ \begin{align*} 2−x=0 \\[4pt] −x &=−2 \\[4pt] x&=2 \end{align*}\]. $ 22 $ 4 $ 1 $ 12 A function is a relation (correspondence) between two sets, X and Y, in which each element of X is matched to one and only one element of Y. For the domain and the range, we approximate the smallest and largest values since … When there is a denominator, we want to include only values of the input that do not force the denominator to be zero. Example \(\PageIndex{5}\): Describing Sets on the Real-Number Line. The input value, shown by the variable x in the equation, is squared and then the result is lowered by one. We can observe that the graph extends horizontally from −5 to the right without bound, so the domain is \(\left[−5,∞\right)\). Express numbers as decimals. 1.1.4 Range of a function For a function f: X → Y the range of f is the set of y-values such that y = f(x) for some x in X. Use notations to specify domain and range In the previous examples, we used inequalities and lists to describe the domain of functions. Because the domain refers to the set of possible input values, the domain of a graph consists of all the input values shown on the x-axis. \[(−\infty,\dfrac{1}{2})\cup(\dfrac{1}{2},\infty) \nonumber\]. Set the radicand greater than or equal to zero and solve for x. It is the set of all elements that belong to one or the other (or both) of the original two sets. Given a piecewise function, sketch a graph. Common Core Standard: HSF-IF.A.1 Packet Look at the graph below to understand what I mean. When we look at the graph, it is clear that x (Domain) can take any real value and y (Range) can take all real values greater than or equal to -0.25 The braces \(\{\}\) are read as "the set of," and the vertical bar \(|\) is read as "such that," so we would read\( \{x|10≤x<30\}\) as "the set of x-values such that 10 is less than or equal to x, and x is less than 30.". Another way to identify the domain and range of functions is by using graphs. Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, just like a math tutor. Find the domain and range of \(f(x)=\frac{2}{x+1}\). The range of a function is the set of output values when all x-values in the domain are evaluated into the function, commonly known as the y-values.This means I need to find the domain first in order to describe the range.. To find the range is a bit trickier than finding the domain. In interval notation, we use a square bracket [ when the set includes the endpoint and a parenthesis ( to indicate that the endpoint is either not included or the interval is unbounded. We then find the range. We cannot evaluate the function at −1 because division by zero is undefined. First, if the function has no denominator or an even root, consider whether the domain could be all real numbers. 
Khan Academy is a 501(c)(3) nonprofit organization. Therefore, this statement can be read as "the range is the set of all y such that y is greater than or … For example, in the toolkit functions, we introduced the absolute value function \(f(x)=|x|\). Look at the function graph and table values to confirm the actual function behavior. Determine the domain and range of the given function: The domain is all the values that x is allowed to take on. Note From the graph, we see that S is given by the following set of ordered pairs. The range of f(x) = x2 in interval notation is: R indicates that you are talking about the range. In interval notation, there are five basic symbols to be familiar with: open parentheses (), closed parentheses [], infinity (imagine an 8 sideways), negative infinity (an 8 sideways with a negative sign in front of it) and union (a symbol similar to an elongated U). Learn more Accept. NOTES & In-class practice-SET AND INTERVAL NOTATION – Set Notation - A Set is a collection of things (usually numbers). We will discuss interval notation in greater detail later. So I'll set the denominator equal to zero and solve; my domain will be everything else. Mathematicians don't like writing lots of words when a few symbols will do. For the identity function \(f(x)=x\), there is no restriction on \(x\). Give the domain and range of the toolkit functions. For the absolute value function \(f(x)=|x|\), there is no restriction on \(x\). Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. Find the domain and range of the function f whose graph is shown in Figure \(\PageIndex{10}\). The function must work for all values we give it, so it is up to us to make sure we get the domain correct! The vertical extent of the graph is all range values 5 and below, so the range is \(\left(−∞,5\right]\). Can a function's domain and range be the same? Figure \(\PageIndex{1}\) shows the amount, in dollars, each of those movies grossed when they were released as well as the ticket sales for horror movies in general by year. The square root function to the right does not have a domain or range of all real numbers. ". If the original two sets have some elements in common, those elements should be listed only once in the union set. The domain is the set of the first coordinates of the ordered pairs. \[C(n)= \begin{cases} 5n & \text{if $n < 10$} \\ 50 &\text{if $n\geq10$} \end{cases} \nonumber \]. The domain is \((−\infty,\infty)\) and the range is also \((−\infty,\infty)\). We know that \(f(−4)=0\), and the function value increases as \(x\) increases without any upper limit. Write a function relating the number of people, \(n\), to the cost, \(C\). Domain & Range Practice For each graph, state the domain and range in set notation and interval PDF (6. Or in a function expressed as a formula, we cannot include any input value in the domain that would lead us to divide by 0. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0. Find the domain and range of \(f(x)=2 \sqrt{x+4}\). Really clear math lessons (pre-algebra, algebra, precalculus), cool math games, online graphing calculators, geometry art, fractals, polyhedra, parents and teachers areas too. Graphing rational functions with holes. If you're in the mood for a scary movie, you may want to check out one of the five most popular horror movies of all time—I am Legend, Hannibal, The Ring, The Grudge, and The Conjuring. 
In this section, we will practice determining domains and ranges for specific functions. The domains and ranges used in the discrete function examples were simplified versions of set notation: a function can be represented as a function table or as a set of coordinates, and for a discrete function the domain and range are simply listed as sets of values. For the domain and the range of a graphed function, we approximate the smallest and largest values when they do not fall exactly on grid lines.

A piecewise function is a function in which more than one formula is used to define the output, with each algebraic formula applying on its own assigned subdomain. To sketch a piecewise function, graph each piece over its subdomain and pay attention to the boundaries: an endpoint that is excluded is written with a parenthesis because, at that point, the function value comes from a different piece, while an included endpoint is written with a bracket. Graphs of piecewise functions can be used to determine their domains and ranges in the same way as for any other function.

Two reminders when building a domain from a formula: exclude any input values that lead to nonreal or undefined outputs, such as even roots of negative numbers or division by zero, and when combining the pieces, take the union of the subdomains (listing shared elements only once). For example, the square root function \(y=\sqrt{x}\) takes the non-negative reals to the non-negative reals, so its domain is \(\left[0,∞\right)\) and its range is \(\left[0,∞\right)\).
Induction, the infinitude of the primes, and workaday number theory
There are various open problems in the subject of logical number theory concerning the possibility of proving this or that well-known standard result over this or that weak theory of arithmetic, usually weakened by restricting the quantifier complexity of the formulas for which one has an induction axiom. In particular, the question of proving the infinitude of the primes in Bounded Arithmetic has received attention.
Does this question make known contact with "workaday number theory" - number theory not informed by concepts from logic and model theory? I understand that proof of the infinitude of the primes in bounded arithmetic could not use any functions that grow exponentially (since the theory doesn't have the power to prove the totality of any such function). So especially I mean to ask:
1) If one had such a proof, would it have consequences about the primes or anything else in the standard model of arithmetic?
2) If one proves that no such proof exists, would that have consequences...
3) Do any purely number-theoretic conjectures, if settled in the right way, settle this question or its kin?
As a side question, I'd be interested to know the history of this question. I first heard about it from Angus Macintyre and that must have been 25 years ago.
nt.number-theory lo.logic
David Feldman
Isn't there always a prime between $n$ and $2n$? That seems like a good basis for proving infinitude of primes in bounded arithmetic. – Andrej Bauer Mar 23 '11 at 5:07
Don't Chebyshev's and Erdos' methods depend on binomial coefficients, which grow too fast for bounded arithmetic? – David Feldman Mar 23 '11 at 5:59
@Andrej: The usual proofs of Bertrand's Postulate make use of the factorial function. Factorial can be defined in the standard integers by a first-order formula (using the operations $+, \cdot, 0, 1$) but the axioms of bounded arithmetic cannot prove that any such definition gives a total function. Indeed, there are nonstandard models of bounded arithmetic in which for any nonstandard $x$ and $y$ there is a standard $n$ such that $y<x^n$. This would rule out such a thing as $x!$. – Sidney Raffer Mar 23 '11 at 6:11
Concepts from model theory do have serious applications in workaday number theory. See the last section of math.berkeley.edu/~scanlon/papers/csp.pdf for a recent account of some of these. – KConrad Mar 23 '11 at 6:44
@SJR: I was rather hoping that we could define a candidate function $f$, which given $n$ returns the first prime between $n$ and $2 n$, if it finds one, otherwise it returns $0$. I suppose the problem is to show that the function never returns $0$? – Andrej Bauer Mar 23 '11 at 6:53
First, let me discuss the precise open question and why it is interesting to logicians. Then I will discuss potential ramifications outside of pure logic.
The main open question is whether the theory IΔ0 can prove the existence of arbitrarily large primes. The theory IΔ0 is the theory over the language of basic arithmetic ($0,1,+,\times,{\leq}$, together with their usual defining axioms) where induction is limited to bounded formulas (Δ0 formulas): formulas wherein all quantifiers are of the bounded form $\forall x(x \leq t \rightarrow \cdots)$ and $\exists x(x \leq t \land \cdots)$ where $t$ is a term of the language possibly involving variables other than $x$. While this theory has been around for quite some time, one of the earliest papers studying this system in detail is Paris & Wilkie On the scheme of induction for bounded arithmetic formulas [Ann. Pure Appl. Logic 35 (1987), no. 3, 261–302, MR904326]. The currently open question regarding the unboundedness of primes occurs there, probably for the first time in print.
Why is this question interesting to logicians? There is a standard fact about the proof theory of ∀∃-formulas which explains the interest. Over IΔ0, this fact takes the following form due to Parikh:
Fact. If IΔ0 proves $\forall x \exists y \phi(x,y)$, where $\phi(x,y)$ is a bounded formula, then IΔ0 proves $\forall x \exists y(y \leq t(x) \land \phi(x,y))$ where $t(x)$ is a term of the language (i.e. a polynomial in $x$).
Thus, IΔ0 cannot prove certain statements such as $\forall x \exists y (2^x = y)$ since $2^x$ grows faster than any polynomial. (Here, $2^x = y$ is an abbreviation for a bounded formula that expresses the properties of the exponential function $2^x$. That formula is rather complex, so I will not explain it here.) In fact, any proof that involves functions of exponential growth at intermediate stages cannot be formalized in IΔ0 because of this. This applies in particular to Euclid's proof which takes a rather large product of primes relative to the input $x$ in order to prove that there is a prime larger than $x$. However, by Bertrand's Postulate, the formula $\forall x \exists y(\mathrm{prime}(y) \land y \geq x)$ does not by itself impose exponential growth. Thus, the fact leaves open the question whether some other proof of the infinitude of primes can be formalized in IΔ0.
What are potential ramifications outside of logic? A natural question to ask about primes is whether there is some kind of efficient formula for producing large primes. (Cf. the Polymath 4 Project.) One can ask further about the growth rate and the computational complexity of such a function. Proof mining techniques show that if IΔ0 proves the infinitude of primes, then there must be a relatively simple function of moderate growth rate to produce large primes. In fact, there must be such a function that produces primes provably in IΔ0.
The existence of such a function should not be a surprise. Indeed, the simple function $p(x)$ that returns the first prime greater than $x$ has very moderate growth rate by Bertrand's Postulate. However, the function produced by a proof of the unboundedness of primes in IΔ0 will have other important characteristics. In particular, it will have rather low computational complexity. The precise relationship between IΔ0 and computational complexity is a little subtle and I don't want to make this statement more precise here. (Some details were added below.) However, there are good reasons to believe that such a proof could yield a polynomial time computable function of moderate growth that produces large primes, which would be a major breakthrough.
What can we say if there is no IΔ0 proof of the existence of arbitrarily large primes? Well, for one thing, a model of IΔ0 where primes are bounded would be a rather interesting object with potential applications to the structure theory of integral domains. However, it turns out that there is not much one can say here about the standard model, but the reason is subtle.
The key problem is the definition of prime number. In the discussion above, I implicitly assumed the standard definition $\mathrm{prime}(x)$ iff $$x > 1 \land (\forall y, z \leq x)(x = yz \rightarrow y = 1 \lor z = 1).$$ However, there are other potential definitions of primes. For example, one could define prime numbers as those which pass the AKS primality test. These various definitions, though equivalent in the standard model, are not necessarily equivalent over IΔ0. Therefore, it is possible that IΔ0 fails to prove the existence of arbitrarily large "traditional primes" but that IΔ0 does prove the existence of arbitrarily large "AKS primes." Since the standard model can't tell the difference between these two notions of primality, this could still give an efficient way to produce large primes as discussed above.
Let me clarify the relationship between proofs in IΔ0 and computational complexity. The polynomial hierarchy is closely tied to slightly different systems of bounded arithmetic, namely the systems $S^n_2$, $T^n_2$ pioneered by Sam Buss. Note that these are somewhat orthogonal to IΔ0. On the one hand, these systems include the smash function $x \# y = 2^{|x||y|}$ which is not provably total in IΔ0. On the other hand, the amount of induction in these systems is even more restricted than in IΔ0.
It is hard to give a precise relationship between time complexity and proofs in IΔ0. The claims I made above are rather based on the (admittedly optimistic) expected success of proof mining on a hypothetical proof the unboundedness of primes in IΔ0.
In any case, there is an inherent weakness to this approach. Complexity theorists do not impose limits on the amount of induction they use in their proofs. For example, complexity theorists will freely use the equivalence of "traditional primes" and "AKS primes" in their arguments. Therefore, the existence of a simple way to generate large primes is not at all equivalent to the existence of a proof of the existence of arbitrarily large primes in IΔ0 or other systems of bounded arithmetic.
François G. Dorais♦
Two comments:
Work of Paris, Wilkie and Woods shows that we can prove the existence of infinitely many primes, and indeed that there is always a prime between $n$ and $2n$, assuming $I\Delta_0+\forall x \exists y\ y=x^{|x|}$, where $|x|$ is the length of the binary expansion of $x$. So we know functions of exponential growth aren't necessary, but we are still using a function of super-polynomial growth. Actually, they proved that this theory implies a weak Pigeon-hole Principle which Woods had shown earlier implied the infinitude of primes.
Another question in this spirit is whether $I\Delta_0$ proves that for any prime $p$ there is a non-square mod $p$.
It is known that neither of these number theoretic results can be proved if the base theory is weakened to allow induction only for quantifier free formulas.
Dave Marker
Still Proofreading, but this is essentially correct...
Expected value of the sum
Expectations with independence
Messy expectations
Markov's Inequality
Calculating Moments
Method of Moments
Negative Binomial
Maximum Likelihood
If $X_1, \ldots ,X_n$ are n independent random variables such that the expectation $E[X_i]$ exists for all $i=1..n$ then prove:
$E[X_1 + X_2 + \ldots + X_n] = E[X_1] + E[X_2] + \ldots + E[X_n]$
Now, if $X_1, \ldots ,X_n$ are n (not necessarily independent) random variables such that the expectation $E[X_i]$ exists for all $i=1..n$, then prove:
Humm, the expectation of the sum is the sum of the expectations, even if they are NOT independent!
If $X$ and $Y$ are independent random variables show that:
$E_{p(X,Y)}[f(X) ]=E_{p(X)}[f(X) ]$
In this case I am using the subscript to indicate the distribution with respect to which the expectation is taken
Prove that
$\exp \left\lbrace E_{p(x)}\left[\ln\left(g(y)^{f(x)}\right)\right] \right\rbrace = g(y)^{E_{p(x)}[f(x)]}$
This is only about four or five lines, so if you are going on much longer, you have made this harder than it was intended.
Prove Markov's Inequality:

$Pr(X \ge t) \le E_{p(x)}[X]/t$
assuming that $Pr(X \ge 0) = 1$ and that $t > 0$.
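This is not part of the assignment, but a quick Monte Carlo check (a sketch assuming a nonnegative test distribution, here Exponential(1) drawn with numpy) shows the bound in action:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)   # nonnegative draws with E[X] = 1

for t in (0.5, 1.0, 2.0, 5.0):
    tail = np.mean(x >= t)        # empirical Pr(X >= t)
    bound = x.mean() / t          # Markov bound E[X] / t
    print(f"t={t}: Pr(X>=t)={tail:.4f} <= E[X]/t={bound:.4f}")
```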
If $X_1, \ldots ,X_n$ are n independent random variables such that the variance $Var[X_i]$ exists for all $i=1..n$ then prove:
$Var[X_1 + X_2 + \ldots + X_n] = Var[X_1] + Var[X_2] + \ldots + Var[X_n]$
$f(x; \alpha, \beta, \gamma) = \begin{cases} c & \mathrm{if}\, \alpha < x \le \beta, \\ 2c & \mathrm{if}\, \beta < x \le \gamma, \\ 0 & \mathrm{otherwise}. \end{cases}$
Calculate $c$ (the normalizing constant, found by requiring the density to integrate to 1), and then calculate the first, second, and third moments about zero, in terms of $\alpha$, $\beta$, and $\gamma$.
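If you want to sanity-check your integration, a symbolic sketch with sympy (my variable names, and it assumes $\alpha < \beta < \gamma$) could look like this:

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
a, b, g = sp.symbols('alpha beta gamma', positive=True)

# Normalization: the density must integrate to 1 over (alpha, gamma]
total_mass = sp.integrate(c, (x, a, b)) + sp.integrate(2 * c, (x, b, g))
c_val = sp.solve(sp.Eq(total_mass, 1), c)[0]
print(sp.simplify(c_val))          # the normalizing constant in terms of alpha, beta, gamma

# k-th moment about zero, E[X^k], computed piece by piece
def moment(k):
    return sp.simplify(sp.integrate(x**k * c_val, (x, a, b))
                       + sp.integrate(x**k * 2 * c_val, (x, b, g)))

print(moment(1))                   # compare with your hand calculation
```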
The Negative Binomial distribution can be parameterized as follows:
$f(x; \alpha, \beta) = {x + \alpha - 1 \choose \alpha - 1} \left(\frac{\beta}{\beta+1}\right)^\alpha \left(\frac{1}{\beta+1}\right)^x,\, x=0, 1, 2, \ldots$
For common distributions, the moments are well known and appear in tables of distributions. In this case, the mean is:
$\mu = E[X] = \frac{\alpha}{\beta}$
and the variance is:
$\sigma^2 = E[(X-E[X])^2] = \frac{\alpha}{\beta^2}(\beta + 1)$
Find a formula for $\alpha$ and $\beta$ using Method of Moments.
Suppose that you are given the first moment and the second central moment (moment about the mean): $\mu = 3$ and $\sigma^2 = 4$. Use your formula from Part 1 to find $\alpha$ and $\beta$.
Now suppose that rather than the first moment and the second central moment, you are instead given both the first and second moments (about zero): $\mu = 6$ and $E[X^2] = 45$. Find the variance and then use the mean and variance to find $\alpha$ and $\beta$.
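One way to check the Part 1 algebra without spoiling it is to solve the two moment equations numerically; this sketch assumes scipy is available and plugs in the $\mu = 3$, $\sigma^2 = 4$ values from Part 2:

```python
from scipy.optimize import fsolve

def moment_equations(params, mu, sigma2):
    alpha, beta = params
    return [alpha / beta - mu,                       # E[X]   = alpha / beta
            alpha * (beta + 1) / beta**2 - sigma2]   # Var[X] = alpha (beta + 1) / beta^2

# Rough starting guess; adjust it if convergence is poor.
alpha_hat, beta_hat = fsolve(moment_equations, x0=[1.0, 1.0], args=(3.0, 4.0))
print(alpha_hat, beta_hat)   # should agree with your closed-form answer from Part 1
```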
The book talks about MLE's on page 719.
Suppose that $x_1, x_2, \ldots x_n$ are independently and identically distributed as $\textrm{Bernoulli}(\theta)$ where $0 \le \theta \le 1$. Thus:
$f(x_i | \theta) = \theta ^ {x_i} (1 - \theta) ^ {1 - x_i},\, x_i \in \{0, 1\}.$
Find the Maximum Likelihood Estimator for $\theta$ by defining $L(\theta; x_1, x_2, \ldots x_n)$ and taking the derivative of $\log\, L(\theta)$.
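To double-check the calculus numerically, you can evaluate the log-likelihood on a grid for some made-up data (the sample below is hypothetical) and see where it peaks:

```python
import numpy as np

data = np.array([1, 0, 1, 1, 0, 1, 1, 0])     # hypothetical Bernoulli sample
thetas = np.linspace(0.001, 0.999, 999)
loglik = np.array([np.sum(data * np.log(t) + (1 - data) * np.log(1 - t))
                   for t in thetas])
theta_hat = thetas[np.argmax(loglik)]
print(theta_hat, data.mean())  # the grid maximum sits where your derivative calculation predicts
```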
Suppose that $x_1, x_2, \ldots x_n$ are independently and identically distributed as $\textrm{Uniform}(\theta)$ where $0 \le \theta$.
$f(x_i | \theta) = \begin{cases} \frac{1}{\theta} & \mathrm{if}\, 0 \le x_i \le \theta \\ 0 & \mathrm{otherwise}. \end{cases}$
Find a formula for $L(\theta; x_1, \ldots x_n)$.
Find $L(4;\, x_1=3,\, x_2=7,\, x_3=5,\, x_4=6)$ and then sketch the function $L(\theta;\, x_1=3,\, x_2=7,\, x_3=5,\, x_4=6)$. Identify where the maximum likelihood occurs.
Find the MLE of $\theta$ in general (for any data $x_1, x_2, \ldots x_n$).
What is the derivative at this point? Is it 0? Look at your graph from Part 2… | CommonCrawl |
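For Part 2 above, a few lines of code can evaluate and sketch the likelihood for the given data; this is just an illustration and assumes numpy and matplotlib are available:

```python
import numpy as np
import matplotlib.pyplot as plt

def likelihood(theta, xs):
    """L(theta; x_1..x_n) for Uniform(theta): theta^{-n} when theta >= max(x), else 0."""
    xs = np.asarray(xs, dtype=float)
    return np.where(theta >= xs.max(), theta ** (-len(xs)), 0.0)

data = [3, 7, 5, 6]
thetas = np.linspace(0.1, 15, 500)
plt.plot(thetas, likelihood(thetas, data))
plt.xlabel("theta")
plt.ylabel("L(theta; 3, 7, 5, 6)")
plt.show()   # the plot makes the jump at max(x) and the location of the maximum visible
```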
Acute effects of naturalistic THC vs. CBD use on recognition memory: a preliminary study
Tim Curran ORCID: orcid.org/0000-0003-4689-83061,
Hélène Devillez2,
Sophie L. YorkWilliams2 &
L. Cinnamon Bidwell3
Journal of Cannabis Research volume 2, Article number: 28 (2020)
The ratio of ∆9-tetrahydrocannabinol (THC) to cannabidiol (CBD) varies widely across cannabis strains. CBD has opposite effects to THC on a variety of cognitive functions, including acute THC-induced memory impairments. However, additional data are needed, especially under naturalistic conditions with higher potency forms of cannabis, commonly available in legal markets. The goal of this study was to collect preliminary data on the acute effects of different THC:CBD ratios on memory testing in a brief verbal recognition task under naturalistic conditions, using legal-market Colorado dispensary products. Thirty-two regular cannabis users consumed cannabis of differing THC and CBD levels purchased from a dispensary and were assessed via blood draw and a verbal recognition memory test both before (pretest) and after (posttest) ad libitum home administration in a mobile laboratory. Memory accuracy decreased as post-use THC blood levels increased (n = 29), whereas performance showed no relationship to CBD blood levels. When controlling for post-use THC blood levels as a covariate, participants using primarily THC-based strains showed significantly worse memory accuracy post-use, whereas subjects using strains containing both THC and CBD showed no differences between pre- and post-use memory performance. Using a brief and sensitive verbal recognition task, our study demonstrated that naturalistic, acute THC use impairs memory in a dose dependent manner, whereas the combination of CBD and THC was not associated with impairment.
Cannabis produces acute memory impairment during intoxication (Bossong et al. 2014; Broyd et al. 2016; Lundqvist 2005; Ranganathan and D'Souza 2006), although regular users may not show these acute decrements in performance (Ranganathan and D'Souza 2006; Schoeler and Bhattacharyya 2013). Cannabis contains many cannabinoids that may have differential effects on memory. Overall, research studies have not sufficiently considered the fact that cannabis exists in different forms and have not characterized the effects of cannabis as the compound action of different cannabinoids that vary in terms of their pharmacological effects. Two of the primary cannabinoids, ∆9-tetrahydrocannabinol (THC) and cannabidiol (CBD), have some opposing effects (Osborne et al. 2017; Rømer Thomsen et al. 2017; Zhornitsky and Potvin 2012), and the ratio of THC to CBD varies dramatically among different strains of cannabis, with some strains in Colorado testing at greater than a 20:1 CBD to THC ratio, while other strains have a 1:1 THC to CBD ratio, and many have negligible amounts of CBD. Furthermore, most research to date has used low-strength government-grown cannabis (THC ranging from 3 to 6%) that lacks other key cannabinoids (CBD close to 0%) and has been administered in tightly controlled laboratory environments, all of which maximize internal validity, but compromise external validity. Currently, the THC strength of recreational cannabis in Colorado can exceed 25%, and the strength of CBD comes close to 25% in some strains (Vergara et al. 2017).
Recent reviews suggest that CBD has no effect on cognition in healthy individuals, but can improve cognitive processes including attention, executive function, working memory, and episodic memory in various pathological conditions including acute THC intoxication (Osborne et al. 2017; Rømer Thomsen et al. 2017; Zhornitsky and Potvin 2012). In this context, CBD has been considered as a potential treatment for cognitive impairments resulting from schizophrenia, Alzheimer's disease, ischemia, inflammatory states, and hepatic encephalopathy (a disorder resulting from acute and chronic liver failure) (Osborne et al. 2017). Thus, a better understanding of the protective effects of CBD during THC impairment may also provide insights about CBD's potential for improving cognitive problems with varying etiologies.
Previous episodic memory studies indicate that cannabinoids such as CBD may counteract the effects of THC. Chronic benefits of CBD were suggested in a study showing better recognition memory for words in regular cannabis users with CBD present in their hair (Morgan et al. 2012). A prior naturalistic study assessed acute effects in users who already prefer high-CBD strains (Morgan et al. 2010b). Prose recall was significantly higher after use of cannabis that was high in CBD compared to the low CBD group. Other previous studies have suggested that CBD acutely reduces THC-related learning and memory impairments in well-controlled human (Englund et al. 2013) and animal studies (Vann et al. 2008; Wright Jr. et al. 2013). In one clinical study, subjects were given an oral dose of CBD (600 mg) or a placebo 210 min ahead of an intravenous injection of THC (1.5 mg). Those in the CBD group showed better episodic memory (delayed free recall) compared to the placebo group (Englund et al. 2013). On the other hand, another prose recall study compared placebo, THC 8 mg, CBD 16 mg and THC 8 mg + CBD 16 mg in a randomized, double-blind crossover design with vaporizer inhalation (Morgan et al. 2018). Both the THC and THC + CBD conditions impaired memory, but CBD had no effects, even though the same subjects showed some protective effects of CBD in identification of facial emotions (Hindocha et al. 2015). These studies highlight that the effects of THC and CBD on memory may vary by dose, timing, and form of administration. Furthermore, they point to the need for measuring blood cannabinoid levels after cannabis administration to determine THC and CBD exposure.
In one preliminary study, we began to assess the effects of higher THC and CBD concentrations on verbal recall (Bidwell et al. 2018). Regular cannabis users were asked to use either a + THC/−CBD strain (~ 17% THC, < 1% CBD; n = 11) or a + THC/+CBD strain (8% THC, 16% CBD; n = 12) that was acquired from a local dispensary. Participants used the assigned cannabis strain in accordance with their normal usage habits for 3 days, including a final use on the third day. Immediately after this final use, participants were transported to the lab by the research team for a detailed assessment of its effects on neuro- and bio- behavioral functions, including memory. Blood draws were collected before the three-day use period (i.e., baseline), immediately upon arrival at the lab (within 15 min of last cannabis use), and at the end of the two-hour assessment in order to verify effective strain assignment and cannabinoid exposure. Testing included the International Shopping List Task (ISLT) as a measure of verbal recall (Thompson et al. 2011). The ISLT consists of a 12-item shopping list that was read out loud to the participant three times in the same order. After 30 min, a delayed free recall test was given. Results suggested that recall performance was negatively correlated with THC blood levels for the THC-only strain (+THC/−CBD), but recall performance was not significantly correlated with THC blood levels for the CBD-containing strain (+THC/+CBD). These preliminary findings suggest that the strain type differentially affected recall and prompt further research into the impacts of naturalistic administration of legal market THC and CBD on memory function.
The present experiment used a novel design to naturalistically assess the effects of real-world cannabis products on memory with the use of a mobile pharmacology and phlebotomy laboratory, which was driven to participants' homes to allow assessment of participants both immediately before and after naturalistic administration of real-world cannabis. Although cannabis is legal at the state level in Colorado, researchers are not allowed to have participants use or handle state legal cannabis in any form on university property or in the presence of University staff, as this would be a violation of the federal Drug Free Schools Act. While we could have participants self-administer at home and take a taxi to the lab (a strategy we attempted in our prior work (Bidwell et al. 2018)), there are two major disadvantages of this approach: 1) We are unable to take a baseline assessment immediately prior to administration of an acute dose of cannabis, and 2) There is a high degree of variability in when participants actually arrive at the lab, meaning it is difficult to standardize assessments as a function of time since consumption. Using our mobile pharmacology and phlebotomy lab, we were able to draw blood to assess cannabinoid levels and collect assessments immediately before cannabis use, and at more precise time points post use. This innovative approach allows us to conduct cutting edge research on the acute effects of cannabis strains legally available in our state, but not allowed in University laboratories.
In the present experiment, we sought to collect feasibility data that would allow us to replicate and extend prior work using a mobile laboratory (Bidwell et al. 2018), facilitate more precise timing of pre- and post- cannabis use assessments, and administer a verbal recognition memory task. These feasibility data were collected in the context of two larger studies focused on the acute effects of high potency legal market forms of concentrate (State of Colorado Marijuana Research Grant 96,947 to LCB) or flower cannabis (R01DA039707 to Kent E Hutchison). The two studies were otherwise identical regarding the tasks that subjects completed. The detailed procedures and primary outcomes of these larger studies are described and reported elsewhere (Bidwell et al. 2020). A recognition memory task, which was not part of the original aims of either study, was selected to extend our previous ISLT results (Bidwell et al. 2018) beyond free recall with a task that provides better control over memory retrieval conditions (Kahana 2012). In addition to recollection processes required for free recall, recognition engages familiarity-based memory processes (Diana et al. 2006; Malmberg 2008; Yonelinas 2002) that we plan to dissociate in future cannabis studies with event related potentials (ERPs, Curran and Doyle 2011; Rugg and Curran 2007). Regular cannabis users twice completed a verbal recognition memory task with words: Before ("pretest") and approximately 35 min after ad libitum use ("posttest") of their assigned cannabis strain. Several strains of flower and two concentrates were used, and each strain fell into one of two groups: THC and THC + CBD (see Table 1). We assessed the effects of each cannabis strain as the degree of memory performance decrement from the pretest to the posttest. We hypothesized that CBD should have a protective effect on THC-induced memory impairment, so we predicted that the pre/post decrement would interact with strain such that the decrement would be largest in the THC group compared to the THC + CBD group. Furthermore, a blood draw taken immediately after cannabis consumption was used to quantify peak levels of THC and CBD. We predicted that posttest memory performance would decline as THC levels increased, and THC and CBD levels would interact such that THC levels would have diminished effects as CBD levels increased.
Table 1 Assignment of different products to groups
Participants (32 cannabis users aged between 21 and 66 years) were recruited from the Boulder-Denver Metro area in Colorado using social media postings and mailed flyers. Because the goal was to collect feasibility data using a novel methodology, the recognition memory task reported here was only assessed in 32 subjects. Trained research staff screened eligible participants via telephone. Criteria for inclusion in the study were: 1) Aged between 21 and 70; 2) Used cannabis at least 4 times in the past month; 3) Experience with the highest potency of cannabis that could be assigned in the study (24% THC for flower groups and 90% THC for concentrate groups); 4) No other non-prescription drug use in the past 60 days; with a urine toxicology screen; 5) No daily tobacco use; 6) Reported drinking 2 times or fewer per week, and ≤ 3 drinks per occasion; 7) Not be pregnant, or trying to become pregnant; 8) No self-reported prior or current psychotic or bipolar disorder. Those eligible for the study completed both a baseline appointment and an experimental appointment, described in greater detail below.
Overview of Design of Feasibility Study
In an observational study, cannabis flower and concentrate users were assigned to purchase and use a legal market THC only or THC + CBD product. Participants completed a verbal recognition memory task at baseline and during an experimental mobile laboratory assessment approximately 50 min after ad libitum administration of their product. Thus, product strain was manipulated between participants and pre/post-use memory assessment was manipulated within participants.
Baseline appointment
Participants were instructed not to use cannabis on the day of their baseline appointment, which took place at the research team's on-campus laboratory. After completing the informed consent process, a Breathalyzer (Intoximeter, Inc., St. Louis, MO) and urinalysis test was administered to ensure that participants had no alcohol, sedatives, cocaine, opiates, or amphetamines in their system. If either test was positive, the baseline appointment was rescheduled, and participants with repeated positives were terminated from the study. Female participants were required to take a urine pregnancy test, to ensure that they were not currently pregnant. Participants completed questionnaires on demographics, lifestyle, substance use, and medical history. After baseline questionnaires were completed, participants provided a blood draw.
Before leaving the baseline appointment, each participant was given a card with directions to a local dispensary in order to purchase their study product. Several strains of flower and two concentrates were used and randomly assigned in the larger studies (details on these procedures are in Bidwell et al. (2020)). In order to achieve a wide range of THC and CBD exposure for the purposes of this verbal recognition feasibility study, individuals were assigned to the full range of strains being tested in the parent studies and each strain was grouped into one of the following categories for the purposes of this feasibility study: THC or THC + CBD (see Table 1). Specifically, participants who primarily used cannabis concentrates purchased either a 70% or 90% THC concentrate which fell into the THC group. Participants who primarily used flower, instead of other cannabis products, were given instructions to purchase one of the following flower strains: 24% THC and 1% CBD, which fell into the THC group; or one of the THC + CBD group strains that contained either 14% THC and 9% CBD, 6% THC and 9% CBD, 9% THC and 10% CBD, or 24% CBD and 1% THC. The THC and CBD potency of each study product was tested and labeled consistent with State of Colorado requirements, in an International Organization for Standardization (ISO) 17,025 accredited laboratory. ISO 17025 is the highest recognized quality standard in the world for calibration and testing laboratories. Independent testing by University researchers is not permitted under federal law. Research staff were blinded to strain condition, and the blind was maintained by the dispensary and one senior member of the lab. The sample sizes of each group were: THC (n = 15) and THC + CBD (n = 17).
Experimental appointment
After participants obtained the study product, they were asked to use it exclusively, and ad libitum, for the 5 days leading up to the experimental appointment, which took place in a mobile laboratory outside of the participants' place of residence. Participants were asked to abstain from using cannabis on the day of the appointment, prior to the experiment. At the first assessment of the day (pre-use), participants completed a blood draw and the primary outcome measures, followed by the first administration of the recognition task. Then they returned home to use their study cannabis ad libitum with their normally preferred method of administration. The THC group used 6 different administration methods: oil rig (n = 6), bong (n = 4), vaporizer (n = 1), glass straw (n = 2), joint (n = 1) and bubbler (n = 1). The THC + CBD group used 4 different administration methods: pipe (n = 7), bong (n = 5), vaporizer (n = 2) and joint (n = 2). Shortly thereafter, they returned to the mobile lab to complete the blood draw to estimate peak cannabinoid exposure, the primary outcome measures, and the recognition memory task again, while acutely intoxicated (acute post-use). The post-use recognition memory task took place 35 min after participants returned to the van.
Past-month use of cannabis
To report on their typical use of cannabis at the baseline appointment, participants completed a calendar-assisted, researcher administered Timeline Followback that queried their use of alcohol, nicotine/tobacco, cannabis, prescription drugs, and illicit drugs over a 30-day retrospective timeframe (Dennis et al. 2004).
Cannabinoid content
Because University research staff are not permitted to handle legal market cannabis, we asked participants to weigh their product with a study-provided scale [American Weigh Scale, Gemini Series Precision Digital Milligram Scale (GEMINI-20)] at the experimental appointment both before and after ad libitum use. Although blood THC, CBD, and metabolite measures remain our primary measure of individual cannabinoid exposure, the weight that each participant provided (mg) was used to further estimate the amount of each cannabinoid consumed based on the percentages of THC and CBD contained in their specific study strain. While these mg estimates are not considered a primary measurement of cannabinoid dose, we include these data in order to facilitate integration and interpretation of our findings with prior controlled laboratory studies.
Blood cannabinoids
A certified phlebotomist collected 32 mL (2 tablespoons) of blood through venipuncture of a peripheral arm vein using standard, sterile phlebotomy techniques in order to assess plasma cannabinoids. Plasma was separated from erythrocytes by centrifugation at 400 xg for 15 min, transferred to a fresh microcentrifuge tube, and stored at − 80 °C. Plasma samples were sent to iC42 Clinical Research and Development (Department of Anesthesiology) on the Anschutz Medical Campus at the University of Colorado Denver. Four cannabinoids were quantified in the blood (THC and its primary metabolites THC-COOH and 11-OH-THC, and CBD) using validated high performance liquid chromatography/mass-spectroscopy (HPLC-MS/MS) (API5500) in MRM mode (Klawitter et al. 2017).
Recognition memory task
Figure 1 provides an overview of the recognition memory task procedures. In each of the two runs of the recognition memory task, subjects studied 20 words followed by a recognition memory test with 20 old (studied) and 20 new (non-studied) concrete nouns. The pretest and posttest tasks included different words, and the exact same lists were used for each participant to minimize variability. The four lists (2 old × 2 new) were matched on word length and Kucera-Francis written frequency (Kucera and Francis 1967). The study lists also included 2-word, non-tested buffer items at the beginning and end of the list to reduce primacy and recency effects. Each study trial started with a 500–700 ms fixation cross, followed by a word for 1000 ms, and ending with a 1000 ms blank screen. Participants were instructed to try to remember each word in preparation for the upcoming test. Participants played Sudoku for 3 min between each of the study and test lists to provide a distracting stimulus that would minimize active rehearsal during the delay. Each test trial started with a 500–1000 ms fixation cross, followed by a word for 2000 ms, and ending with a 1000 ms blank screen. Subjects were instructed to judge each word as old or new as quickly and accurately as possible, by pressing either a leftward (R or F) or a rightward (U or J) key on the keyboard. Assignment of response keys and left/right to old/new responses was counterbalanced across subjects.
Fig. 1 Time course of one trial during the study phase and the test phase
Cannabinoid plasma biomarker levels taken immediately post-use were our primary assessment of the strength of the effects of each cannabinoid, but cannabinoid content weight is also reported to facilitate comparison with other studies. The total weight of the product that each participant used was measured as the difference between pre- and post-use weight (mg Total, Table 2). The amount of each cannabinoid consumed by each participant was estimated by multiplying the total weight used by the percentage of THC and CBD in that subject's strain (mg THC and mg CBD, Table 2). To examine differences in cannabinoid content across groups, analyses were performed in a mixed-design ANOVA with cannabinoid type (CBD, THC) as a within-subject factor and strain group (THC, THC + CBD) as a between-subject factor.
Table 2 Participant characteristics and blood biomarkers by strain group. Means are reported with 95% confidence intervals in brackets
Cannabinoid plasma biomarker levels
Given that our observational study involved ad libitum use of various cannabis products, cannabinoid plasma biomarker levels obtained from blood taken immediately after cannabis administration were our primary quantitative assessment of individual exposure to each relevant cannabinoid. As shown in Table 2, four cannabinoids were quantified in the blood (THC and its primary metabolites THC-COOH and 11-OH-THC, and CBD). Analysis of THC levels were performed with a composite THC + metabolites measure, which is the sum of the three THC levels. These measurements were analyzed in a mixed-design ANOVA with session (pretest, posttest) and cannabinoid type (CBD, sum THC + metabolites) as within-subject factors and strain group (THC, THC + CBD) as a between-subject factor.
Estimated cannabis dose and strain effects on memory
As is typical in recognition memory research (Macmillan and Creelman 2005; Malmberg 2008; Neath and Surprenant 2003; Wixted 2007) and consistent with previous studies on the effects of THC and CBD on recognition memory (Morgan et al. 2012; Morgan et al. 2010b), d' (accuracy in discriminating old vs. new words) was used as the primary measure of memory performance. The hit rate (H, proportion of correct "old" responses to studied words) and false alarm rate (FA, proportion of incorrect "old" responses to non-studied words) are used to calculate d' (d′ = z(H) − z(FA), where z denotes the inverse of the standard normal cumulative distribution function). Given the distribution of the metabolites, we performed a log transformation of the metabolite data.
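As an illustration (not the study's analysis code), the d′ and response-bias computations described here and below reduce to a few lines; hit or false-alarm rates of exactly 0 or 1 would need a correction before the z-transform:

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Discrimination accuracy d' = z(H) - z(FA), z being the inverse normal CDF."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def response_bias(hit_rate, fa_rate):
    """Criterion c = -[z(H) + z(FA)] / 2; negative values indicate a liberal ("old") bias."""
    return -(norm.ppf(hit_rate) + norm.ppf(fa_rate)) / 2

# Example with made-up rates
print(d_prime(0.80, 0.30), response_bias(0.80, 0.30))
```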
For d' we first ran a regression model to examine how cannabinoid levels (sum THC + metabolites and CBD) were associated with accuracy (d').
$$ d'_i = \beta_0 + \beta_1 \log(\mathrm{THCLevel}_i) + \beta_2 \log(\mathrm{CBDLevel}_i) + \beta_3 \log(\mathrm{THCLevel}_i) \cdot \log(\mathrm{CBDLevel}_i) + \varepsilon_i $$
The regression allows us to assess how memory accuracy was affected by differences in the strength of neurophysiological exposure to each cannabinoid alone and in combination. Second, the effect of strain group on memory accuracy (d') was analyzed in a mixed-design analysis of variance (ANOVA) with session (pretest, posttest) as a within-subject factor and strain group (THC and THC + CBD) as a between-subject factor. Because the THC content was lower in the product consumed by the THC + CBD group, we ran a second ANOVA that included log(THC + metabolites) as a covariate.
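A sketch of how such a regression could be fit, assuming the posttest data sit in a pandas DataFrame with the (hypothetical) column names used below; this is not the authors' analysis code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant with posttest values:
# 'd_prime', 'thc_metabolites' (sum THC + metabolites, ng/mL), 'cbd' (ng/mL)
df = pd.read_csv("posttest_memory.csv")        # hypothetical file name
df["log_thc"] = np.log(df["thc_metabolites"])
df["log_cbd"] = np.log(df["cbd"])

# d'_i = b0 + b1*log(THC) + b2*log(CBD) + b3*log(THC)*log(CBD) + error
model = smf.ols("d_prime ~ log_thc * log_cbd", data=df).fit()
print(model.summary())
```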
Our primary measure of recognition memory performance was d', but Table 3 shows other performance measures for completeness, including the hit and false alarm rates used to calculate d'. Table 3 shows a measure of response bias (c = − 1/2 * [z(H) + z(FA)]), where negative values indicate a liberal bias to respond "old" and positive values indicate a conservative bias to respond "new". Table 3 also shows response time (RT). Each of these performance measures was separately analyzed in a mixed-design analysis of variance (ANOVA) with session (pretest, posttest) as a within-subject factor, strain group (THC and THC + CBD) as a between-subject factor, and THC + metabolite levels as a covariate.
Table 3 d', hit rate, FA (false alarm) rate, c (response bias) and reaction time (RT) for pre- and posttest, for the two strain groups, THC and THC + CBD. Means are reported with 95% within subject confidence intervals in brackets. The right columns indicate significant differences from the t-test on group differences (*: p < .05, **: p < .01)
Multiple comparisons were assessed with Bonferroni post-hoc tests (with corresponding p-values reported as pbf) for all analyses.
One of the 32 participants was excluded from analyses because their pretest blood levels exceeded the mean + 3 standard deviations over all participants, when considering the combination of THC + metabolites level (sum of THC, THC-COOH and 11-OH-THC) and CBD level. This reduced the THC + CBD strain group from 17 to 16 participants (see Table 1).
As seen in Table 2, the strain groups did not significantly differ in age, first age of regular cannabis use, or time away from the van. We did observe a significant difference in cannabis consumption for the past 30 days, showing more cannabis use in the THC group compared to the THC + CBD group.
One participant did not weigh her or his product, so dosage results are based on only 14 subjects in the THC group. As reported in Table 2, the groups did not differ significantly in the total amount (mg) of product they consumed during at-home administration. However, they did differ in the amount (mg) of CBD and THC. As expected based on product content and group assignment, and as shown in Table 2, results indicated that each group differed on THC and CBD dosages in the expected directions. The THC group had the highest THC doses and the CBD group had the highest CBD doses.
Cannabinoid plasma biomarker levels (Table 2) were analyzed in a mixed-design ANOVA with 2 sessions (pretest, posttest) and 2 cannabinoid types (CBD, sum THC + metabolites) as within-subject factors, and strain (THC, THC + CBD) as a between-subject factor. Pre-test THC levels fell < 10 ng/mL on average across both groups, supporting that participants complied with day of abstinence procedures prior to their mobile laboratory study appointment.
Analysis of cannabinoid plasma biomarker levels revealed a main effect of session, F(1,29) = 11.44, p < .001, \( {\eta}_p^2 \) = 0.28, and a significant main effect of cannabinoid type, F(1,29) = 16.12, p < .001, \( {\eta}_p^2 \) = 0.36. Cannabinoid type interacted with strain group, F(1,29) = 5.25, p < .05, \( {\eta}_p^2 \) = 0.15, showing that sum THC + metabolite levels were higher for the THC group compared to the THC + CBD group (pbf < .05). Cannabinoid type interacted with session, F(1,29) = 7.69, p < .01, \( {\eta}_p^2 \) = 0.21, showing that the level of sum THC + metabolites was higher at posttest (i.e., after cannabis use) compared to pretest (pbf < .001). There was a significant 3-way interaction between cannabinoid type, strain group, and session, F(1,29) = 5.42, p < .05, \( {\eta}_p^2 \) = 0.16. When this interaction was decomposed with Bonferroni-corrected post hoc tests, they indicated that the strain groups did not differ on any pretest levels, but posttest sum THC + metabolites levels were higher for the THC group than the THC + CBD group (pbf < .001). When testing each measure separately (Table 2), we only observed a significant difference for THC levels at pretest. Posttest CBD levels were higher for the THC + CBD group than the THC group, whereas posttest THC levels and sum THC + metabolites were higher for the THC group than the THC + CBD group.
Cannabis dose and strain effects on memory
First, we ran a regression model (Eq. 1) to examine how cannabinoid levels (THC + metabolites and CBD) were associated with accuracy (d'). The model revealed that the level of THC + metabolites was significantly negatively correlated to accuracy (p < .05, \( {\eta}_p^2 \) = 0.28) (Fig. 2a), but neither the effect of CBD (Fig. 2b) nor the THC × CBD interaction was significant. This result was observed across the two strain groups, and neither THC nor CBD blood levels were significantly correlated with d′ within each strain group.
Fig. 2 Accuracy d′ according to blood biomarkers log (THC + metabolites) (a) and log (CBD) (b) during posttest, for the two strain groups: THC and THC + CBD. The black lines represent the correlation between accuracy and blood biomarkers, with R² reported
Second, accuracy (d′, Fig. 3) was analyzed in a mixed-design analysis of variance (ANOVA) with session (pretest, posttest) as a within-subject factor, and strain group (THC, THC + CBD) as a between-subject factor. d′ significantly decreased between pre- and post-test, F(1, 29) = 5.84, p < .05, \( {\eta}_p^2 \) = 0.17, and d′ was significantly higher for the THC + CBD group compared to the THC group, F(1, 29) = 6.05, p < .05, \( {\eta}_p^2 \) = 0.17. The significant session × strain group interaction, F(1,29) = 7.90, p < .01, \( {\eta}_p^2 \) = 0.21, showed that accuracy was lower at posttest than pretest for the THC group (pbf < .01), but not for the THC + CBD group. We also observed that the accuracy at posttest was lower for the THC group than for the THC + CBD group (pbf < 0.01). Additionally, sum THC + metabolite blood plasma levels were included as a covariate since it significantly predicted memory accuracy in the regression analysis and because the THC content of the product consumed by the THC + CBD group was lower in THC. As performed in previous analyses, we used the log transform of metabolite data. The covariate log (THC) was significant, F(1,28) = 7.79, p < .01, \( {\eta}_p^2 \) = 0.22. The significant session × strain group interaction, F(1,28) = 6.18, p < .05, \( {\eta}_p^2 \) = 0.18, showed similar results as before, with lower accuracy at posttest compared to pretest for the THC group (pbf < .01), but not for the THC + CBD group. Also, the accuracy at posttest for the THC group was lower than for the THC + CBD group (pbf < 0.01).
Fig. 3 Accuracy d′ for pretest and posttest, for the two strain groups: THC and THC + CBD. Colored regions represent the 95% within subject confidence intervals (Morey 2008). Thick black lines represent the mean. Individual data points represent the mean d' for each participant. Thin black lines connect individuals across conditions. Asterisks show results of the Bonferroni post-hoc tests (* pbf < 0.05)
Consistent with our approach for d', each of the other performance measures was separately analyzed in a mixed-design analysis of variance (ANOVA) with session (pretest, posttest) as a within-subject factor, and strain (THC, THC + CBD) as a between-subject factor. Results are presented without a covariate. When adding log (THC) as a covariate, no significant effects were observed for the 4 measures. Analysis of false alarm (FA) rate indicated a significant main effect of session, F(1, 29) = 18.45, p < .001, \( {\eta}_p^2 \) = 0.39, showing a higher rate of FA at posttest compared to pretest. Session also interacted with strain for FA, F(1, 29) = 4.86, p < .05, \( {\eta}_p^2 \) = 0.14, such that only the posttest FA rate was higher for the THC group than for the THC + CBD group (Table 3). Analysis of response bias (c) indicated a significant effect of session, F(1, 29) = 5.79, p < .05, \( {\eta}_p^2 \) = 0.17, such that subjects were somewhat conservative pretest (tended to respond "no" more than "yes") but somewhat liberal posttest (tended to respond "yes" more than "no"). Analysis of hit rate and reaction time revealed no significant effects. The presence of significant posttest hit rate effects in the t tests (Table 3), but not in the ANOVA, suggests that ANOVA did not have sufficient power to detect the session × strain interaction for this outcome.
This study demonstrates the feasibility of a brief and mobile verbal recognition memory task for naturalistic and experimental studies of the acute effects of cannabis. Participants completed a recognition memory task before (pretest) and shortly after (posttest) ad libitum acute administration of cannabis products with varying THC:CBD ratios. Participants using products containing primarily THC showed significantly worse memory accuracy (d') after use than before use, whereas subjects using strains containing both THC and CBD showed no differences between pre- and posttest memory performance. When blood cannabinoid levels were considered, d' was negatively correlated with THC levels, whereas performance showed no association with CBD levels. Thus, acute THC use was associated with impaired memory in a dose dependent manner, whereas the combination of THC and CBD was not associated with impaired memory.
Compared to other recent studies examining the acute effects of THC on episodic memory, the present study included more naturalistic methods of cannabis use and higher dosage. Recognition accuracy was better before than after THC consumption and decreased as THC blood levels increased. Our participants self-administered their assigned products ad libitum using their normally preferred methods at home. The mean estimated THC dosage across both the THC and THC + CBD strain groups was 58.61 mg (range = 1.92–235.8 mg). In a broad review of studies of cannabis use on human cognition from 2004 to 2015, Broyd et al. (2016) identified 11 studies investigating acute effects on verbal episodic memory. Of those demonstrating acute memory deficits, five administered intravenous (IV) THC (D'Souza et al. 2004; D'Souza et al. 2008; Englund et al. 2013; Morrison et al. 2009; Ranganathan et al. 2012), two administered vaporized cannabis (Liem-Moolenaar et al. 2010; Theunissen et al. 2015), and one administered oral THC (nabilone) (Wesnes et al. 2009). Dosage in these studies ranged from 2 to 12 mg of THC. More recent studies have documented episodic memory impairments after acute use of 8 mg of THC with a vaporizer (Morgan et al. 2018) and 10.73 mg of THC with experimenter-regimented joint smoking (Hindocha et al. 2015). Thus, we have replicated prior work under more naturalistic conditions and higher doses, as well as replicating our previous free recall results in a separate sample of participants with a recognition memory task (Bidwell et al. 2018).
As predicted, the deleterious effects of THC on recognition memory accuracy were not present when CBD was co-self-administered. Because THC levels were negatively correlated with posttest memory accuracy and THC levels differed between strain groups, we controlled for THC levels as a covariate and found a significant interaction between strain group and pre/posttest sessions. Participants using products that contained only THC showed memory accuracy decrements from pre- to posttest. No such decrements were observed in subjects using both THC and CBD. While preliminary, this finding is generally consistent with other suggestions that CBD and THC can have opposing effects on a variety of outcomes (Bidwell et al. 2018; Osborne et al. 2017; Rømer Thomsen et al. 2017; Zhornitsky and Potvin 2012) as well as other recent episodic memory studies suggesting that CBD can counteract memory impairments caused by acute THC use (Bidwell et al. 2018; Englund et al. 2013; Morgan et al. 2010a; Morgan et al. 2010b). These prior studies have all used free recall measures of memory, which the present results extend to recognition memory. Both recollection and familiarity processes are thought to contribute to recognition memory, whereas only recollection is relevant to free recall (Diana et al. 2006; Malmberg 2008; Yonelinas 2002). Some older studies have suggested that acute cannabis use impairs recollection more than familiarity (Fletcher and Honey 2006; Ilan et al. 2004), but none have examined differential acute effects of THC vs. CBD. ERPs have proven useful for discriminating these processes (Curran and Doyle 2011; Rugg and Curran 2007) and we plan to use ERPs in future research examining THC and CBD effects on recognition memory.
In addition to being a small feasibility study that needs to be replicated, there are three primary limitations of the present study. First, like Morgan et al. (2010a, 2010b), assignment of subjects to strains was not completely random, so pre-existing differences between participants could have influenced the results. For example, regular users of high potency THC concentrates may be more or less susceptible to its acute effects than other subjects. Bidwell et al. (2018) and Englund et al. (2013) used random assignment, but only Bidwell et al. (2018) used naturalistic administration. Second, the 50 min that elapsed after consumption prior to the memory assessment (which occurred ~ 35 min after blood draw to assess peak cannabinoid levels) may have limited the observed effects of THC and CBD. On the other hand, we have found the effects of THC on verbal recall memory to be relatively persistent when international shopping list test (ISLT) performance was compared between 15 and 30 min after use versus 60–75 min after use (Bidwell et al. 2020). Third, given the nature of this observational pilot study we were not powered to include all relevant covariates or ethically able to match the groups on important characteristics such as cannabis use history, preferred form of cannabis (e.g. flower vs. concentrate), or preferred route of inhaled administration (e.g. bong, pipe, etc.). Furthermore, compared to the THC group, the THC + CBD group tended to be older (with age also ranging more widely), started regular cannabis use later, used less cannabis in the past month, and consumed significantly less THC in their assigned strain. Although the first three demographic trends were not significant, that may be attributable to the small sample size, so these factors could have contributed to group differences on memory. Despite these concerns, our strongest memory effects were shown in the THC group, which had the heaviest levels of use prior to the study sessions mitigating a concern that our findings are driven by tolerance effects in heavy users. Typically, heavier users are less likely to show acute decrements in memory performance (Ranganathan and D'Souza 2006; Schoeler and Bhattacharyya 2013).
This study puts forward novel, naturalistic data on the feasibility of a brief and mobile recognition memory task that can assess the impacts of higher-potency legal market forms of cannabis that vary in levels of THC and CBD. With an emphasis on external validity, we demonstrate the feasibility of a method for assessing cannabis-related memory impairment after the use of legal market forms of cannabis either in the field or in clinical settings. Very few studies have examined the cognitive effects of legal market cannabis, which leaves a gap in the current literature with regard to real-world consumption patterns at a time when legal market access, along with medical and recreational use, is rapidly increasing. These findings contribute naturalistic data to the public health sphere on the impact of THC and CBD on memory function and are relevant to patients, medical providers, policy makers, and law enforcement.
The data are available on the Open Science Framework (https://osf.io/x4yns/?view_only=7e3c4c3de122454c816893a47263e513).
Because the recognition task was added onto another ongoing protocol, it was always run after the primary outcome measures for the main study which included assessments of other memory tasks, attention, inhibitory control, balance, and subjective drug effects. These tasks are unlikely to interfere with recognition memory results. The only other verbal memory test included was the International Shopping List Task (ISLT), which used different words than the recognition task. Our larger study found that THC administration was negatively associated with ISLT performance, but CBD results await ongoing data collection and analysis (Bidwell et al. 2020).
We do not have the specific time point for the memory assessment for each participant, so the time given here is an estimate based on the general flow of the protocol. The timing of the protocol should not differ between participants.
Results obtained without excluding the outlier were similar and are not presented in detail. In particular, d′ negatively correlated with THC blood levels (p < .05), but not CBD blood levels. The session × strain interaction on d′ was significant, with or without the log(THC) covariate (both p < .01).
11-OH-THC: 11-Hydroxy-THC, primary active metabolite of THC
c: Response bias
d′: Discriminability
ERP: Event-related potential
FA: False alarm rate
HPLC-MS/MS: High-performance liquid chromatography/mass spectrometry
ISLT: International Shopping List Task
ISO: International Organization for Standardization
IV: Intravenous
mg: Milligram
MRM: Multiple reaction monitoring
ms: Millisecond
N, n: Sample size
ng/mL: Nanograms per milliliter
RT: Reaction time
THC: ∆9-tetrahydrocannabinol
THC-COOH: Tetrahydrocannabinol carboxylic acid, major inactive metabolite of THC
xg: Times gravity
z: Standard normal distribution
Bidwell LC, Mueller R, YorkWilliams SL, Hagerty S, Bryan AD, Hutchison KE. A novel observational method for assessing acute responses to cannabis: preliminary validation using legal market strains. Cannabis Cannabinoid Res. 2018;3:35–44. https://doi.org/10.1089/can.2017.0038.
Bidwell LC et al. Association of naturalistic administration of cannabis flower and concentrates with intoxication and impairment. JAMA Psychiatry. 2020;77:787–96. https://doi.org/10.1001/jamapsychiatry.2020.0927.
Bossong MG, Jager G, Bhattacharyya S, Allen P. Acute and non-acute effects of cannabis on human memory function: a critical review of neuroimaging studies. Curr Pharm Des. 2014;20:2114–25. https://doi.org/10.2174/13816128113199990436.
Broyd SJ, van Hell HH, Beale C, Yucel M, Solowij N. Acute and chronic effects of cannabinoids on human cognition-a systematic review. Biol Psychiatry. 2016;79:557–67. https://doi.org/10.1016/j.biopsych.2015.12.002.
Curran T, Doyle J. Picture superiority doubly dissociates the ERP correlates of recollection and familiarity. J Cogn Neurosci. 2011;23:1247–62. https://doi.org/10.1162/jocn.2010.21464.
D'Souza DC, et al. Effects of haloperidol on the behavioral, subjective, cognitive, motor, and neuroendocrine effects of Δ-9-tetrahydrocannabinol in humans. Psychopharmacology (Berl). 2008;198:587–603. https://doi.org/10.1007/s00213-007-1042-2.
Dennis ML, Funk R, Godley SH, Godley MD, Waldron H. Cross-validation of the alcohol and cannabis use measures in the global appraisal of individual needs (GAIN) and timeline followback (TLFB; form 90) among adolescents in substance abuse treatment. Addiction. 2004;99:120–8. https://doi.org/10.1111/j.1360-0443.2004.00859.x.
Diana RA, Reder LM, Arndt J, Park H. Models of recognition: a review of arguments in favor of a dual-process account. Psychon Bull Rev. 2006;13:1–21. https://doi.org/10.3758/BF03193807.
D'Souza DC, et al. The psychotomimetic effects of intravenous Delta-9-tetrahydrocannabinol in healthy individuals: implications for psychosis. Neuropsychopharmacology. 2004;29:1558. https://doi.org/10.1038/sj.npp.1300496.
Englund A, et al. Cannabidiol inhibits THC-elicited paranoid symptoms and hippocampal-dependent memory impairment. J Psychopharmacol. 2013;27:19–27. https://doi.org/10.1177/0269881112460109.
Fletcher PC, Honey GD. Schizophrenia, ketamine and cannabis: evidence of overlapping memory deficits. Trends Cogn Sci. 2006;10:167–74. https://doi.org/10.1016/j.tics.2006.02.008.
Hindocha C, Freeman TP, Schafer G, Gardener C, Das RK, Morgan CJA, Curran HV. Acute effects of delta-9-tetrahydrocannabinol, cannabidiol and their combination on facial emotion recognition: a randomised, double-blind, placebo-controlled study in cannabis users. Eur Neuropsychopharmacol. 2015;25:325–34. https://doi.org/10.1016/j.euroneuro.2014.11.014.
Ilan AB, Smith ME, Gevins A. Effects of marijuana on neurophysiological signals of working and episodic memory. Psychopharmacology (Berl). 2004;176:214–22. https://doi.org/10.1007/s00213-004-1868-9.
Kahana MJ. Foundations of Human Memory. Oxford: Oxford University Press; 2012.
Klawitter J, et al. An atmospheric pressure chemical ionization MS/MS assay using online extraction for the analysis of 11 cannabinoids and metabolites in human plasma and urine. Ther Drug Monit. 2017;39:556–64. https://doi.org/10.1097/FTD.0000000000000427.
Kucera H, Francis WN. Computational analysis of present-day American English. Providence: Brown University Press; 1967.
Liem-Moolenaar M, et al. Central nervous system effects of haloperidol on THC in healthy male volunteers. J Psychopharmacol. 2010;24:1697–708. https://doi.org/10.1177/0269881109358200.
Lundqvist T. Cognitive consequences of cannabis use: comparison with abuse of stimulants and heroin with regard to attention, memory and executive functions. Pharmacol Biochem Behav. 2005;81:319–30. https://doi.org/10.1016/j.pbb.2005.02.017.
Macmillan NA, Creelman CD. Detection theory: a user's guide. 2nd ed. Mahwah: Erlbaum; 2005.
Malmberg KJ. Recognition memory: a review of the critical findings and an integrated theory for relating them. Cogn Psychol. 2008;57:335–84. https://doi.org/10.1016/j.cogpsych.2008.02.004.
Morey RD. Confidence intervals from normalized data: a correction to Cousineau (2005). Tutor Quant Methods Psychol. 2008;4:61–4. https://doi.org/10.20982/tqmp.04.2.p061.
Morgan CJA, Freeman TP, Hindocha C, Schafer G, Gardner C, Curran HV. Individual and combined effects of acute delta-9-tetrahydrocannabinol and cannabidiol on psychotomimetic symptoms and memory function. Transl Psychiatry. 2018;8:181. https://doi.org/10.1038/s41398-018-0191-x.
Morgan CJA, Freeman TP, Schafer GL, Curran HV. Cannabidiol attenuates the appetitive effects of Delta 9-tetrahydrocannabinol in humans smoking their chosen cannabis. Neuropsychopharmacology. 2010a;35:1879–85. https://doi.org/10.1038/npp.2010.58.
Morgan CJA, Schafer G, Freeman TP, Curran HV. Impact of cannabidiol on the acute memory and psychotomimetic effects of smoked cannabis: naturalistic study. Br J Psychiatry. 2010b;197:285–90. https://doi.org/10.1192/bjp.bp.110.077503.
Morgan CJA, et al. Sub-chronic impact of cannabinoids in street cannabis on cognition, psychotic-like symptoms and psychological well-being. Psychol Med. 2012;42:391–400. https://doi.org/10.1017/S0033291711001322.
Morrison PD, et al. The acute effects of synthetic intravenous Δ9-tetrahydrocannabinol on psychosis, mood and cognitive functioning. Psychol Med. 2009;39:1607–16. https://doi.org/10.1017/S0033291709005522.
Neath I, Surprenant AM. Human memory: an introduction to research, data, and theory. 2nd ed. Belmont: Wadsworth; 2003.
Osborne AL, Solowij N, Weston-Green K. A systematic review of the effect of cannabidiol on cognitive function: Relevance to schizophrenia. Neurosci Biobehav Rev. 2017;72:310–24. https://doi.org/10.1016/j.neubiorev.2016.11.012.
Ranganathan M, D'Souza DC. The acute effects of cannabinoids on memory in humans: a review. Psychopharmacology (Berl). 2006;188:425–44. https://doi.org/10.1007/s00213-006-0508-y.
Ranganathan M, et al. Naltrexone does not attenuate the effects of intravenous Delta9-tetrahydrocannabinol in healthy humans. Int J Neuropsychopharmacol. 2012;15:1251–64. https://doi.org/10.1017/S1461145711001830.
Rømer Thomsen K, Callesen MB, Feldstein Ewing SW. Recommendation to reconsider examining cannabis subtypes together due to opposing effects on brain, cognition and behavior. Neurosci Biobehav Rev. 2017;80:156–8. https://doi.org/10.1016/j.neubiorev.2017.05.025.
Rugg MD, Curran T. Event-related potentials and recognition memory. Trends Cogn Sci. 2007;11:251–7. https://doi.org/10.1016/j.tics.2007.04.004.
Schoeler T, Bhattacharyya S. The effect of cannabis use on memory function: an update. Subst Abuse Rehabil. 2013;4:11–27. https://doi.org/10.2147/SAR.S25869.
Theunissen EL, et al. Rivastigmine but not vardenafil reverses cannabis-induced impairment of verbal memory in healthy humans. Psychopharmacology (Berl). 2015;232:343–53. https://doi.org/10.1007/s00213-014-3667-2.
Thompson TA, Wilson PH, Snyder PJ, Pietrzak RH, Darby D, Maruff P, Buschke H. Sensitivity and test-retest reliability of the international shopping list test in assessing verbal learning and memory in mild Alzheimer's disease. Arch Clin Neuropsychol. 2011;26:412–24. https://doi.org/10.1093/arclin/acr039.
Vann RE, Gamage TF, Warner JA, Marshall EM, Taylor NL, Martin BR, Wiley JL. Divergent effects of cannabidiol on the discriminative stimulus and place conditioning effects of Delta(9)-tetrahydrocannabinol. Drug Alcohol Depend. 2008;94:191–8. https://doi.org/10.1016/j.drugalcdep.2007.11.017.
Vergara D, et al. Compromised external validity: federally produced cannabis does not reflect legal markets. Sci Rep. 2017;7:46528. https://doi.org/10.1038/srep46528.
Wesnes KA, et al. Nabilone produces marked impairments to cognitive function and changes in subjective state in healthy volunteers. J Psychopharmacol. 2009;24:1659–69. https://doi.org/10.1177/0269881109105900.
Wixted JT. Dual-process theory and signal-detection theory of recognition memory. Psychol Rev. 2007;114:152–76. https://doi.org/10.1037/0033-295X.114.1.152.
Wright MJ Jr, Vandewater SA, Taffe MA. Cannabidiol attenuates deficits of visuospatial associative memory induced by Delta(9) tetrahydrocannabinol. Br J Pharmacol. 2013;170:1365–73. https://doi.org/10.1111/bph.12199.
Yonelinas AP. The nature of recollection and familiarity: a review of 30 years of research. J Mem Lang. 2002;46:441–517. https://doi.org/10.1006/jmla.2002.2864.
Zhornitsky S, Potvin S. Cannabidiol in humans-the quest for therapeutic targets. Pharmaceuticals (Basel). 2012;5:529–52. https://doi.org/10.3390/ph5050529.
Thanks to Kent Hutchison for research advice and comments on an earlier version of the manuscript and to William Carpenter for editing the manuscript.
Funding was provided by grants from the NIH (DA039707 to Kent E Hutchison) and Colorado Department of Public Health and Environment (96947 to LCB). These funding bodies had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.
Tim Curran: Department of Psychology and Neuroscience, UCB 345, University of Colorado Boulder, Boulder, CO, 80309-0345, USA
Hélène Devillez and Sophie L. YorkWilliams: Department of Psychology and Neuroscience, University of Colorado Boulder, Boulder, CO, 80309-0345, USA
L. Cinnamon Bidwell: Department of Psychology and Neuroscience, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, 80309-0345, USA
TC, HD, and LCB wrote the manuscript. HD analyzed the results. All authors contributed to the design of the study and read and approved the final manuscript.
Correspondence to Tim Curran.
The study was approved by the University of Colorado Boulder's Institutional Review Board (protocols 15–0797 and 16–0768). Each participant provided written informed consent.
Curran, T., Devillez, H., YorkWilliams, S.L. et al. Acute effects of naturalistic THC vs. CBD use on recognition memory: a preliminary study. J Cannabis Res 2, 28 (2020). https://doi.org/10.1186/s42238-020-00034-0
Verbal memory
Cannabis and cannabinoids clinical pharmacology | CommonCrawl |
In Boltzmann distribution, why is the system at the same temperature as the reservoir?
Consider a Boltzmann distribution where the total energy of the reservoir and the system is $E$. The energy of the system can be $\epsilon_i$, and the energy of the reservoir is then $E-\epsilon_i$.
Now, if the system can take on different energies $\epsilon_i$, why can one say that the system is in equilibrium with the reservoir and has a fixed temperature $T$ which is the same as that of the reservoir?
At one second the system can have energy $\epsilon_1$, and at the next second it can have energy $\epsilon_2$, with a good probability of this happening. There is thus a net flow of energy between the system and the reservoir. How, then, can one say that the system and the reservoir are at equilibrium and have the same temperature?
thermodynamics statistical-mechanics
TaeNyFan
$\begingroup$ I don't really understand your question. If you know the energy of the system is $\epsilon_1$ then the energy distribution of the system is a Dirac delta, so there is no question of it being a Boltzmann distribution. You need to consider a statistical averaging of some sort for any non-trivial probability distribution to make sense. $\endgroup$ – By Symmetry Oct 6 '18 at 17:59
$\begingroup$ See the Zeroth law of thermodynamics $\endgroup$ – Alexander Oct 6 '18 at 18:02
$\begingroup$ The small system can be so small that it does not really have a temperature. For example a harmonic oscillator. $\endgroup$ – Pieter Oct 6 '18 at 18:48
$\begingroup$ @coniferous_smellerULPBG-W8ZgjR Just one oscillator, a single particle in one dimension. It is easiest when it is supposed to be quantized, with one level per energy, evenly spaced levels. $\endgroup$ – Pieter Oct 6 '18 at 19:08
$\begingroup$ @TaeNyFan It is difficult to apply the Boltzmann distribution to larger systems, because one does not know how the number of microstates varies with energy. For a single gas molecule, the number of microstates is proportional to the kinetic energy, which leads to the Maxwell-Boltzmann distribution of velocities. But how would one do this for a more complex system? And all this is about equilibrium and isolated systems. $\endgroup$ – Pieter Oct 7 '18 at 13:08
The reservoir is taken to be large enough to provide the system with a very well-defined probability distribution for its energy $\epsilon_i$. For the theorists, this means of course that the reservoir is actually infinitely large.
A system is said to be in thermal equilibrium with the reservoir if its energy is found to obey the expected probability distribution provided by the heat reservoir. This means that one should measure the energy of the system over some longer period of time, since one is considering the probability distribution of the energy, not the value of the energy at any single instant. If the system is not in thermal equilibrium with the bath, the distribution of energies will be very different than expected. For instance, if the system is initially colder than the reservoir, its energy will be found to be smaller than expected until the system has equilibrated.
Stijn B.
$\begingroup$ Doesn't saying that body A and body B are in thermal equilibrium refer to body A having a definite energy and body B having a definite energy, with no net exchange of energy between the two bodies? Why is the definition of thermal equilibrium different for the Boltzmann distribution? $\endgroup$ – TaeNyFan Oct 7 '18 at 12:11
$\begingroup$ The phrase "net exchange of energy" is a bit misleading here. Energy can be exchanged all the time between the reservoir and the system, but in the long run, this exchange will average out. That is the idea of thermal equilibrium. Note that in order to have a well-defined notion of temperature, strictly speaking the heat reservoir should be infinitely large so that the energy of the reservoir does not fluctuate. In contrast, the energy of the system is allowed to fluctuate according to a Boltzmann distribution. $\endgroup$ – Stijn B. Oct 8 '18 at 8:57
Consider a small system in thermal contact with a large system which has coolness $\beta$. For simplicity, the small system has just one state per energy level. It could be a harmonic oscillator with a small quantum energy. The large system has a multiplicity $\Omega_0$ when the small system is in its ground state.
When the small system absorbs a quantum of energy, the multiplicity of the large system decreases, but its $\beta$ remains the same. It is a heat reservoir, either because it is large or because it is, for example, melting ice in water. This means that the next quantum will change the multiplicity of the reservoir by the same fractional amount. This results in a negative exponential for the multiplicity of the heat reservoir as a function of the amount of energy in the small system. Because the small system has only one state per energy, this means that the multiplicity and the probability of finding the total system in a state where the small system has an energy $E$ is a negative exponential of $E$.
Mathematically, one can start this reasoning with the definition of the thermodynamic beta: $$\beta = \Omega^{-1}\ \frac{{\rm d}\Omega}{{\rm d}E}.$$
One can rewrite that as a differential equation:
$$\frac{{\rm d}}{{\rm d}E} \Omega = - \beta \Omega,$$ where the minus sign is a consequence of the fact that the energy $E$ of the small system is taken from the heat reservoir. This has the solution $$ \Omega = \Omega_0 e^{-\beta E}.$$ The probability of finding the small system in a state with energy $E$ is therefore $$ P(E) \propto e^{-\beta E}.$$ This is the Boltzmann factor. This derivation relies on the small oscillator being distinguishable, which is why this gives the classical distribution function.
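As a quick numerical illustration of this result (a sketch with arbitrary level spacing and temperature, assuming one state per level as above):
import numpy as np

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # assumed reservoir temperature in kelvin
beta = 1.0 / (k_B * T)        # the "coolness" used above

spacing = 1.0e-21             # assumed energy quantum of the small system, in joules
E = spacing * np.arange(10)   # lowest ten evenly spaced levels

weights = np.exp(-beta * E)   # Boltzmann factors
P = weights / weights.sum()   # normalized occupation probabilities
print(np.round(P, 4))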
Pieter
Fast inversion of a triangular matrix
I need to invert a matrix $A$ given its $QR$ decomposition. It's a numerical task.
The task says that the inversion should be "possibly cheap", i.e., as cheap as possible. But it does not look like I can do anything more efficient than computing $R^{-1}$ and multiplying it by $Q^{T}$ (if one needs a matrix, not a product of two matrices). However, even if a product is OK (no multiplication needed), I'm still inefficient, because I don't know how to invert a triangular matrix faster than in cubic time.
Are there any tricky algorithms for doing that (I'm not asking about something like fast matrix multiplication; it's just a stupid homework problem), or does the task only sound wise, and all I have to do is invert a triangular matrix as it is taught in a linear algebra course, or using back substitution?
EDIT: I emphasise that the task is to invert a matrix, not to find a solution of a linear system.
numerical-methods computational-complexity numerical-linear-algebra
savick01
$\begingroup$ The best I can come up with is Divisions: $\small n$, Multiplications: $\small 1/6 n^3 +1/2 n^2 - 2/3 n$ . Anyway. R.P.Brent has done much work in optimizing algorithms for formal power series; possibly there is something similar about matrices and inversion. Some of his articles are online; what I've found of him is really sophisticated - perhaps you'll find something relevant using google with "brent complexity matrices" ... $\endgroup$ – Gottfried Helms Dec 17 '11 at 4:46
$\begingroup$ I've found one article where instead of $\small O(x^3) $ the authors work out a characteristic of $\small O(x^{\log_2(7)})$. I've just glanced over the article so far, possibly it is valid only for a subclass of matrices. I've found this online at jstor in "Triangular Factorization and Inversion by Fast Matrix Multiplication", James R. Bunch and John E. Hopcroft Mathematics of Computation Vol. 28, No. 125 (Jan., 1974) (pp. 231-236) $\endgroup$ – Gottfried Helms Dec 17 '11 at 23:26
$\begingroup$ Just found this more basic explanative article at "Ask Dr.Math" mathforum.org/library/drmath/view/51908.html which also links further to some research article. $\endgroup$ – Gottfried Helms Dec 17 '11 at 23:54
$\begingroup$ Thank you very much! The article by Bunch & Hopcroft is very valuable. You should have written it as an answer. $\endgroup$ – savick01 Dec 18 '11 at 23:10
$\begingroup$ That article @Gottfried linked you to is freely available. $\endgroup$ – J. M. is a poor mathematician Dec 19 '11 at 0:02
Any $N \times N$ triangular system can be solved in $\mathcal{O}(N^2)$.
For instance, if it is an upper triangular system, start from the last equation $(N^{th}$ equation) which requires only one division to get the $N^{th}$ unknown. Once you have this go to the previous equation $((N-1)^{th}$ equation) which requires only one multiplication, one subtraction and one division to get the $(N-1)^{th}$ unknown. Go to the $(N-2)^{nd}$ equation which requires $2$ multiplications, $2$ subtractions and $1$ division. In general, the $(N-k)^{th}$ equation requires $k$ multiplications, $k$ subtractions and $1$ division. Hence, the total cost is $$1+2+3+\cdots+(N-1) = \frac{N(N-1)}{2} \text{ multiplications}$$ $$1+2+3+\cdots+(N-1) = \frac{N(N-1)}{2} \text{ subtractions}$$ $$N \text{ divisions}$$
The same idea works for lower triangular systems as well (in which case you start from the first equation and proceed all the way down).
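To make the operation count concrete, here is a minimal NumPy sketch of back substitution for an upper triangular system (illustrative only; the test matrix is an arbitrary choice):
import numpy as np

def back_substitution(U, b):
    # Solve U x = b for nonsingular upper triangular U in O(N^2) flops.
    N = U.shape[0]
    x = np.zeros(N)
    for i in range(N - 1, -1, -1):
        # N - 1 - i multiplications and subtractions plus one division per row
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.triu(np.random.rand(5, 5)) + 5 * np.eye(5)   # well-conditioned test matrix
b = np.random.rand(5)
print(np.allclose(back_substitution(U, b), np.linalg.solve(U, b)))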
In fact, the idea behind all matrix decomposition algorithms is to make the solving part cheaper so that given a linear system even if the right hand side were to change you can solve it in a relatively inexpensive way once you have the decomposition of the matrix. The two main decomposition algorithms which are used satisfy this requirement.
$A=LU$. Factoring $A$ into a lower triangular times an upper triangular. The factorization cost is $\mathcal{O}(N^3)$. But once this is done, solving $Ax = b$ requires solving $Ly = b$ and $Ux = y$, both costing $\mathcal{O}(N^2)$ since both are triangular systems.
$A = QR$. Factoring $A$ into an orthonormal matrix times an upper triangular matrix. The factorization cost is $\mathcal{O}(N^3)$. But once this is done, solving $Ax = b$ requires solving $Qy = b$ and $Rx = y$. The nice thing about orthonormal matrices is that the inverse is nothing but the transpose. Hence, $y=Q^Tb$, which is nothing but a matrix-vector product and costs $\mathcal{O}(N^2)$. Solving $Rx = y$ costs $\mathcal{O}(N^2)$ since it is a triangular system.
Other decomposition algorithms like the SVD where $A = U \Sigma V^T$ where $U$ and $V$ are orthonormal and $\Sigma$ is a diagonal also satisfy the requirement. Once we have $A = U \Sigma V^T$, solving $Ax = b$ is equivalent to solving $Uy = b$, whose solution is given by $y = U^Tb$ and costs $\mathcal{O}(N^2)$, $\Sigma z = y$, which can be easily inverted since $\Sigma$ is just a diagonal matrix and hence costs $\mathcal{O}(N)$, and $V^Tx = z$, whose solution is given by $x = Vz$ and costs $\mathcal{O}(N^2)$.
EDIT In case you want the inverse of the lower triangular operator, you proceed as follows. (In numerical linear algebra it is one of the cardinal sins to find the inverse explicitly. In any application you will never need to find the inverse explicitly.) $$L = \begin{pmatrix}1 & 0 & 0 & 0 & \cdots & 0 \\ l_{21} & 1 & 0 & 0 & \cdots & 0 \\ l_{31} & l_{32} & 1 & 0 & \cdots & 0 \\ l_{41} & l_{42} & l_{43} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & l_{n4} & \cdots & 1 \\ \end{pmatrix}$$ We can write $L$ as $$L = L_1 L_2 L_3 \cdots L_{n-1}$$ where $$L_k = \begin{pmatrix}1 & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ 0 & 0 & \cdots & 1 & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & l_{k+1,k} & 1 & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & l_{k+2,k} & 0 & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & l_{k+3,k} & 0 & \cdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \vdots & 0 & \cdots & \cdots & \ddots & \vdots\\ 0 & 0 & \cdots & l_{n,k} & 0 & \cdots & \cdots & \cdots & 1 \end{pmatrix}$$ Then $$L^{-1} = L_{n-1}^{-1}L_{n-2}^{-1}L_{n-3}^{-1} \cdots L_{1}^{-1}$$ where $$L_k^{-1} = \begin{pmatrix}1 & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ 0 & 0 & \cdots & 1 & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & -l_{k+1,k} & 1 & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & -l_{k+2,k} & 0 & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & \cdots & -l_{k+3,k} & 0 & \cdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \vdots & 0 & \cdots & \cdots & \ddots & \vdots\\ 0 & 0 & \cdots & -l_{n,k} & 0 & \cdots & \cdots & \cdots & 1 \end{pmatrix}$$
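Returning to the question's $R^{-1}$: forming the explicit inverse column by column, by solving $Rx_i = e_i$ for each unit vector $e_i$, is the $\mathcal{O}(n^3)$ approach already discussed. A rough SciPy sketch, for illustration only:
import numpy as np
from scipy.linalg import solve_triangular

def upper_triangular_inverse(R):
    # n back substitutions at O(n^2) each, so O(n^3) in total.
    n = R.shape[0]
    inv = np.empty((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        inv[:, i] = solve_triangular(R, e)   # column i of R^{-1}
    return inv

R = np.triu(np.random.rand(4, 4)) + 4 * np.eye(4)
print(np.allclose(upper_triangular_inverse(R) @ R, np.eye(4)))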
$\begingroup$ The name for the process is "Gaussian elimination". That may be some help to readers trying to understand how to actually carry it out. $\endgroup$ – Keith Irwin Dec 16 '11 at 18:05
$\begingroup$ Yes, that is what I meant by "using back substitution" - it can be used to solve the equation $Rx_i=e_i$ and then say that $R^{-1}=[x_1 \ldots x_n]$. It works in time $O(n^2)+O((n-1)^2) + \ldots + O(1^2) = O(n^3)$ just as I said before. I'm asking if there exists something faster. Again: I want to find the inverse of a matrix, not to solve a triangular system. $\endgroup$ – savick01 Dec 16 '11 at 18:15
$\begingroup$ A very nice lecture and I thank you for your effort but that is not what I'm asking about (and in fact I know those things). $\endgroup$ – savick01 Dec 16 '11 at 18:24
$\begingroup$ @savick01: Why do you need the inverse? Typically, in almost all applications you never deal with the inverse. You resort to just solving the linear system. If you want the inverse of a unit lower triangular system, you do it as follows. Write $L = L_1 L_2 L_3 \ldots L_n$ where the $L_k$'s are unit lower triangular and the only non-zero entries of $L_k$ apart from the diagonal are the entries in the $k^{th}$ column, which is the same as the $k^{th}$ column of $L$. $L_k^{-1}$ is a unit lower triangular matrix with non-zero entries in the $k^{th}$ column, where the entries are the negative of the entries of $L_k$. $\endgroup$ – user17762 Dec 16 '11 at 18:27
$\begingroup$ What is wrong about my question? Somebody TOLD me to find an inverse. I can do it in $O(n^3)$ and know that it is not done frequently in applications and I know about solving linear systems, BUT I don't know if the inversion can be done faster than in $O(n^3)$. So I ask about it. $\endgroup$ – savick01 Dec 16 '11 at 19:03
I agree with Sivaram's assessment that an actual matrix inversion is almost never needed (except in some applications, like forming the variance-covariance matrix in statistics). That being said, there is an $O(n^3)$ method to invert a triangular matrix in place (but note that it takes less effort than the inversion of a general matrix). Pete Stewart shows the lower-triangular version in his book (I hope the needed modifications for the upper triangular version are transparent to you), and there is a FORTRAN implementation in LAPACK.
As a final note, since you said that this is part of a QR decomposition: if you used Householder matrices for generating the orthogonal factor, you should know that it is usually much better to keep the components of the Householder vectors around than the multiplied-out orthogonal matrix. (The usual storage format for QR decompositions is to use the upper triangle of the original for storing the triangular factor, and the lower triangle (and also possibly an auxiliary array) for the Householder vectors that form the orthogonal factor.) See Golub and Van Loan, for instance.
J. M. is a poor mathematician
$\begingroup$ Correct me if I am wrong. What Pete Stewart does is the same as evaluating this $L_n^{-1} L_{n-1}^{-1} \cdots L_1^{-1}$ right which costs $n^2$ since there is a tremendous sparsity which can be exploited? $\endgroup$ – user17762 Dec 17 '11 at 3:29
$\begingroup$ Not exactly the same; if you look at the text, what's being done is to partition the lower triangular matrix and repeatedly apply forward elimination on the trailing submatrix. $\endgroup$ – J. M. is a poor mathematician Dec 17 '11 at 3:36
$\begingroup$ I don't get a few things: 1) @SivaramAmbikasaran first said that I shall never multiply $L_i^{-1}$ out and now says that there are so much sparsity that it can be done in $O(n^2)$ (so why not if it is so cheap?). 2) Is there really so much sparsity? As I previously wrote, I can't see so much of it. 3) @ J.M. - Stewart's algorithm looks like a simple back substitution or whatever it is called in English. He computes $x_i$ satisfying $x_i^T R=e_i^T$ one by one. There are two nested loops and multiplied vectors are on average linear in size, so we get $O(n^3)$. $\endgroup$ – savick01 Dec 18 '11 at 22:52
$\begingroup$ Well, in presence of the article mentioned by @GottfriedHelms I don't expect that we know an algorithm working in square time, especially such a simple one. $\endgroup$ – savick01 Dec 18 '11 at 23:13
$\begingroup$ @savick01: Mine was a question to J.M. I was wondering if in the sparsity of $L_k^{-1}$ helped in reducing the cost to $n^2$ instead of $n^3$. $\endgroup$ – user17762 Dec 18 '11 at 23:28
Root of a function
Roots of a function. You are encouraged to solve this task according to the task description, using any language you may know. Task: Create a program that finds and outputs the roots of a given function, range and (if applicable) step width. The program should identify whether the root is exact or approximate. One of the easiest ways to estimate roots of a function is to graph the function using technology and then zoom in on the root. There are many different graphing programs that will do this, and even graphing calculators (TI-82), which might be the most available technology, can do this. In this essay I will use the program Algebra Xpressor. A root of a polynomial is a zero of the corresponding polynomial function. The fundamental theorem of algebra shows that any non-zero polynomial has a number of roots at most equal to its degree, and that the number of roots and the degree are equal when one considers the complex roots (or more generally, the roots in an algebraically closed extension) counted with their multiplicities. Roots of a function are x-values for which the function equals zero. They are also known as zeros. When given a rational function, make the numerator zero by zeroing out the factors individually. Check that your zeros don't also make the denominator zero, because then you don't have a root but a vertical asymptote. Roots: What is a root and how to calculate it? A root of a function is an intersection of the graph with the x-axis. You calculate roots by solving the equation. Where do I find examples? This is Mathepower. Just enter your own function and our free calculator solves it step by step.
A root function is a function expressed by x^(1/n) for positive integer n greater than 1. The graphical representation of power functions is dependent upon whether n is even or odd. For even values of n (i.e., n = 2, 4, 6, ...), root functions will resemble the form illustrated for the square root function expressed by f(x) = x^(1/2) depicted below. Note the exact agreement with the graph of the square root function in Figure 1(c). The sequence of graphs in Figure 2 also helps us identify the domain and range of the square root function. In Figure 2(a), the parabola opens outward indefinitely, both left and right. Consequently, the domain is \(D_{f} = (−\infty, \infty)\), or all real numbers. Also, the graph has vertex at the origin and opens upward indefinitely, so the range is \(R_{f} = [0, \infty)\)
Roots of a function - Rosetta Code
Together we will write a Python program to find the roots of a function and solve equations, without importing anything.
ridder(f, a, b[, args, xtol, rtol, maxiter, ...]): Find a root of a function in an interval using Ridder's method.
bisect(f, a, b[, args, xtol, rtol, maxiter, ...]): Find root of a function within an interval using bisection.
newton(func, x0[, fprime, args, tol, ...]): Find a zero of a real or complex function using the Newton-Raphson (or secant or Halley's) method.
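A minimal usage sketch (the test function and the bracket [2, 3] are arbitrary choices):
from scipy.optimize import bisect, newton

f = lambda x: x**3 - 2*x - 5                              # real root near 2.0946

root_a = bisect(f, 2.0, 3.0)                              # bracketing: f(2) < 0 < f(3)
root_b = newton(f, x0=2.0, fprime=lambda x: 3*x**2 - 2)   # Newton-Raphson
print(root_a, root_b)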
The roots (sometimes also called zeros) of an equation f(x)=0 are the values of x for which the equation is satisfied. Roots x which belong to certain sets are usually preceded by a modifier to indicate such, e.g., x in Q is called a rational root, x in R is called a real root, and x in C is called a complex root. The fundamental theorem of algebra states that every polynomial equation of degree n has exactly n complex roots, where some roots may have a multiplicity greater..
One way to determine the roots of a quadratic function is by factorizing. The ABC Formula: another way to find the roots of a quadratic function. This is an easy method that anyone can use. It....
Root of a Function Defined by a File. Find a zero of the function f(x) = x^3 - 2x - 5. First, write a file called f.m:
function y = f(x)
y = x.^3 - 2*x - 5
The Root-Finding Problem. Given a function f(x), find a number x = ξ such that f(ξ) = 0. Definition 1.1. The number x = ξ such that f(ξ) = 0 is called a root of the equation f(x) = 0 or a zero of the function f(x). As seen from our case study above, the root-finding problem is a classical problem and dates back to the 18th century. The function f(x) can be algebraic or.
The domain of a function is the set of numbers that can go into a given function. In other words, it is the set of x-values that you can put into any given equation. The set of possible y-values is called the range. If you want to know how to find the domain of a function in a variety of situations, just follow these steps
The root function takes the form root(f(var), var, [a, b]). It returns the value of var that makes the function f equal to zero. The real numbers a and b are optional. If they are specified (bracketed), root finds var on this interval.
But some functions do not have real roots, and some functions have both real and complex zeros. One such function is q(x) = x^2 + 1, which has no real zeros but complex ones. Now the question arises: how can we understand that a function has no real zeros, and how do we find the complex zeros of that function? The function RootOf is a placeholder for representing all the roots of an equation in one variable. In particular, it is the standard representation for Maple algebraic numbers, algebraic functions (see evala), and finite fields GF(p^k), p prime, k > 1 (see mod). In this video, we learn about the limit of the nth root of a function. In mathematics and its applications, the root mean square is defined as the square root of the mean square. The RMS is also known as the quadratic mean and is a particular case of the generalized mean with exponent 2. RMS can also be defined for a continuously varying function in terms of an integral of the squares of the instantaneous values during a cycle. For alternating electric current, RMS is equal to the value of the constant direct current that would produce the same power. Root is also known as an algebraic number when f is a polynomial with integer coefficients, or a transcendental number when there is no such polynomial f possible. Root is typically used to represent an exact number and is automatically generated by a variety of algebra, calculus, optimization and geometry functions. Arguments: f, the function for which the root is sought; interval, a vector containing the end-points of the interval to be searched for the root; additional named or unnamed arguments to be passed to f; lower, upper, the lower and upper end points of the interval to be searched; f.lower, f.upper
Methods for Finding Roots of Functions - University of Georgia
Functions of Roots Some functions of roots are given below: Anchoring the plant Roots help to anchor the plant firmly into the ground. Absorption of water and nutrients from the soil They help plants to absorb water and nutrients from the soil, which are essential for their survival. Preventing soil erosion They help to bind the soil particles together, thereby preventing them from being.
In this method, we will look at how to use the numpy.roots function and print the result with the print function in Python. The numpy.roots() function returns the roots of a polynomial with coefficients given in p. The coefficients of the polynomial are to be put in a NumPy array in sequence. Syntax: numpy.roots(p). Parameter: it takes the coefficients of a given polynomial. Return value: the function will return the roots of the polynomial. Let's do an example.
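For instance (the polynomial below is an arbitrary choice):
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
coefficients = np.array([1, -6, 11, -6])
print(np.roots(coefficients))   # approximately [3. 2. 1.]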
Below is the direct formula for finding roots of the quadratic equation. There are the following important cases. If b*b < 4*a*c, then the roots are complex (not real). For example, the roots of x^2 + x + 1 are -0.5 + 0.86603i and -0.5 - 0.86603i. If b*b == 4*a*c, then the roots are real and both roots are the same. For example, the roots of x^2 - 2x + 1 are 1 and 1.
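A short sketch covering the discriminant cases (purely illustrative; cmath.sqrt handles the complex case):
import cmath

def quadratic_roots(a, b, c):
    # Roots of a*x^2 + b*x + c = 0 via the quadratic (ABC) formula.
    disc = b * b - 4 * a * c
    r1 = (-b + cmath.sqrt(disc)) / (2 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2 * a)
    return r1, r2

print(quadratic_roots(1, 1, 1))    # complex pair, about -0.5 +/- 0.866i
print(quadratic_roots(1, -2, 1))   # repeated real root 1.0
print(quadratic_roots(1, -3, 2))   # distinct real roots 2.0 and 1.0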
Zero of a function - Wikipedia
erals to the plant
Finding the root (or zero) of a function is an important computational task because it enables you to solve nonlinear equations. I have previously blogged about using Newton's method to find a root for a function of several variables.I have also blogged about how to use the bisection method to find the zeros of a univariate function.. As of SAS/IML 12.1, there is an easy way to find the roots.
The similar concept of the th root of a complex number is known as an nth root. The roots of a complex function can be obtained by separating it into its real and imaginary plots and plotting these curves (which are related by the Cauchy-Riemann equations) separately. Their intersections give the complex roots of the original function. For example, the plot above shows the curves.
Finding Roots - Free Math Help
That function takes one argument, which is the initial guess for a root. Inside the function, the SOLVE function finds the root of the Func1 function (defined earlier). If it was successful, it returns the root to the caller. To test the Root_Func1 function, I call it from a DATA step and pass in the values -2, 0, and +2
ROOT as Function Plotter. Using one of ROOT's powerful classes, here TF1, will allow us to display a function of one variable, x. Try the following:
root [11] TF1* f1 = new TF1("f1", "sin(x)/x", 0., 10.);
root [12] f1->Draw();
f1 is a pointer to an instance of a TF1 class; the arguments are used in the constructor, the first one of type string is a name to be entered in the internal ROOT memory
After having gone through the stuff given above, we hope that the students would have understood Domain of a square root function .Apart from the stuff given above, if you want to know more about Domain of square root function , please click hereApart from the stuff given in this section, if you need any other stuff in math, please use our google custom search here I want to calculate root mean square of a function in Python. My function is in a simple form like y = f(x). x and y are arrays. I tried Numpy and Scipy Docs and couldn't find anything 1-Dim Function Class TF1 class TF1 : public TFormula, public TAttLine, public TAttFill, public TAttMarker Class Description A TF1 object is a 1-Dim function defined between a lower and upper limit. The function may be a simple function or a precompiled user function. The function may have associated parameters. The following types of functions can be created: A- Expression using variable x and. However, teachers at universities don't like to let the things easy for students, that's why in programming classes you may need to find a way to find the square root of a number without using this library in C ! As homeworks or tasks aren't optional, we'll show you how you can easily achieve this goal without using the sqrt function in C
Free calculator for roots of function
Gain-of-function experiments on bat viruses aren't new. Going back decades, these types of experiments have been publicly documented in a series of peer-reviewed scientific papers co-authored by the Director of the Wuhan lab, Dr. Zhengli Shi, popularly known as the Bat Woman. Published papers reveal that researchers have been collecting samples, and carrying out experiments to.
A radical function is a function that contains a radical—(√) squares, cubics, or other roots of algebraic expressions. They are inverses of power functions, and just a little bit more complicated.. Example of a Radical Function. Perhaps the simplest example of a radical function is the square root function.It is the inverse of the power function.The curve looks like half of the curve of.
Root represents an exact number as a solution to an equation f[x] == 0 with additional information specifying which of the roots is intended. Root numbers can be used like any other numbers, both in exact and approximate computations. Root numbers are formatted as where approx is a numerical approximation
Root Bracketing Algorithms. The root bracketing algorithms described in this section require an initial interval which is guaranteed to contain a root: if a and b are the endpoints of the interval, then f(a) must differ in sign from f(b). This ensures that the function crosses zero at least once in the interval.
The Root model of normal and abnormal foot function remains the basis for clinical foot orthotic practice globally. Our aim was to investigate the relationship between foot deformities and kinematic compensations that are the foundations of the model. A convenience sample of 140 were screened and 100 symptom free participants aged 18-45 years were invited to participate
Real Functions: Root Functions - Math
The third (optional) argument is a root selector. Selectors are meant to specify a particular root of an equation or a subset of the roots. They can also be used for working with several (not necessarily specified) roots of the same polynomial. The RootOf function supports the following selectors: - A numerical approximation c; if the polynomial is univariate with rational coefficients, the.
Get root directory of Azure Function App v2. Ask Question Asked 2 years, 7 months ago. Active 4 months ago. Viewed 10k times 20. 3. I build an Azure Function App (v2). Configuration tasks necessary for all functions are done in a Setup class that is structured like the following: [assembly: WebJobsStartup(typeof(Startup))] internal class Startup : IWebJobsStartup { public void Configure.
erals from the environment, It anchors the plant in the ground, and it stores the food that has been made in the leaves by the photosynthesis process, So, the food can be used later by the plant to grow and survive. Excretion in plants, Importance & types of transpiration for the plant
Given a number N, the task is to find the square root of N without using sqrt() function. Examples: Input: N = 25 Output: 5. Input: N = 3 Output: 1.73205. Input: N = 2.5 Output: 1.58114 . Recommended: Please try your approach on first, before moving on to the solution. Approach: Start iterating from i = 1. If i * i = n, then print i as n is a perfect square whose square root is i. Else find t
4.8: The Square Root Function - Mathematics LibreTexts
You can use the sqrt() function to find the square root of a numeric value in R: sqrt(x). The following examples show how to use this function in practice. Example 1: Calculate Square Root of a Single Value. The following code shows how to calculate the square root of a single value in R:
#define x
x <- 25
#find square root of x
sqrt(x)
[1] 5
Growth and function of the sugarcane root system D.M. Smitha,*, N.G. Inman-Bambera, P.J. Thorburnb a CSIRO Sustainable Ecosystems, Private Mail Bag, Aitkenvale, Qld 4814, Australia b CSIRO Sustainable Ecoystems, Queensland Biosciences Precinct, 306 Carmody Road, St. Lucia, Qld 4067, Australia Abstract A literature review was undertaken to assess current knowledge on how root system growth and.
The square root of a number is a value that, when multiplied by itself, gives the number. The SQRT function in Excel returns the square root of a number.. 1. First, to square a number, multiply the number by itself. For example, 4 * 4 = 16 or 4^2 = 16. Note: to insert a caret ^ symbol, press SHIFT + 6
New Square Root Function block to perform square root, signed square root, and reciprocal square root operations. mathworks.com. mathworks.ch. Unlike the other simpler pocket calculators, the SR-10 offered an extended.
For f(x) to have real values, the radicand (the expression under the radical) of the square root function must be positive or equal to 0. It's OK if the numerator is zero; we can divide into zero (if we do, we get zero), but we are not allowed to divide by zero. Recall that the domain of a function is the set of possible input values (x-values) of the function. From Rule 5 we know that a function of the form. If i * i = n, then we return i, as n is a perfect square whose square root is i; else we find the smallest i for which i * i is just greater than n. Now we know the square root of n lies in the interval (i - 1, i).
Finding a Function's Roots with Python, by Mohammad-Ali
How to find the domain and range of a root function. The easiest way to identify the range of other functions such as root and fraction functions is to draw the graph of the function using a graphing calculator. The Range of a Function is the set of all y values or outputs ie the set of all fx when it is defined. Set the radicand greater than or equal to zero and solve for x x. The range of an. As ever, the inline for-loop construct is very handy for defining function curves. If you define all the paths and points first and then draw them all together at the end, then it's a bit easier to get them drawn in the right orde Square root example Syntax of using the sqrt function. This is how you may use the square root function: math.sqrt( x ) For example: math.sqrt(25) The sqrt method returns the square root of the given number that must be greater than 0. Note that, by default, the math functions return the float values. An example of getting the square root
Optimization and root finding (scipy.optimize)
C library function - sqrt() - The C library function double sqrt(double x) returns the square root of x
Modeling of root architecture and function has been the subject of many recent reviews (see Pagès 2002 , Fourcaud et al. 2008 ; Pierret et al. 2007 ), thus is onl
Root traits varied among plant species (Table 1).The grasses L. perenne and A. odoratum displayed the greatest root length density, specific root length and narrowest average diameter.Trifolium repens had the lowest root length density, root mass and tissue density, and the highest dry matter content, while the other legume L. corniculatus had the greatest root mass, diameter and tissue.
Wheat root-surface-associated microbiome structure and function, as well as soil and plant properties, were highly influenced by interactions between CO2 and nitrate levels. Relative abundance of.
Root -- from Wolfram MathWorld
Returns square root of f.
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    // The formula made famous by Pythagoras, also used internally by
    // Vector3.Distance and several other standard functions.
    float HypotenuseLength(float sideALength, float sideBLength) {
        return Mathf.Sqrt(sideALength * sideALength + sideBLength * sideBLength);
    }
}
You Are Here : Home » Square root c++ - sqrt() function with custom implementation. Square root c++ - sqrt() function with custom implementation. Basics - What is Square Root? Let's say there is a number X, when we multiply X with itself, it gives Y. So, X is the square root of Y. E.g. 2 is the square root of 4 5 is the square root of 25. Square root in C++ - sqrt() function. C++.
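A common hand-rolled approach is Newton's iteration; the sketch below is written in Python rather than C, purely to illustrate the idea (the tolerance and starting guess are arbitrary choices):
def my_sqrt(y, tol=1e-12):
    # Approximate the square root of y > 0 without any math library.
    x = y if y >= 1 else 1.0          # crude starting guess
    while abs(x * x - y) > tol * y:
        x = 0.5 * (x + y / x)         # Newton step for f(x) = x^2 - y
    return x

print(my_sqrt(25.0), my_sqrt(2.0))    # 5.0 and about 1.41421356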
Fast inverse square root, sometimes referred to as Fast InvSqrt() or by the hexadecimal constant 0x5F3759DF, is an algorithm that estimates 1 ⁄ √ x, the reciprocal (or multiplicative inverse) of the square root of a 32-bit floating-point number x in IEEE 754 floating-point format.This operation is used in digital signal processing to normalize a vector, i.e., scale it to length 1
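For illustration, the bit-level trick can be transcribed into Python with the struct module; the magic constant and the single Newton refinement step follow the well-known C routine, and this sketch is for study rather than production use:
import struct

def fast_inv_sqrt(x):
    # Reinterpret the 32-bit float's bits as an unsigned integer.
    i = struct.unpack('>I', struct.pack('>f', x))[0]
    i = 0x5F3759DF - (i >> 1)                      # magic constant and shift
    y = struct.unpack('>f', struct.pack('>I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)             # one Newton-Raphson refinement

print(fast_inv_sqrt(4.0))   # close to 0.5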
Square Root function requires only one argument for its function i.e. Number. SQRT is a square root function in both excel and VBA. The method to use this function is as follows SQR(number) and used to calculate the square root of a given number in excel; however, the nomenclature is different, and it is written as SQRT compared to SQR in VBA
Dorsal Root Ganglion Function. Even though dorsal root ganglia are a part of the system of peripheral nerves, they lie very close to the spine, and therefore to the central nervous system. That.
Problem description: Given a number N, compute the square root. Solution: Alright, I know that every programming language has a built-in function to compute the square root of a number and of course that's what you're gonna use in a normal situation. But what if you are asked to solve this problem in a programmin Julia Square Root Julia Square Root is used to find the square root of a number. In this tutorial, we will learn how to use the square root function, sqrt() with examples. Example 1 - Julia Square Root Square root function with Integer Square root function with Floating Point Numbers Square root function with Complex Numbers Conclusion In this Julia Tutorial, we learned about Julia Square. Given a number N, the task is to find the floor square root of the number N without using the built-in square root function.Floor square root of a number is the greatest whole number which is less than or equal to its square root.. Examples: Input: N = 25 Output: 5 Explanation: Square root of 25 = 5. Therefore 5 is the greatest whole number less than equal to Square root of 25
Python sqrt function is inbuilt in a math module, you have to import the math package . The sqrt function in a python programming language returns the square root of any number (number > 0). In this tutorial, you will learn how to get the square root of any number in python with various ways and examples Dongre S, Langade D, Bhattacharyya S. Efficacy and safety of ashwagandha (Withania somnifera) root extract in improving sexual function in women: A pilot study. BioMed Research Int. 2015;2015:284154.doi: 10.1155/2015/284154. Design. A double-blind, randomized, placebo-controlled trial Participants. Women aged 21 to 50 who were in a steady heterosexual relationship for over one year, and. In the C Programming Language, the sqrt function returns the square root of x
For more information about working with VBA, select Developer Reference in the drop-down list next to Search and enter one or more terms in the search box. This example uses the Sqr function to calculate the square root of a number. Dim MySqr. MySqr = Sqr (4) ' Returns 2. MySqr = Sqr (23) ' Returns 4.79583152331272. MySqr = Sqr (0) ' Returns 0 Derivative of the Square Root Function a) Use implicit differentiation to find the derivative of the inverse of f(x) = x2 for x > 0. b) Check your work by finding the inverse explicitly and then taking its deriva tive. Solution If you're having trouble with this problem, it may help to review Professor Jerison's example of the derivative of the arctangent function. a) Use implicit.
Root-secreted chemicals mediate multi-partite interactions in the rhizosphere, where plant roots continually respond to and alter their immediate environment. Increasing evidence suggests that root exudates initiate and modulate dialogue between roots and soil microbes. For example, root exudates se Regulation and function of root exudates Plant Cell Environ. 2009 Jun;32(6):666-81. doi: 10. Compute square root. Returns the square root of x. C99. C++98. C++11. Header <tgmath.h> provides a type-generic macro version of this function. This function is overloaded in <complex> and <valarray> (see complex sqrt and valarray sqrt ). Additional overloads are provided in this header ( <cmath>) for the integral types: These overloads. Root Function Wellness Prof LLC. 418 likes · 21 talking about this · 2 were here. Root Function Wellness is a Direct Pay Functional Medicine clinic.. Retrieves URI for themes directory
Finding out the cube root is easy enough, especially when you have a calculator handy, but what about the cube root function? Many people get confused by this, which is why we'll move on to briefly explaining function families in mathematics. A Cube Function Family. There are often many challenging concepts to wrap your head around in mathematics and a function family is one of them. Square Root Function. 3 REPLIES 3. SOLVED Back to Fusion 360 Category. Reply. Topic Options. Subscribe to RSS Feed; Mark Topic as New; Mark Topic as Read; Float this Topic for Current User; Bookmark; Subscribe; Printer Friendly Page; Back to Topic Listing; Previous; Next; Message 1 of 4 ksheehan. 4124 Views, 3 Replies 04-01-2017 07:55 AM. Mark as New; Bookmark; Subscribe; Mute; Subscribe to.
The function takes an input that requires the function being evaluated, the lower and upper bounds, the tolerance one is looking for before converging (i recommend 0.0001) and the maximum number of iterations before giving up on finding the root (the root will always be found if the root is bracketed and a sufficient number of iterations is allowed) Take for example the function f(x) = .5e^x - 5x + 2. How can a highschool student go about finding the roots of this equation. Most likely, in these cases an estimate of the root with accuracy to a certain number of decimal places will be sufficient. This esitmation could be very difficult if not impossible to do by hand, depending on how the.
A root of a polynomial is a zero of the corresponding polynomial function. The fundamental theorem of algebra shows that any non-zero polynomial has a number of roots at most equal to its degree, and that the number of roots and the degree are equal when one considers the complex roots (or more generally, the roots in an algebraically closed extension) counted with their multiplicities root[0] .L myfunction.C root[1] main() Load the macro in ROOT and execute the main routine Other methods to define a function (cont'd) User-defined functions Arguments not optional! xmin, xmax, number of parameters} In this section, we will learn how to find the root(s) of a quadratic equation. Roots are also called x-intercepts or zeros. A quadratic function is graphically represented by a parabola with vertex located at the origin, below the x-axis, or above the x-axis.Therefore, a quadratic function may have one, two, or zero roots Square root function, its graph and equation as translations. The inverse of a parabola. Plus free pictures of square root function graph Function Width: pixels; Image Size: by pixels; About: Beyond simple math and grouping (like (x+2)(x-4)), there are some functions you can use as well. Look below to see them all. They are mostly standard functions written as you might expect. You can also use pi and e as their respective constants. Please note: You should not use fractional exponents. For example, don't type x^(1/3) to.
Math: How to Find the Roots of a Quadratic Function
What Is the Function of a Root Hair Cell? A root hair cell in a plant absorbs minerals that have been dissolved in water. They allow a plant to absorb these minerals by increasing the surface area; this is extremely beneficial to plants that live in dry areas. The root hair cells are delicate structures on the root of a plant which live only. Type-A response regulators are required for proper root apical meristem function through post-transcriptional regulation of PIN auxin efflux carriers Plant J. 2011 Oct;68(1):1-10. doi: 10.1111/j.1365-313X.2011.04668.x. Epub 2011 Jul 21. Authors Wenjing Zhang 1. ROOT Forum. Newbie forum for when you're not sure. News. If you're new to ROOT, C++, data analysis etc, and you hesitate to ask your question, then please ask it in the Newbie section, where nice people help and we have special rules to be more welcoming. Don't hesitate, jus. 2 In this program, the sqrt() library function is used to calculate the square root of a number. The function declaration of sqrt() is defined in the cmath header file. That's why we need to use the code #include <cmath> to use the sqrt() function. To learn more, visit C++ Standard Library functions Polynomial's root finder (factoring) Write 10x 4-0x 3-270x 2-140x+1200 or any other polynomial and click on Calculate to obtain the real and/or complex roots. P(x): Apply rounding Fractions: Free Online Polynomials Calculator and Solver (real/complex coeff./roots); VB.Net Calculator download; source code; tutorial. Windipoles download. Polynomial's root finder (factoring) Write 10x 4-0x 3-270x.
Root of nonlinear function - MATLAB fzer
A real number x will be called a solution or a root if it satisfies the equation, meaning . It is easy to see that the roots are exactly the x-intercepts of the quadratic function , that is the intersection between the graph of the quadratic function with the x-axis. a<0: a>0: Example 1: Find the roots of the equation Solution. This equation is equivalent to Since 1 has two square-roots , the. There are many physical situations that can be modeled using the square root function. However, before we attempt any data analysis and regression, we must first become more familiar with the characteristics of the square root function and its inverse. Translations of the square root function A graph of the square root parent function i Note that the given function is a square root function with domain [1 , + ∞) and range [0, +∞). We first write the given function as an equation as follows y = √(x - 1) Square both sides of the above equation and simplify y 2 = (√(x - 1)) 2 y 2 = x - 1 Solve for x x = y 2 + 1 Change x into y and y into x to obtain the inverse function. f-1 (x) = y = x 2 + 1 The domain and range of the. Java sqrt Function syntax. The basic syntax of the Math sqrt in Java Programming language to find the square root is as shown below. The following Java sqrt function will accept positive double value as an argument and returns the square root of the specified expression or Value. static double sqrt (double number); //Return Type is Double // In.
Root aerenchyma - formation and function Urška VIDEMŠEK1, Boris TURK2, Dominik VODNIK3 Received June 27, 2006, accepted August 18, 2006 Prispelo 27. junija 2006, sprejeto 18. avgusta 2006 ABSTRACT The formation of root aerenchyma, the prominent air spaces in the root cortex which are normally induced by waterlogging, has an important role in providing an internal pathway for oxygen. The ROOT function performs the Cholesky decomposition of a matrix (for example, ) such that where is upper triangular. The matrix must be symmetric and positive definite. For example, consider the following statements: xpx={25 0 5, 0 4 6, 5 6 59}; U=root(xpx); These statements produce the following result: U 5 0 1 0 2 3 0 0 7 If you need to solve a linear system and you already have a Cholesky. Number of Subdirectories in Themes Directory The function below informs about the number of subdirectories in the themes directory. Note that this doesn't necessarily match the number of themes recognized by WordPress
Based on the limited precision of the square root function, only the first six decimal places of the output can actually be relied on (but that's more than sufficient for most real world uses). RMS OF WHOLE NUMBERS 1.0 THROUGH 10.0 = 6.20483683432 ALGOL W begin % computes the root-mean-square of an array of numbers with R sqrt Function Example 4. The sqrt function also allows you to find the square roots of column values. In this example, We are going to find the square root of all the records present in [Standard Cost], and [Sales Amount] columns using sqrt Function. For this R Square root example, we use the below-shown CSV data I have written a function for finding the square root of a unsigned number in VHDL.The function is based on Non-Restoring Square Root algorithm.You can learn more about the algorithm from this paper.The function takes one unsigned number,which is 32 bit in size and returns the square root,which is also of unsigned type with 15 bit size.The block diagram of the algorithm is given below Data: $_SERVER['DOCUMENT_ROOT'] Data type: String Purpose: Get the absolute path to the web server's document root. No trailing slash. Caveat: Don't trust this to be set, or set correctly, unless you control the server environment. Caveat: May or may not have symbolic links pre-resolved, use PHP's 'realpath' function if you need it resolved
Author Summary Rice is a monocotyledonous plant that is distinct from the dicotyledonous model plant Arabidopsis in many aspects. In Arabidopsis, ethylene-induced root inhibition is independent of ABA action. In rice, however, we report here that ethylene inhibition of root growth requires ABA function. We identified MHZ4, a rice homolog of Arabidopsis ABA4 that is involved in ABA biosynthesis Learn The Cube Root Function with free interactive flashcards. Choose from 500 different sets of The Cube Root Function flashcards on Quizlet Thus the RMS error is measured on the same scale, with the same units as. The term is always between 0 and 1, since r is between -1 and 1. It tells us how much. PGF Math Function to compute cube root. Ask Question Asked 10 years ago. Active 2 years, 8 months ago. Viewed 7k times 13. When I found that a simple x^(1.0/3.0) does not yield a graph in PGFplots for negative values of x, I attempted to define my own function for CubeRoot using pgfmathdeclarefunction as below. But, am not able to get them to work. These are based on this example for a.
6 Ways to Find the Domain of a Function - wikiHo
To determine whether AtKNAT1 and SlKNAT1 function is conserved in root development, we tested the effect of overexpression of AtKNAT1 in Arabidopsis (Lincoln et al., 1994. Lincoln C. Long J. Yamaguchi J. Serikawa K. Hake S. A knotted1-like homeobox gene in Arabidopsis is expressed in the vegetative meristem and dramatically alters leaf morphology when overexpressed in transgenic plants. Plant. Cube root/nth root function. Ken Choi. January 15, 2010 10:57PM Re: Cube root/nth root function. Rick James. January 17, 2010 01:55AM Re: Cube root/nth root function. Ken Choi. January 17, 2010 02:29AM Re: Cube root/nth root function. laptop alias. January 17, 2010 05:39AM Re: Cube root/nth root function. Ken Choi. January 17, 2010 09:37PM Re: Cube root/nth root function. laptop alias. January. To determine whether auxin-dependent regulation of aquaporin function also applies to root cortical cells, the water relation parameters of these cells were deduced using a cell pressure probe 30. Find HKEY_CLASSES_ROOT in the left area of Registry Editor. You might not see it immediately if you've used the registry recently and left various hives or keys open. Hit Home on your keyboard to see HKCR listed at the very top of the left pane. Double-click or double-tap HKEY_CLASSES_ROOT to expand the hive, or use the small arrow to the left Registry Subkeys in HKEY_CLASSES_ROOT . The list.
Function List: » Octave core » by package » alphabetical; C++ API: sqrt (x) Compute the square root of each element of x. If x is negative, a complex result is returned. To compute the matrix square root, see 'Linear Algebra'. See also: realsqrt, nthroot. Package: octave. The big thing going on is cubing something, so the outside function is a cubing function. 1-x is what you're cubing, so it's the inside function. sqrt(9-x) sqrt(x) 9-x: The big thing going on is taking the square root (outside), 9-x is what you're taking the square root of (inside) 4/(5x 2 +2) 2: 4/x 2: 5x 2 +2: Looks like 4 over something. Understanding Corn Root Function. Growers realize the importance of strong corn roots. Roots, however, do more than anchor corn securely in the soil. They take up moisture and nutrients, allowing the plant to develop. Corn breeders work to develop yield potential in a hybrid, but realizing that yield requires a root structure that protects and.
Root Function - an overview ScienceDirect Topic
Synonyme (Andere Wörter) für Square-root function & Antonyme (Entgegengesetzte Bedeutung) für Square-root function Offline root CAs can issue certificates to removable media devices (e.g. floppy disk, USB drive, CD/DVD) and then physically transported to the subordinate CAs that need the certificate in order to perform their tasks. If the subordinate CA is a non-issuing intermediate that is offline, then it will also be used to generate a certificate and that certificate will be placed on removable media. Question: Write A Function Traverse(TreeNode* Root) That Takes As Input The Root Node Of A Tree And Returns A String With An In Order Tree Traversal. The Following TreeNode Class Has Already Been Defined For You: Class TreeNode { Public: Int Val; TreeNode *left; TreeNode *right; TreeNode() : Val(0), Left(nullptr), Right(nullptr) {} TreeNode(int X) : Val(x), Left(nullptr),.. root function does not work:(. Learn more about root function, undefined, doubl (The negative of this square-root function would have given me the bottom half of the same circle. Graph . First, I'll find the domain. Because the argument of the radical is a plus quadratic, I know that this argument will be positive where the corresponding parabola is above the x-axis. I expect this to be on either side of the x-intercepts, but not in the middle between the intercepts.
How To Find The Zeros Of A Function - 3 Best Method
root locus plot for transfer function (s+2)/(s^3+3s^2+5s+1) Extended Keyboard; Upload; Examples; Random; Compute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. For math, science, nutrition, history, geography, engineering, mathematics, linguistics, sports, finance, music Wolfram|Alpha brings expert-level knowledge and. hier finden sie das komplette PHP Handbuch. newt_draw_root_text (PECL newt >= 0.1) newt_draw_root_text — Displays the string text at the position indicate hier finden sie das komplette PHP Handbuch. gupnp_root_device_get_relative_location (PECL gupnp >= 0.1.0) gupnp_root_device_get_relative_location — Get the relative location of root device
PhylogeneticTrees :: labeledTrees
labeledTrees -- enumerate all labeled trees
Usage:
labeledTrees n
Inputs:
n, an integer, the number of leaves
Outputs:
a list, of all trees with n leaves
This function enumerates all possible homeomorphically-reduced trees (no degree-2 vertices) with n leaves labeled by $0,\ldots, n-1$, including all possible labelings. The trees are represented as objects of class LeafTree.
i1 : L = labeledTrees 4
o1 = {{{0, 1, 2, 3}, {set {1, 2}, set {0}, set {1}, set {2}, set {3}}}, {{0,
1, 2, 3}, {set {1, 3}, set {0}, set {1}, set {2}, set {3}}}, {{0, 1, 2,
3}, {set {2, 3}, set {0}, set {1}, set {2}, set {3}}}, {{0, 1, 2, 3},
{set {0}, set {1}, set {2}, set {3}}}}
See also:
labeledBinaryTrees -- enumerate all binary labeled trees
rootedTrees -- enumerate all rooted trees
rootedBinaryTrees -- enumerate all rooted binary trees
unlabeledTrees -- enumerate all unlabeled trees
Ways to use labeledTrees :
"labeledTrees(ZZ)"
The object labeledTrees is a method function. | CommonCrawl |
Aseismic transient during the 2010–2014 seismic swarm: evidence for longer recurrence of M ≥ 6.5 earthquakes in the Pollino gap (Southern Italy)?
Daniele Cheloni 1 (ORCID: orcid.org/0000-0002-0958-2129),
Nicola D'Agostino 1,
Giulio Selvaggi 1,
Antonio Avallone 1 (ORCID: orcid.org/0000-0002-0264-2897),
Gianfranco Fornaro 2,
Roberta Giuliani 3,
Diego Reale 2,
Eugenio Sansosti 2 (ORCID: orcid.org/0000-0002-5051-4056) &
Pietro Tizzani 2
Scientific Reports volume 7, Article number: 576 (2017)
In actively deforming regions, crustal deformation is accommodated by earthquakes and through a variety of transient aseismic phenomena. Here, we study the 2010–2014 Pollino (Southern Italy) swarm sequence (main shock M W 5.1) located within the Pollino seismic gap, by analysing the surface deformation derived from Global Positioning System and Synthetic Aperture Radar data. Inversions of geodetic time series show that a transient slip, with the same mechanism of the main shock, started about 3–4 months before the main shock and lasted almost one year, evolving through time with acceleration phases that correlate with the rate of seismicity. The moment released by the transient slip is equivalent to M W 5.5, significantly larger than the seismic moment release revealing therefore that a significant fraction of the overall deformation is released aseismically. Our findings suggest that crustal deformation in the Pollino gap is accommodated by infrequent "large" earthquakes (M W ≥ 6.5) and by aseismic episodes releasing a significant fraction of the accrued strain. Lower strain rates, relative to the adjacent Southern Apennines, and a mixed seismic/aseismic strain release are in favour of a longer recurrence for large magnitude earthquakes in the Pollino gap.
The way in which a fault releases the accumulated tectonic strain during the interseismic period is a central question in seismotectonics and it has important implications in terms of crustal rheology and earthquake source mechanics. Moreover, the evaluation and the interpretation of the balance between seismic and geodetic release have key practical implications for seismic hazard assessment. In recent years, the increasing availability of geodetic data such as continuous Global Positioning System (GPS) observations and short repeat-time Synthetic Aperture Radar (SAR) images in combination with seismological data, have greatly increased our capability to discover transient aseismic slow slip episodes of different extent, duration and temporal evolution. These episodes are frequently accompanied by a variety of seismic phenomena1,2,3,4,5,6,7,8 that, in some cases seem to be the primary way in which the accrued tectonic stresses are released. While examples of post-seismic (afterslip) transients triggered by the rapid stress release in a main shock are well documented in several tectonic contexts8,9,10, most of the well-constrained sources of spontaneous aseismic slow slip come from subduction zones2,3,4,5,6, 11, such as Japan, Cascadia, Alaska, Mexico, New Zealand and Costa Rica and are referred to as slow slip events. Similar quasi-static slips have also been observed along the creeping section of the San Andreas Fault in California12, the Kilauea volcano in the Hawaii13 and, more recently, along the North Anatolian Fault in Turkey7 as well. Other kind of transient aseismic slow slip events have been hypothesized in association with earthquake swarms along active transform plate boundaries14,15,16 and in volcano active regions13, 17, 18. Although swarms are commonly related to high pore fluid pressure in the crust19, 20, other studies21 have instead suggested that aseismic processes may be a general feature of seismic swarms; however, very little information about surface deformation is usually available. Only in recent years, thanks to the development of space geodetic techniques, high spatial- and temporal- resolution surface measurements have been used to better understand the faulting behaviour of earthquake swarms, such as rupture details associated with the major individual events, as well as larger-scale deformation patterns of the whole swarm seismic process6, 15, 16, 22, 23.
Here, we use geodetic and seismological observations to document a transient aseismic slow slip event occurring during a years-long earthquake swarm that significantly contributed to the total release of the seismic moment. We inverted the 3-components GPS time series and the line-of-sight (LOS) displacements derived by processing data acquired by the COSMO-SkyMed (CSK) SAR satellite constellation with multi-temporal Differential SAR Interferometry (DInSAR) techniques, to estimate the temporal evolution of the transient slow slip event that accompanied the swarm sequence. This transient took place in the so-called Pollino seismic gap, Southern Italy, affected by an intense swarm sequence that started in October 2010 and lasted until the beginning of 201424,25,26,27. The Pollino range is located between the end of the Southern Apennines extensional domain and the Calabrian arc28,29,30. It represents a well-known seismic gap in Italy31 due to a lack of local high macroseismic intensities, a feature which is usually indicative of a "large" earthquake (M W ≥ 6.5) occurring on a nearby active fault (Fig. 1).
Tectonic setting of the Pollino swarm seismic sequence. The dots show the seismicity for the 2010–2014 Pollino earthquake swarm sequence, colour-coded by their time of occurrence32. The epicentre of the largest shock (M W 5.1) is shown as a red star. The green stars indicate the location of the M W > 3.5 events27. The mechanisms of these events are taken from time domain moment tensor (TDMT) catalogue (red and green beach-balls, http://cnt.rm.ingv.it/tdmt) and from Passarelli et al.27. White squares indicate the locations of the continuous GPS sites used in this work. The red lines represent the major mapped W-SW- dipping normal tectonic structures34, 36, 37: MF, PF, CF and CPST stand for Mercure, Pollino, Castrovillari and Castello Seluci-Timpa della Manca fault, respectively. The blue lines are the new recently identified active faults after Brozzetti et al.37: ROCS stands for the Rotonda-Campotenese normal fault system and MPR for the Morano Calabro-Piano di Ruggio fault. The inset shows the tectonic setting of Southern Italy and the historical macroseismic intensities52. Deep and intermediate seismicity in the Wadati-Benioff zone beneath the Tyrrhenian Sea, shown as contours of the subducted slab labeled in kilometers. The black lines with triangles represent the Plio-Pleistocene subduction front. The box encloses the main figure. AP = Apulia; SA = Southern Apennines; CA = Calabria; SI = Sicily. The map was created by using Generic Mapping Tools software (GMT v4.5.14; http://gmt.soest.hawaii.edu/)53.
The 2010–2014 Pollino swarm sequence comprises more than 6000 events, as recorded by the Italian seismic network32, and provides an unprecedented opportunity to characterize a normal faulting earthquake swarm using both seismic and geodetic observations. The sequence started at the end of 2010 following decades of seismic quiescence33 and lasted until the beginning of 2014. The swarm contained a M W 5.1 main shock which occurred on 25 October 2012, and which represents one of the largest earthquakes seismically recorded in this area (the only other significant event that occurred during the instrumental era is a M W 5.6 earthquake, took place in 1998 to the north of the Pollino range). Most of the 2010–2014 swarm activity occurred in the hanging-wall of the large NW-SE striking normal fault zone that bounds the Pollino range34, 35. The 3-D patterns of the relocated hypocenters24,25,26 of the larger and more intense western cluster together with focal mechanisms of the largest events26, 27 (>3.5), consistently reveal a N-NW-striking and W-SW-dipping normal fault zone with centroid depths between 5 and 10 km. Although the SW-dipping focal plane is in agreement with the structures that represent the most common faulting style34,35,36,37, Totaro et al.26 and Brozzetti et al.37 demonstrated that the hypocentral distribution was not compatible with previous maps of known active faults34,35,36. In particular, in a recent study, Brozzetti et al.37 reconstructed a previously unidentified Late Quaternary extensional fault system, suggesting that a suitable source for the 25 October 2012 earthquake could be the previously unknown W-SW-dipping Rotonda-Campotenese fault system (ROCS), while the Morano Calabro-Piano di Ruggio (MPR) fault system could have controlled the eastern cluster of seismicity. The temporal and spatial behaviour of the recorded seismicity, as described by Passarelli et al.27, is consistent with the general characteristics of swarm-like seismicity38, 39. In particular, the sequence has affected a much larger crustal volume than expected according to the largest recorded event (M W 5.1), with a significant enlargement of the focal area during the sequence (Fig. 1). The relationship between the spatial dimensions, the seismic moment released by the swarm sequence and ETAS (Epidemic Type Aftershock Sequence)40 modelling of the seismicity, has led to hypothesize that a transient forcing was acting during the Pollino swarm27. However, while being critical to reduce uncertainties in seismic hazard assessment due to seismic swarms, the nature of this transient forcing (which may range from aseismic creeping to diffusion of high pore pressure pulses, or even to fluid migration within the crust) has not been unravelled by previous studies27, lacking the observation capabilities to verify whether a transient aseismic slip episode actually accompanied the swarm.
Recent studies41 reviewed the historical seismicity in the Pollino range with magnitude comparable with the 25 October 2012 M W 5.1 normal faulting event (depth 5 km), pointing out at least two similar events (i.e. in 1693 and 1708) occurred during a year-long seismic sequence, suggestive of a distinctive character for the seismicity in the Pollino area. Paleoseismological trenching studies42, 43 on the normal faults at the southern border of the Pollino range suggest the occurrence of at least four M W ≥ 6.5 events within the last 10,000 yr. Recent preliminary estimates of tectonic loading show that the Pollino range is actively deforming, probably slower than the Southern Apennines (the latter characterized by deformation rates up to 2–2.5 mm/yr44, 45), with geodetic directions of active deformation consistent with the focal mechanisms of the largest events of the 2010–2014 Pollino swarm27.
Our combined analysis of the spatial-temporal evolution of seismicity and surface deformation associated with the Pollino seismic swarm sequence shows that an aseismic transient slip initiated several months before the main shock. Our results, indicate that an aseismic fault slip may have been the primary driving process of the Pollino swarm, suggesting that crustal deformation in the Pollino range may be characterized by aseismic slip episodes that release a significant fraction of the accrued strain, ultimately increasing the recurrence of surface-rupturing seismic events (M W ≥ 6.5).
Geodetic measurements proved to be a crucial data set for understanding the 2010–2014 Pollino swarm sequence. In fact, both GPS and DInSAR time series clearly show a transient displacement starting before the 25 October 2012 M W 5.1 main shock (Figs 2a and c). In particular, a surface displacement, mainly in the E-W direction, lasting several months from July 2012 to mid-2013, is well seen at the continuous GPS site MMNO, with a cumulative displacement up to ~10 mm in the west direction (Fig. 2a). On the contrary, the GPS daily time series and the GPS high-rate solutions (Supplementary Fig. S2) only show subtle coseismic offsets (<1–2 mm) associated with the main shock event. A similar signal is also present in nearby GPS stations (Supplementary Fig. S6a-l) with a smaller amplitude depending on the relative distance from the swarm. The transient displacement is even more clearly visible in the CSK time series measurements (Fig. 2c), which are also available in the area of maximum deformation (Fig. 3) where, unfortunately no GPS stations were operating. In this case, the cumulative displacement reaches ~60 mm in LOS direction. The comparison of the CSK time series with the independent GPS data at the MMNO station (Supplementary Fig. S5) demonstrates the consistency between the different data sets at the level of a fraction of a centimetre.
Geodetic time series, aseismic slip and seismicity rate. (a) Orange dots indicate the E-W (longitude) daily displacement recorded at GPS site MMNO in the Apulia (Ap) reference frame. The grey line is the prediction from the best-fit time dependent model of the aseismic transient slip event. The black dashed line indicates the 25 October 2012 M W 5.1 main event. Histogram bars in panels (a) and (c) show the number of seismic events in 16-day intervals, while in panel (b) in 8-day intervals. The complete set of the GPS time series are in the Supplementary Material. (b) Best-fit aseismic slip rate estimated from the time dependent inversion (red line). Labelled dashed vertical lines indicated selected CSK acquisition epochs. (c) Blue triangles indicate the CSK light-of-sight (LOS) displacement observed in the area of maximum deformation (i.e., between MMNO and VIGG sites), while grey line is the prediction from the best-fit model. The transient event is clearly visible between July 2012 and the first half of 2013. (d) Evolution of cumulative aseismic slip (red line) and cumulative number of seismic events (grey area). The figures were created by using Generic Mapping Tools software (GMT v4.5.14; http://gmt.soest.hawaii.edu/)53.
CSK DInSAR data around the Pollino range. Data (left panels), model (middle panels) and residuals (right panels) sampled points from CSK time series showing the displacement field as a function of time: (a) between 5 June-11 October 2012 (T04), (b) between 11 October-12 November 2012 (T05), and (c) between 12 November 2012–2 March 2014 (T35). Negative changes represent increase in radar LOS. The red star indicates the 25 October 2012 M W 5.1 event. Details of interferometric pairs are shown in Supplementary Fig. S3. The complete set of the 36 epochs CSK displacement time series used in the inversion are fully described in the Supplementary Material. The maps were created by using Generic Mapping Tools software (GMT v4.5.14; http://gmt.soest.hawaii.edu/)53.
Our geodetic data highlight that surface deformation started in June 2012 and evolved until mid-2013, with alternating phases of acceleration and deceleration that correlate with the seismicity rate (Fig. 2). In particular, between June and October 2012 DInSAR displacement measurements (Fig. 3) reveal a cumulative LOS deformation of about 20 mm in the epicentral area before the occurrence of the main shock. Significant surface deformation is observed between 11 October and 12 November 2012 (interval containing the M W 5.1 event), when the LOS cumulative displacement field reached a value of about 40 mm (Fig. 3, second row). In the following eight months, between 12 November 2012 and 26 July 2013, after about 1 year from the start of the detected transient surface deformation, the LOS displacement gradually achieved the final cumulative value of ~60 mm (the complete set of LOS displacement fields are shown in the Supplementary Fig. 7a–g).
Joint inversion of the 3-D continuous GPS time series at 12 sites and 35 DInSAR cumulative LOS displacement measurements spanning the swarm sequence, indicates that the main area of transient aseismic slip took place at shallow depths (between 2–7 km) along a source model which appears to be consistent with the mechanisms of the coseismic fault plane of the 25 October 2012 M W 5.1 main shock (Fig. 4a). The surface projection of our best-fit model seems not to be fully compatible with the major mapped active faults34,35,36,37 in the Pollino area (that is, the Mercure and Pollino faults, MF and PF in Fig. 4a). On the other hand, our solution is consistent with recent studies26, 37 that have shown two newly identified sub-parallel W-SW-dipping fault segments (ROCS in Fig. 4a) as the main causative source of the Pollino swarm sequence.
Surface deformation during the transient aseismic event and interseismic velocity field. (a) Observed (blue arrows) and predicted (white arrows) cumulative horizontal displacements from the aseismic model. The purple box represents the best-fit uniform-slip aseismic fault plane, while the colour scale is the aseismic slip distribution (in mm) of the total cumulative displacement computed on an extended fault plane discretized into smaller patches. (b) Best-fit interseismic horizontal velocity field in an Apulian (Ap) reference frame. The dashed lines enclose the polygon used for strain rate calculation, while the double-sided arrows indicate the principal strain rates (\({\dot{\varepsilon }}_{max}\) = 34 ± 7 × 10^−9 yr^−1). Green dots represent the relocated54 seismic events during the swarm. Traces of active faults as in Fig. 1. (c) Estimated aseismic slip distribution as a function of depth (symbols as in panel c). The hypocentre of the largest shock (M W 5.1) is shown as a red star and the mechanism of this event is taken from time domain moment tensor (TDMT) catalogue. The maps were created by using Generic Mapping Tools software (GMT v4.5.14; http://gmt.soest.hawaii.edu/)53.
The maximum cumulative slip reaches about 250 mm (Fig. 4a) and the moment release is equivalent to a magnitude M W 5.5 earthquake. The cumulative moment release through earthquakes (including the M W 5.1 event) in the corresponding time period is equivalent to M W 5.17, indicating that the deformation occurring during the Pollino swarm sequence was about 70% aseismic. Both GPS and CSK time series (Fig. 2) show that this aseismic transient slip evolved with time. In fact, we observed phases of acceleration and deceleration of the aseismic slip which correlate with increase and decrease of the seismicity rate, respectively (Fig. 2b). In particular, between June and August 2012, about 3–4 months before the occurrence of the M W 5.1 main shock, the rate of seismicity increased faster than in the previous 2011–2012 bursts of activity27, which unfortunately were not covered by the geodetic measurements. The aseismic slip rate shows a synchronous increment in almost the same time period, culminating in a dramatic increase just before the 25 October 2012 M W 5.1 earthquake (Fig. 2). About two months after the main shock both the seismicity and the aseismic slip suddenly dropped off. The 2013 activity marks a new phase of swarm activity, with a significant enlargement of the area affected by the seismicity27. In this period, the aseismic slip rate gradually decreased until May 2013.
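To make the seismic/aseismic budget explicit, the short sketch below converts the quoted magnitudes back to scalar moments with the Hanks-Kanamori relation M_W = (2/3)(log10 M_0 − 9.05) (M_0 in N·m) and recomputes the aseismic fraction. Reading the geodetic M W 5.5 as the total (seismic plus aseismic) release is our assumption, made only to reproduce the "about 70%" figure quoted above; the conversion formula itself is standard but is not stated in the text.

def m0_from_mw(mw):
    # scalar seismic moment (N*m) from moment magnitude (Hanks-Kanamori)
    return 10 ** (1.5 * mw + 9.05)

m0_total = m0_from_mw(5.5)     # geodetic release during the swarm (assumed to be the total)
m0_seismic = m0_from_mw(5.17)  # cumulative moment of the recorded earthquakes
print(round(1.0 - m0_seismic / m0_total, 2))  # -> 0.68, i.e. about 70% aseismic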
Between May and July 2013 (that is, between 200–240 days after the occurrence of the main event of the sequence) we observe another small increment in the slip rate. This increment is synchronous with an enlargement of the crustal area affected by the seismicity and is required to accommodate the surface deformation observed in our geodetic time series. Additionally, statistical analysis (i.e. ETAS modeling27) of the swarm, suggests that the transient forcing process lasted throughout all the seismic sequence and not just during the acceleration phases observed by our geodetic data. This leads to the conclusion that the slow-slip event is the main driver of the whole seismic sequence since October 2010.
We suggest that a transient aseismic slow slip event started about 3–4 months before the occurrence of the main shock (M W 5.1), and systematically accompanied the seismic sequence (at least in the time span covered by the geodetic observations). The start of the aseismic transient coincides with the fast increase of the seismicity rate detected by Passarelli et al.27 and ascribed to an aseismic transient forcing. The observed increasing and decreasing seismicity rate, were accompanied by the transient acceleration and deceleration of the aseismic slip respectively (Fig. 2). The surface deformation increased with time, reaching up to ~10 mm at MMNO GPS station and about 60 mm in the LOS at the end of the transient. Furthermore, the signal amplitude and the spatial extent of deforming area clearly increase with time (Fig. 3). Therefore, the LOS changes and GPS surface deformation across the Pollino range are observed not only during the M W 5.1 event, but also before and after the main shock, thus demonstrating that aseismic slip occurred during the seismic swarm. Both seismic and aseismic moment release contributed to the total release of the tectonic strain accumulated during the interseismic phase. For this reason, the detection and estimation of the transient aseismic phenomena have significant implications for the evaluation of the fraction of tectonic loading released seismically which in turn has consequences in terms of seismic hazard. Furthermore, it is important to identify other possible faults in the Pollino seismic gap region that may have been brought closer to failure by the stress changes associated with the estimated aseismic transient slip episode. Apart from the obvious increased stress on the portion of the causative fault surrounding the aseismically slipped area, we find very little stress increase (about 0/0.2 bar) in the north-western tip of the southern PF fault (Supplementary Fig. S8). We find a general decrease in stress (−3.8/−0.8 bar) on the south-eastern half of both the MF and CPST fault planes, and a stress increase (up to 1.5 bar) on their north-western parts (Supplementary Fig. S8) which may represent a feature that should be considered in future hazard assessment (a complete description of the static stress changes calculation is given in the Supplementary Material).
The observed aseismic transient fault slip implies that the Pollino range faults should have accumulated interseismic elastic strain before the swarm sequence. Figure 4b shows the interseismic velocities corrected for the transient displacements occurred during the Pollino swarm. Our estimate of secular tectonic loading is ~1.7 mm/yr (Fig. 4b), thus showing a significant southward decrease of active extension from the Southern Apennines (extension ~2.5 mm/yr44, 45) to the Pollino range. However, the definition of the interseismic behaviour for the Pollino range active faults is challenging due to the coverage of the geodetic network in the region which poorly resolves the main deformation mechanism active on the faults in the Pollino area. A long-lasting (at least one decade) centimetre scale creeping behaviour has been suggested by Sabadini et al.46 on the basis of non-continuous DInSAR and GPS data. However, discrepancies between the proposed rates and the regional tectonic loading and the lack of surface expression along the trace of the involved faults does not appear to fully support this hypothesis. On the other hand, if we assume that the Pollino range fault systems behave the same way as those involved during the seismic swarm, then the creeping reported on the southern fault system by Sabadini et al.46 could be interpreted as the superimposition of several episodic aseismic slip transients.
The behaviour observed during the Pollino swarm sequence suggests that the seismically-radiating faults (velocity-weakening patches) may be heterogeneously distributed in a spotty style, while velocity-strengthening zones could be more widely distributed on the fault. The M W 5.1 main shock has nucleated on a velocity-weakening patch, although our interseismic velocity field cannot resolve the accurate geometry of this structure. Paleoseismological data41, 42 from the southern fault segments of the Pollino fault system suggests that the dimensions of velocity-weakening patches are not limited to M W ≈ 5 events, but may reach sizes capable of generating a surface-rupturing event (M W ≥ 6.5).
One important consequence of the transient aseismic slip is that the associated expected rate of large earthquakes is lower than the one envisaged from a full velocity-weakening behaviour and a full seismic release. We calculate the recurrence of M W ≥ 6.5 events predicted by the observed interseismic strain rate (corrected for the effect of the aseismic transient, Fig. 4b) by accounting for the effect of different fraction of the aseismic deformation (see Methods). Figure 5 shows that complete seismic release of the tectonic loading (case 1 in Fig. 5) requires a M W ≥ 6.5 event every 350–890 years. This value is slightly lower than, but similar to, the recurrence of M W ≥ 6.5 events in the Central-Southern Apennines (240–600 years), where the spatial distribution of large macroseismic intensities in the last 1000 years does not show significant gaps47. Halving the seismic coupling (case 2 in Fig. 5) doubles the recurrence of M W ≥ 6.5 events and increases the probability of not observing large macroseismic intensities in the historical catalogue. Thus, the combination of lower strain rates relative to the adjacent Southern Apennines, and a mixed seismic/aseismic strain release may be a possible scenario capable of increasing the recurrence time of large magnitude events in the Pollino seismic gap.
Recurrence of M W ≥ 6.5 events derived from the interseismic geodetic strain rate. The blue line shows the recurrence of M W ≥ 6.5 events as a function of seismic coupling fraction. The grey area includes the ±1-sigma uncertainty. The recurrence estimates have been calculated using \({\dot{\varepsilon }}_{max}\) = 34 ± 7 × 10^−9 yr^−1 and H = 10 ± 2.5 km. Full seismic coupling (c = 1.0, case 1) predicts a M W ≥ 6.5 event every 350–890 years. Allowing half of the tectonic loading to be released aseismically (c = 0.5, case 2) doubles the recurrence to 700–1780 years and increases the probability of not observing large macroseismic intensities in the Pollino seismic gap area. The figure was created by using Generic Mapping Tools software (GMT v4.5.14; http://gmt.soest.hawaii.edu/)53.
GPS data and processing
Surface displacements have been recorded by 12 permanent Global Positioning System (GPS) stations managed by different public and private institutions (Fig. 1). GPS data were processed using the Jet Propulsion Laboratory (JPL) GIPSY-OASIS II software. A complete description of the processing details and strategies are given in the Supplementary Material. Visual inspections of the GPS time series (Supplementary Fig. S6a–l) only show subtle coseismic offsets (<1–2 mm) related to the 25 October 2012 M W 5.1 main event of the swarm sequence. On the contrary, the two GPS sites located closest the source (MMNO and VIGG) are likely affected by significant (>5 mm) transient deformation in the E-W component, especially following the M W 5.1 event (Fig. 2). A similar signal is also present in the other nearby stations (Supplementary Fig. S6a–l).
High-rate GPS analysis
A high-rate analysis of GPS data at the closest stations (MMNO and VIGG) is performed with the strategy described in the Supplementary Material. This analysis results in 30 sec-sampled time series covering a 1.5-hour time interval spanning the M W 5.1 earthquake (Supplementary Fig. S2). The uncertainties of these time series are 0.45, 0.39 and 1.09 cm for the North, East and Vertical components, respectively. The signals produced by the M W 5.1 event are within the uncertainties of the high-rate GPS solutions and no clear offsets seem to occur during the largest earthquake of the swarm sequence.
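As an illustration of the kind of offset test implied here, the sketch below compares the mean position in windows before and after the event epoch with the standard error of their difference; the window length and the synthetic series are ours, and only the 0.39 cm East-component uncertainty comes from the text.

import numpy as np

def offset_significance(series, i_event, sigma, n_win=30):
    # difference of pre/post-event window means, and that difference in units of its standard error
    pre = series[max(0, i_event - n_win):i_event]
    post = series[i_event:i_event + n_win]
    offset = post.mean() - pre.mean()
    se = sigma * np.sqrt(1.0 / len(pre) + 1.0 / len(post))
    return offset, offset / se

# synthetic 30-s East component (m) with the 0.39 cm scatter quoted above and no true step
rng = np.random.default_rng(0)
east = rng.normal(0.0, 0.0039, size=180)
off, z = offset_significance(east, i_event=90, sigma=0.0039)
print(f"offset = {off * 1000:.1f} mm ({z:.1f} sigma)")  # typically well within 2 sigma for a step-free series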
DInSAR data and processing
We used Synthetic Aperture Radar (SAR) data acquired by the COSMO-SkyMed (CSK) constellation, composed of four satellites and operated by the Italian Space Agency. A temporally dense data set was available thanks to a specific acquisition planning conveniently managed during the seismic crisis. Images were acquired in ascending orbits (side-looking angle of about 30° off the vertical) in the stripmap (HIMAGE) mode with 3 m by 3 m spatial resolution. A data set of 39 stripmap images was available: the time interval covered by the acquisitions starts on 5 June 2012 and includes almost two years of surface deformation up to 8 April 2014. Acquisition parameters in terms of temporal and spatial baselines with respect to the reference master image acquired on 23 May 2013, are listed in Supplementary Table T3 and are shown in Supplementary Fig. S3. Data have been processed with a two-scale approach, at low resolution (small scale) and high resolution (large scale). A complete description of the processing details and strategies is given in the Supplementary Material.
Figure 2 and Supplementary Figs S5 and S7a–g show the temporal evolution of the line-of-sight (LOS) displacement derived from the time series analysis in the area of maximum deformation (i.e., between MMNO and VIGG stations). Two first clear (≥1 cm) LOS displacements occurred before the M W 5.1 earthquake. In particular, between the first two acquisition dates of 5 June and 23 July 2012, and between 24 August and 11 October 2012, when the surface deformation reached a cumulative LOS value of >2 cm. Between 11 October and 12 November 2012, that is time interval spanning the 25 October 2012 M W 5.1 earthquake, we observed another significant (>2 cm) phase of rapid surface deformation. Finally, in the following months the surface deformation continued until the middle of 2013, but at a slower rate, reaching a total cumulative LOS displacement >6 cm.
We analysed the GPS and DInSAR time series by projecting the GPS positions along the CSK LOS and comparing the resulting values with the CSK displacements averaged within 150 meters from the GPS monument (Supplementary Fig. S5). In particular, the comparison of the CSK time series with the independent GPS time series of MMNO station demonstrates the consistency between the different data sets of measurements at the level of a fraction of a centimetre.
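The GPS-versus-DInSAR comparison requires projecting the three-component GPS displacement onto the radar line of sight. A minimal sketch of that projection is given below; only the ~30° incidence angle comes from the text, while the ascending heading of about −10° and the sign convention (positive = motion toward the satellite) are assumptions made for illustration, since conventions differ between processors.

import math

def enu_to_los(d_east, d_north, d_up, incidence_deg=30.0, heading_deg=-10.0):
    # project an ENU displacement (m) onto the line of sight of a right-looking SAR;
    # positive output = motion toward the satellite (range decrease)
    theta = math.radians(incidence_deg)   # incidence angle from the vertical
    alpha = math.radians(heading_deg)     # satellite heading, clockwise from north
    phi = alpha - math.pi / 2.0           # azimuth of the ground-to-satellite direction (right-looking)
    u_e = math.sin(theta) * math.sin(phi)
    u_n = math.sin(theta) * math.cos(phi)
    u_u = math.cos(theta)
    return d_east * u_e + d_north * u_n + d_up * u_u

# e.g. the ~10 mm of westward motion observed at MMNO, assuming no vertical motion:
print(enu_to_los(-0.010, 0.0, 0.0))  # about +5 mm of LOS change under these conventions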
Time dependent inversion
We inverted the 3-D GPS time series and DInSAR displacement fields to simultaneously estimate the coseismic displacement related to the M W 5.1 earthquake source and the aseismic transient slow slip event. To emphasize the extensional deformation across the Southern Apennines, the time series are shown (Supplementary Figs S1 and S6) in a reference frame defined by minimizing the horizontal velocities of the stations in the Apulian block44. The inversions were performed with TDEFNODE48. Because of the limited number of GPS stations and of the simple concentric deformation pattern observed in the DInSAR displacement fields, we assume a uniform slip on the rupture plane.
In particular, we modelled the M W 5.1 event as a 4 km by 4 km square dislocation with a uniform slip of 10 cm and we fixed the fault strike, dip, rake and hypocentral depth (164°/47°/−84°/5 km) according to the focal mechanism solution of the TDMT catalogue (http://cnt.rm.ingv.it/tdmt). The synthetic offsets produced by this source are in agreement with the small static coseismic offsets observed both in the daily and in the high-rate 30-sec sampled GPS solutions (Fig. S2). The aseismic transient slow slip event was modelled as a planar uniform slip source with time dependence set as a series of overlapping triangles. We inverted for the dimensions, positions and strike, dip and rake of the fault plane. In addition, for a slow slip event, the free parameters for the time history are the origin time, T 0 , and the triangle amplitudes, A i (where i is the progressive number of the triangle in the time function). The rise time of the triangle is fixed at 16 days. To test variable slip on the fault plane, we also computed the slip distribution of the total cumulative displacement (Fig. 4a). A full explanation of the inversion scheme and tests are given in the Supplementary Material.
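As a quick consistency check on the fixed coseismic source, the sketch below computes the scalar moment of a 4 km × 4 km dislocation with 10 cm of uniform slip, using the rigidity (3.3 × 10^10 Pa) quoted in the next subsection, and converts it to moment magnitude; the Hanks-Kanamori conversion is the only ingredient not stated in the text.

import math

mu = 3.3e10            # rigidity (Pa), as in the Methods below
area = 4.0e3 * 4.0e3   # 4 km x 4 km fault patch (m^2)
slip = 0.10            # uniform slip (m)

m0 = mu * area * slip                        # scalar moment, N*m
mw = (2.0 / 3.0) * (math.log10(m0) - 9.05)   # Hanks-Kanamori moment magnitude
print(f"M0 = {m0:.2e} N*m, Mw = {mw:.2f}")   # ~5.3e16 N*m, Mw ~ 5.1, consistent with the TDMT solution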
Seismic moment accumulation and seismic potential
To estimate the rate of seismic release and the effect of aseismic deformation, we computed the rate of seismic moment accumulation from the geodetic strain rate in the polygon in Fig. 4b using a scalar version of Kostrov's formula49:
$${\dot{M}}_{geod}=2\mu AT{\dot{\varepsilon }}_{max}$$
where \({\dot{\varepsilon }}_{max}\) is the largest absolute eigenvalue of the strain rate tensor, A is the considered area, T is the seismogenic thickness and μ is the rigidity modulus (3.3 × 10^10 Pa). The rate of seismic release is evaluated under the assumption that the seismic moment is distributed across earthquakes obeying the Gutenberg-Richter relation between magnitude and frequency truncated to a maximum moment earthquake50:
$$\dot{N}({M}_{0})=\alpha {M}_{0}^{-\beta }[1-H({M}_{0}-{M}_{0}^{max})]$$
where \(\dot{N}\) is the rate of events having moment greater than or equal to \(M_0\), \({M}_{0}^{max}\) is the moment of the maximum magnitude event, H is the Heaviside function and β = 2/3 (equivalent to assuming b = 1 in the Gutenberg-Richter relation). The rate of total moment release is (refs 50, 51):
$${\dot{M}}_{0}^{tot}=\frac{\alpha \beta \,{({M}_{0}^{max})}^{1-\beta }}{1-\beta }$$
Equation (3) can be solved for α, and the result inserted in (2), leading to:
$$\dot{N}({M}_{0})=c\,\frac{{\dot{M}}_{0}^{tot}(1-\beta )}{\beta \,{({M}_{0}^{max})}^{1-\beta }}\,{M}_{0}^{-\beta }$$
We assume the magnitude of the maximum event \(M_{max}\) = 7.0 (similar to the estimated maximum magnitude of the largest events observed in the Apennines31). We also introduced the value c to account for a variable fraction (between 0 and 1) of seismically released \({\dot{M}}_{0}^{tot}\).
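The recurrence values quoted in the Results and in the Fig. 5 caption can be reproduced with a short script implementing equations (1)-(4). The area A of the Fig. 4b polygon is not given explicitly in the text, so the value below (~1.9 × 10^3 km^2) is an assumption chosen for illustration, and the Hanks-Kanamori magnitude-to-moment conversion is likewise our addition; with these choices the script returns roughly 350-890 years for c = 1 and roughly 700-1780 years for c = 0.5.

def m0_from_mw(mw):
    # scalar moment (N*m) from moment magnitude (Hanks-Kanamori)
    return 10 ** (1.5 * mw + 9.05)

def recurrence_years(strain_rate, thickness, area, c=1.0,
                     mw_max=7.0, mw_target=6.5, mu=3.3e10, beta=2.0 / 3.0):
    # Eq. (1): rate of geodetic moment accumulation (scalar Kostrov)
    m_dot = 2.0 * mu * area * thickness * strain_rate
    # Eq. (4): rate of events with moment >= M0(mw_target), with a fraction c released seismically
    m0_max, m0_t = m0_from_mw(mw_max), m0_from_mw(mw_target)
    n_dot = c * m_dot * (1.0 - beta) / (beta * m0_max ** (1.0 - beta)) * m0_t ** (-beta)
    return 1.0 / n_dot  # average recurrence time (yr) of events with Mw >= mw_target

A = 1.9e9  # polygon area in m^2 (~1900 km^2) -- assumed, not given in the text
for eps, H in [(41e-9, 12.5e3), (34e-9, 10.0e3), (27e-9, 7.5e3)]:
    print(f"eps = {eps:.1e}/yr, H = {H / 1e3:.1f} km -> "
          f"T(c=1) = {recurrence_years(eps, H, A):.0f} yr, "
          f"T(c=0.5) = {recurrence_years(eps, H, A, c=0.5):.0f} yr")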
Peng, Z. & Gomberg, J. An integrated perspective of the continuum between earthquake and slow-slip phenomena. Nat. Geosci. 3, 599–607 (2010).
Rogers, G. & Dragert, H. Episodic tremor and slip on the Cascadia subduction zone: the chatter of silent slip. Science 300(5627), 1942–1943 (2003).
Ozawa, S. et al. Detection and monitoring of ongoing aseismic slip in the Tokai region, central Japan. Science 298(5595), 1009–1012 (2002).
Wallace, L. M. & Eberhart-Phillips, D. Newly observed, deep slow slip events at the central Hikurangi margin, New Zealand: implications for downdip variability of slow slip and tremor, and relationship to seismic structure. Geophys. Res. Lett. 40, 5393–5398 (2013).
Pritchard, M. E. & Simons, M. An aseismic slip pulse in northern Chile and along-strike variations in seismogenic behaviour. J. Geophys. Res. 111, B08405, doi:10.1029/2006JB004258 (2006).
Villegas-Lanza, J. C. et al. A mixed seismic-aseismic stress release episode in the Andean subduction zone. Nat. Geosci. 9, 150–154 (2016).
Rousset, B. et al. An aseismic slip transient on the North Anatolian Fault. Geophys. Res. Lett. 43, 3254–3262 (2016).
Cheloni, D. et al. New insights into fault activation and stress transfer between en echelon thrusts: The 2012 Emilia, Northern Italy, earthquake sequence. J. Geophys. Res. 121, doi:10.1002/2016JB012823 (2016).
Cheloni, D. et al. Coseismic and post-seismic slip of the 2009 L'Aquila (central Italy) Mw 6.3 earthquake and implications for seismic potential along the Campotosto fault from joint inversion of high-precision levelling, InSAR and GPS data. Tectonophysics 622, 168–185 (2014).
Perfettini, H. & Avouac, P. Modeling afterslip and aftershocks following the 1992 Landers earthquake. J. Geophys. Res. 112, B07409, doi:10.1029/2006JB004399 (2007).
Miyazaki, S., Segall, P., McGuire, J. J., Kato, T. & Hatanaka, Y. Spatial and temporal evolution of stress and slip rate during the 2000 Tokai slow earthquake. J. Geophys. Res. 111, B03409, doi:10.1029/2004JB003426 (2006).
Linde, A. T., Gladwin, M. T., Johnston, M. J. S., Gwyther, R. L. & Bilham, R. G. A slow earthquake sequence on the San Andreas Fault. Nature 383, 65–68 (1996).
Segall, P., Desmarais, E. K., Shelly, D., Miklius, A. & Cervelli, P. Earthquakes triggered by silent slip events on Kilauea volcano, Hawaii. Nature 442, 71–74 (2006).
Lohman, R. B. & McGuire, J. J. Earthquake swarms driven by aseismic creep in the Salton Trough, California. J. Geophys. Res. 112, B04405, doi:10.1029/2006JB004596 (2007).
Wei, S. et al. Complementary slip distributions of the largest earthquakes in the 2012 Brawley swarm, Imperial Valley, California. Geophys. Res. Lett. 40, 847–852 (2013).
Wei, S. et al. The 2012 Brawley swarm triggered by injection-induced aseismic slip. Earth Planet. Sci. Lett. 422, 115–125 (2015).
Toda, S., Stein, R. S. & Sagiya, T. Evidence from the AD 2000 Izu islands earthquake swarm that stressing rate governs seismicity. Nature 419, 58–61 (2002).
Klein, F. W., Einarsson, P. & Wyss, M. Reykjanes Peninsula, Iceland, earthquake swarm of September 1972 and its tectonic significance. J. Geophys. Res. 82, 865–888 (1977).
Waite, G. P. & Smith, R. B. Seismic evidence for fluid migration accompanying subsidence of the Yellowstone caldera. J. Geophys. Res. 107, 2177–2192 (2002).
Hainzl, S. Seismicity patterns of earthquake swarms due to fluid intrusion and stress triggering. Geophys. J. Int 159, 1090–1096 (2004).
Vidale, J. E. & Shearer, P. M. A survey of 71 earthquake bursts across southern California: Exploring the role of pore fluid pressure fluctuations and aseismic slip as drivers. J. Geophys. Res. 111, B05312, doi:10.1029/2005JB004034 (2006).
Kyriakopoulos, C. et al. Monthly migration of a tectonic seismic swarm detected by DInSAR: southwest Peloponnese, Greece. Geophys. J. Int. 194, 1302–1309 (2013).
Borghi, A., Aoudia, A., Javed, F. & Barzaghi, R. Precursory slow-slip loaded the 2009 L'Aquila earthquake sequence. Geophys. J. Int. 205, 776–784 (2016).
Totaro, C. et al. The ongoing seismic sequence at the Pollino Mountains, Italy. Seismol. Res. Lett. 84, 955–962 (2013).
Govoni, A. et al. Investigating the Origin of the Seismic Swarms. EOS Trans. Am. Geophys. Un. 94, 361–362 (2013).
Totaro, C. et al. An Intense Swarm in the Southernmost Apennines: Fault Architecture from High-Resolution Hypocenters and Focal Mechanisms. Bull. Seism. Soc. Am. 105, doi:10.1785/0120150074 (2015).
Passarelli, L. et al. Aseismic transient driving the swarm-like seismic sequence in the Pollino range, Southern Italy. Geophys. J. Int. 201, 1553–1567 (2015).
Faccenna, C. et al. Topography of the Calabria subduction zone (southern Italy): clues for the origin of Mt Etna. Tectonics 30, TC1003, doi:10.1029/2010TC002694 (2011).
Chiarabba, C., Piana Agostinetti, N. & Bianchi, I. Lithospheric fault and kinematic decoupling of the Apennines system across the Pollino range. Geophys. Res. Lett. 43, 3201–3207 (2016).
Valensise, G. & Pantosti, D. Syntax of referencing In Anatomy of an Orogen: The Apennines and Adjacent Mediterranean Basins (ed. by Vai, G. B. & Martini, I. P.) 495–512 (2001).
Rovida, A. et al. CPTI11, the 2011 version of the parametric catalogue of Italian earthquakes http://emidius.mi.ingv.it/CPTI (2011).
ISIDe Working Group Italian Seismological Instrumental and Parametric Databases http://iside.rm.ingv.it (Accessed: 4th July 2016).
Castello, B. et al. CSI Catalogo della Sismicità Italiana 1981–2002 http://csi.rm.ingv.it. (Accessed: 4th July 2016).
Michetti, A. M. et al. Ground effects during the 9 September 1998, Mw = 5.6 Lauria earthquake and the seismic potential of the "aseismic" Pollino region in southern Italy. Seismol. Res. Lett. 71, 31–46 (2000).
DISS Working Group Database of Individual Seismogenic Sources (DISS), Version 3.2.0: A compilation of potential sources for earthquakes larger than M 5.5 in Italy and surrounding areas http://diss.rm.ingv.it/diss/(Accessed: 4th July 2016).
Papanikolaou, I. D. & Roberts, G. P. Geometry, kinematics and deformation rates along the active normal fault system in the southern Apennines: Implications for fault growth. J. Struct. Geol. 29, 166–188 (2007).
Brozzetti, F. et al. Newly identified active faults in the Pollino seismic gap, southern Italy, and their seismotectonics significance. J. Struct. Geol. 94, 13–31, doi:10.1016/j.jsg.2016.10.005 (2017).
Hainzl, S., Fischer, T. & Dahm, T. Seismicity-based estimation of the driving fluid pressure in the case of the swarm activity in Western Bohemia. Geophys. J. Int 191, 271–281 (2012).
Roland, E. & McGuire, J. J. Earthquake swarms on transform faults. Geophys. J. Int. 178, 1677–1690 (2009).
Hainzl, S. & Ogata, Y. Detecting fluid signals in seismicity data through statistical earthquake modelling. J. Geophys. Res. 110, B05S07, doi:10.1029/2004JB003247 (2005).
Tertulliani, A. & Cucci, L. New Insights on the Strongest Historical Earthquake in the Pollino Region (Southern Italy). Seismol. Res. Lett. 85, 743–751 (2014).
Cinti, F., Moro, M., Pantosti, D., Cucci, L. & D'Addezio, G. New constraints on the seismic history of the Castrovillari fault in the Pollino gap (Calabria, southern Italy). J. Seismol. 6, 199–217 (2002).
Michetti, A. M., Ferreli, L., Serva, L. & Vittori, E. Geological evidence for strong historical earthquakes in as 'aseismic' region: The Pollino case. J. Geodyn. 24, 67–87 (1997).
D'Agostino, N. et al. Forearc extension and slow rollback of the Calabrian Arc from GPS measurements. Geophys. Res. Lett. 38, L17304, doi:10.1029/2011GL048270 (2011).
D'Agostino, N., Avallone, A., D'Anastasio E. & Cecere, G. GPS velocity and strain rate field in the Calabro-Lucania region. Report on the deliverable/task D24/c2 (2013).
Sabadini, R. et al. First evidences of fast creeping on a long-lasting quiescent earthquake normal-fault in the Mediterranean. Geophys. J. Int. 179, 720–732 (2009).
D'Agostino, N. Complete seismic release of tectonic strain and earthquake recurrence in the Apennines (Italy). Geophys. Res. Lett. 41, 1155–1162 (2014).
McCaffrey, R. Time-dependent inversion of three-component continuous GPS for steady and transient sources in northern Cascadia. Geophys. Res. Lett. 36, L07304, doi:10.1029/2008GL036784 (2009).
Kostrov, V. V. Seismic moment and energy of earthquakes, and seismic flow of rock. Phys. Solid Earth 1, 23–44 (1974).
MathSciNet Google Scholar
Molnar, P. Earthquake recurrence intervals and plate tectonics. Bull. Seismol. Soc Am. 69, 115–133 (1979).
England, P. & Bilham, R. The Shillong Plateau and the great 1897 Assam earthquake. Tectonics 34, 1792–1812 (2015).
Locati, M., Camassi, R. & Stucchi, M. DBMI11, la versione 2011 del Database Macrosismico Italiano http://emidius.mi.ingv.it/DBMI11 (2011).
Wessel, P. & Smith, W. H. F. New improved version of the generic mapping tools released. Eos. Trans. AGU 79, 577–579 (1998).
De Gori, P., Margheriti, L., Lucente, F. P., Govoni, A., Moretti, M. & Pastori, M. Seismic activity images the activated fault system in the Pollino area, at the Apennines-Calabrian arc boundary region. Proceedings of the 34th national conference of GNGTS, Bologna (2014).
We thank the Editor A. Aoudia and three anonymous reviewers for comments and suggestions that allowed us to significantly improve the manuscript. We are grateful to R. McCaffrey from Portland State University for support in using the TDEFNODE software. We thank S. Murphy from the National Institute of Geophysics and Volcanology for reviewing the English language of the manuscript, and the technical staff of the RING network for GPS station maintenance. Part of this work has been carried out using CSK® Products © ASI (Italian Space Agency), delivered under an ASI licence to use in the framework of the COSMO-SkyMed Open Call for Science. We thank ASI for providing the data and for re-planning the CSK acquisitions during the seismic crisis. As far as the authors from the Italian Civil Protection Department (RG) are concerned, the views and conclusions contained in this paper are those of the authors and should not be interpreted as necessarily representing official policies, either expressed or implied, of the Italian Government. The GPS velocities and transient offsets are provided in the Supporting Information, while the DInSAR data can be obtained by contacting the corresponding author ([email protected]).
Istituto Nazionale di Geofisica e Vulcanologia (INGV), Centro Nazionale Terremoti, via di Vigna Murata 605, 00143, Rome, Italy
Daniele Cheloni, Nicola D'Agostino, Giulio Selvaggi & Antonio Avallone
Consiglio Nazionale delle Ricerche (CNR), Istituto per il Rilevamento Elettromagnetico dell'Ambiente, via Diocleziano 328, 80124, Naples, Italy
Gianfranco Fornaro, Diego Reale, Eugenio Sansosti & Pietro Tizzani
Dipartimento della Protezione Civile (DPC), Ufficio Rischio Sismico e Vulcanico, via Vitorchiano, 2, 00189, Rome, Italy
Roberta Giuliani
D.C. carried out the data analysis, the time dependent inversion, and drafted the manuscript. N.D. coordinated the research, collaborated in the time dependent inversion, carried out the GPS data processing, and drafted the manuscript with D.C. A.A. carried out the high-rate GPS analysis and drafted its description in the manuscript. E.S., G.S. and R.G. helped to draft the manuscript. E.S and R.G. contributed to the planning of CSK acquisitions during the seismic crisis. G.F. coordinated the DInSAR activities, including the data acquisition under the ASI license to use ID233. D.R carried out the DInSAR data processing with the help of G.F., E.S. and P.T. All the authors read and approved the final manuscript.
Correspondence to Daniele Cheloni.
Cheloni, D., D'Agostino, N., Selvaggi, G. et al. Aseismic transient during the 2010–2014 seismic swarm: evidence for longer recurrence of M ≥ 6.5 earthquakes in the Pollino gap (Southern Italy)?. Sci Rep 7, 576 (2017). https://doi.org/10.1038/s41598-017-00649-z
Elevated Serum IL-17A but not IL-6 in Glioma Versus Meningioma and Schwannoma
Doroudchi, Mehrnoosh (Department of Immunology, School of Medicine, Shiraz University of Medical Sciences) ;
Pishe, Zahra Ghanaat (Shiraz University of Medical Sciences) ;
Malekzadeh, Mahyar (Institute for Cancer Research, School of Medicine, Shiraz University of Medical Sciences) ;
Golmoghaddam, Hossein (Department of Immunology, School of Medicine, Shiraz University of Medical Sciences) ;
Taghipour, Mousa (Shiraz University of Medical Sciences) ;
Ghaderi, Abbas (Department of Immunology, School of Medicine, Shiraz University of Medical Sciences)
https://doi.org/10.7314/APJCP.2013.14.9.5225
Background: There is a Th1/Th2 cytokine imbalance and expression of IL-17 in patients with brain tumours. We aimed to compare the levels of IL-17A and IL-6 in sera of glioma, meningioma and schwannoma patients as well as in healthy individuals. Materials and Methods: IL-17A and IL-6 levels were measured in sera of 38 glioma, 24 meningioma and 18 schwannoma patients for comparison with 26 healthy controls by commercial ELISA assays. Results: We observed an increase in IL-17A in 30% of glioma patients, while only 4% and 5.5% of meningioma and schwannoma patients and none of the healthy controls showed elevated IL-17A in their sera (0.29 ± 0.54, 0.03 ± 0.15 and 0.16 ± 0.68 vs. 0.00 ± 0.00 pg/ml; p=0.01, p=0.01 and p=0.001, respectively). There was also a significant decrease in the level of IL-6 in glioma patients compared to healthy controls (2.34 ± 4.35 vs. 4.67 ± 4.32 pg/ml; p=0.01). There was a direct correlation between the level of IL-17A and age in glioma patients (p=0.005). Glioma patients over 30 years of age had higher IL-17A and lower IL-6 in their sera compared to younger patients. In addition, a non-significant grade-specific inverse trend between IL-17A and IL-6 was observed in glioma patients, where high-grade gliomas had higher IL-17A and lower IL-6. Conclusions: Our data suggest a Th17-mediated inflammatory response in the pathogenesis of glioma. Moreover, tuning of the IL-6 and IL-17A inflammatory cytokines occurs during progression of glioma. IL-17A may be a potential biomarker and/or immunotherapeutic target in glioma cases.
Glioma;meningioma;schwannoma;IL-17A;IL-6;serum
Public-health impact of outdoor air pollution for 2nd air pollution management policy in Seoul metropolitan area, Korea
Jong Han Leem1,
Soon Tae Kim2 &
Hwan Cheol Kim1
Air pollution contributes to mortality and morbidity. We estimated the impact of outdoor air pollution on public health in the Seoul metropolitan area, Korea, in terms of attributable cases of mortality and morbidity.
Epidemiology-based exposure-response functions for a 10 μg/m3 increase in particulate matter (PM2.5 and PM10) were used to quantify the effects of air pollution. Cases attributable to air pollution were estimated for mortality (adults ≥ 30 years), respiratory and cardiovascular hospital admissions (all ages), chronic bronchitis (all ages), and acute bronchitis episodes (≤18 years). Environmental exposure (PM2.5 and PM10) was modeled for each 3 km × 3 km grid cell.
In 2010, air pollution caused 15.9% of total mortality, or approximately 15,346 attributable cases per year. Particulate air pollution also accounted for 12,511 hospitalized cases of respiratory disease, 20,490 new cases of chronic bronchitis (adults), and 278,346 episodes of acute bronchitis (children). If the 2nd Seoul metropolitan air pollution management plan is implemented, 14,915 deaths associated with air pollution can be avoided per year in 2024, a 57.9% reduction in air-pollution-related deaths.
This assessment estimates the public-health impacts of current patterns of air pollution. Although individual health risks of air pollution are relatively small, the public-health consequences are remarkable. Particulate air pollution remains a key target for public-health action in the Seoul metropolitan area. Our results, which have also been used for economic valuation, should guide decisions on the assessment of environmental health-policy options.
Urbanization and industrialization are ongoing worldwide. Air pollution accompanied by urbanization and industrialization has already become a major risk factor threatening human health. Fine PM (PM2.5) air pollution and mortality were linked in the Six Cities Study, which reported an association between PM2.5 and all cause, cardiopulmonary, and lung cancer mortality [1,2].
Research conducted during the past 20 years in the US, EU, and Asian countries has confirmed that outdoor air pollution contributes to morbidity and mortality [3-5]. Some effects may be related to short-term exposure [6,7], while others have to be considered contributions of long-term exposure. Although the mechanisms have not been fully explained, epidemiological evidence suggests that outdoor air pollution is a contributing cause of morbidity and mortality [8].
The recent Global Burden of Disease report estimated that 89% of the world's population lived in areas with PM2.5 ambient levels above the World Health Organization (WHO) Air Quality Guideline of 10 μg/m3, and 32% lived in areas above the WHO Level 1 Interim Target of 35 μg/m3. East Asia was singled out, with an estimated 76% population exposure above the Level 1 Interim Target [9].
In 1998, the average PM10 concentration in Seoul was 78 μg/m3. According to the first metropolitan air pollution management policy, the average PM10 concentration in Seoul was 41 μg/m3 in 2012 [10]. Even with a dramatic decrease of PM10, the current level of PM10 still exerts a significant burden of disease on people in Korea.
The Korean government established the 2nd metropolitan air pollution management plan for 2015–2024. Under this plan, the air quality targets are a PM10 concentration of 30 μg/m3 and a PM2.5 concentration of 20 μg/m3.
The public health benefit of cleaner air can be assessed through risk assessment of air pollution. To date, such public health assessments have not been performed in Korea as part of air quality monitoring. In this study, we assess the public health benefit by calculating the mortality and morbidity cases attributed to air pollution in the Seoul metropolitan area.
Design and participants
The impact assessment relies on calculating the attributable number of cases [11,12]. Cases of morbidity or mortality attributable to air pollution were derived for the health outcomes listed in Table 1. Outcomes were ignored if quantitative data were not available, if costing was impossible (e.g., valuing decrement in pulmonary function), and to prevent overlapping health measures from causing multiple counting of the same costs (e.g., emergency visits were not considered because they were partly included in the hospital admissions). We selected only PM2.5 or PM10 in order to derive the attributable cases because PM2.5 and PM10 are useful indicators of several sources of outdoor air pollution such as fossil-fuel combustion. Three data components are required for estimation of the number of cases attributed to outdoor air pollution in a given population: the exposure-response function; the frequency of the health outcome (e.g., the incidence or prevalence) and the level of exposure. The association between outdoor air pollution and health-outcome frequency is usually described with an exposure-response function (or effect estimate) that expresses the relative increase in adverse health for a given increment in air pollution.
Table 1 Health outcome definition and source of data
Baseline population, mortality and morbidity data
Population data for the Seoul metropolitan area from Statistics Korea are classified according to address and registration in the following age groups: 0–4, 5–9, 10–14, 15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49, 50–54, 55–59, 60–64, and 65+ years. Citizens' residences in the Seoul metropolitan area were divided into 80 sections according to neighborhoods, forming basic administrative units such as Gu and Gun, used in city planning and management. The eighty sections were formed in order to identify site-specific exposure to air pollution and to identify the areas with the greatest risk. Population data for 2024 were obtained from the population projections of Statistics Korea. The health outcome definitions and sources of data are listed in Table 1. The total regional baseline mortality was retrieved from Statistics Korea (International Classification of Diseases, ICD-10, A00-Y98). ICD-9 codes used in previous studies were translated into ICD-10. The morbidity calculations were performed using hospitalization data from the Korean Health Insurance, which covers the entire population and is the sole purchaser of health care services in the country. Hospitalizations due to two main disease groups were included in the calculations: cardiovascular (I00-I99) and respiratory causes (J00-J99). Cardiac admissions (I20-I25) and cerebrovascular admissions (I60-I69) were also used for the exposure-response work on cardiovascular hospitalizations.
Exposure assessment
CMAQ (Community Multiscale Air Quality) [13] version 4.7.1 was used to simulate air quality over the Seoul Metropolitan Area (SMA) for a one-month period in each season of 2010 and 2024: January, April, July, and October. The nested domains were composed of 27-km, 9-km, and 3-km horizontally resolved grids. The coarse domain (174 x 128 grid cells) covers northeastern Asia: Korea, Japan, and most of China. The 9-km domain (67 x 82 grid cells) includes all of South Korea and most of North Korea, and the finest domain (58 x 61 grid cells) was set up to focus on the SMA. The SAPRC-99 (Statewide Air Pollution Research Center 99) [14] chemical mechanism for gas-phase chemistry and the AERO5 aerosol module were selected to represent model species over the region.
For the meteorological simulation, WRF (Weather Research and Forecasting) version 3.4.1 was utilized with NCEP (National Centers for Environmental Prediction) Final (FNL) Operational Global Analysis fields for initial and boundary conditions [13]. WRF was configured with 35 sigma layers up to 50 hPa, and the lowest layer thickness is around 30 m (sigma = 0.996). The WRF physics options are as follows: the WRF Single-Moment 6-class scheme, the Rapid Radiative Transfer Model longwave scheme, the Goddard shortwave scheme, the M-O surface layer scheme, the Grell cumulus scheme, and the YSU PBL scheme. The Meteorology-Chemistry Interface Program (MCIP) version 3.6 was then used to prepare CMAQ-ready meteorological inputs. One-way nesting was applied during the WRF and CMAQ simulations.
The CAPSS 2010 (Clean Air Protection Supporting System, 2010 base year) anthropogenic emissions inventory was processed using SMOKE (Sparse Matrix Operator Kernel Emissions) version 3.1, and biogenic emissions such as isoprene and terpenes were estimated using MEGAN (Model of Emissions of Gases and Aerosols from Nature) [15] version 2.04. The MICS-Asia emissions inventory was used as a supplement for foreign emissions.
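For illustration only, the following minimal sketch (not part of the original CMAQ/SMOKE modelling chain) shows one way the gridded annual-mean concentrations could be aggregated into a population-weighted exposure for each administrative section; the array values and the section lookup are hypothetical placeholders.

```python
import numpy as np

# Hypothetical inputs: annual-mean PM2.5 per 3 km x 3 km grid cell (ug/m3),
# population per grid cell, and the administrative section each cell belongs to.
pm25_grid = np.array([28.0, 31.5, 24.2, 26.8])
pop_grid = np.array([120_000, 85_000, 40_000, 60_000])
section_id = np.array([0, 0, 1, 1])   # index of the administrative section (0..79)

def population_weighted_exposure(pm, pop, sections):
    """Population-weighted annual mean concentration for each section."""
    return {int(s): float(np.average(pm[sections == s], weights=pop[sections == s]))
            for s in np.unique(sections)}

print(population_weighted_exposure(pm25_grid, pop_grid, section_id))
# -> {0: 29.45..., 1: 25.76}
```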
Exposure-response functions, calculation of mortality and morbidity
To describe the long-term effect of air pollution on mortality, the broadly employed US ACS study relative risk RR = 1.06 (95% CI 1.02–1.11) per 10 μg/m3 increase of PM2.5 was used as the exposure-response relationship [16].
RR = 1.013 (95% CI 1.001–1.025) per 10 μg/m3 increase of PM10 was used for calculations of respiratory hospitalizations due to air pollution [17-20]. For cardiovascular hospitalizations we used a weighted average RR = 1.013 (95% CI 1.007–1.019) per 10 μg/m3 increase of PM10 based on the effect on cardiac and cerebrovascular admissions from a COMEAP meta-analysis [21,22]. For chronic bronchitis incidence, we used RR = 1.098 (95% CI 1.009–1.194) per 10 μg/m3 increase of PM10 based on one study, which reported the effect of PM on the incidence of chronic bronchitis among a population with very low rates of smoking [23].
For bronchitis episodes, we used RR = 1.306 (95% CI 1.135–1.502) per 10 μg/m3 increase of PM10, validated in several studies [24-26]. A meta-analysis showed a statistically significant association between risk of lung cancer and PM10 (hazard ratio [HR] 1.22 [95% CI 1.03–1.45] per 10 μg/m3) [27]. For asthma attacks in children younger than 15 years old, we used RR = 1.044 (95% CI 1.027–1.062) per 10 μg/m3 increase of PM10 [28-30]. For asthma attacks in adults older than 15 years old, we used RR = 1.039 (95% CI 1.019–1.059) per 10 μg/m3 increase of PM10 [31-33]. The cases (mortality and morbidity) were calculated in absolute and relative numbers for all sections in the Seoul metropolitan area. The following equation was used:
$$ \Delta Y = Y_0\left(e^{\beta\cdot\Delta \mathrm{PM}_{2.5}} - 1\right)\cdot POP $$
where Y0 is the baseline rate, POP the number of exposed persons, β the exposure-response coefficient (derived from the relative risk per 10 μg/m3), and ΔPM2.5 the estimated excess exposure.
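As an illustration of this health impact function, the sketch below derives β from a published relative risk per 10 μg/m3 as β = ln(RR)/10 and evaluates ΔY; the baseline rate, excess exposure, and population are illustrative placeholders rather than the study's inputs, while the RR of 1.06 is the ACS mortality estimate cited above.

```python
import math

def attributable_cases(y0, rr_per_10, delta_pm, population):
    """Attributable cases: dY = Y0 * (exp(beta * dPM) - 1) * POP,
    with beta = ln(RR per 10 ug/m3) / 10."""
    beta = math.log(rr_per_10) / 10.0
    return y0 * (math.exp(beta * delta_pm) - 1.0) * population

# Illustrative numbers only: baseline mortality rate of 0.006 deaths/person/year,
# RR = 1.06 per 10 ug/m3 PM2.5, an excess exposure of 8 ug/m3 above the reference
# level, and a population of 1,000,000.
print(round(attributable_cases(0.006, 1.06, 8.0, 1_000_000)))   # ~286
```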
For each outcome we selected studies from the peer-reviewed literature in order to derive the exposure-response function and the 95% CI. For inclusion, an adequate study design and published PM10 levels were required. Cross-sectional or cohort studies relying on two or three levels of exposure were omitted, as were ecological studies, given their inherent limitations. The health-outcome frequencies (mortality, prevalence, incidence, or person-days) may differ across countries; thus, national mortality and morbidity data were used (Table 2). For some morbidity data, epidemiological studies were the only source (bronchitis incidence from the Adventist Health and Smog Study [34], which was also used by Ostro and colleagues [35]). Annual mean outdoor PM10 had to be determined on a continuous scale. Although there is no evidence for any threshold, there are also no studies available where participants were exposed to PM10 below 20 μg/m3 (annual mean). This reference level also includes the natural background PM10. Thus, the health impact of air-pollution exposure below 20 μg/m3 was ignored. To derive the population exposure distribution, mean annual concentrations of PM10 were modelled for each area at a spatial resolution of 3 km × 3 km.
Table 2 Effect estimate of health outcome and health outcome frequencies
Using the exposure-response functions, expressed as relative risk (RR) per 10 μg/m3, and the health frequency per 1,000,000 inhabitants, for each health outcome we calculated the attributable number of cases (D10) for an increase of 10 μg/m3 PM10 as D10 = (RR − 1) × P0, where P0 is the health frequency, given a baseline exposure E0, and RR is the mean exposure-response function across the studies used (Table 1). The exposure-response functions are usually log-linear. For small risks and across limited ranges of exposure, log-linear and linear functions would provide similar results. However, if one applies the method to populations with very large exposure ranges, the impact may be seriously overestimated on the log-linear scale. Thus, we derived the attributable number of cases on an additive scale. The study protocol was approved by the Institutional Review Boards of the Inha University College of Medicine.
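The following sketch works through the D10 step and the additive (linear) scaling above the 20 μg/m3 reference level described above; the baseline frequency of 500 cases per 1,000,000 is a hypothetical placeholder, while the RR of 1.098 for chronic bronchitis and the 41 μg/m3 annual mean PM10 are taken from the text for illustration only.

```python
def d10(rr_per_10, p0):
    """Cases attributable to a 10 ug/m3 increase: D10 = (RR - 1) * P0."""
    return (rr_per_10 - 1.0) * p0

def attributable_additive(rr_per_10, p0, annual_mean, reference=20.0):
    """Additive (linear) scale: D10 scaled by the excess exposure above the reference."""
    excess = max(annual_mean - reference, 0.0)
    return d10(rr_per_10, p0) * excess / 10.0

# RR = 1.098 per 10 ug/m3 PM10 (chronic bronchitis, from the text); the baseline
# incidence of 500 new cases per 1,000,000 adults is a placeholder; 41 ug/m3 is
# the 2012 Seoul annual mean PM10 quoted in the Background.
print(attributable_additive(1.098, 500, 41.0))   # ~102.9 extra cases per 1,000,000
```

Accumulating D10 linearly over the excess exposure in this way is what avoids the compounding of the log-linear form over very large exposure ranges, as noted above.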
Table 2 summarizes the effect estimates, the specific health-outcome frequencies at E0, and the respective number of cases attributable to a 10 μg/m3 increase in PM10 (D10) for each health outcome. A summary of health outcomes attributed to particulate air pollution in the Seoul metropolitan area is shown in Table 3. The number of cases attributable to air pollution is given for three scenarios: the year 2010; the year 2024 without air pollution regulation; and the year 2024 with the air quality goal attained. PM2.5 concentrations without reducing air emissions in 2024 are shown in Figure 1. PM2.5 concentrations after reducing air emissions in 2024 are shown in Figure 2. We compared the number of cases attributed to air pollution for the three scenarios. A flow chart of this study is shown in Figure 3.
Table 3 Health outcome attributed to particulate air pollution in Seoul metropolitan area
PM2.5 concentration without reducing air emissions in 2024.
PM2.5 concentration after reducing air emissions in 2024.
Flow chart of this study.
Air pollution caused 15.9% of total mortality, or approximately 15,346 attributable cases per year, in 2010. Particulate air pollution also accounted for 12,511 hospitalized cases of respiratory disease, 20,490 new cases of chronic bronchitis (adults), and 278,346 episodes of acute bronchitis (children). After implementation of the 2nd Seoul metropolitan air pollution management plan, air pollution would cause 6.7% of total mortality, or approximately 10,866 attributable cases per year, in 2024.
The public-health impact depends not only on the relative risk but also on the exposure distribution in the population. Our assessment assigned approximately 15.9% of annual deaths to outdoor air pollution. This is relatively high compared with the 6% estimated in the European assessment [12]. Koreans have high incidences of cancer and chronic respiratory diseases, such as asthma and COPD [36]. According to OECD Health Data [37], in 2009 the hospital admission rate for avoidable asthma in the population aged 15 and over was 101.5/100,000 persons, while the OECD average was 51.8/100,000 persons. In 2009, the hospital admission rate for COPD in the population aged 15 and over was 222/100,000 persons, while the OECD average was 198/100,000 persons. The incidence rate for all cancers combined in Korea increased by 3.3% annually from 1999 to 2009 [38]. Considering current air pollution levels and our results, air pollution certainly makes an important contribution to the increases in cancer and chronic respiratory diseases in the Seoul metropolitan area. Our study has several strengths. First, it attempted to reduce the uncertainty inherent in this kind of public health risk assessment. To account for that uncertainty, an "at least" approach was applied at each step, the uncertainties in the effect estimates were quantified, and the results were given as a range (the 95% CI of the exposure-response function).
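As a sketch of how such ranges can be produced, the central, lower, and upper relative risks (the 95% CI of the exposure-response function) can each be propagated through the same impact calculation; the helper below repeats the illustrative attributable-cases function from the Methods sketch for self-containment, and all inputs are placeholders rather than the study's data.

```python
import math

def attributable_cases(y0, rr_per_10, delta_pm, population):
    # beta derived from a relative risk per 10 ug/m3
    beta = math.log(rr_per_10) / 10.0
    return y0 * (math.exp(beta * delta_pm) - 1.0) * population

def attributable_range(y0, rr_ci, delta_pm, population):
    """Propagate the central, lower and upper RR of the 95% CI through the calculation."""
    return tuple(attributable_cases(y0, rr, delta_pm, population) for rr in rr_ci)

# Placeholder inputs with the ACS mortality RR = 1.06 (95% CI 1.02-1.11) per 10 ug/m3 PM2.5.
central, low, high = attributable_range(0.006, (1.06, 1.02, 1.11), 8.0, 1_000_000)
print(f"{central:.0f} (range {low:.0f}-{high:.0f}) attributable deaths per 1,000,000")
```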
To assess the effects of air pollution—a complex mixture of pollutants—epidemiological studies use several indicators of exposure (e.g., NO2, CO, PM10, total suspended particles, SO2). However, because these pollutants are correlated, epidemiological studies cannot exactly allocate observed effects to each pollutant. A pollutant-by-pollutant assessment would grossly overestimate the impact. Therefore, we selected only one pollutant to derive the attributed cases.
The short-term effects of high pollution levels on mortality were not calculated separately because these are already included in the exposure-response function for long-term mortality. We consider it inappropriate to use short-term studies for the impact assessment of annual mortality [39]. Short-term studies capture only part of air-pollution-related cases, namely those where exposure and event (death) are closely connected in time. Our calculation based on cohort studies captures both the short-term and the long-term effects. The number of deaths attributed to air pollution would be about 4–5 times smaller if the short-term effect estimates had been applied.
Second, we based our assumptions on the consistency of epidemiological results observed across many countries. We derived exposure-response functions from selected, well-designed studies used in previous public health assessments. For mortality, we had to rely on two US studies, which were confirmed by a third US study [40], the French PAARC study [41], and a Chinese study [5]. The exposure-response functions used in our study can be considered consistent because they have been confirmed across many countries.
Third, our study used nationwide frequency data. Health-outcome frequencies may strongly influence the impact assessment. Mortality from national sources may be considered accurate. However, frequency measures of morbidity and data on health-care systems have to be considered estimates with some inherent uncertainties. We selected national health frequency data in order to reduce the impact of these limitations. National health insurance covers almost all people in Korea, so that health-outcome frequencies in our study have fewer inherent uncertainties.
However, our study has some limitations. First, our assessment relies on a limited number of studies for deriving exposure-response functions. For chronic bronchitis, our assessment relies on a single study. The advantage of that study is that it reported the effect of PM on the incidence of chronic bronchitis in a population with very low rates of smoking. This measure was particularly useful for the economic valuation and had been used in other studies [42]. Second, our study restricted the effect of air pollution to PM10 and PM2.5 and did not take other pollutants, such as NOx, SO2 and O3, into consideration. This may underestimate independent effects of air pollution that are not explained by, or correlated with, the PM fractions. Temperature is increasing due to climate change. Ozone, whose concentrations are now higher, is expected to have more hazardous effects on mortality in Korea and other countries than before [43]. Our calculation of the effect of air pollution may therefore be an underestimate, because we considered only the effect of PM.
As we did not quantify the attributed number of deaths below age 30 years, we might have underestimated the real number of deaths attributed to air pollution. We ignored potential effects on newborn babies or infants [44]. Although infant mortality is low in Korea, and thus the number of attributed cases is small, the impact on years of life lost, and therefore the economic valuation, could be considerable.
Third, we did not consider uncertainty in the exposure assessment. If we had assumed another exposure reference value, the impact estimates would be higher. Our study assumed that the health risk is lowest at PM10 exposure levels below 20 μg/m3. Korea has never experienced PM10 levels below 20 μg/m3. In other regions, such as the EU and US, an exposure level of 15 μg/m3 is used as the reference value in public-health impact assessments [45]. Our estimates may therefore be underestimates because of this assumption about the reference value.
Apart from the variability of the epidemiological exposure-response estimates (95% CI), we did not quantify other sources of uncertainty, such as errors in the population exposure distribution or in the estimation of health outcome frequencies. Simulations of multiple probability distributions may, however, erroneously suggest a level of precision in assessing uncertainty that cannot be achieved. This kind of public health assessment contains considerable uncertainty. Our study took an "at least" approach in order to account for the inherent uncertainty in the impact assessment; we therefore argue that the risk attributable to air pollution in Korea at least exceeds our estimate.
Recent publications on air pollution in Korea have focused on its short-term effects [46,47]. Some recent birth cohort studies have reported adverse pregnancy outcomes associated with air pollution [48,49]. No assessment like the present study has been reported for Korea. We assessed the public health benefit by calculating the mortality and morbidity cases attributed to air pollution in the Seoul metropolitan area.
Our study showed that air pollution has a significant impact on the health of the Korean people. In particular, elderly people and children are vulnerable to air pollution. In the valuation of air-pollution-related deaths and hospitalizations, assumptions about the age structure of those affected may be influential [47]. The affected population will increase because of the rapidly ageing population structure. Korea is a rapidly aging society, and the number of people older than 65 years is increasing quickly; in 2024 it will reach 12,635,000 persons, or 24.4% of the total population. Traffic is an important contributor to urban air pollution in Korea and other Asian countries, but traffic creates costs that are not covered by the polluters. The related external costs have been quantified in Organization for Economic Cooperation and Development (OECD) countries [50-53]. The traffic share of total PM10 exposure depended on the mean concentration, ranging from 28% at an annual mean PM10 of 10–15 μg/m3 and increasing up to 58% in some areas. In Korea, the traffic share of total PM2.5 exposure will be higher due to rapid urbanization. Traffic will place a growing burden on urban air quality because of increasing car numbers in the near future.
Deaths attributed to air pollution will increase if proper countermeasures are not taken. If PM2.5 can be kept below 20 μg/m3 in the Seoul metropolitan area in 2024, deaths attributable to air pollution will decrease from 25,781 to 10,866. Korean people have a high burden of cancer and chronic respiratory diseases, such as asthma and COPD, attributed to air pollution. Clean air strategies, such as the 2nd air quality management plan for the Seoul metropolitan area, will decrease the burden of disease in the Korean people.
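(A quick arithmetic check connects these figures to the abstract: 25,781 − 10,866 = 14,915 avoidable deaths per year, and 14,915/25,781 ≈ 57.9%, the reduction quoted there.)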
Even after accounting for the overall uncertainty of this estimation, our study emphasizes the need to consider air pollution a pivotal cause of impaired health. In a century moving toward a sustainable society, closer collaboration between public health and environmental policies will strengthen our preventive capacity. Further development of standardized impact assessment methods is needed in order to assess the benefits of clean air strategies more stringently.
Dockery DW, Pope CA, Xu X, Spengler JD, Ware JH, Fay ME, et al. An association between air pollution and mortality in six U.S. cities. N Engl J Med. 1993;329:1753–9.
Pope A, Thun M, Namboodiri M, Dockery DW, Evans JS, Speizer FE, et al. Particulate air pollution as a predictor of mortality in a prospective study of US adults. Am J Respir Crit Care Med. 1995;151:669–74.
Wilson R, Spengler J. Particles in our air: concentrations and health effects. Boston: Harvard University Press; 1996.
Holgate S, Samet J, Koren H, Maynard R. Air pollution and health. San Diego/London: Academic Press; 1999.
Zhang LW, Chen X, Xue XD, Sun M, Han B, Li CP, et al. Long-term exposure to high particulate matter pollution and cardiovascular mortality: a 12-year cohort study in four cities in northern China. Environ Int. 2014;62:41–7.
Katsouyanni K, Touloumi G, Spix C. Short-term effects of ambient sulphur dioxide and particulate matter on mortality in 12 European cities: results from times series data from the APHEA project. BMJ. 1997;314:1658–63.
Hong YC, Leem JH, Ha EH, Christiani DC. PM(10) exposure, gaseous pollutants, and daily mortality in Inchon, South Korea. Environ Health Perspect. 1999;107(11):873–8.
Bates D. Health indices of the adverse effects of air pollution: the question of coherence. Environ Res. 1992;59:336–49.
Brauer M, Amann M, Burnett RT, Cohen A, Dentener F, Ezzati M, et al. Exposure assessment for estimation of the global burden of disease attributable to outdoor air pollution. Environ Sci Technol. 2012;46(2):652–60.
Ministry of Environment. Annual Report of Ambient Air Quality in Korea. 2012.
Ostro B, Sanchez J, Aranda C, Eskeland G. Air pollution and mortality: results from a study of Santiago, Chile. J Exp Anal Environ Epidemiol. 1996;6:97–114.
Künzli N, Kaiser R, Medina S, Studnicka M, Chanel O, Filliger P, et al. Public-health impact of outdoor and traffic-related air pollution: a European assessment. Lancet. 2000;356(9232):795–801.
Byun DW, Ching JKS. Science Algorithms of the EPA Models-3 Community Multi-scale Air Quality (CMAQ) Modeling System. EPA Report EPA/600/R-99/030. Durham: US EPA, NERL; 1999.
Carter WPL. Documentation of the SAPRC-99 Chemical Mechanism for VOC Reactivity Assessment, Report to California Air Resources Board, Contracts 92–329 and 95–308. 1999.
Guenther A, Karl T, Harley P, Wiedinmyer C, Palmer PI, Geron C. Estimates of global terrestrial isoprene emissions using MEGAN (Model of Emissions of Gases and Aerosols from Nature). Atmos Chem Phys. 2006;6:3181–210.
Pope III CA, Burnett RT, Thun MJ, Calle EE, Krewski D, Ito K, et al. Lung cancer, cardiopulmonary mortality, and long-term exposure to fine particulate air pollution. JAMA. 2002;287:1132–41.
Pope III CA. Respiratory hospital admissions associated with PM10 pollution in Utah, Salt Lake, and Cache Valleys. Arch Environ Health. 1991;46(2):90–7.
Spix C, Anderson HR, Schwartz J, Vigotti MA, LeTertre A, Vonk JM, et al. Short-term effects of air pollution on hospital admissions of respiratory diseases in Europe: a quantitative summary of APHEA study results. Air Pollution and Health: a European Approach. Arch Environ Health. 1998;53(1):54–64.
Wordley J, Walters S, Ayres J. Short term variations in hospital admissions and mortality and particulate air pollution. Occup Environ Health. 1997;54:108–16.
Prescott GJ, Cohen GR, Elton RA, Fowkes FG, Agium RM. Urban air pollution and cardiopulmonary ill health: a 14.5 year time series study. Occup Environ Med. 1998;55:697–704.
Poloniecki J, Atkinson R, Ponce de Leon A, Anderson H. Daily time series for cardiovascular hospital admissions and previous day's air pollution in London, UK. Occup Environ Health. 1997;54:535–40.
Medina S, Le Terte A, Dusseux E. Evaluation des Risques de la Pollution Urbaine sur lar Sante (ERPURS). Analyse des liens a court terme entre pollution atmosherique et sante: resultats 1991–95. Paris: Conseil Regional d'lle de France; 1997.
Abbey D, Petersen F, Mills P, Beeson W. Long-term ambient concentrations of total suspended particulates, ozone, and sulfur dioxide and respiratory symptoms in a nonsmoking population. Arch Environ Health. 1993;48:33–46.
Dockery D, Speizer F, Stram D, Ware J, Spengler J, Ferris BJ. Effects of inhalable particles on respiratory health of children. Am Rev Respir Dis. 1989;139:587–94.
Dockery D, Cunningham J, Damokosh A, Neas LM, Spengler JD, Koutrakis P, et al. Health Effects of acid aerosols on north American Children: respiratory symptoms. Env Health Perspect. 1996;104:500–5.
Braun-Fahrlander C, Vuille J, Sennhauser F, Neu U, Künzle T, Grize L, et al. Respiratory health and long-term exposure to air pollutants in Swiss Schoolchildren. Am J Respir Crit Care Med. 1997;155:1042–9.
Raaschou-Nielsen O, Andersen ZJ, Beelen R, Samoli E, Stafoggia M, Weinmayr G, et al. Air pollution and lung cancer incidence in 17 European cohorts: prospective analyses from the European Study of Cohorts for Air Pollution Effects (ESCAPE). Lancet Oncol. 2013;14(9):813–22.
Roemer W, Hoek G, Brunekreef B. Effect of ambient winter air pollution on respiratory health of children with chronic respiratory symptoms. Am Rev Respir Dis. 1993;147:118–24.
Segala C, Fauroux B, Just J. Short-term effect of winter air pollution on respiratory health of asthmatic children in Paris. Eur Respir J. 1998;11:677–85.
Gielen M, van der Zee S, van Wijnen J. Acute effects of summer air pollution on respiratory health of asthmatic children. Am J Respir Crit Care Med. 1997;155:2105–8.
Dusseldorp A, Kruize H, Brunekreef B. Associations of PM10 and airborne iron with respiratory health of adults living near a steel factory. Am J Respir Crit Care Med. 1995;152:1032–9.
Hiltermann T, Stolk J, van der Zee S. Asthma severity and susceptibility to air pollution. Eur Respir J. 1998;11:686–93.
Neukirch F, Segala C, Le Moullec Y. Short-term effects of low-level winter pollution on respiratory health of asthmatic adults. Arch Environ Health. 1998;53:320–8.
Abbey D, Hwang B, Burchette R. Estimated long-term ambient concentrations of PM10 and development of respiratory symptoms in a nonsmoking population. Arch Environ Health. 1995;50:139–52.
Ostro B, Chesnut L. Assessing the health benefits of reducing particulate matter air pollution in the United States. Environ Res. 1998;76:94–106.
Leem JH, Jang YK. Increase of diesel car raises health risk in spite of recent development in engine technology, Environmental Health & Toxicology 2014;29. http://dx.doi.org/10.5620/eht.e2014009.
OECD. OECD health data 2011. [cited 2014 Feb 27]. Available from: http://www.oecd.org/els/health-systems/49105858.pdf
Jung KW, Park S, Kong HJ, Won YJ, Lee JY, Seo HG, et al. Cancer Statistics in Korea: Incidence, Mortality, Survival, and Prevalence in 2009. Cancer Res Treat. 2012;44(1):11–24.
McMichael A, Anderson H, Brunekreef B, Cohen A. Inappropriate use of daily mortality analyses to estimate longer-term mortality effects of air pollution. Int J Epidemiol. 1998;27:450–3.
Abbey D, Nishino N, McDonnel W, Burchette RJ, Knutsen SF, Lawrence Beeson W, et al. Long-term inhalable particles and other air pollutants related to mortality in nonsmokers. Am J Respir Crit Care Med. 1999;159:373–82.
Baldi I, Roussillon C, Filleul L. Effect of air pollution on longterm mortality: description of mortality rates in relation to pollutants levels in the French PAARC Study. Eur Respir J. 1999;24 suppl 30:S392.
Zemp E, Elsasser S, Schindler C, Künzli N, Perruchoud AP, Domenighetti G, et al. Long-term ambient air pollution and chronic respiratory symptoms (SAPALDIA). Am J Respir Crit Care Med. 1999;159:1257–66.
Kim SY, Lee JT, Hong YC, Ahn KJ, Kim H. Determining the threshold effect of ozone on daily mortality: an analysis of ozone and mortality in Seoul, Korea, 1995–1999. Environ Res. 2004;94(2):113–9.
Brunekreef B. Air pollution kills babies. Epidemiology. 1999;10:661–2.
Natural Resources Defense Council (NRDC). Breath taking: premature mortality due to particulate air pollution in 239 American cities. San Francisco: NRDC; 1996.
Son JY, Lee JT, Park YH, Bell ML. Short-term effects of air pollution on hospital admissions in Korea. Epidemiology. 2013;24(4):545–54.
Park HY, Bae S, Hong YC. PM10 exposure and non-accidental mortality in Asian populations: a meta-analysis of time-series and case-crossover studies. J Prev Med Public Health. 2013;46(1):10–8.
Kim YJ, Lee BE, Park HS, Kang JG, Kim JO, Ha EH. Risk factors for preterm birth in Korea: a multicenter prospective study. Gynecol Obstet Invest. 2005;60(4):206–12.
Kim E, Park H, Hong YC, Ha M, Kim Y, Kim BN, et al. Prenatal exposure to PM10 and NO2 and children's neurodevelopment from birth to 24 months of age: mothers and Children's Environmental Health (MOCEH) study. Sci Total Environ. 2014;481:439–45.
Dora C. A different route to health: implications of transport policies. BMJ. 1999;318:1686–9.
Sommer H, Chanel O, Vergnaud JC, Herry M, Sedlak N, Seethaler R. Monetary valuation of road traffic related air pollution: health costs due to road traffic-related air pollution: an impact assessment project of Austria, France and Switzerland third WHO Ministerial Conference of Environment & Health. London: WHO; 1999.
Kunzli N, Kaiser R, Medina S, Studnicka M, Oberfeld G, Horak Jr F. Health costs due to road traffic-related air pollution: an impact assessment project of Austria, France and Switzerland. (Air pollution attributable cases. Technical report on epidemiology). Switzerland: Federal Department for Environment, Energy and Communications Bureau for Transport Studies; 1999.
Filliger P, Puybonnieux-Texier V, Schneider J. PM10 population exposure technical report on air pollution: health costs due to road traffic-related air pollution: an impact assessment project of Austria, France and Switzerland. London: WHO; 1999.
This study is supported by the Ministry of Environment, Republic of Korea.
Department of Occupational and Environmental Medicine, Inha University Hospital, 27 Inhang road Jung-gu, Incheon, 400-711, South Korea
Jong Han Leem & Hwan Cheol Kim
Division of Environmental Engineering, Ajou University Woncheon-dong, Yeongtong-gu, Suwon, 443-749, South Korea
Soon Tae Kim
Correspondence to Jong Han Leem.
The authors have no conflicts of interest with the material presented in this paper.
LJ formulated the research questions, analyzed the data, and prepared the manuscript. KS performed the exposure assessment. KH contributed to data collection and critically revised the manuscript. All authors read and approved the final manuscript.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Leem, J.H., Kim, S.T. & Kim, H.C. Public-health impact of outdoor air pollution for 2nd air pollution management policy in Seoul metropolitan area, Korea. Ann of Occup and Environ Med 27, 7 (2015). https://doi.org/10.1186/s40557-015-0058-z
Accepted: 04 February 2015
Public health assessment | CommonCrawl |
March 2006, Volume 16, Issue 1
Pointwise asymptotic convergence of solutions for a phase separation model
Pavel Krejčí and Songmu Zheng
2006, 16(1): 1-18. doi: 10.3934/dcds.2006.16.1
A new technique, combining the global energy and entropy balance equations with the local stability theory for dynamical systems, is used for proving that every solution to a non-smooth temperature-driven phase separation model with conserved energy converges pointwise in space to an equilibrium as time tends to infinity. Three main features are observed: the limit temperature is uniform in space, there exists a partition of the physical body into at most three constant limit phases, and the phase separation process has a hysteresis-like character.
Pavel Krejčí, Songmu Zheng. Pointwise asymptotic convergence of solutions for a phase separation model. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 1-18. doi: 10.3934/dcds.2006.16.1.
Decay of correlations for non-Hölder observables
Vincent Lynch
2006, 16(1): 19-46. doi: 10.3934/dcds.2006.16.19
We consider the general question of estimating decay of correlations for non-uniformly expanding maps, for classes of observables which are much larger than the usual class of Hölder continuous functions. Our results give new estimates for many non-uniformly expanding systems, including Manneville-Pomeau maps, many one-dimensional systems with critical points, and Viana maps. In many situations, we also obtain a Central Limit Theorem for a much larger class of observables than usual.
Our main tool is an extension of the coupling method introduced by L.-S. Young for estimating rates of mixing on certain non-uniformly expanding tower maps.
Vincent Lynch. Decay of correlations for non-Hölder observables. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 19-46. doi: 10.3934/dcds.2006.16.19.
Stability of travelling waves with algebraic decay for $n$-degree Fisher-type equations
Yaping Wu, Xiuxia Xing and Qixiao Ye
This paper is concerned with the asymptotic stability of travelling wave front solutions with algebraic decay for $n$-degree Fisher-type equations. By detailed spectral analysis, each travelling wave front solution with non-critical speed is proved to be locally exponentially stable to perturbations in some exponentially weighted $L^{\infty}$ spaces. Further, by the Evans function method and detailed semigroup estimates, the travelling wave fronts with non-critical speed are proved to be locally algebraically stable to perturbations in some polynomially weighted $L^{\infty}$ spaces. It is remarked that, due to the slow algebraic decay rate of the wave at $+\infty,$ the Evans function constructed in this paper is an extension of the definitions in [1, 3, 7, 11, 21] to some extent, and the Evans function can be extended analytically in a neighborhood of the origin.
Yaping Wu, Xiuxia Xing, Qixiao Ye. Stability of travelling waves with algebraic decay for $n$-degree Fisher-type equations. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 47-66. doi: 10.3934/dcds.2006.16.47.
Regularity of the Navier-Stokes equation in a thin periodic domain with large data
Igor Kukavica and Mohammed Ziane
Let $\Omega=[0,L_1]\times[0,L_2]\times[0,\epsilon]$ where $L_1,L_2>0$ and $\epsilon\in(0,1)$. We consider the Navier-Stokes equations with periodic boundary conditions and prove that if
$ \| \nabla u_0\|_{L^2(\Omega)} \le \frac{1}{C(L_1,L_2)\epsilon^{1/6}} $
then there exists a unique global smooth solution with the initial datum $u_0$.
Igor Kukavica, Mohammed Ziane. Regularity of the Navier-Stokes equation in a thin periodic domain with large data. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 67-86. doi: 10.3934/dcds.2006.16.67.
Small-data scattering for nonlinear waves with potential and initial data of critical decay
Paschalis Karageorgis
2006, 16(1): 87-106. doi: 10.3934/dcds.2006.16.87
We study the scattering problem for the nonlinear wave equation with potential. In the absence of the potential, one has sharp global existence results for the Cauchy problem with small initial data; those require the data to decay at a rate $k\geq k_c$, where $k_c$ is a critical decay rate that depends on the order of the nonlinearity. However, scattering results have appeared only for the supercritical case $k>k_c$. In this paper, we extend the latter results to the critical case and we also allow the presence of a short-range potential.
Paschalis Karageorgis. Small-data scattering for nonlinear waves with potential and initial data of critical decay. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 87-106. doi: 10.3934/dcds.2006.16.87.
Relationship of the Morse index and the $L^\infty$ bound of solutions for a strongly indefinite differential superlinear system
Jiaquan Liu, Yuxia Guo and Pingan Zeng
We consider the second order strongly indefinite differential system with superlinearities. By using the approximation method of finite element, we show that bounds on solutions of the restriction functional onto finite dimensional subspace are equivalent to bounds on their relative Morse indices. The obtained results can be used to establish a Morse theory for strongly indefinite functionals with superlinearities.
Jiaquan Liu, Yuxia Guo, Pingan Zeng. Relationship of the Morse index and the $L^\infty$ bound of solutions for a strongly indefinite differential superlinear system. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 107-119. doi: 10.3934/dcds.2006.16.107.
The global attractor of the damped, forced generalized Korteweg de Vries-Benjamin-Ono equation in $L^2$
Boling Guo and Zhaohui Huo
The existence of the global attractor of the damped, forced generalized KdV-Benjamin-Ono equation in $L^2( \mathbb{R})$ is proved for forces in $L^2( \mathbb{R})$. Moreover, the global attractor in $L^2( \mathbb{R})$ is actually a compact set in $H^3( \mathbb{R})$.
Boling Guo, Zhaohui Huo. The global attractor of the damped, forced generalized Korteweg de Vries-Benjamin-Ono equation in $L^2$. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 121-136. doi: 10.3934/dcds.2006.16.121.
Convergence to V-shaped fronts in curvature flows for spatially non-decaying initial perturbations
Mitsunori Nara and Masaharu Taniguchi
This paper is concerned with the long time behavior for evolution of a curve governed by a curvature flow with constant driving force in the two-dimensional space. This problem has two types of traveling waves: traveling lines and V-shaped fronts, except for stationary circles. Studying the Cauchy problem, we deal with moving curves represented by entire graphs on the $x$-axis. In this paper, we consider the uniform convergence of curves to the V-shaped fronts. Convergence results for a class of spatially non-decaying initial perturbations are established. Our results hold true with no assumptions on the smallness of given perturbations.
Mitsunori Nara, Masaharu Taniguchi. Convergence to V-shaped fronts in curvature flows for spatially non-decaying initial perturbations. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 137-156. doi: 10.3934/dcds.2006.16.137.
The cyclicity of period annuli of some classes of reversible quadratic systems
G. Chen, C. Li, C. Liu and Jaume Llibre
The cyclicity of period annuli of some classes of reversible and non-Hamiltonian quadratic systems under quadratic perturbations are studied. The argument principle method and the centroid curve method are combined to prove that the related Abelian integral has at most two zeros.
G. Chen, C. Li, C. Liu, Jaume Llibre. The cyclicity of period annuli of some classes of reversible quadratic systems. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 157-177. doi: 10.3934/dcds.2006.16.157.
On the density of hyperbolicity and homoclinic bifurcations for 3D-diffeomorphisms in attracting regions
Enrique R. Pujals
In the present paper it is proved that, given a maximal invariant attracting homoclinic class for a smooth three dimensional Kupka-Smale diffeomorphism, either the diffeomorphism is $C^1$-approximated by another one exhibiting a homoclinic tangency or a heterodimensional cycle, or it follows that the homoclinic class is conjugate to a hyperbolic set (in this case we say that the homoclinic class is "topologically hyperbolic").
We also characterize, in any dimension, the dynamics of a topologically hyperbolic homoclinic class and we describe the continuation of this homoclinic class for a perturbation of the initial system.
Moreover, we prove that, under some topological conditions, the homoclinic class is contained in a two dimensional manifold and it is hyperbolic.
Enrique R. Pujals. On the density of hyperbolicity and homoclinic bifurcations for 3D-diffeomorphisms in attracting regions. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 179-226. doi: 10.3934/dcds.2006.16.179.
The existence of integrable invariant manifolds of Hamiltonian partial differential equations
Rongmei Cao and Jiangong You
In this note, it is shown that some Hamiltonian partial differential equations such as semi-linear Schrödinger equations, semi-linear wave equations and semi-linear beam equations are partially integrable, i.e., they possess integrable invariant manifolds foliated by invariant tori which carry periodic or quasi-periodic solutions. The linear stability of the obtained invariant manifolds is also concluded. The proofs are based on a special invariant property of the considered equations and a symplectic change of variables first observed in [26].
Rongmei Cao, Jiangong You. The existence of integrable invariant manifolds of Hamiltonian partial differential equations. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 227-234. doi: 10.3934/dcds.2006.16.227.
Traveling pulses for the Klein-Gordon equation on a lattice or continuum with long-range interaction
Peter Bates and Chunlei Zhang
We study traveling pulses on a lattice and in a continuum where all pairs of particles interact, contributing to the potential energy. The interaction may be positive or negative, depending on the particular pair, but overall is positive in a certain sense. For such an interaction kernel $J$ with unit integral (or sum), the operator $\frac{1}{\varepsilon^2}[J*u-u]$, with $*$ continuous or discrete convolution, shares some common features with the spatial second derivative operator, especially when $\varepsilon$ is small. Therefore, the equation $u_{tt} - \frac{1}{\varepsilon^2}[J*u-u] + f(u)=0$ may be compared with the nonlinear Klein-Gordon equation $u_{tt} - u_{xx} + f(u)=0$. If $f$ is such that the Klein-Gordon equation has supersonic traveling pulses, we show that the same is true for the nonlocal version, both the continuum and lattice cases.
Peter Bates, Chunlei Zhang. Traveling pulses for the Klein-Gordon equation on a lattice or continuum with long-range interaction. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 235-252. doi: 10.3934/dcds.2006.16.235.
Boltzmann equation with external force and Vlasov-Poisson-Boltzmann system in infinite vacuum
Renjun Duan, Tong Yang and Changjiang Zhu
In this paper, we study the Cauchy problem for the Boltzmann equation with an external force and the Vlasov-Poisson-Boltzmann system in infinite vacuum. The global existence of solutions is first proved for the Boltzmann equation with an external force which is integrable with respect to time in some sense under the smallness assumption on initial data in weighted norms. For the Vlasov-Poisson-Boltzmann system, the smallness assumption on initial data leads to the decay of the potential field which in turn gives the global existence of solutions by the result on the case with external forces and an iteration argument. The results obtained here generalize those previous works on these topics and they hold for a class of general cross sections including the hard-sphere model.
Renjun Duan, Tong Yang, Changjiang Zhu. Boltzmann equation with external force and Vlasov-Poisson-Boltzmann system in infinite vacuum. Discrete & Continuous Dynamical Systems - A, 2006, 16(1): 253-277. doi: 10.3934/dcds.2006.16.253. | CommonCrawl |
March 2021, 13(1): 25-53. doi: 10.3934/jgm.2021001
Contact Hamiltonian and Lagrangian systems with nonholonomic constraints
Manuel de León 1,3, , Víctor M. Jiménez 2, and Manuel Lainz 1,
Instituto de Ciencias Matemáticas (CSIC-UAM-UC3M-UCM), C/ Nicolás Cabrera, 13-15, Campus Cantoblanco, UAM, 28049 Madrid, Spain
Universidad de Alcalá (UAH), Campus Universitario, Ctra. Madrid-Barcelona, Km. 33,600, 28805 Alcalá de Henares, Madrid, Spain
Real Academia de Ciencias Exactas, Fisicas y Naturales C/de Valverde 22, 28004 Madrid, Spain
Dedicated to Professor Tony Bloch on the occasion of his 65th birthday
Received November 2019 Revised October 2020 Published March 2021 Early access December 2020
In this article we develop a theory of contact systems with nonholonomic constraints. We obtain the dynamics from Herglotz's variational principle, by restricting the variations so that they satisfy the nonholonomic constraints. We prove that the nonholonomic dynamics can be obtained as a projection of the unconstrained Hamiltonian vector field. Finally, we construct the nonholonomic bracket, which is an almost Jacobi bracket on the space of observables and provides the nonholonomic dynamics.
Keywords: Nonholonomic constraints, contact Hamiltonian systems, Herglotz principle, dissipative systems, nonholonomic mechanics, Jacobi nonholonomic bracket.
Mathematics Subject Classification: 37J60, 70F25, 53D10, 70H33.
Citation: Manuel de León, Víctor M. Jiménez, Manuel Lainz. Contact Hamiltonian and Lagrangian systems with nonholonomic constraints. Journal of Geometric Mechanics, 2021, 13 (1) : 25-53. doi: 10.3934/jgm.2021001
Definition:Metric System/Length/Metre
The metre is the SI base unit of length.
It is defined as the distance travelled by light in vacuum in $\dfrac 1 {299 \ 792 \ 458}$ of a second.
$1 \text { metre} = 100 \text{ centimetres} = 1000 \text{ millimetres} = 10^6 \text{ microns}$
Symbol
The symbol for the metre is $\mathrm m$.
Its $\LaTeX$ code is \mathrm m.
Square Metre
The square metre is the SI unit of area.
The symbol for the square metre is $\mathrm m^2$.
Cubic Metre
The cubic metre is the SI unit of volume.
The symbol for the cubic metre is $\mathrm m^3$.
Historical Note
The metre was initially defined by Tito Livio Burattini as the length of a pendulum whose period is $1$ second.
It differs from the modern metre by half a centimetre.
It was soon established that as Acceleration Due to Gravity varies considerably according to location, this was not a sustainable definition to maintain a standard.
Hence it was changed so as to be defined as $10^{-7}$ of the distance from the Earth's equator, through Paris, to the North Pole (at sea level).
This definition was changed again in $1983$ to be defined as the distance travelled by light in vacuum in $\dfrac 1 {299 \ 792 \ 458}$ of a second.
Linguistic Note
The word metre originated with Tito Livio Burattini who pioneered the concept of a universal set of fundamental units.
He used the term metro cattolico from the Greek μέτρον καθολικόν (métron katholikón), that is universal measure.
This word gave rise to the French word mètre which was introduced into the English language in $1797$.
The spelling metre is the one adopted by the International Bureau of Weights and Measures.
Meter is the variant used in standard American English, but can be confused for the word for a general device used to measure something, in particular the standard household electricity meter, water meter and so on.
While $\mathsf{Pr} \infty \mathsf{fWiki}$ attempts in general to standardise on American English, the name of this unit is one place where a deliberate decision has been made to use the international spelling.
Sources
2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.): Entry: metre
Atomic Functions¶
This section of the tutorial describes the atomic functions that can be applied to CVXPY expressions. CVXPY uses the function information in this section and the DCP rules to mark expressions with a sign and curvature.
Operators¶
The infix operators +, -, *, / are treated as functions. + and - are affine functions. The expression expr1*expr2 is affine in CVXPY when one of the expressions is constant, and expr1/expr2 is affine when expr2 is a scalar constant.
Note that in CVXPY, expr1 * expr2 denotes matrix multiplication when expr1 and expr2 are matrices; if you're running Python 3, you can alternatively use the @ operator for matrix multiplication. Regardless of your Python version, you can also use the function matmul to multiply two matrices. To multiply two arrays or matrices elementwise, use multiply.
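As a minimal sketch of these rules (not taken from the original guide; the shapes and names below are arbitrary illustrative choices):

import numpy as np
import cvxpy

A = np.ones((3, 5))              # constant matrix
x = cvxpy.Variable(5)            # vector variable
Y = cvxpy.Variable((3, 5))       # matrix variable

affine1 = A @ x                  # matrix multiplication by a constant: affine
affine2 = cvxpy.matmul(A, x)     # the same product written as a function call
elemwise = cvxpy.multiply(A, Y)  # elementwise product of two (3, 5) operands
affine3 = (x - 1) / 2.0          # division by a scalar constant: affine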
Indexing and slicing¶
Indexing in CVXPY follows exactly the same semantics as NumPy ndarrays. For example, if expr has shape (5,) then expr[1] gives the second entry. More generally, expr[i:j:k] selects every kth element of expr, starting at i and ending at j-1. If expr is a matrix, then expr[i:j:k] selects rows, while expr[i:j:k, r:s:t] selects both rows and columns. Indexing drops dimensions while slicing preserves dimensions. For example,
x = cvxpy.Variable(5)
print("0 dimensional", x[0].shape)
print("1 dimensional", x[0:1].shape)
0 dimensional ()
1 dimensional (1,)
Transpose¶
The transpose of any expression can be obtained using the syntax expr.T. Transpose is an affine function.
Power¶
For any CVXPY expression expr, the power operator expr**p is equivalent to the function power(expr, p).
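For instance (an illustrative sketch; the exponents are arbitrary):

import cvxpy

x = cvxpy.Variable()

p2 = x**2                      # identical to cvxpy.power(x, 2)
p_half = cvxpy.power(x, 0.5)   # equivalent to x**0.5
p_inv = cvxpy.power(x, -1)     # equivalent to x**-1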
Scalar functions¶
A scalar function takes one or more scalars, vectors, or matrices as arguments and returns a scalar.
Monotonicity
geo_mean(x)
geo_mean(x, p)
\(p \in \mathbf{R}^n_{+}\)
\(p \neq 0\)
\(x_1^{1/n} \cdots x_n^{1/n}\)
\(\left(x_1^{p_1} \cdots x_n^{p_n}\right)^{\frac{1}{\mathbf{1}^T p}}\)
\(x \in \mathbf{R}^n_{+}\)
incr.
harmonic_mean(x)
\(\frac{n}{\frac{1}{x_1} + \cdots + \frac{1}{x_n}}\)
lambda_max(X)
\(\lambda_{\max}(X)\)
\(X \in \mathbf{S}^n\)
lambda_min(X)
\(\lambda_{\min}(X)\)
lambda_sum_largest(X,k)
\(k = 1,\ldots, n\)
\(\text{sum of $k$ largest}\\ \text{eigenvalues of $X$}\)
\(X \in\mathbf{S}^{n}\)
lambda_sum_smallest(X,k)
\(\text{sum of $k$ smallest}\\ \text{eigenvalues of $X$}\)
log_det(X)
\(\log \left(\det (X)\right)\)
\(X \in \mathbf{S}^n_+\)
log_sum_exp(X)
\(\log \left(\sum_{ij}e^{X_{ij}}\right)\)
\(X \in\mathbf{R}^{m \times n}\)
matrix_frac(x, P)
\(x^T P^{-1} x\)
\(x \in \mathbf{R}^n\)
\(P \in\mathbf{S}^n_{++}\)
max(X)
\(\max_{ij}\left\{ X_{ij}\right\}\)
same as X
min(X)
\(\min_{ij}\left\{ X_{ij}\right\}\)
mixed_norm(X, p, q)
\(\left(\sum_k\left(\sum_l\lvert x_{k,l}\rvert^p\right)^{q/p}\right)^{1/q}\)
\(X \in\mathbf{R}^{n \times n}\)
norm(x)
norm(x, 2)
\(\sqrt{\sum_{i} \lvert x_{i} \rvert^2 }\)
\(X \in\mathbf{R}^{n}\)
for \(x_{i} \geq 0\)
for \(x_{i} \leq 0\)
\(\sum_{i}\lvert x_{i} \rvert\)
norm(x, "inf")
\(\max_{i} \{\lvert x_{i} \rvert\}\)
norm(X, "fro")
\(\sqrt{\sum_{ij}X_{ij}^2 }\)
for \(X_{ij} \geq 0\)
for \(X_{ij} \leq 0\)
\(\max_{j} \|X_{:,j}\|_1\)
\(\max_{i} \|X_{i,:}\|_1\)
norm(X, "nuc")
\(\mathrm{tr}\left(\left(X^T X\right)^{1/2}\right)\)
\(\sqrt{\lambda_{\max}\left(X^T X\right)}\)
pnorm(X, p)
\(p \geq 1\)
or p = 'inf'
\(\|X\|_p = \left(\sum_{ij} |X_{ij}|^p \right)^{1/p}\)
\(X \in \mathbf{R}^{m \times n}\)
\(p < 1\), \(p \neq 0\)
\(\|X\|_p = \left(\sum_{ij} X_{ij}^p \right)^{1/p}\)
\(X \in \mathbf{R}^{m \times n}_+\)
quad_form(x, P)
constant \(P \in \mathbf{S}^n_+\)
\(x^T P x\)
for \(x_i \geq 0\)
for \(x_i \leq 0\)
constant \(P \in \mathbf{S}^n_-\)
quad_form(c, X)
constant \(c \in \mathbf{R}^n\)
\(c^T X c\)
depends on c, X
depends on c
quad_over_lin(X, y)
\(\left(\sum_{ij}X_{ij}^2\right)/y\)
\(y > 0\)
decr. in \(y\)
sum(X)
\(\sum_{ij}X_{ij}\)
sum_largest(X, k)
\(k = 1,2,\ldots\)
\(\text{sum of } k\text{ largest }X_{ij}\)
sum_smallest(X, k)
\(\text{sum of } k\text{ smallest }X_{ij}\)
sum_squares(X)
\(\sum_{ij}X_{ij}^2\)
trace(X)
\(\mathrm{tr}\left(X \right)\)
tv(x)
\(\sum_{i}|x_{i+1} - x_i|\)
\(\sum_{ij}\left\| \left[\begin{matrix} X_{i+1,j} - X_{ij} \\ X_{i,j+1} -X_{ij} \end{matrix}\right] \right\|_2\)
tv([X1,…,Xk])
\(\sum_{ij}\left\| \left[\begin{matrix} X_{i+1,j}^{(1)} - X_{ij}^{(1)} \\ X_{i,j+1}^{(1)} -X_{ij}^{(1)} \\ \vdots \\ X_{i+1,j}^{(k)} - X_{ij}^{(k)} \\ X_{i,j+1}^{(k)} -X_{ij}^{(k)} \end{matrix}\right] \right\|_2\)
\(X^{(i)} \in\mathbf{R}^{m \times n}\)
Clarifications¶
The domain \(\mathbf{S}^n\) refers to the set of symmetric matrices. The domains \(\mathbf{S}^n_+\) and \(\mathbf{S}^n_-\) refer to the set of positive semi-definite and negative semi-definite matrices, respectively. Similarly, \(\mathbf{S}^n_{++}\) and \(\mathbf{S}^n_{--}\) refer to the set of positive definite and negative definite matrices, respectively.
For a vector expression x, norm(x) and norm(x, 2) give the Euclidean norm. For a matrix expression X, however, norm(X) and norm(X, 2) give the spectral norm.
The function norm(X, "fro") is called the Frobenius norm and norm(X, "nuc") the nuclear norm. The nuclear norm can also be defined as the sum of X's singular values.
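A short sketch of these norm calls (the shapes are chosen arbitrarily for illustration):

import cvxpy

x = cvxpy.Variable(4)
X = cvxpy.Variable((4, 3))

euclidean = cvxpy.norm(x)          # Euclidean norm of a vector
spectral = cvxpy.norm(X, 2)        # spectral norm of a matrix
frobenius = cvxpy.norm(X, "fro")   # Frobenius norm
nuclear = cvxpy.norm(X, "nuc")     # nuclear norm (sum of singular values)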
The functions max and min give the largest and smallest entry, respectively, in a single expression. These functions should not be confused with maximum and minimum (see Elementwise functions). Use maximum and minimum to find the max or min of a list of scalar expressions.
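The distinction can be seen in a small sketch (variable shapes are arbitrary, not from the guide):

import cvxpy

X = cvxpy.Variable((3, 3))
Y = cvxpy.Variable((3, 3))

largest_entry = cvxpy.max(X)    # scalar: the largest entry of X
smallest_entry = cvxpy.min(X)   # scalar: the smallest entry of X
pairwise = cvxpy.maximum(X[0, 0], Y[0, 0], 0)  # max of a list of scalar expressions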
The CVXPY function sum sums all the entries in a single expression. The built-in Python sum should be used to add together a list of expressions. For example, the following code sums a list of three expressions:
expr_list = [expr1, expr2, expr3]
expr_sum = sum(expr_list)
Functions along an axis¶
The functions sum, norm, max, and min can be applied along an axis. Given an m by n expression expr, the syntax func(expr, axis=0, keepdims=True) applies func to each column, returning a 1 by n expression. The syntax func(expr, axis=1, keepdims=True) applies func to each row, returning an m by 1 expression. By default keepdims=False, which means dimensions of length 1 are dropped. For example, the following code sums along the columns and rows of a matrix variable:
X = cvxpy.Variable((5, 4))
col_sums = cvxpy.sum(X, axis=0, keepdims=True) # Has size (1, 4)
col_sums = cvxpy.sum(X, axis=0) # Has size (4,)
row_sums = cvxpy.sum(X, axis=1) # Has size (5,)
Elementwise functions¶
These functions operate on each element of their arguments. For example, if X is a 5 by 4 matrix variable, then abs(X) is a 5 by 4 matrix expression. abs(X)[1, 2] is equivalent to abs(X[1, 2]).
Elementwise functions that take multiple arguments, such as maximum and multiply, operate on the corresponding elements of each argument. For example, if X and Y are both 3 by 3 matrix variables, then maximum(X, Y) is a 3 by 3 matrix expression. maximum(X, Y)[2, 0] is equivalent to maximum(X[2, 0], Y[2, 0]). This means all arguments must have the same dimensions or be scalars, which are promoted.
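As an illustrative sketch of elementwise application and scalar promotion (shapes are arbitrary):

import cvxpy

X = cvxpy.Variable((5, 4))
Y = cvxpy.Variable((5, 4))

absX = cvxpy.abs(X)         # (5, 4) expression; absX[1, 2] equals abs(X[1, 2])
mx = cvxpy.maximum(X, Y)    # elementwise maximum of two (5, 4) expressions
relu = cvxpy.maximum(X, 0)  # the scalar 0 is promoted to a (5, 4) constant
hub = cvxpy.huber(X, M=1)   # elementwise Huber function with threshold M = 1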
abs(x)
\(\lvert x \rvert\)
\(x \in \mathbf{R}\)
for \(x \geq 0\)
for \(x \leq 0\)
entr(x)
\(-x \log (x)\)
\(x > 0\)
\(e^x\)
huber(x, M=1)
\(M \geq 0\)
\(\begin{cases}x^2 &|x| \leq M \\2M|x| - M^2&|x| >M\end{cases}\)
inv_pos(x)
\(1/x\)
decr.
kl_div(x, y)
\(x \log(x/y) - x + y\)
log(x)
\(\log(x)\)
log1p(x)
\(\log(x+1)\)
\(x > -1\)
logistic(x)
\(\log(1 + e^{x})\)
maximum(x, y)
\(\max \left\{x, y\right\}\)
\(x,y \in \mathbf{R}\)
depends on x,y
minimum(x, y)
\(\min \left\{x, y\right\}\)
\(x, y \in \mathbf{R}\)
multiply(c, x)
\(c \in \mathbf{R}\)
c*x
\(x \in\mathbf{R}\)
\(\mathrm{sign}(cx)\)
neg(x)
\(\max \left\{-x, 0 \right\}\)
pos(x)
\(\max \left\{x, 0 \right\}\)
power(x, 0)
\(1\)
\(x\)
power(x, p)
\(p = 2, 4, 8, \ldots\)
\(x^p\)
\(p < 0\)
\(0 < p < 1\)
\(x \geq 0\)
\(p > 1,\ p \neq 2, 4, 8, \ldots\)
scalene(x, alpha, beta)
\(\text{alpha} \geq 0\)
\(\text{beta} \geq 0\)
\(\alpha\mathrm{pos}(x)+ \beta\mathrm{neg}(x)\)
sqrt(x)
\(\sqrt x\)
square(x)
\(x^2\)
Vector/matrix functions¶
A vector/matrix function takes one or more scalars, vectors, or matrices as arguments and returns a vector or matrix.
bmat([[X11,…,X1q], …, [Xp1,…,Xpq]])
\(\left[\begin{matrix} X^{(1,1)} & \cdots & X^{(1,q)} \\ \vdots & & \vdots \\ X^{(p,1)} & \cdots & X^{(p,q)} \end{matrix}\right]\)
\(X^{(i,j)} \in\mathbf{R}^{m_i \times n_j}\)
\(\mathrm{sign}\left(\sum_{ij} X^{(i,j)}_{11}\right)\)
conv(c, x)
\(c\in\mathbf{R}^m\)
\(c*x\)
\(x\in \mathbf{R}^n\)
\(\mathrm{sign}\left(c_{1}x_{1}\right)\)
cumsum(X, axis=0)
cumulative sum along given axis.
diag(x)
\(\left[\begin{matrix}x_1 & & \\& \ddots & \\& & x_n\end{matrix}\right]\)
\(\left[\begin{matrix}X_{11} \\\vdots \\X_{nn}\end{matrix}\right]\)
diff(X, k=1, axis=0)
\(k \in 0,1,2,\ldots\)
kth order differences along given axis
hstack([X1, …, Xk])
\(\left[\begin{matrix}X^{(1)} \cdots X^{(k)}\end{matrix}\right]\)
\(X^{(i)} \in\mathbf{R}^{m \times n_i}\)
\(\mathrm{sign}\left(\sum_i X^{(i)}_{11}\right)\)
kron(C, X)
\(C\in\mathbf{R}^{p \times q}\)
\(\left[\begin{matrix}C_{11}X & \cdots & C_{1q}X \\ \vdots & & \vdots \\ C_{p1}X & \cdots & C_{pq}X \end{matrix}\right]\)
\(\mathrm{sign}\left(C_{11}X_{11}\right)\)
reshape(X, (n', m'))
\(X' \in\mathbf{R}^{m' \times n'}\)
\(m'n' = mn\)
vec(X)
\(x' \in\mathbf{R}^{mn}\)
vstack([X1, …, Xk])
\(\left[\begin{matrix}X^{(1)} \\ \vdots \\X^{(k)}\end{matrix}\right]\)
\(X^{(i)} \in\mathbf{R}^{m_i \times n}\)
The input to bmat is a list of lists of CVXPY expressions. It constructs a block matrix. The elements of each inner list are stacked horizontally and then the resulting block matrices are stacked vertically.
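For example (an illustrative sketch with arbitrary block shapes):

import cvxpy

A = cvxpy.Variable((2, 2))
B = cvxpy.Variable((2, 3))
C = cvxpy.Variable((1, 2))
D = cvxpy.Variable((1, 3))

block = cvxpy.bmat([[A, B],
                    [C, D]])   # shape (3, 5): rows stacked horizontally, then vertically
wide = cvxpy.hstack([A, B])    # shape (2, 5)
tall = cvxpy.vstack([A, C])    # shape (3, 2)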
The output \(y\) of conv(c, x) has size \(n+m-1\) and is defined as \(y[k]=\sum_{j=0}^k c[j]x[k-j]\).
The output \(x'\) of vec(X) is the matrix \(X\) flattened in column-major order into a vector. Formally, \(x'_i = X_{i \bmod{m}, \left \lfloor{i/m}\right \rfloor }\).
The output \(X'\) of reshape(X, (m', n')) is the matrix \(X\) cast into an \(m' \times n'\) matrix. The entries are taken from \(X\) in column-major order and stored in \(X'\) in column-major order. Formally, \(X'_{ij} = \mathbf{vec}(X)_{m'j + i}\).
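A brief sketch of the column-major conventions described above (shapes are arbitrary):

import cvxpy

X = cvxpy.Variable((2, 3))

x_vec = cvxpy.vec(X)             # shape (6,): the columns of X stacked on top of each other
X_rs = cvxpy.reshape(X, (3, 2))  # shape (3, 2): entries read and written in column-major order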
Home Journals IJSDP Simulation of the Thermal and Aerodynamic Behavior of an Established Screenhouse under Warm Tropical Climate Conditions: A Numerical Approach
Simulation of the Thermal and Aerodynamic Behavior of an Established Screenhouse under Warm Tropical Climate Conditions: A Numerical Approach
Edwin Villagran* | Roberto Ramirez | Andrea Rodriguez | Rommel Leon Pacheco | Jorge Jaramillo
Centro de Investigación Tibaitata, Corporación Colombiana de Investigación Agropecuaria - AGROSAVIA, Mosquera - Cundinamarca 250040, Colombia
Estación Experimental Enrique Jiménez Núñez, Instituto Nacional de Innovación y Transferencia en Tecnología Agropecuaria de Costa Rica – INTA., Cañas – Guanacaste 50601, Costa Rica
Centro de Investigación Caribia, Corporación Colombiana de Investigación Agropecuaria - AGROSAVIA, Sevilla – Magdalena 478020, Colombia
Centro de Investigación La Selva, Corporación Colombiana de Investigación Agropecuaria - AGROSAVIA, Rionegro - Antioquia 054040, Colombia
[email protected]
In tropical countries, protected agriculture with passive, low-cost structures is one of the main alternatives for intensifying agricultural production in a sustainable manner. This type of greenhouse performs adequately in cold climates, whereas its use in hot climates presents disadvantages because it generates a microclimate that is unsuitable for the growth and development of certain species. This has generated considerable interest in the use of screenhouses (SH) for horticultural and fruit production, and there are currently many studies on microclimate behavior in SH; however, those experiments were developed for climatic conditions at other latitudes. In this research, a study was developed using a three-dimensional computational fluid dynamics (CFD) numerical simulation with the aim of evaluating the thermal and aerodynamic behavior of an SH under two specific configurations (under rain (RC) and under dry conditions (DC)). The CFD model was validated with experimental temperature data collected inside the SH. The results showed that: i) the CFD model has an acceptable capacity to predict the behavior of temperature and airflows; ii) simulations can be performed under daytime and nighttime environmental conditions; and iii) the RC configuration positively affected the thermal behavior, limiting the occurrence of the thermal inversion phenomenon under nocturnal conditions, while under daytime conditions RC reduced the velocity of the airflows, generating higher thermal gradients compared to DC.
computational fluid dynamics, temperature, screenhouse microclimate, wind speed
Screen houses (SH) are a technological option offered by protected agriculture as an intermediate alternative between open field and greenhouse cultivation. With the implementation of these structures, the aim is to transform the land use from extensive to intensive or promote agricultural production in alternative and sustainable systems in order to generate the supply necessary to meet the demand for high-quality food throughout the year [1]. This type of structure is built on metal columns and support cables where a roof and side walls are installed, generally made of porous screens that are insect proof or shaded [2].
The adoption of this type of technology has generated a great boom since the end of the 90s and is currently a relevant component of farming systems undercover, which has gradually extended from the countries of the Mediterranean coast to regions in other latitudes, mainly with temperate or warm climates [3] and for different cultivation types and methods [4]. Commercially there is a great variety of screens that differ in material types, color and porosity. These characteristics affect their optical and aerodynamic properties; therefore, they have been strongly studied and modified seeking to improve the microclimatic conditions generated inside the SH [5, 6].
According to the manufacturing material of the porous screen used and its properties, various agricultural benefit objectives are sought such as (i) Shading for regions where solar radiation is excessive and with supra-optimal values [7]; (ii) reducing the vulnerability of crops to damage by weather events such as icy hail and wind gusts [8, 9]; (iii) cooling limitation in night-time conditions through the reduction in energy loss by radiation [10]; (iv) exclusion of insects and vectors that transmit viruses, allowing significant reductions in the application of pesticides [6, 11]; and (v) increase the efficient use of water, extending the growth period of the plants and delaying the ripening process of some horticultural products [12, 13]. In addition to the benefits mentioned above, this type of structure has become popular and widespread among farmers because they can potentially maximize the benefit of crops with a low-cost technological contribution compared to conventional greenhouses [9].
The knowledge of the microclimate in SH as well as in plastic greenhouses is essential to achieve adequate crop management [6]. The effects of different types of screens on the microclimate of plants have been studied since the beginning of the century [1, 11, 14, 15]. The use of screens mainly influences the radiation exchange and air flow dynamics, reducing its speed and modifying its turbulence characteristics [16], thus, affecting ventilation rates and heat exchanges, mass, and gases between the plants and their surrounding atmosphere. This usually translates into behaviors with high values of variables such as temperature and humidity that can cause physiological and environmental disorders conducive to the appearance of fungal diseases that affect the final crop yield [17].
The studies dedicated to the measurement, modeling and simulation of the microclimate distribution in conventional greenhouses have been extensive in the last three decades, obtaining results that have allowed to describe the distribution of temperature, humidity, CO2 concentration and the characteristics of airflow patterns, and develop management strategies to optimize the behavior of these variables [18-20]. On the other hand, the studies related in this field with SH are still scarce, although there are significant advances as summarized in the study developed by Tanny et al. [6]. Currently, there is a need to generate relevant information that allows researchers and farmers of horticultural products to obtain a deep understanding of the patterns and characteristics of the airflow in order to obtain a better design and positioning of screen houses [21] or study the aerodynamic effect of different types of screens on physical and biological processes in these systems [9].
One of the most used tools since the beginning of the century to characterize the microclimate distributed inside greenhouses and its interaction with the plant has been computational fluid dynamics (CFD). This tool models and simulates fluid flow and the transfer of heat, mass and momentum, and has enabled great advances in the design and optimization of agricultural structures [22, 23]. The study of the microclimate in screenhouses can be approached through CFD numerical simulations, considering the roof material as a porous medium, which allows evaluating a great variety of structures, screens and climatic environments in a relatively short period of time. Bartzanas et al. [1] developed a two-dimensional CFD study to assess the effect of a screen on radiation distribution, finding that the optical and spectral properties directly affect the distribution of solar radiation, and the degree of porosity of the screen reduces air velocity, affecting the thermal behavior inside the screenhouse. Other relevant studies using 3D CFD modeling evaluated the behavior of air flows and the temperature values in screenhouses used for tomato cultivation, reporting that these parameters are strongly affected by the degree of porosity of the screen [24]. However, these works have not been developed for the warm climate conditions of the Central American Caribbean region.
According to the above, the objective of this work was to determine, through 3D CFD simulation, the thermal and airflow pattern behavior of an insect-proof screenhouse established in Guanacaste, Costa Rica, with the purpose of evaluating two configurations of the productive system used at different times of the year.
2.1 Experimental site and climatic conditions
The study area is located in the coastal zone of the canton of Abangares, province of Guanacaste, in northwest Costa Rica (10°11' N, 85°10' W, at an altitude of 10 m a.s.l.). This region has a warm tropical climate with a dry season, and according to the Köppen-Geiger climate classification, the area has an Aw climate [25]. The multi-year average temperature is 27.7℃, with average maximum and minimum values of 36.9 and 21.1℃ (Figure 1a). The annual rainfall reaches 1669.7 mm, distributed during the months of May to November (Figure 1a). The average wind speed oscillates during the year between 0.2 and 1.4 ms-1 (Figure 1b), with predominant directions between SE and SSE.
2.2 Description of the screenhouse
The development of the experimental study was carried out in a flat roof SH with a covered floor area of 1,496 m2, where the longitudinal section was in an east-west direction (E-W). The geometric characteristics of the structure were the following: width (X = 34 m), length (Z = 44 m) and height (Y = 5 m) (Figure 2a). The side walls and roof were covered with a porous insect-proof screen (Dimensions thread 16.1x10.2 and porosity ε = 0.33). Inside the screenhouse, small semicircular tunnels of 2.2 m of height and 1.2 m of width were built located along the longitudinal axis of the SH and on top of the cultivation beds, these tunnels were covered with polyethylene to be used during the rainy season, in order to avoid or reduce to the maximum the wetting of the foliage (Figure 2b).
Figure 1. Meteorological characteristics for the canton of Abangares, province of Guanacaste in northwest Costa Rica
Figure 2. Dimensions and interior detail of the screenhouse
2.3 Fundamental equations and physical models
The models explained in this section describe the physical principles that govern the problem studied. The models selected are those reported in the literature for problems similar to this research, which have shown appropriate computational performance and numerical results adjusted to real behavior. The governing flow equations are presented in Eq. (1); they are expressed as convection-diffusion equations for three conservation laws, including the mass, momentum and energy conservation equations of a compressible fluid in a three-dimensional (3D) field at steady state.
$\nabla(\rho \phi \vec{v})=\nabla(\Gamma \nabla \phi)+\mathrm{S} \phi$ (1)
where, ρ is the density of the fluid (kgm-3), ∇ is the nabla operator, ϕ represents the concentration of the transported quantity in a dimensional form (the momentum, the scalars mass and energy conservation equations), $\vec{v}$ is the speed vector (ms-1), Γ is the diffusion coefficient (m2s-1), and S represents the source term [26].
The turbulent nature of the air flow was simulated using the standard turbulence model k-ε, a model widely used and validated in studies focused on greenhouses, which has shown an adequate fit and accuracy with a low computational cost [27, 28]. Because wind speeds are lower in some areas inside the screenhouse, the effects of buoyancy influenced by the change in air density will be present [29, 30]. Therefore, they were modeled using the Boussinesq approximation, which is calculated using Eq. (2) and Eq. (3).
$\left(\rho-\rho_{0}\right) g=-\rho_{0} \beta\left(\mathrm{T}-\mathrm{T}_{0}\right) \mathrm{g}$ (2)
$\beta=-\left(\frac{1}{\rho}\right)\left(\frac{\partial \rho}{\partial \mathrm{T}}\right)_{\mathrm{p}}=\frac{1}{\rho} \frac{\mathrm{p}}{\mathrm{RT}^{2}}=\frac{1}{\mathrm{T}}$ (3)
where g is the acceleration due to gravity (m s-2); β is the volumetric thermal expansion coefficient (K-1); $\rho_{0}$ is the reference density (kg m-3); R is the gas constant (J K-1 mol-1); p is the pressure (Pa), and $\mathrm{T}_{0}$ is the reference temperature (℃).
Likewise, the energy equation and the selected radiation model were considered, i.e. the one of discrete ordinates (DO) with angular discretization. The DO model has been widely used in greenhouse studies [31-34] and screenhouses [1]. This model allows calculating, by means of Eq. (4), the radiation and convective exchanges between the roof, the ceiling and the walls of a structure which, in the case of greenhouses, are treated as semi-transparent media. It is also possible to carry out the climate analysis in night conditions, simulating and solving the phenomenon of radiation from the floor of the greenhouse to the outside environment. For this purpose, the sky is considered as a black body with an equivalent temperature (TC) for two predominant scenarios of cloudy and wet nights and clear wet nights [35-37].
$\nabla \cdot \left(I_{\lambda}(\vec{r},\vec{s})\,\vec{s}\right)+\left(a_{\lambda}+\sigma_{s}\right) I_{\lambda}(\vec{r},\vec{s}) = a_{\lambda} n^{2} \frac{\sigma T^{4}}{\pi}+\frac{\sigma_{s}}{4 \pi} \int_{0}^{4 \pi} I_{\lambda}(\vec{r},\vec{s}\,')\, \Phi(\vec{s}\cdot\vec{s}\,')\, d \Omega^{\prime}$ (4)
where $I_{\lambda}$ is the intensity of the radiation at a given wavelength; $\vec{r}$ and $\vec{s}$ are the vectors that indicate the position and direction, respectively; $\vec{s}\,'$ is the scattering direction vector; $\sigma_{s}$ and $a_{\lambda}$ are the scattering and spectral absorption coefficients; $n$ is the refractive index; $\nabla$ is the divergence operator; $\sigma$ is the Stefan-Boltzmann constant (5.669×10−8 W m−2 K−4); and $\Phi$, $T$ and $\Omega$ are the phase function, the local temperature (K) and the solid angle, respectively.
The presence of insect screens was modeled using equations derived from the flow of a free and forced fluid through porous materials, taking into account their main characteristics of porosity and permeability [38, 39]. These equations can be derived using Eq. (5), which represents the Forchheimer equation.
$\frac{\partial p}{\partial x}=\frac{\mu}{K} u+\rho \frac{C_f}{\sqrt{K}} u|u|$ (5)
where, u is the air velocity (ms-1); μ is the dynamic viscosity of the fluid (kgm-1s-1), K is the permeability of the medium (m2); Cf is the net inertial factor; ρ is the air density (kgm-3), and ∂x is the thickness of the porous material (m). The inertia factor Cf and the permeability of the screen K have been evaluated in different experimental studies from tests in the wind tunnel, the numerical results obtained in the experiments are adjusted to equations showing correlation with the porosity (ε) of the screen. The aerodynamic parameters for the insect-proof porous screens commonly used in protected agriculture are obtained by Eq. (6) and Eq. (7), which are the mathematical expressions that best fit the data obtained in the wind tunnel [39-42].
$C_{f}=0.00342 \varepsilon^{-2.5917}$ (6)
$K=2 \times 10^{-7} \varepsilon^{3.5331}$ (7)
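As an illustrative sketch only (this is not the authors' code, and the air properties below are assumed standard values), Eqs. (5)-(7) can be evaluated for the porosity of the screen described in Section 2.2 (ε = 0.33); note that the resulting permeability matches the value of 3.98×10-9 given for the porous screen in Table 1:

# Illustrative sketch: evaluating Eqs. (5)-(7); not part of the original study.
MU_AIR = 1.8e-5   # dynamic viscosity of air (kg m^-1 s^-1), assumed value
RHO_AIR = 1.2     # air density (kg m^-3), assumed value

def screen_aerodynamic_parameters(porosity):
    """Inertial factor Cf (-) and permeability K (m^2) from Eqs. (6) and (7)."""
    cf = 0.00342 * porosity ** (-2.5917)
    k = 2e-7 * porosity ** 3.5331
    return cf, k

def pressure_gradient(u, porosity):
    """Pressure drop per unit screen thickness (Pa m^-1), Forchheimer Eq. (5)."""
    cf, k = screen_aerodynamic_parameters(porosity)
    return (MU_AIR / k) * u + RHO_AIR * (cf / k ** 0.5) * u * abs(u)

cf, k = screen_aerodynamic_parameters(0.33)
print(f"Cf = {cf:.3f}, K = {k:.2e} m^2")
print(f"dp/dx at 1 m/s: {pressure_gradient(1.0, 0.33):.1f} Pa/m")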
Figure 3. Meshing of the computational domain
2.4 Computational domain and generation of the mesh
The construction of the computational domain, the meshing and the evaluation of the quality of the mesh were carried out following the existing guidelines of the good practices of CFD simulation, where the minimum criteria to be met for these three parameters that are directly related to the precision of the results and the required computational effort are established. The ANSYS ICEM CFD 18.2 preprocessing software was used to generate a large computational domain composed of the screenhouse (Figure 3b) and its surroundings, in order to guarantee an appropriate definition of the atmospheric boundary layer and avoid the generation of forced flows with velocities and unrealistic behaviors [43]. The dimensions of the computational domain were 184, 75 and 194 m for the X, Y and Z axes, respectively (Figure 3a). This size was determined following the recommendations of numerical studies of the wind environment around the buildings [44]. The computational domain was divided into an unstructured mesh of hexahedral elements composed of a total of 7,787,701 discretized volumes in space. This number of elements was obtained after verifying the independence of the numerical solutions from the airflow and the temperature behavior at a total of 7 different sized meshes where the one with the highest number of elements presented a value of 12,123,456 and the one with the lowest number of elements was 1,345,123. The independence test was performed following the procedure reported and used successfully by Villagran et al. [34]. The quality parameters evaluated in the mesh were the variation of cell-to-cell size, which showed that 92.3% of the cells in the mesh were within the high-quality range (0.9-1), and on the other hand, the criterion of orthogonality was evaluated, where the minimum value obtained was 0.92, results that are classified within the adequate quality range [45, 46].
2.5 Boundary conditions and convergence criteria
The CFD ANSYS FLUENT 18.2 processing software was used to perform the simulations under the conditions set forth in Table 1. It was run on a computer composed of an Intel® Xeon W-2155 processor with twenty cores at 3.30 GHz and 128 GB of RAM, running a Windows 10 64-bit operating system. The semi-implicit method for pressure-linked equations (SIMPLE) was applied to solve the pressure-velocity coupling of the simulated flow field. The convergence criteria of the model were set at $10^{-6}$ for all the equations considered [47]. With this computer equipment and these simulation criteria, chosen to balance accuracy of results against computational effort, the average simulation time was 53 hours for approximately 7,900 iterations.
The upper limit of the domain and the surfaces parallel to the flow were set with symmetry boundary conditions so as not to generate friction losses of the air flow in contact with these surfaces. The simulations considered the atmospheric characteristics of the air and the physical and optical properties of the materials within the computational domain, which are summarized in Table 1. At the lower limit and the walls of the greenhouse, a non-slip wall boundary condition was applied; at the left boundary, the inlet condition for the mean wind speed was imposed through a logarithmic profile [48]. The profile was linked to the main CFD module through a user-defined function, using Eq. (8).
$v(y)=\frac{v^{*}}{K} \ln \left(\frac{y+y_{o}}{y_{o}}\right)$ (8)
where $y_{o}$ is the surface roughness length, set in this case at 0.03 m according to the standard for the local terrain, v* is the friction velocity, v(y) is the average wind speed at height y above ground level, and K is the von Karman constant with a value of 0.42. The leeward limit was treated as a pressure-outlet boundary condition. The model does not include a crop, since the aim was to obtain a solution independent of plant type and size. In addition, the other boundary conditions imposed in the computational domain and the physical and optical properties of the materials, taken from works such as those of Flores-Velazquez et al. [42] and Villagran et al. [49], are summarized in Table 1.
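As an illustration of Eq. (8), the following minimal Python sketch tabulates the inlet profile; it is not the actual ANSYS Fluent user-defined function, and the reference wind speed of 1.0 m s-1 at 2 m height used to back out the friction velocity is a hypothetical value chosen only for the example.

```python
import numpy as np

def inlet_wind_profile(y, v_star, y0=0.03, kappa=0.42):
    """Logarithmic inlet profile of Eq. (8): v(y) = (v*/K) * ln((y + y0)/y0)."""
    return (v_star / kappa) * np.log((y + y0) / y0)

# Back out the friction velocity from a hypothetical reference speed of 1.0 m/s at y = 2 m
y_ref, v_ref = 2.0, 1.0
v_star = v_ref * 0.42 / np.log((y_ref + 0.03) / 0.03)

heights = np.array([0.5, 1.0, 2.0, 5.0, 10.0])           # m above ground level
print(np.round(inlet_wind_profile(heights, v_star), 3))  # inlet speed (m/s) at each height
```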
Table 1. Settings of the computational fluid dynamics (CFD) model simulations and boundary conditions
Boundary conditions
Entry domain: velocity inlet with logarithmic profile (air velocity at 2 m height) and atmospheric pressure.
Output domain: pressure outlet (zero pressure and same turbulence condition).
Treatment of porous medium: screen modeled as a porous jump, with viscosity effect (α) = 3.98 × 10-9 and drag coefficient (C2) = 19,185.
Constant from the ground; Boussinesq hypothesis activated for the buoyancy effect of the turbulence model.
Physical and optical properties of the materials used
Density (ρ, kg m-3)
Thermal conductivity (k, W m-1 K-1)
Specific heat (Cp, J K-1 kg-1)
Coefficient of thermal expansion (K-1)
Absorptivity
Scattering coefficient
Emissivity
Table 2. Initial boundary conditions for simulated configurations
Diurnal Period
Wind speed [ms-1]
Wind direction [°]
Air Temperature [°C]
Solar radiation [W m-2]
Dry configuration (DC)
Rain configuration (RC)
Nocturnal Period
Tc* [°C]
* Equivalent temperature of the sky.
Table 3. Initial boundary conditions to validate simulation
2.6 Measurements and experimental procedure
During the development of the experimental phase, between July 01 and July 10, 2018, and in order to obtain data for the validation of the CFD model, ten-minute records of climatic variables inside and outside the SH were made. Outside, a conventional I-Metos weather station (Pessl Instruments GmbH, Weiz, Austria) was used, located 50 m from the screenhouse and equipped with sensors for temperature (range: -30 °C to 99 °C, accuracy: ±0.1 °C), relative humidity (range: 10% to 95%, accuracy: ±1%), global solar radiation (range: 0 W m-2 to 2,000 W m-2, accuracy: ±2%), wind speed (range: 0 m s-1 to 70 m s-1, accuracy: ±5%), wind direction (range: 0° to 360°, resolution: 2°, accuracy: ±7°) and precipitation (range: 6.5 cm per measurement period; resolution: 0.01 cm; accuracy: ±0.1%). The indoor air temperature of the screenhouse was registered by nine HOBO® Pro RH-Temp H08-032-08 data loggers (Onset Computer Corp., Pocasset, USA), which measure temperature in a range from −20 °C to 70 °C with an accuracy of ±0.3 °C. These sensors were located at a height of Y = 1.8 m above ground level on the center line of the screenhouse at X = 17 m and distributed uniformly along the longitudinal Z-axis (40 m). Additionally, these devices were covered with a capsule that acted as a protective shield against direct solar radiation.
2.7 Simulated scenarios
The validated CFD numerical model was used as a simulation tool to determine the thermal and aerodynamic behavior of the screenhouse, evaluating two specific configurations, one with rain (RC) and the other one dry (DC), under diurnal and nocturnal climate conditions, establishing the initial conditions listed in Table 2.
2.8 Validation of the model developed
The validation of the CFD model was performed by comparing temperature data obtained experimentally in the SH with the data obtained by numerical simulation for two specific conditions; the initial boundary conditions were determined from the average values of the climatic variables obtained during the experimental period, considering a specific time for day and night, respectively (Table 3). Validation is a necessary phase in order to adequately verify the results obtained from the numerical model and to establish that these results are independent of parameters such as the quality and size of the mesh [50].
Another way to evaluate the performance and accuracy of numerical models is through the calculation of goodness-of-fit criteria that compare measured and simulated data. In this case, the mean absolute error (MAE) was calculated with Eq. (9), the mean square error (MSE) with Eq. (10) and, finally, the mean absolute percentage error (MAPE) with Eq. (11).
$MAE=\frac{1}{n} \sum_{i=1}^{n}\left|X_{m,i}-X_{s,i}\right|$ (9)
$MSE=\frac{1}{n} \sum_{i=1}^{n}\left|X_{m,i}-X_{s,i}\right|^{2}$ (10)
$MAPE=\frac{1}{n} \sum_{i=1}^{n} \frac{\left|X_{m,i}-X_{s,i}\right|}{\left|X_{m,i}\right|}$ (11)
where $X_{m,i}$ is the measured value, $X_{s,i}$ is the simulated value, and n is the number of data points compared. Once it is verified that the values of the goodness-of-fit criteria are close to 0, the model is considered validated and can be used to develop CFD simulations under the scenarios considered in this investigation.
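For reference, a minimal Python sketch of Eqs. (9)–(11) is given below; the temperature series are hypothetical and serve only to show how the three criteria are computed (the factor of 100 simply expresses MAPE as a percentage, as reported in Section 3.1).

```python
import numpy as np

def goodness_of_fit(measured, simulated):
    """Return MAE (Eq. 9), MSE (Eq. 10) and MAPE (Eq. 11, in %) for two data series."""
    xm = np.asarray(measured, dtype=float)
    xs = np.asarray(simulated, dtype=float)
    mae = np.mean(np.abs(xm - xs))
    mse = np.mean((xm - xs) ** 2)
    mape = 100.0 * np.mean(np.abs(xm - xs) / np.abs(xm))
    return mae, mse, mape

# Hypothetical measured vs. simulated temperatures (deg C) at the nine sensor positions
t_measured  = [36.9, 37.2, 37.8, 38.0, 37.6, 37.4, 37.1, 36.8, 37.5]
t_simulated = [37.1, 37.0, 38.1, 37.6, 37.9, 37.2, 37.4, 36.9, 37.3]
mae, mse, mape = goodness_of_fit(t_measured, t_simulated)
print(f"MAE = {mae:.2f} degC, MSE = {mse:.2f}, MAPE = {mape:.2f} %")
```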
3.1 Validation of the CFD model
The fit and performance of the CFD model were tested through a quantitative analysis. For the diurnal period, the absolute differences between the simulated and measured values ranged between 0.25 °C and 1.08 °C, while for the nocturnal period such differences ranged between 0.17 °C and 0.83 °C. Figure 4 shows the trend of the simulated and measured data under the climatic conditions of the day and night periods; the qualitative and quantitative behavior of the two data sets is similar, which suggests that the CFD model makes adequate temperature predictions for the SH studied.
On the other hand, for the goodness-of-fit criteria used to evaluate the numerical model, values of 0.70℃ and 0.55℃ were obtained for the MAE and MSE, respectively, and an MAPE value of 1.46% for the daytime condition, while for the night-time simulation conditions, values of 0.54℃ and 0.32℃ were obtained for the MAE and MSE, respectively, and an MAPE value of 1.32%. These values obtained for the temperature are in the same order of magnitude as those found by Ali et al. [51].
These experimental results allow us to conclude that the CFD numerical model has adequate capacity to predict the temperature behavior within the SH. Although no experimental measurement of airflow patterns was performed, it is known that the thermal behavior indoors is dependent on airflow patterns, therefore this model can be used as a tool to perform aerodynamic and thermal analysis within the SH structure.
3.2 Daytime period
3.2.1 Air flows
Figure 5 shows the behavior of the wind speed inside the SH for the evaluated scenarios RC and DC. For DC, an air flow was observed with an average speed of 0.21 m s-1 and maximum and minimum values of 0.49 m s-1 and 0.06 m s-1, respectively (Figure 5a). The flow shows a pattern with higher air velocity in the roof area of the SH along the central length of the structure, directed towards the leeward wall; this behavior has already been described in previous studies [11, 42, 52]. For the DC scenario, two converging flow zones can be observed between the ground and the roof area: the zone located between the windward wall and X = 12 m, with a low-speed flow in the direction opposite to the outside air flow, and the zone between X = 12 m and the leeward window, which shows higher velocities, with a flow in the same direction as the external flow in the upper half of the SH and an air movement in the lower half in the direction opposite to the external flow. Likewise, the interaction area between the windward wall and the roof area of the SH shows vectors of low intensity and speed, caused by the loss of momentum imposed on the air flow by the presence of the insect-proof porous screen (Figure 5a). For the RC case, the air moves in a single convective cell, clearly differentiated from DC; it shows a movement in the same direction as the external air flow, with average velocity values of 0.24 m s-1, just in the area above the small plastic tunnels, and a reverse flow in the lower area of the tunnels with an average speed of 0.36 m s-1 (Figure 5b).
Figure 4. Comparison of simulated and measured temperature data
Figure 5. Simulated air velocity field inside the screenhouse (m s–1). (a) The configuration of DC, and (b) The configuration RC for the diurnal period
Figure 6. Normalized air velocity (Vint/Vext) inside the screenhouse during the diurnal period for the DC and RC configurations
In order to compare the airflow velocities inside the structure for RC and DC, the normalized wind speed (VN), defined as the ratio between the interior and the exterior air velocity, was calculated at heights of Y = 1 m and Y = 2 m above ground level. Figure 6 shows the VN curves for RC and DC at each of the evaluated heights along the width of the SH. For RC-1m, a reduction of the air flow of between 56% and 99.6% with respect to the external air was found; the zone with the lowest velocities appears over the 5 m adjacent to the leeward and windward side walls. In contrast, the highest velocities occurred in the area between 8 m and 30 m of the width of the SH. The reduction of air flow is influenced by the strong pressure drops generated when the external air makes contact with the screen [52]. In the case of RC-2m, a higher reduction in airflow is observed, influenced mainly by the plastic covering of the tunnels; in this case, the air flow values are between 80% and 98.8% lower than the external air speed, and their behavior is more homogeneous along the evaluated width of the SH.
In the case of DC-1m and DC-2m, smaller flow reductions are observed compared to the RC scenario; the reductions in airflow are between 26% and 88% relative to the external wind speed, values that coincide with previous studies conducted by Flores-Velazquez et al. [53]. The behavior of DC-1m and DC-2m is very similar: the greatest flow reductions are observed between the windward wall and the zone 10 m adjacent to it; lower reduction rates can be observed between 10 m and 29 m of the width of the SH, while in the area between 29 m and the leeward wall the air reduction indexes increase again (Figure 6). This indicates that the presence of plastic tunnels inside the SH generates greater reductions in air flow and a spatially differentiated flow behavior in RC compared with DC.
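The normalized velocity and the flow-reduction percentages quoted above follow directly from VN = Vint/Vext; a minimal sketch is shown below, with hypothetical interior velocities and an assumed external wind speed of 1.0 m s-1 used only for illustration.

```python
import numpy as np

def flow_reduction(v_int, v_ext):
    """Normalized velocity VN = Vint/Vext and the corresponding reduction (1 - VN) * 100 (%)."""
    vn = np.asarray(v_int, dtype=float) / float(v_ext)
    return vn, (1.0 - vn) * 100.0

# Hypothetical interior velocities (m/s) sampled across the SH width at Y = 1 m
v_int_1m = [0.12, 0.30, 0.49, 0.45, 0.35, 0.20, 0.08]
vn, reduction = flow_reduction(v_int_1m, v_ext=1.0)
print("VN         :", np.round(vn, 2))
print("Reduction %:", np.round(reduction, 1))
```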
3.2.2 Thermal behavior
In Figure 7, the spatial behavior of the temperature inside the SH at a height of 2 m above the ground level is shown. For the DC scenario, an average temperature of 37.6 ± 0.2°C was obtained with maximum and minimum values of 38.3°C and 36.9°C, respectively. Qualitatively it was observed that the areas of higher temperature were generated near the windward side wall just where the lowest airflow velocity values are presented; on the contrary, the lower temperature zones were found in the areas near the front and leeward walls and a small area of the windward wall as well as over the area of greater air flow (Figure 7a). The vertical distribution of the temperature for this case showed a behavior directly related to the air movements as was shown by Teitel et al. [2], finding an area with values higher than 38°C, just in the area of interaction of the two convective airflow cells generated in DC, area that extends from the ground to the SH cover (Figure 7b).
In the RC scenario, the mean temperature value found was 35.5 ± 0.4℃ with maximum and minimum values of 36.5℃ and 34.1℃, respectively. The spatial distribution in the interior volume showed three areas adjacent to the windward wall as the zones of higher temperature, and these zones expanded heterogeneously across the width of the SH. On the other hand, the zones of low temperature were located near the leeward lateral wall and expanded on an area of the central part of the SH (Figure 7c). The distribution of the vertical temperature profile shows high-temperature zones just above the plastic tunnels and another zone with similar values, located on the area between the windward wall and the first plastic tunnel, an area that has low airspeeds and little air exchange (Figure 7d).
Figure 7. Simulated temperature profiles (°C) inside the screenhouse. (a) Top view of the DC configuration at 2 m of height, (b) Front view of the DC configuration, (c) Top view of the RC configuration at 2 m of height, and (d) Front view of the RC configuration for the diurnal period
Figure 8. Thermal gradient profile width of the screenhouse for DC and RC configurations during the diurnal period
The thermal gradient (∆T), which represents the difference between the air temperature outside and inside the SH, was calculated for both DC and RC. Figure 8 shows ∆T at heights of 1 m and 2 m above ground level. In general, the average value of ∆T for RC is 0.7 °C and 1.1 °C higher than in the DC scenario at 1 m and 2 m, respectively. Additionally, the behavior of ∆T in RC-2m presents greater variability between nearby points, with ∆T values between 1.7 °C and 2.5 °C; this can be directly related to the presence of the plastic tunnels (Figure 8).
3.3 Night period
The distribution patterns of the air flow for RC and DC during the nocturnal period are presented in Figure 9. In the case of DC, two air movement cells can be observed. One moves in a clockwise direction from the central area of the SH towards the leeward wall, with average speeds of 0.11 m s-1 and some zones of greater speed in the areas adjacent to the roof and floor of the structure. The other cell, located towards the windward wall, presents a clockwise displacement in the upper part of the SH with average velocity values of 0.07 m s-1, and is complemented by a counter-clockwise displacement in the lower part of the structure with average speeds of 0.10 m s-1 (Figure 9a).
This behavior differs from what was observed by Montero et al. [36] in greenhouses covered with impermeable plastic walls and may be influenced by air leaks through the porous material of both the front and side walls of the SH. For RC, a clearly differentiated air flow behavior was observed in two zones. A flow in the upper part of the screenhouse, just above the plastic tunnels, shows slight upward-downward currents that move from the windward side wall towards the leeward side wall, with an average velocity of approximately 0.13 m s-1. The other flow moves through the lower area of the plastic tunnels with two main characteristics: first, a displacement contrary to the flow of the outside air, and second, a higher air velocity with approximate average values of 0.23 m s-1; this higher speed may be driven by free convection from a warm zone with lower air density (Figure 9b).
Figure 10 shows the behavior of the normalized wind speed (VN) at the two heights evaluated for RC and DC. The average reduction of the indoor air velocity compared to the outside air was 68% and 81% for DC-1m and DC-2m, respectively. For this scenario, the air displacement patterns move in the direction of the outside air flow, except for DC-2m in the area between X = 4 m and X = 8 m along the width of the SH. In the RC scenario, an air movement in the direction opposite to the external airflow is observed at the two evaluated heights; the VN values show a reduction of the airspeed in the range of 31% to 88%. The most homogeneous velocity behavior was obtained for RC-1m, unlike RC-2m, where high- and low-velocity vectors are observed at relatively close points; this is clearly influenced by the presence of the plastic tunnels (Figure 10).
Figure 9. Simulated air velocity field inside the screenhouse (m s–1). (a) DC configuration; and (b) RC configuration during the nocturnal period
Figure 10. Normalized air velocity (Vint/Vext) inside the screenhouse for the DC and RC configurations during the nocturnal period
The spatial distribution of the temperature inside the SH for the night period can be seen in Figure 11. For the DC scenario, an average temperature of 20.4 ± 0.2 °C was obtained; the spatial behavior of this variable was homogeneous inside the structure, with minimum and maximum values of 20.2 °C and 21.3 °C, respectively. The areas of higher temperature were located near the front and side walls of the structure, together with a small cell generated towards the center of the SH, while the low-temperature zones were located in the central area of the SH between X = 7 m and X = 14 m across the width of the structure (Figure 11a). The vertical distribution of the temperature for DC shows that the soil in the central part of the structure is the zone of highest temperature, with values close to 21.5 °C in approximately 20% of the evaluated volume. Additionally, low-temperature areas with mean values of 20.2 °C can be observed in the central zone adjacent to the higher-temperature area (Figure 11b).
Figure 11. Simulated temperature profiles (°C) inside the screenhouse. (a) Top view of the DC configuration at 2 m of height, (b) Front view of the DC configuration, (c) Top view of the RC configuration at 2 m of height; and (d) Front view of the RC configuration during the nocturnal period
Figure 12. Thermal gradient profile width of the screenhouse for the DC and RC configurations during the nocturnal period
The behavior for RC is presented in Figure 11c and d. The average temperature for this case was 23.9 ± 0.4 °C. Under these conditions, the spatial distribution of temperature was heterogeneous, with two clearly differentiated zones: one located between the central area of the screenhouse and the rear facade wall, with average temperature values of 24.3 °C, and an area of lower temperatures located from the middle zone of the SH to the front wall, with values of 23.4 °C (Figure 11c). The vertical distribution of the temperature shows the presence of two higher-temperature zones with average values of 24.2 °C, the first located between the soil and the cultivation beds, and the second located in the lower part of the plastic tunnels. On the other hand, the zones of lower temperature, with average values of 23.3 °C, were located near the side walls and the roof of the screenhouse (Figure 11d).
Figure 12 shows the ∆T calculated for RC and DC at heights of 1 m and 2 m above ground level. One of the main differences observed is the sign and magnitude of ∆T in each case. On the one hand, a positive ∆T occurs in the RC scenario at the two evaluated heights, with an average ∆T value of 0.7 °C, areas with ∆T values of 0.05 °C just over the regions surrounding the windward and leeward side walls, and areas with ∆T of 1.3 °C in the central area of the SH. It can therefore be inferred that the presence of the plastic tunnels may be influencing the thermal behavior; for RC-2m, a behavior with greater variability between nearby points was found (Figure 12).
The opposite occurred in the DC scenario where the structure enters thermal inversion conditions; this phenomenon is characterized by lower indoor air temperatures compared to the values of the outside air temperature. Numerically, this could be checked when analyzing the ∆T values generated under this condition, finding an average value of ∆T of -0.5°C; however, these values are within the range of those reported in previous studies by Teitel et al. [21]. The maximum ΔT value was 0.3°C and occurred in the central zone of the SH and the minimum ∆T value was -0.9°C, which was located in DC-1m in an area on the central zone of the SH and moved in the direction of the lateral leeward window (Figure 12). The thermal inversion phenomenon occurs due to the cooling generated by infrared thermal radiation, poor ventilation and the presence of climatic conditions of low humidity and clear skies [36].
The main conclusions of this research are as follows:
(1) CFD 3D simulation proved to be an optimal, valid and accurate tool to determine the microclimatic behavior of a screenhouse during the day and night period under the climatic conditions of the study region.
(2) The presence of small tunnels inside the structure (RC) generates a negative effect on the speed and distribution of the air flow patterns, which translates into warmer thermal conditions, with ΔT values up to 1.1 °C higher than in the scenario where the tunnels are not used (DC).
(3) For the night period, the presence of the small tunnels (RC) improves the microclimate of the screenhouse, limiting the thermal inversion phenomenon characteristic of the DC scenario.
(4) Any modification to the cultivation system under screenhouse structures generates both positive and negative effects on the microclimate; therefore, these modifications should not be made following the farmers' criteria alone. For future studies, starting from the validated CFD model provided by this investigation, it is recommended to include other variables of interest, such as different commercial anti-insect meshes with defined aerodynamic properties, evaluations at shorter time scales that allow different meteorological conditions to be simulated for day and night, and evaluations with some type of crop or another geometric configuration of the screenhouse. On the experimental side, it is advisable to validate the flow patterns through sonic anemometry.
The authors wish to thank Corporación Colombiana de Investigación Agropecuaria (AGROSAVIA) and Instituto Nacional de Innovación y Transferencia en Tecnología Agropecuaria de Costa Rica (INTA) for their technical and administrative support in this study. The research was funded by The Regional Fund of Agricultural Research and Technological Development (FONTAGRO) as part of the project "Innovations for horticulture in protected environments in tropical zones: an option for sustainable intensification of family farming in the context of climate change in LAC".
Nomenclature
Screenhouse
Rain configuration
Dry configuration
Observed temperature data (°C)
Simulated temperature data (°C)
Gravitational acceleration (m s-2)
Thermal conductivity (W m-1 K-1)
Normalized wind speed
Mean absolute error (°C)
Mean square error (°C)
Mean absolute percentage error (%)
Semi-Implicit Method for Pressure-Linked Equations (SIMPLE)
Sϕ: source term
Tc*: equivalent temperature of the sky (°C)
Components of speed (m s-1)
User-defined function
Air speed (m s-1)
Roughness length (m)
Greek symbols
Γϕ: diffusion coefficient
Thermal gradient (°C)
β: thermal expansion coefficient (K-1)
Turbulent kinetic energy dissipation rate (m2 s-3)
Dynamic viscosity (kg m-1 s-1)
μt: turbulent viscosity (kg m-1 s-1)
ρ0: density (kg m-3)
Concentration of the transported quantity in dimensional form
[1] Bartzanas, T., Katsoulas, N., Kittas, C. (2012). Solar radiation distribution in screenhouses: A CFD approach. In VII International Symposium on Light in Horticultural Systems, 956: 449-456. https://doi.org/10.17660/ActaHortic.2012.956.52
[2] Teitel, M., Liang, H., Tanny, J., Garcia-Teruel, M., Levi, A., Ibanez, P.F., Alon, H. (2017). Effect of roof height on microclimate and plant characteristics in an insect-proof screenhouse with impermeable sidewalls. Biosystems Engineering, 162: 11-19. https://doi.org/10.1016/j.biosystemseng.2017.07.001
[3] Tanny, J., Cohen, S. (2003). The effect of a small shade net on the properties of wind and selected boundary layer parameters above and within a citrus orchard. Biosystems Engineering, 84(1): 57-67. https://doi.org/10.1016/S1537-5110(02)00233-7
[4] Shahak, Y., Gal, E., Offir, Y., Ben-Yakir, D. (2008, October). Photoselective shade netting integrated with greenhouse technologies for improved performance of vegetable and ornamental crops. In International Workshop on Greenhouse Environmental Control and Crop Production in Semi-Arid Regions, 797: 75-80. https://doi.org/10.17660/ActaHortic.2008.797.8
[5] Manja, K., Aoun, M. (2019). The use of nets for tree fruit crops and their impact on the production: A review. Scientia Horticulturae, 246: 110-122. https://doi.org/10.1016/J.SCIENTA.2018.10.050
[6] Tanny, J. (2013). Microclimate and evapotranspiration of crops covered by agricultural screens: A review. Biosystems Engineering, 114(1): 26-43. https://doi.org/10.1016/j.biosystemseng.2012.10.008
[7] Möller, M., Cohen, S., Pirkner, M., Israeli, Y., Tanny, J. (2010). Transmission of short-wave radiation by agricultural screens. Biosystems Engineering, 107(4): 317-327. https://doi.org/10.1016/j.biosystemseng.2010.09.005
[8] Ilić, Z.S., Milenković, L., Šunić, L., Fallik, E. (2015). Effect of coloured shade‐nets on plant leaf parameters and tomato fruit quality. Journal of the Science of Food and Agriculture, 95(13): 2660-2667. https://doi.org/10.1002/jsfa.7000
[9] Mahmood, A., Hu, Y., Tanny, J., Asante, E.A. (2018). Effects of shading and insect-proof screens on crop microclimate and production: A review of recent advances. Scientia Horticulturae, 241: 241-251. https://doi.org/10.1016/j.scienta.2018.06.078
[10] Teitel, M., Peiper, U.M., Zvieli, Y. (1996). Shading screens for frost protection. Agricultural and Forest Meteorology, 81(3-4): 273-286. https://doi.org/10.1016/0168-1923(95)02321-6
[11] Tanny, J., Pirkner, M., Teitel, M., Cohen, S., Shahak, Y., Shapira, O., Israeli, Y. (2014). The effect of screen texture on air flow and radiation transmittance: laboratory and field experiments. Acta horticulturae, (1015): 45-51.
[12] Tanny, J., Haijun, L., Cohen, S. (2006). Airflow characteristics, energy balance and eddy covariance measurements in a banana screenhouse. Agricultural and Forest Meteorology, 139(1-2): 105-118. https://doi.org/10.1016/j.agrformet.2006.06.004
[13] Pirkner, M., Tanny, J., Shapira, O., Teitel, M., Cohen, S., Shahak, Y., Israeli, Y. (2014). The effect of screen type on crop micro-climate, reference evapotranspiration and yield of a screenhouse banana plantation. Scientia Horticulturae, 180: 32-39. https://doi.org/10.1016/j.scienta.2014.09.050
[14] Cohen, S., Raveh, E., Li, Y., Grava, A., Goldschmidt, E.E. (2005). Physiological responses of leaves, tree growth and fruit yield of grapefruit trees under reflective shade screens. Scientia Horticulturae, 107(1): 25-35. https://doi.org/10.1016/j.scienta.2005.06.004
[15] Desmarais, G., Ratti, C., Raghavan, G.S.V. (1999). Heat transfer modelling of screenhouses. Solar Energy, 65(5): 271-284. https://doi.org/10.1016/S0038-092X(99)00002-X
[16] Siqueira, M.B., Katul, G.G., Tanny, J. (2012). The effect of the screen on the mass, momentum, and energy exchange rates of a uniform crop situated in an extensive screenhouse. Boundary-layer Meteorology, 142(3): 339-363. https://doi.org/10.1007/s10546-011-9682-5
[17] Meneses, J.F., Baptista, F.J., Bailey, B.J. (2007). Comparison of humidity conditions in unheated tomato greenhouses with different natural ventilation management and implications for climate and Botrytis cinerea control. Acta Horticulturae, 801(801): 1013-1020. https://doi.org/10.17660/ActaHortic.2008.801.120
[18] Teitel, M., Garcia-Teruel, M., Ibanez, P.F., Tanny, J., Laufer, S., Levi, A., Antler, A. (2015). Airflow characteristics and patterns in screenhouses covered with fine-mesh screens with either roof or roof and side ventilation. Biosystems Engineering, 131: 1-14. https://doi.org/10.1016/j.biosystemseng.2014.12.010
[19] Villagran, E., Bojaca, C.R. (2019). CFD simulation of the increase of the roof ventilation area in a traditional colombian greenhouse: Effect on air flow patterns and thermal behavior. International Journal of Heat and Technology, 37(3): 881-892. https://doi.org/10.18280/ijht.370326
[20] Mesmoudi, K., Meguellati, K., Bournet, P.E. (2017). Thermal analysis of greenhouses installed under semi arid climate. International Journal of Heat and Technology, 35(3): 474-486. https://doi.org/10.18280/ijht.350304
[21] Teitel, M., Garcia-Teruel, M., Alon, H., Gantz, S., Tanny, J., Esquira, I., Soger M., Levi, A., Schwartz, A., Antler, A. (2014). The effect of screenhouse height on air temperature. Acta Horticulturae, 1037: 517-523. https://doi.org/10.17660/ActaHortic.2014.1037.64
[22] Norton, T., Sun, D.W., Grant, J., Fallon, R., Dodd, V. (2007). Applications of computational fluid dynamics (CFD) in the modelling and design of ventilation systems in the agricultural industry: A review. Bioresource Technology, 98(12): 2386-2414. https://doi.org/10.1016/j.biortech.2006.11.025
[23] Bournet, P.E., Boulard, T. (2010). Effect of ventilator configuration on the distributed climate of greenhouses: A review of experimental and CFD studies. Computers and Electronics in Agriculture, 74(2): 195-217. https://doi.org/10.1016/j.compag.2010.08.007
[24] Flores-Velazquez, J., Ojeda, W., Villarreal-Guerrero, F., Rojano, A. (2015). Effect of crops on natural ventilation in a screenhouse evaluated by CFD simulations. In International Symposium on New Technologies and Management for Greenhouses-GreenSys2015, 1170: 95-102. https://doi.org/10.17660/ActaHortic.2017.1170.10
[25] Peel, M.C., Finlayson, B.L., McMahon, T.A. (2007). Updated world map of the Köppen-Geiger climate classification. Hydrology and Earth System Sciences Discussions, 4(2): 439-473. https://doi.org/10.5194/hess-11-1633-2007.
[26] Piscia, D., Montero, J.I., Bailey, B., Muñoz, P., Oliva, A. (2013). A new optimisation methodology used to study the effect of cover properties on night-time greenhouse climate. Biosystems Engineering, 116(2): 130-143. https://doi.org/10.1016/J.BIOSYSTEMSENG.2013.07.005
[27] Drori, U., Dubovsky, V., Ziskind, G. (2005). Experimental verification of induced ventilation. Journal of Environmental Engineering, 131(5): 820-826. https://doi.org/10.1061/(ASCE)0733-9372
[28] Villagrán, E.A., Bojacá, C.R. (2019). Effects of surrounding objects on the thermal performance of passively ventilated greenhouses. Journal of Agricultural Engineering, 50(1): 20-27. https://doi.org/10.4081/jae.2019.856
[29] Villagrán, E.A., Bojacá, C.R. (2019). Determination of the thermal behavior of a Colombian hanging greenhouse applying CFD simulation. Revista Ciencias Técnicas Agropecuarias, 28(3).
[30] Villagrán, E.A., Bojacá, C.R. (2019). Simulacion del microclima en un invernadero usado para la producción de rosas bajo condiciones de clima intertropicaL. Chilean Journal of Agricultural & Animal Sciences, 35(2): 137-150. https://doi.org/10.4067/s0719-38902019005000308
[31] Baxevanou, C., Fidaros, D., Bartzanas, T., Kittas, C. (2018). Yearly numerical evaluation of greenhouse cover materials. Computers and Electronics in Agriculture, 149: 54-70. https://doi.org/10.1016/j.compag.2017.12.006
[32] Nebbali, R., Roy, J.C., Boulard, T. (2012). Dynamic simulation of the distributed radiative and convective climate within a cropped greenhouse. Renewable Energy, 43: 111-129. https://doi.org/10.1016/J.RENENE.2011.12.003
[33] Yu, Y., Xu, X., Hao, W. (2018). Study on the wall optimization of solar greenhouse based on temperature field experiment and CFD simulation. International Journal of Heat and Technology, 36: 847-854. https://doi.org/10.18280/ijht.360310
[34] Villagrán, E.A., Romero, E.J.B., Bojacá, C.R. (2019). Transient CFD analysis of the natural ventilation of three types of greenhouses used for agricultural production in a tropical mountain climate. Biosystems Engineering, 188: 288-304. https://doi.org/10.1016/j.biosystemseng.2019.10.026
[35] Iglesias, N., Montero, J.I., Muñoz, P., Antón, A. (2009). Estudio del clima nocturno y el empleo de doble cubierta de techo como alternativa pasiva para aumentar la temperatura nocturna de los invernaderos utilizando un modelo basado en la Mecánica de Fluidos Computacional (CFD). Horticultura Argentina, 28: 18-23.
[36] Camacho, J.I.M., Muñoz, P., Guerrero, M.S., Cortés, E. M., Piscia, D. (2013). Shading screens for the improvement of the night time climate of unheated greenhouses. Spanish Journal of Agricultural Research, 1: 32-46. https://doi.org/10.5424/sjar/2013111-411-11
[37] Villagrán, E.A., Bojacá, C.R. (2019). Numerical evaluation of passive strategies for nocturnal climate optimization in a greenhouse designed for rose production (Rosa spp.). Ornamental Horticulture, 25(4): 351-364. https://doi.org/10.1590/2447-536X.V25I4.2087
[38] Campen, J.B. (2004). Greenhouse design applying CFD for Indonesian conditions. In International Conference on Sustainable Greenhouse Systems-Greensys2004, 691: 419-424. https://doi.org/10.17660/ActaHortic.2005.691.50
[39] Valera, D.L., Álvarez, A.J., Molina, F.D. (2006). Aerodynamic analysis of several insect-proof screens used in greenhouses. Spanish Journal of Agricultural Research, 4(4): 273-279. https://doi.org/10.5424/sjar/2006044-204
[40] Miguel, A.F., Van de Braak, N.J., Bot, G.P.A. (1997). Analysis of the airflow characteristics of greenhouse screening materials. Journal of Agricultural Engineering Research, 67(2): 105-112. https://doi.org/10.1006/jaer.1997.0157
[41] Teitel, M. (2007). The effect of screened openings on greenhouse microclimate. Agricultural and Forest Meteorology, 143(3-4): 159-175. https://doi.org/10.1016/j.agrformet.2007.01.005
[42] Flores-Velazquez, J., Montero, J.I. (2008). Computational fluid dynamics (CFD) study of large scale screenhouses. In International Workshop on Greenhouse Environmental Control and Crop Production in Semi-Arid Regions, 797: 117-122. https://doi.org/10.17660/ActaHortic.2008.797.14
[43] Bournet, P.E., Khaoua, S.O., Boulard, T. (2007). Numerical prediction of the effect of vent arrangements on the ventilation and energy transfer in a multi-span glasshouse using a bi-band radiation model. Biosystems Engineering, 98(2): 224-234. https://doi.org/10.1016/j.biosystemseng.2007.06.007
[44] Tominaga, Y., Mochida, A., Yoshie, R., Kataoka, H., Nozu, T., Yoshikawa, M., Shirasawa, T. (2008). AIJ guidelines for practical applications of CFD to pedestrian wind environment around buildings. Journal of Wind Engineering and Industrial Aerodynamics, 96(10-11): 1749-1761. https://doi.org/10.1016/j.jweia.2008.02.058
[45] ANSYS Fluent, V.18.0. Ansys Fluent Tutorial Guide. http://users.abo.fi/rzevenho/ansys%20fluent%2018%20tutorial%20guide.pdf, accessed on Nov. 21, 2019.
[46] Zhang, X., Wang, H., Zou, Z., Wang, S. (2016). CFD and weighted entropy based simulation and optimisation of Chinese Solar Greenhouse temperature distribution. Biosystems Engineering, 142: 12-26. https://doi.org/10.1016/j.biosystemseng.2015.11.006
[48] Richards, P.J., Hoxey, R.P. (1993). Appropriate boundary conditions for computational wind engineering models using the k-ϵ turbulence model. Journal of Wind Engineering and Industrial Aerodynamics, 46: 145-153. https://doi.org/10.1016/B978-0-444-81688-7.50018-8
[49] Villagrán, E.A., Bojacá, C. (2019). Study of natural ventilation in a Gothic multi-tunnel greenhouse designed to produce rose (Rosa spp.) in the high-Andean tropic. Ornamental Horticulture, 25(2): 133-143. https://doi.org/10.14295/oh.v25i2.2013
[50] Ramponi, R., Blocken, B. (2012). CFD simulation of cross-ventilation for a generic isolated building: impact of computational parameters. Building and Environment, 53: 34-48. https://doi.org/10.1016/j.buildenv.2012.01.004
[51] Ali, H.B., Bournet, P.E., Cannavo, P., Chantoiseau, E. (2018). Development of a CFD crop submodel for simulating microclimate and transpiration of ornamental plants grown in a greenhouse under water restriction. Computers and Electronics in Agriculture, 149: 26-40. https://doi.org/10.1016/J.COMPAG.2017.06.021
[52] Tanny, J., Teitel, M., Barak, M., Esquira, Y., Amir, R. (2007). The effect of height on screenhouse microclimate. In International Symposium on High Technology for Greenhouse System Management: Greensys2007, 801: 107-114. https://doi.org/10.17660/ActaHortic.2008.801.6
[53] Flores-Velazquez, J., Villarreal Guerrero, F., Lopez, I.L., Montero, J.I., Piscia, D. (2012, July). 3-Dimensional thermal analysis of a screenhouse with plane and multispan roof by using computational fluid dynamics (CFD). In Ist International Symposium on CFD Applications in Agriculture, 1008: 151-158. https://doi.org/10.17660/ActaHortic.2013.1008.19 | CommonCrawl |
Endurance running exercise is an effective alternative to estradiol replacement for restoring hyperglycemia through TBC1D1/GLUT4 pathway in skeletal muscle of ovariectomized rats
Mizuho Kawakami1,
Naoko Yokota-Nakagi1,
Akira Takamata1 &
Keiko Morimoto1
The Journal of Physiological Sciences volume 69, pages 1029–1040 (2019)
Menopause is a risk factor for impaired glucose metabolism, and alternative treatments to estrogen replacement for postmenopausal women are required. The present study was designed to investigate the effects of 5 weeks of endurance running exercise (Ex) on a treadmill on hyperglycemia and on the signaling pathway components mediating glucose transport in ovariectomized (OVX), placebo-treated rats, compared with 4 weeks of 17β-estradiol (E2) replacement or pair-feeding (PF) to the E2 group. Ex improved hyperglycemia and the insulin resistance index in OVX rats as much as E2 or PF did. However, Ex had no effect on body weight gain in the OVX rats. Moreover, Ex enhanced the levels of GLUT4 and phospho-TBC1D1 proteins in the gastrocnemius of the OVX rats, whereas E2 and PF did not. Instead, E2 replacement increased Akt2/AS160 expression and activation in the OVX rats. This study suggests that endurance Ex training ameliorated hyperglycemia through the TBC1D1/GLUT4 pathway in muscle, by a mechanism alternative to E2 replacement.
Postmenopausal women are at higher risk for metabolic disorders, such as metabolic syndrome and type 2 diabetes, than premenopausal women [1, 2]. Because estrogens play an important role in the control of energy homeostasis in females, estrogen deficiency in the menopausal state is associated with visceral fat accumulation [3, 4], impaired glucose tolerance, and insulin resistance. Similarly, ovariectomized (OVX) rats, an animal model widely used for studying the pathology of human menopause, develop increased body weight, visceral fat accumulation, and impaired whole-body glucose homeostasis [5, 6]. Recently, we found that 17β-estradiol (E2) replacement restored the impairment of insulin sensitivity by increasing the activation of the insulin signaling pathway in the gastrocnemius muscle of OVX rats [7]. These findings suggest that E2 replacement restores glucose metabolism through its direct action in OVX rats. In addition, the inhibitory effect of estrogen against abdominal obesity may be partly associated with restoration of insulin sensitivity, since visceral fat accumulation contributes to glucose intolerance [2, 8].
Estrogen replacement in postmenopausal women is usually performed in combination with progesterone, a treatment known as hormone replacement therapy (HRT). The metabolic impact of HRT varies depending on the dose of the estrogen component, the type of progesterone, and the route of administration [10,11,12]. Previous studies have reported that HRT exerts a beneficial effect on glucose metabolism [9]; however, it can deteriorate insulin sensitivity, an effect attributed to progesterone or to high doses of estrogen [10,11,12]. Additionally, the general efficacy and safety of HRT is controversial because of the associated risks, including stroke and coronary heart disease, as well as an elevated risk of breast cancer, which were increased in the HRT trials performed by the Women's Health Initiative [13]. Therefore, it is essential to develop alternative treatments that reproduce the positive effects of estrogen on glucose metabolism.
Several human studies show that aerobic exercise (Ex) is insulin-sensitizing and that training is an effective substitute or adjunct for HRT [14, 15]. As evidenced in rodent studies, Ex training initiated at the onset of OVX maintained normal skeletal muscle glucose uptake, prevented visceral adipose accretion, and improved whole-body glucose tolerance in OVX rats [16, 17]. However, to our knowledge, mechanisms underlying the abilities of Ex training to improve glucose metabolism under reduced estrogen function are not fully understood.
Skeletal muscle is the major tissue responsible for glucose uptake from the blood, accounting for 70–85% of whole-body glucose disposal [18]. Insulin and Ex/muscle contraction are two widely studied physiological stimuli that increase glucose uptake via the activation of intracellular signaling cascades [19,20,21]. The signaling mechanism by which insulin stimulates muscle glucose uptake is relatively well known, and involves phosphorylation of protein kinase B (Akt) and of the Rab-GTPase-activating protein Akt substrate of 160 kDa (AS160) [22, 23]. In contrast, the signaling mechanism by which Ex acts is not fully understood, although studies have shown that activation of AMP-activated protein kinase (AMPK), an energy-sensing kinase, is positively correlated with increases in muscle glucose uptake [24]. Furthermore, the downstream regulators of AMPK are still debated, although AS160 (also known as TBC1D4) and TBC1 (Tre-2, BUB2, CDC16) domain family member 1 (TBC1D1), another Rab-GTPase-activating protein related to AS160, have been reported as glucose uptake regulators during Ex/muscle contraction [19, 21, 25].
It is important to define how the molecular mechanisms underlying the beneficial effects of Ex training on glucose uptake in the muscle of OVX rats differ from those of E2 replacement, given that Ex is a critical alternative to estrogen replacement [14, 16]. Recently, several researchers have reported the effects of Ex training on glucose transporter 4 (GLUT4), Akt protein, or mRNA levels in OVX rats [17, 26, 27], but those findings were inconsistent. In this study, we focused on the effects of Ex training on the signaling pathway components that mediate glucose uptake in skeletal muscle and adipose tissue of OVX rats, because our previous study showed that the beneficial effects of E2 replacement on insulin sensitivity were mediated by enhanced activation of the Akt2/AS160 pathway in the gastrocnemius muscle, but not in the liver [7].
In addition, it remains unclear whether estrogen reduction in the menopausal phase directly impairs the glucose uptake mechanism [28, 29], or whether estrogen deficiency-induced hyperphagia induces visceral fat accumulation, which promotes insulin resistance and thereby impairs glucose uptake as an indirect consequence of estrogen deficiency [30]. A previous study reported that pair-feeding (PF) with sham-operated female rats failed to improve insulin action at the whole-body or skeletal muscle level in OVX rats, suggesting that ovarian hormone deprivation is directly involved in the progression of insulin resistance [31]. Therefore, as the second aim of this study, we also examined the effects of PF on plasma glucose levels and on the signaling pathway components that mediate glucose transport in OVX rats fed the same diet as the E2-replaced OVX rats. This experiment may answer the above-mentioned question of whether estrogen directly restores glucose metabolism, or whether estrogen-induced anorexia and the subsequent leanness prevent its deterioration. To our knowledge, the present study provides the first data simultaneously showing the effects of Ex training, E2 replacement, and PF on insulin-dependent and insulin-independent signaling pathways in the muscle and adipose tissue of OVX rats.
This study was designed to test two hypotheses. The first was whether Ex training in the form of endurance running improves hyperglycemia and the insulin resistance index, in the basal condition without muscle contraction, through the AMPK-TBC1D1/GLUT4 pathway, which differs from the pathway activated by E2 replacement in the skeletal muscle of OVX rats. The second was whether E2 directly restores glucose metabolism, or whether E2-induced anorexia and the subsequent leanness prevent its deterioration in OVX rats.
The Nara Women's University Committee on Animal Experiments approved the experimental protocol. In total, 24 female Wistar rats were used in this study. The rats were housed in standard rat cages (length: 40 cm, width: 25 cm, and depth: 25 cm) under controlled temperature and light conditions (26 ± 1 °C, a 12:12-h light–dark cycle, with lights on at 6:00 a.m.). Tap water and rodent chow (Oriental Yeast, Tokyo, Japan) were provided ad libitum.
Preparation for experiments
Ovariectomy and E2 (or placebo) replacement
Nine-week-old female rats were ovariectomized, followed by E2 or placebo (Pla) replacement as previously described [7, 32, 33]. In brief, after a 4-week-recovery period from OVX, the rats aged 13 weeks were assigned randomly to either the Pla (n = 18)- or the E2 (n = 6)-treated group, and were subcutaneously implanted with either E2 (1.5 mg/60-day release) or Pla pellets (Innovative Research of America, Sarasota, FL, USA). The Pla group rats were divided into control (Pla; n = 6), PF (Pla/PF; n = 6), and Ex (Pla/Ex; n = 6) groups.
Experimental protocols
PF study
Two days after Pla replacement, the Pla/PF group was pair-fed to the E2 group, i.e., given the average food intake of the E2 group on the previous day, from 13 to 17 weeks of age. Food intake and body weight were monitored daily.
Endurance running Ex training
Before the Ex training protocol, the Pla/Ex group rats were familiarized with Ex by running at 10 m/min for 30 min/day on a custom-built, five-lane motorized rodent treadmill (KN-73, Natume, Tokyo, Japan) in the hours before dark for 2 weeks, from 10 to 12 weeks of age, during which the intensity of Ex was gradually increased. From 12 to 17 weeks of age, the rats performed treadmill running at 17 m/min for 60 min/day, 5 days/week, for 5 weeks. The intensity of this running Ex can be considered moderate, as previous researchers have estimated that running at 28 m/min (high intensity) or 8 m/min (low intensity) elicits ~75% or ~45% of maximal O2 uptake in female rats, respectively [34, 35].
Sampling for estimation of plasma glucose, insulin, and signaling pathway
All the rats fasted for 16 h before blood and tissue sampling, with free access to water. On the day of sampling, after the rats were deeply anesthetized with pentobarbital sodium (45 mg/kg body weight) [36], blood samples were collected by cardiac puncture in the four groups. After euthanasia, the gastrocnemius muscles and mesenteric adipose tissues were excised, immediately frozen in liquid nitrogen, and stored at −50 °C until Western blotting. Parts of these tissues were stored in RNA stabilization solution until RT-qPCR analysis for AS160 and GLUT4 mRNAs was performed. The wet weights of the intra-abdominal (mesenteric, kidney-genital, and retroperitoneal) and subcutaneous (inguinal) adipose tissues were measured. The total visceral fat weight was calculated as the sum of the intra-abdominal fat weights.
Analytical methods for plasma glucose, insulin, and E2
The plasma glucose concentration was measured by a glucose oxidase method using a glucose assay kit (Wako Pure Chemical Industries, Osaka, Japan). The plasma insulin concentration was determined using a rat insulin ELISA kit (FUJIFILM Wako Shibayagi, Gunma, Japan). Using these variables, insulin resistance was assessed by the homeostasis model assessment of insulin resistance (HOMA-IR) index, calculated using the following formula [37,38,39]:
$${\text{HOMA-IR}} = \text{fasting glucose concentration (mmol/l)} \times \text{fasting insulin concentration } (\mu\text{IU/ml}) / 22.5$$
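A minimal sketch of this calculation is given below; the fasting values shown are hypothetical and are not data from the study.

```python
def homa_ir(glucose_mmol_l: float, insulin_uIU_ml: float) -> float:
    """HOMA-IR = fasting glucose (mmol/l) x fasting insulin (uIU/ml) / 22.5."""
    return glucose_mmol_l * insulin_uIU_ml / 22.5

# Hypothetical fasted values for a single rat
print(round(homa_ir(glucose_mmol_l=6.5, insulin_uIU_ml=8.0), 2))  # -> 2.31
```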
The E2 concentrations were measured commercially by an electro-chemiluminescence immunoassay (SRL Co, Nara, Japan).
Immunoblotting
Isolated muscle and mesenteric adipose tissue were immediately homogenized in homogenization buffer [320 mM sucrose; 10 mM Tris·HCl, pH 7.4; 1 mM EGTA; 10 mM β-mercaptoethanol; 50 mM NaF; 10 mM Na3VO4; 9 tablets of cOmplete EDTA-free protease inhibitor cocktail containing 0.2 mM PMSF, 20 μM leupeptin, and 0.15 μM pepstatin (Roche, Mannheim, Germany); 1% Triton X-100], as described previously [7]. The homogenates were centrifuged at 15,000g for 30 min at 4 °C. SDS samples containing equal amounts of protein were separated by SDS-PAGE on 10% polyacrylamide gels and immunoblotted on PVDF membranes (GE Healthcare, Buckinghamshire, UK) with the following antibodies: antibodies against Akt, phospho (p)-Akt Ser473, p-Akt Thr308, Akt2, p-Akt2 Ser474, AMPKα, p-AMPKα Thr172, and p-AS160 Thr642 were from Cell Signaling Technology (Danvers, MA, USA); the AS160 and p-TBC1D1 Ser237 antibodies were from Millipore (Temecula, CA, USA); and the GLUT4, TBC1D1, and tubulin antibodies were from Abcam (Cambridge, MA, USA). Goat anti-rabbit horseradish peroxidase-conjugated secondary antibody was obtained from Promega (Madison, WI, USA). The enhanced chemiluminescence (ECL, GE Healthcare Life Sciences, Buckinghamshire, UK) system was used for protein detection. Imaging and densitometry were performed using the Ez-Capture imaging system (ATTO, Tokyo, Japan) and the CS Analyzer image processing program (ATTO, Tokyo, Japan).
RNA isolation and RT-qPCR
Total RNA was extracted using the TRI Reagent Solution (Ambion, Austin, TX, USA) according to the manufacturer's protocol. The amount of total RNA extracted was determined, and its purity (absorption ratio of optical density 260 nm and 280 nm > 1.9) was verified spectrophotometrically using a Nanodrop 2000 (Thermo Fisher Scientific, Waltham, MA, USA). The cDNA was synthesized using the High-Capacity RNA-to-cDNA kit (Applied Biosystems, Waltham, MA, USA). RT-qPCR was performed using a StepOne Software v2.1 system (Applied Biosystems). The commercially available TaqMan Gene Expression Assay (Applied Biosystems) for AS160 (Rn01468356_m1), GLUT4 (Rn00562597_m1), and β-2M (Rn00560856_m1) were used in this study. For the analysis, gene expression levels of AS160 were normalized using β-2M as a housekeeping gene, and expressed with respect to the average value for the Pla group. All reactions were performed in duplicate. The thermal cycling conditions were as follows: 95 °C for 20 s, followed by 40 cycles at 95 °C for 1 s and 60 °C for 20 s. No amplification of fragments occurred in the control samples without reverse transcriptase. The mRNA quantity was calculated using the ΔΔCt (comparative Ct) method under the assumption that primer efficiencies were relatively similar.
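For clarity, the comparative Ct calculation described above can be sketched as follows; the Ct values are hypothetical, and the Pla-group averages serve as the calibrator, as stated in the text.

```python
def relative_expression(ct_target, ct_housekeeping, ct_target_cal, ct_housekeeping_cal):
    """Comparative Ct method: 2^(-ddCt), normalizing to the housekeeping gene (beta-2M)
    and expressing the result relative to a calibrator (here, the Pla-group average)."""
    d_ct = ct_target - ct_housekeeping
    d_ct_cal = ct_target_cal - ct_housekeeping_cal
    dd_ct = d_ct - d_ct_cal
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: AS160 and beta-2M in one E2 sample vs. the Pla-group averages
print(round(relative_expression(24.1, 18.0, 25.0, 18.2), 2))  # -> ~1.62
```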
All values were expressed as means ± SE. Two-way repeated-measures ANOVA for each pair-wise comparison among four groups was used to analyze the effects of E2, PF, and Ex on body weight and food intake. One-way ANOVA was used for the comparison of the adipose tissue weight, plasma E2 and glucose concentrations, insulin concentrations, HOMA-IR, and signaling protein and mRNA levels among the four groups, and was followed by a post hoc Tukey's HSD test. We considered a value of P < 0.05 to be statistically significant.
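As an illustration of a one-way ANOVA followed by post hoc Tukey's HSD comparisons (a sketch using SciPy and statsmodels, not the authors' actual analysis), such a test might be run as shown below; the fasting glucose values are hypothetical.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical fasting plasma glucose values (mmol/l), n = 6 per group
pla    = [7.1, 7.4, 6.9, 7.2, 7.6, 7.0]
e2     = [6.0, 6.2, 5.8, 6.1, 6.3, 5.9]
pla_pf = [6.2, 6.4, 6.1, 6.0, 6.5, 6.3]
pla_ex = [6.1, 6.3, 6.0, 6.2, 6.4, 5.9]

f_stat, p_value = stats.f_oneway(pla, e2, pla_pf, pla_ex)   # one-way ANOVA
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")

values = np.concatenate([pla, e2, pla_pf, pla_ex])
groups = ["Pla"] * 6 + ["E2"] * 6 + ["Pla/PF"] * 6 + ["Pla/Ex"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))        # post hoc Tukey's HSD
```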
Characterization of rats studied
As shown in Fig. 1a, food intake in the E2 group was markedly decreased at 14 and 15 weeks of age, 1–2 weeks after E2 pellet implantation, compared with that at 13 weeks (P < 0.001) or with the Pla group (P < 0.001 and P < 0.01, respectively). Thereafter, the intake in the E2 group became similar to that in the Pla group at 16 weeks of age. In contrast, food intake in the Pla/Ex group was increased at 15 weeks of age compared with 14 weeks (P < 0.05), and then returned to the same level as the Pla group.
Characterization of rats studied. Data are expressed as means ± SE. Line graphs represent course of change in mean food intake per day (a) and body weight (b) in the placebo (Pla, n = 6)-, the 17β-estradiol (E2, n = 6)-treated, the placebo/pair-feeding (Pla/PF, n = 6), and the placebo/exercise (Pla/Ex, n = 6) groups. Two-way repeated-measures ANOVA revealed significant differences in food intake and body weight between the four groups. **P < 0.01, ***P < 0.001: E2 vs. Pla. +P < 0.05, ++P < 0.01, +++P < 0.001: Pla/PF vs. Pla. φP < 0.05: Pla/Ex vs. Pla. †P < 0.05: Pla/PF vs. E2. ###P < 0.001: Pla/Ex vs. E2. §P < 0.05: Pla/Ex vs. Pla/PF. There was an interaction of time and group effects in food intake (PTime×Group < 0.05: E2 vs. Pla or Pla/Ex, Pla/Ex vs. Pla) and body weight (PTime×Group < 0.05: E2 vs. Pla/PF or Pla/Ex, PTime×Group < 0.01, E2 vs. Pla, Pla/PF vs. Pla or Pla/Ex). Bar graphs represent wet weights of visceral (the sum of weights of the mesenteric, kidney-genital, and retroperitoneal adipose tissues) (c), inguinal (d) adipose tissues per body weights, and plasma E2 concentration (e) in the Pla (n = 6)-, the E2 (n = 6)-treated, the Pla/PF (n = 6), and the Pla/Ex (n = 6) groups at 17 weeks of age. One-way ANOVA followed by a post hoc Tukey's HSD test revealed differences in wet weights of the visceral adipose tissues per body weights between the Pla and E2 or Pla/PF groups (***P < 0.001), and inguinal adipose tissues between the Pla and every other group (***P < 0.001). There is a difference in plasma E2 concentration between the E2 and every other group (***P < 0.001). OVX, ovariectomy. BW body weight
The body weight in the E2 group was significantly lighter than that in the Pla group at 15–17 weeks of age (Fig. 1b). In contrast, the Pla/PF group showed heavier body weight than the E2 group, resulting in a significant difference in the time course of body weight between the E2 and Pla/PF groups (interaction: P < 0.05), though they were still lighter than those in the Pla group. In addition, body weights in the Pla/Ex group were similar to the Pla group, but heavier than both the E2 and Pla/PF groups (Fig. 1b).
The wet weights of total visceral (the sum of mesenteric, kidney-genital, and retroperitoneal) adipose tissues per body weights were significantly lighter in the E2 and Pla/PF groups than in the Pla group (Fig. 1c). The weights of inguinal subcutaneous adipose tissues per body weights were significantly lighter in the E2, Pla/PF, and Pla/Ex groups than the Pla group (Fig. 1d). Plasma E2 concentrations were significantly higher in the E2 group than in the other Pla groups (Fig. 1e).
Effects of E2, PF, and Ex on plasma glucose, insulin, and HOMA-IR
Fasting plasma glucose concentration was significantly lower in the E2, Pla/PF, and Pla/Ex groups than in the Pla group (Fig. 2a). In contrast, there was no significant difference in fasting plasma insulin among the Pla, E2, Pla/PF, and Pla/Ex groups (Fig. 2b). HOMA-IR indices were significantly lower in the E2, Pla/PF, and Pla/Ex groups than in the Pla group (Fig. 2c).
Plasma concentrations of glucose (mmol/l) (a), insulin (μIU/ml) (b), and homeostasis model assessment of insulin resistance (HOMA-IR) index (c) in the placebo (Pla, n = 6)-, the 17β-estradiol (E2, n = 6)-treated, the placebo/pair-feeding (Pla/PF, n = 6), and the placebo/exercise (Pla/Ex, n = 6) groups. Data are expressed as means ± SE and were analyzed by one-way ANOVA. This was followed by a post hoc Tukey's HSD test. *P < 0.05, **P < 0.01, and ***P < 0.001, differences between the Pla and every other group
Effects of E2, PF, and Ex on insulin signaling and AMPK pathway in basic condition
To reveal the molecular mechanism accounting for the effects of E2, PF, and Ex on plasma glucose and insulin, we investigated signaling pathway components mediating glucose transport, the Akt/AS160, and AMPK/TBC1D1 pathways, as well as GLUT4, in the gastrocnemius muscle (Fig. 3) and mesenteric adipose tissue (Fig. 4).
Representative blots and relative values of protein kinase B (Akt) and phospho (p)-Akt Ser473, and p-Akt Thr308 (a), Akt2, and p-Akt2 Ser474 (b), Akt substrate of 160 kDa (AS160) and p-AS160 Thr642 (c), AMPKα and p-AMPKα Thr172 (d), TBC1D1 and p-TBC1D1 Ser237 (e), and GLUT4 (f) in the gastrocnemius of rats in the placebo (Pla, n = 6)-, the 17β-estradiol (E2, n = 6)-treated, the placebo/pair-feeding (Pla/PF, n = 6), and the placebo/exercise (Pla/Ex, n = 6) groups. Data are expressed as means ± SE and were analyzed by one-way ANOVA. This was followed by a post hoc Tukey's HSD test. *P < 0.05, **P < 0.01, and ***P < 0.001, differences between the two groups
Representative blots and relative values of protein kinase B (Akt) and phospho (p)-Akt Ser473, and p-Akt Thr308 (a), Akt2 and p-Akt2 Ser474 (b), Akt substrate of 160 kDa (AS160) and p-AS160 Thr642 (c), AMPKα and p-AMPKα Thr172 (d), and TBC1D1 and p-TBC1D1 Ser237 (e) in the mesenteric adipose tissues of rats in the placebo (Pla, n = 6)-, the 17β-estradiol (E2, n = 6)-treated, the placebo/pair-feeding (Pla/PF, n = 6), and the placebo/exercise (Pla/Ex, n = 6) groups. Data are expressed as means ± SE and were analyzed by one-way ANOVA. This was followed by a post hoc Tukey's HSD test. *P < 0.05, **P < 0.01, and ***P < 0.001, differences between the two groups
The quantity of Akt protein in the muscle was similar among the four groups (Fig. 3a). The relative levels of p-Akt Ser473 and p-Akt Thr308 were significantly higher in the E2 group than in the Pla and Pla/Ex groups, but did not differ between the Pla and Pla/Ex groups. In addition, p-Akt Thr308 was higher in the Pla/PF group than in the Pla and Pla/Ex groups. Figure 3b shows that Akt2 and p-Akt2 Ser474 protein levels in the muscle were increased in the E2 group compared with the Pla group (P < 0.01 and P < 0.001, respectively). In contrast, PF increased only p-Akt2 Ser474 (P < 0.001), and Ex had no effect on Akt2 or p-Akt2 Ser474. Furthermore, Fig. 3c shows that AS160 protein levels were increased in the E2 group compared with the Pla, Pla/PF, and Pla/Ex groups, and p-AS160 Thr642 was increased compared with the Pla group. Moreover, p-AMPKα Thr172 in the muscle was increased in the E2 group compared with the Pla group, with no change in the AMPKα protein level (Fig. 3d). Interestingly, p-TBC1D1 Ser237 in the Pla/Ex group was higher than in the Pla, E2, and Pla/PF groups, with no differences in TBC1D1 protein levels among the four groups (Fig. 3e). In addition, the GLUT4 protein level was significantly higher in the Pla/Ex group than in any other group (Fig. 3f).
In the mesenteric adipose tissue, the amounts of Akt and Akt2 proteins, as well as their phosphorylated protein levels, were similar among the four groups (Fig. 4a, b). AS160 and p-AS160 Thr642 protein levels were increased in the E2 group compared with the Pla and Pla/Ex groups (Fig. 4c). The p-AMPKα Thr172 levels were higher in the E2, Pla/PF, and Pla/Ex groups than in the Pla group, with no differences in AMPKα protein levels among the four groups (Fig. 4d). In contrast, TBC1D1 and p-TBC1D1 were not different among the four groups (Fig. 4e). GLUT4 was not detected in the mesenteric adipose tissue of any group.
AS160 and GLUT4 mRNA levels in the gastrocnemius muscle and mesenteric adipose tissue
The levels of AS160 and GLUT4 mRNAs in the gastrocnemius muscle or mesenteric adipose tissue of the four groups were determined by RT-qPCR. As shown in Fig. 5a, the relative level of AS160 mRNA in the muscle was higher in the E2 group than those in the Pla (P < 0.01), Pla/PF, and Pla/Ex groups. In contrast, the relative GLUT4 mRNA in the muscle and AS160 mRNA levels in the mesenteric adipose tissues were similar among the four groups (Fig. 5b, c).
The relative values of Akt substrate of 160 kDa (AS160) (a) and GLUT4 (b) mRNA levels in the gastrocnemius, and AS160 mRNA levels in the mesenteric adipose tissue (c) of rats in the placebo (Pla, n = 6)-, the 17β-estradiol (E2, n = 6)-treated, the placebo/pair-feeding (Pla/PF, n = 6), and the placebo/exercise training (Pla/Ex, n = 6) groups. Data are evaluated as \(2^{{ - \Delta \Delta C_{\text{t}} }}\) using β-2M as a housekeeping gene and expressed as means ± SE. One-way ANOVA followed by a post hoc Tukey's HSD test. **P < 0.01 and ***P < 0.001, difference between the two groups
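For readers unfamiliar with the \(2^{{ - \Delta \Delta C_{\text{t}} }}\) notation in the caption above, the sketch below shows how such relative mRNA levels are typically computed (the standard Livak method, here with β-2M as the reference gene); the Ct values are hypothetical and are not the study data.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA level by the 2^(-ddCt) (Livak) method.

    dCt = Ct(target) - Ct(reference gene); ddCt = dCt(sample) - dCt(control group).
    """
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values (e.g., AS160 vs. beta-2M), for illustration only
print(relative_expression(24.1, 18.0, 25.0, 18.2))  # > 1 means higher than control
```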
The present study demonstrated that endurance running Ex training improved hyperglycemia through activation of the TBC1D1/GLUT4 pathway in the muscle of OVX rats. This mechanism differed from that of E2 replacement, which ameliorated hyperglycemia via activation of the Akt2/AS160 pathway in the muscle, and from that of PF matched to the intake of the E2-replaced rats.
Endurance Ex training did not affect body weight gain in the OVX rats despite a decrease in inguinal fat accumulation. It is likely that Ex training increased lean body mass rather than subcutaneous adipose tissue. In contrast, E2 replacement suppressed body weight compared with the OVX rats by reducing both visceral and inguinal fat accumulation. In addition, PF only partially accounted for the suppressive effect of E2 replacement on body weight gain in the OVX rats. In our previous studies using a radiotelemetry system [32, 33], we confirmed that the 24-h locomotor activities of freely moving rats did not differ between the Pla and E2 groups (24-h average: 2.40 ± 0.39 counts/min vs. 2.31 ± 0.14 counts/min in the Pla and E2 groups, respectively). Further study is required to confirm the locomotor activity of rats in the PF or Ex group. Therefore, E2 replacement may suppress body weight gain not only by reducing energy intake, but also by enhancing energy metabolism in OVX rats. These findings are at least partially consistent with several previous studies that demonstrated a direct effect of estrogen on energy metabolism [40,41,42], and with other studies showing that the anorexigenic effect of estrogen was a major contributor to the suppression of adiposity and body weight [43, 44].
The present study shows that 4-week E2 replacement or 5-week Ex training in OVX rats reduced the basal level of plasma glucose without affecting plasma insulin levels. This result was inconsistent with previously reported findings that resting basal levels of both insulin and glucose were not different among OVX, E2-treated, and endurance Ex-trained OVX rats [17, 27]. In contrast, our previous study using male rats showed that the resting levels of blood glucose in Ex-trained rats were lower than those in untrained rats [45]. These discrepancies may depend on experimental conditions: notably, intensity and duration of Ex training, conditions for blood sampling, dose of estrogen replacement, or period after OVX. In our study design, an intensity of the Ex training on a treadmill (17 m/min) might be moderate, because previous investigations have chosen low-intensity (8 m/min) or high-intensity (28 m/min) treadmill running to train female Sprague-Dawley rats [35] based on the finding that a running speed at 8 m/min and 28 m/min in female rats elicited ~ 45% and ~ 75% of maximal O2 uptake, respectively [34]. Additionally, in this study, blood was collected under 16-h fasting conditions from cardiac puncture 4 weeks after E2 replacement and 5 weeks after Ex training started in the OVX rats. Therefore, the duration of each intervention and the moderate intensity of Ex training may be appropriate to cause differences in basal plasma glucose levels.
In our study design, a 3-week-recovery duration was required after OVX and before the Ex training to achieve stable low levels of plasma E2. This was needed to evaluate the effects of Ex training in the OVX rats characterized by low plasma E2 levels, similar to postmenopausal women. Therefore, the present results suggest that Ex training can restore the developed hyperglycemia in the OVX rats. These findings showed the effectiveness of Ex training as an alternative treatment for postmenopausal women. In contrast, rats in the E2 group were administered E2 replacement for 4 weeks after a 4-week-recovery period from OVX to ensure that the plasma levels were stabilized at moderately high levels of E2 (136.9 ± 25.4 pg/ml), as seen in a postmenopausal model replaced by E2, which were within the physiological range for intact female rats in proestrus reported in previous studies [46, 47].
To assess the anorexigenic effect of E2 replacement on glucose homeostasis, we included a Pla/PF group of rats in our experiments. Food restriction by PF in the Pla/PF group ameliorated hyperglycemia in the OVX rats, but failed to mimic the effects of E2 replacement on signal pathway components mediating glucose transport. E2 increased Akt2 and AS160 protein levels, their phosphorylation, and AS160 mRNA level, but PF increased only phospho-Akt2. These findings show that the effects of E2 replacement on the transcriptional upregulation of AS160 were not mediated by PF-induced metabolic changes in OVX rats, suggesting direct E2 action, most likely via the estrogen receptor. On the other hand, a previous study reported that even in obese male Zucker rats, food restriction throughout the first year of life did not alter the development of hyperplastic obesity and insulin resistance [48].
Our study did not determine how OVX induces glucose intolerance, as our experiment did not include a group of sham-operated rats. However, the fact that E2 replacement restored the Akt2/AS160 pathway suggests that OVX impairs the signal pathway that mediates glucose transport. Unlike E2 replacement, Ex had no activating effect on the Akt/AS160 pathway in the OVX rats. Alternatively, the present study revealed that Ex training enhanced the TBC1D1/GLUT4 pathway in the muscle of the OVX rats, and improved hyperglycemia similar to E2 replacement.
Recent studies have reported the effects of Ex training on the signal pathway components, especially GLUT4 in OVX rats [17, 26, 27]. These findings were inconsistent, because it was reported that chronic Ex increased the GLUT4 protein levels of skeletal muscles from OVX rats [17], that Ex reduced the mRNA expression of GLUT4 in gastrocnemius [27], or that it had no effects on GLUT4 protein level in hindlimb muscles of OVX rats [26].
Here, we have provided evidence for the first time that Ex training enhances basal levels of phosphorylated TBC1D1 Ser237, as well as GLUT4 protein, in the gastrocnemius muscle of OVX rats (18 h after final training session). Actually, TBC1D1 abundances did not differ from AS160 among multiple rat muscles with divergent fiber type profiles, including the soleus, EDL, and tibialis anterior muscles [19].
It remains unclear why GLUT4 protein was increased in the Pla/Ex group without an increase in mRNA levels. There is some controversy as to the mechanism of the Ex-induced increase in GLUT4 protein; however, the majority of studies reported increases in GLUT4 protein levels rather than in mRNA levels. Gurley et al. reported that voluntary wheel running Ex increased muscle GLUT4 protein levels and improved fasting plasma insulin, but did not increase muscle GLUT4 mRNA, in high-fat diet-induced obese mice, suggesting that a post-transcriptional mechanism regulates muscle GLUT4 protein expression in response to Ex [49]. Similarly, a post-transcriptional mechanism might explain our finding of an Ex training-induced increase in muscle GLUT4 protein expression in OVX rats. Our data suggest that E2 upregulates AS160 gene expression, most likely through the transcriptional activation function of the estrogen receptor (ER) and at least partially through regulation of mRNA stability [50]. Taken together, the cellular mechanism underlying the beneficial effects of endurance Ex on the plasma glucose level might be distinct from that of E2 replacement.
In summary, this report shows that endurance running Ex training improves OVX-induced hyperglycemia and HOMA-IR, an indicator of insulin resistance, via activation of the TBC1D1/GLUT4 pathway in the gastrocnemius, through a mechanism distinct from that of E2 replacement or a PF diet. Further study is required to identify the effects of endurance Ex training on insulin- and contraction-stimulated glucose uptake and signaling pathways, in comparison with the effects of E2 replacement. Our results provide insights into the alternative effects of endurance Ex training on glucose metabolism under the reduced estrogen function seen in postmenopausal women.
Carr MC (2003) The emergence of the metabolic syndrome with menopause. J Clin Endocrinol Metab 88:2404–2411
Park YW, Zhu S, Palaniappan L, Heshka S, Carnethon MR, Heymsfield SB (2003) The metabolic syndrome: prevalence and associated risk factor findings in the US population from the Third National Health and Nutrition Examination Survey, 1988–1994. Arch Intern Med 163:427–436
Lee CG, Carr MC, Murdoch SJ, Mitchell E, Woods NF, Wener MH, Chandler WL, Boyko EJ, Brunzell JD (2009) Adipokines, inflammation, and visceral adiposity across the menopausal transition: a prospective study. J Clin Endocrinol Metab 94:1104–1110
Tchernof A, Desmeules A, Richard C, Laberge P, Daris M, Mailloux J, Rhéaume C, Dupont P (2004) Ovarian hormone status and abdominal visceral adipose tissue metabolism. J Clin Endocrinol Metab 89:3425–3430
Richard D, Rochon L, Deshaies Y (1987) Effects of exercise training on energy balance of ovariectomized rats. Am J Physiol Regul Integr Comp Physiol 253:R740–R745
Zoth N, Weigt C, Laudenbach-Leschowski U, Diel P (2010) Physical activity and estrogen treatment reduce visceral body fat and serum levels of leptin in an additive manner in a diet induced animal model of obesity. J Steroid Biochem Mol Biol 122:100–105
Kawakami M, Yokota-Nakagi N, Uji M, Yoshida KI, Tazumi S, Takamata A, Uchida Y, Morimoto K (2018) Estrogen replacement enhances insulin-induced AS160 activation and improves insulin sensitivity in ovariectomized rats. Am J Physiol Endocrinol Metab 315:E1296–E1304
Després JP, Lemieux I (2006) Abdominal obesity and metabolic syndrome. Nature 444:881–887
Gower BA, Muñoz J, Desmond R, Hilario-Hailey T, Jiao X (2006) Changes in intra-abdominal fat in early postmenopausal women: effects of hormone use. Obesity 14:1046–1055
Sites CK, L'Hommedieu GD, Toth MJ, Brochu M, Cooper BC, Fairhurst PA (2005) The effect of hormone replacement therapy on body composition, body fat distribution, and insulin sensitivity in menopausal women: a randomized, double-blind, placebo-controlled trial. J Clin Endocrinol Metab 90:2701–2707
Spencer CP, Godsland IF, Cooper AJ, Ross D, Whitehead MI, Stevenson JC (2000) Effects of oral and transdermal 17β-estradiol with cyclical oral norethindrone acetate on insulin sensitivity, secretion, and elimination in postmenopausal women. Metabolism 49:742–747
Soranna L, Cucinelli F, Perri C, Muzj G, Giuliani M, Villa P, Lanzone A (2002) Individual effect of E2 and dydrogesterone on insulin sensitivity in post-menopausal women. J Endocrinol Investig 25:547–550
Howard BV, Rossouw JE (2013) Estrogens and cardiovascular disease risk revisited: the Women's Health Initiative. Curr Opin Lipidol 24:493–499
Evans EM, Van Pelt RE, Binder EF, Williams DB, Ehsani AA, Kohrt WM (2001) Effects of HRT and exercise training on insulin action, glucose tolerance, and body composition in older women. J Appl Physiol 90:2033–2040
Brown MD, Korytkowski MT, Zmuda JM, McCole SD, Moore GE, Hagberg JM (2000) Insulin sensitivity in postmenopausal women: independent and combined associations with hormone replacement, cardiovascular fitness, and body composition. Diabetes Care 23:1731–1736
Latour MG, Shinoda M, Lavoie JM (2001) Metabolic effects of physical training in ovariectomized and hyperestrogenic rats. J Appl Physiol 90:235–241
Saengsirisuwan V, Pongseeda S, Prasannarong M, Vichaiwong K, Toskulkao C (2009) Modulation of insulin resistance in ovariectomized rats by endurance exercise training and estrogen replacement. Metabolism 58:38–47
DeFronzo RA, Jacot E, Jequier E, Maeder E, Wahren J, Felber JP (1981) The effect of insulin on the disposal of intravenous glucose. Results from indirect calorimetry and hepatic and femoral venous catheterization. Diabetes 30:1000–1007
Cartee GD (2015) Roles of TBC1D1 and TBC1D4 in insulin- and exercise-stimulated glucose transport of skeletal muscle. Diabetologia 58:19–30
Kramer HF, Witczak CA, Taylor EB, Fujii N, Hirshman MF, Goodyear LJ (2006) AS160 regulates insulin- and contraction-stimulated glucose uptake in mouse skeletal muscle. J Biol Chem 281:31478–31485
Sakamoto K, Holman GD (2008) Emerging role for AS160/TBC1D4 and TBC1D1 in the regulation of GLUT4 traffic. Am J Physiol Endocrinol Metab 295:E29–E37
Gonzalez E, McGraw TE (2006) Insulin signaling diverges into Akt-dependent and -independent signals to regulate the recruitment/docking and the fusion of GLUT4 vesicles to the plasma membrane. Mol Biol Cell 17:4484–4493
Lansey MN, Walker NN, Hargett SR, Stevens JR, Keller SR (2012) Deletion of Rab GAP AS160 modifies glucose uptake and GLUT4 translocation in primary skeletal muscles and adipocytes and impairs glucose homeostasis. Am J Physiol Endocrinol Metab 303:E1273–E1286
Friedrichsen M, Mortensen B, Pehmøller C, Birk JB, Wojtaszewski JF (2013) Exercise-induced AMPK activity in skeletal muscle: role in glucose uptake and insulin sensitivity. Mol Cell Endocrinol 366:204–214
Kramer HF, Witczak CA, Fujii N, Jessen N, Taylor EB, Arnolds DE, Sakamoto K, Hirshman MF, Goodyear LJ (2006) Distinct signals regulate AS160 phosphorylation in response to insulin, AICAR, and contraction in mouse skeletal muscle. Diabetes 55:2067–2076
MacDonald TL, Ritchie KL, Davies S, Hamilton MJ, Cervone DT, Dyck DJ (2015) Exercise training is an effective alternative to estrogen supplementation for improving glucose homeostasis in ovariectomized rats. Physiol Rep 3:e12617
Zoth N, Weigt C, Zengin S, Selder O, Selke N, Kalicinski M, Piechotta M, Diel P (2012) Metabolic effects of estrogen substitution in combination with targeted exercise training on the therapy of obesity in ovariectomized Wistar rats. J Steroid Biochem Mol Biol 130:64–72
Szmuilowicz ED, Stuenkel CA, Seely EW (2009) Influence of menopause on diabetes and diabetes risk. Nat Rev Endocrinol 5:553–558
Toth MJ, Sites CK, Eltabbakh GH, Poehlman ET (2000) Effect of menopausal status on insulin-stimulated glucose disposal: comparison of middle-aged premenopausal and early postmenopausal women. Diabetes Care 23:801–806
DeNino WF, Tchernof A, Dionne IJ, Toth MJ, Ades PA, Sites CK, Poehlman ET (2001) Contribution of abdominal adiposity to age-related differences in insulin sensitivity and plasma lipids in healthy nonobese women. Diabetes Care 24:925–932
Prasannarong M, Vichaiwong K, Saengsirisuwan V (2012) Calorie restriction prevents the development of insulin resistance and impaired insulin signaling in skeletal muscle of ovariectomized rats. Biochim Biophys Acta 1822:1051–1061
Tazumi S, Omoto S, Nagatomo Y, Kawahara M, Yokota-Nakagi N, Kawakami M, Takamata A, Morimoto K (2018) Estrogen replacement attenuates stress-induced pressor responses through vasorelaxation via β2-adrenoceptors in peripheral arteries of ovariectomized rats. Am J Physiol Heart Circ Physiol 314:H213–H223
Morimoto K, Kurahashi Y, Shintani-Ishida K, Kawamura N, Miyashita M, Uji M, Tan N, Yoshida K (2004) Estrogen replacement suppresses stress-induced cardiovascular responses in ovariectomized rats. Am J Physiol Heart Circ Physiol 287:H1950–H1956
Patch LD, Brooks GA (1980) Effects of training on VO2 max and VO2 during two running intensities in rats. Pflügers Arch 386:215–219
Mitchell TW, Turner N, Hulbert AJ, Else PL, Hawley JA, Lee JS, Bruce CR, Blanksby SJ (2004) Exercise alters the profile of phospholipid molecular species in rat skeletal muscle. J Appl Physiol 97:1823–1829
Torbati D, Ramirez J, Hon E, Camacho MT, Sussmane JB, Raszynski A, Wolfsdorf J (1999) Experimental critical care in rats: gender differences in anesthesia, ventilation, and gas exchange. Crit Care Med 27:1878–1884
Antunes LC, Elkfury JL, Jornada MN, Foletto KC, Bertoluci MC (2016) Validation of HOMA-IR in a model of insulin-resistance induced by a high-fat diet in Wistar rats. Arch Endocrinol Metab 60:138–142
Bonora E, Targher G, Alberiche M, Bonadonna RC, Saggiani F, Zenere MB, Monauni T, Muggeo M (2000) Homeostasis model assessment closely mirrors the glucose clamp technique in the assessment of insulin sensitivity: studies in subjects with various degrees of glucose tolerance and insulin sensitivity. Diabetes Care 23:57–63
Turner RC, Holman RR, Matthews D, Hockaday TD, Peto J (1979) Insulin deficiency and insulin resistance interaction in diabetes: estimation of their relative contribution by feedback analysis from basal plasma insulin and glucose concentrations. Metabolism 28:1086–1096
D'Eon TM, Souza SC, Aronovitz M, Obin MS, Fried SK, Greenberg AS (2005) Estrogen regulation of adiposity and fuel partitioning. Evidence of genomic and non-genomic regulation of lipogenic and oxidative pathways. J Biol Chem 280:35983–35991
Heine PA, Taylor JA, Iwamoto GA, Lubahn DB, Cooke PS (2000) Increased adipose tissue in male and female estrogen receptor-α knockout mice. Proc Natl Acad Sci USA 97:12729–12734
Weigt C, Hertrampf T, Flenker U, Hülsemann F, Kurnaz P, Fritzemeier KH, Diel P (2015) Effects of estradiol, estrogen receptor subtype-selective agonists and genistein on glucose metabolism in leptin resistant female Zucker diabetic fatty (ZDF) rats. J Steroid Biochem Mol Biol 154:12–22
Gao Q, Mezei G, Nie Y, Rao Y, Choi CS, Bechmann I, Leranth C, Toran-Allerand D, Priest CA, Roberts JL, Gao XB, Mobbs C, Shulman GI, Diano S, Horvath TL (2007) Anorectic estrogen mimics leptin's effect on the rewiring of melanocortin cells and Stat3 signaling in obese animals. Nat Med 13:89–94
Geary N, Asarian L, Korach KS, Pfaff DW, Ogawa S (2001) Deficits in E2-dependent control of feeding, weight gain, and cholecystokinin satiation in ER-α null mice. Endocrinology 142:4751–4757
Watanabe T, Morimoto A, Sakata Y, Tan N, Morimoto K, Murakami N (1992) Running training attenuates the ACTH responses in rats to swimming and cage-switch stress. J Appl Physiol 73:2452–2456
Butcher RL, Collins WE, Fugo NW (1974) Plasma concentration of LH, FSH, prolactin, progesterone and estradiol-17β throughout the 4-day estrous cycle of the rat. Endocrinology 94:1704–1708
Widdop RE, Denton KM (2012) The arterial depressor response to chronic low-dose angiotensin II infusion in female rats is estrogen dependent. Am J Physiol Regul Integr Comp Physiol 302:R159–R165
Cleary MP, Muller S, Lanza-Jacoby S (1987) Effects of long-term moderate food restriction on growth, serum factors, lipogenic enzymes and adipocyte glucose metabolism in lean and obese Zucker rats. J Nutr 117:355–360
Gurley JM, Griesel BA, Olson AL (2016) Increased skeletal muscle GLUT4 expression in obese mice after voluntary wheel running exercise is posttranscriptional. Diabetes 65:2911–2919
Ing NH (2005) Steroid hormones regulate gene expression posttranscriptionally by altering the stabilities of messenger RNAs. Biol Reprod 72:1290–1296
This study was funded by Grant-in-Aid for Scientific Research from Nara Women's University.
Department of Environmental Health, Faculty of Human Life and Environment, Nara Women's University, Kita-Uoya Nishi-machi, Nara, 630-8506, Japan
Mizuho Kawakami, Naoko Yokota-Nakagi, Akira Takamata & Keiko Morimoto
Concept/design: KM and MK; acquisition of data: MK, KM, and NY-N; data analysis and interpretation: KM and MK; drafting of the manuscript: KM, MK, and AT; critical revision of the manuscript: KM, MK, and AT; approval of the article: all authors.
Correspondence to Keiko Morimoto.
All authors declare that they have no conflict of interest.
All procedures performed in this study were in accordance with the guidelines on the use and care of laboratory animals as put forward by the Physiological Society of Japan and under the control of the Ethics Committee of Animal Care and Experimentation, Nara Women's University, Japan.
Kawakami, M., Yokota-Nakagi, N., Takamata, A. et al. Endurance running exercise is an effective alternative to estradiol replacement for restoring hyperglycemia through TBC1D1/GLUT4 pathway in skeletal muscle of ovariectomized rats. J Physiol Sci 69, 1029–1040 (2019). https://doi.org/10.1007/s12576-019-00723-3
Issue Date: November 2019
Estradiol replacement
Hyperglycemia
TBC1D1/GLUT4 pathway
Running exercise training
Ovariectomized rat
26.3: Anti-derivatives and integrals
[ "article:topic", "license:ccbysa", "showtoc:no", "authorname:martinetal" ]
https://phys.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fphys.libretexts.org%2FBookshelves%2FUniversity_Physics%2FBook%253A_Introductory_Physics_-_Building_Models_to_Describe_Our_World_(Martin_Neary_Rinaldo_and_Woodman)%2F26%253A_Calculus%2F26.03%253A_Anti-derivatives_and_integrals
\(\require{cancel}\)
University Physics
Book: Introductory Physics - Building Models to Describe Our World (Martin et al.)
26: Calculus
Martin, Neary, Rinaldo, & Woodman
Assistant Professor (Physics) at Queen's University
In the previous section, we were concerned with determining the derivative of a function \(f(x)\). The derivative is useful because it tells us how the function \(f(x)\) varies as a function of \(x\). In physics, we often know how a function varies, but we do not know the actual function. In other words, we often have the opposite problem: we are given the derivative of a function, and wish to determine the actual function. For this case, we will limit our discussion to functions of a single independent variable.
Suppose that we are given a function \(f(x)\) and we know that this is the derivative of some other function, \(F(x)\), which we do not know. We call \(F(x)\) the anti-derivative of \(f(x)\). The anti-derivative of a function \(f(x)\), written \(F(x)\), thus satisfies the property: \[\begin{aligned} \frac{dF}{dx}=f(x)\end{aligned}\] Since we have a symbol for indicating that we take the derivative with respect to \(x\) (\(\frac{d}{dx}\)), we also have a symbol, \(\int dx\), for indicating that we take the anti-derivative with respect to \(x\): \[\begin{aligned} \int f(x) dx &= F(x) \\ \therefore \frac{d}{dx}\left(\int f(x) dx\right) &= \frac{dF}{dx}=f(x)\end{aligned}\] Earlier, we justified the symbol for the derivative by pointing out that it is like \(\frac{\Delta f}{\Delta x}\) but for the case when \(\Delta x\to 0\). Similarly, we will justify the anti-derivative sign, \(\int f(x) dx\), by showing that it is related to a sum of \(f(x)\Delta x\), in the limit \(\Delta x\to 0\). The \(\int\) sign looks like an "S" for sum.
While it is possible to exactly determine the derivative of a function \(f(x)\), the anti-derivative can only be determined up to a constant. Consider for example a different function, \(\tilde F(x)=F(x)+C\), where \(C\) is a constant. The derivative of \(\tilde F(x)\) with respect to \(x\) is given by: \[\begin{aligned} \frac{d\tilde{F}}{dx}&=\frac{d}{dx}\left(F(x)+C\right)\\ &=\frac{dF}{dx}+\frac{dC}{dx}\\ &=\frac{dF}{dx}+0\\ &=f(x)\end{aligned}\] Hence, the function \(\tilde F(x)=F(x)+C\) is also an anti-derivative of \(f(x)\). The constant \(C\) can often be determined using additional information (sometimes called "initial conditions"). Recall the function, \(f(x)=x^2\), shown in Figure A2.2.1 (left panel). If you imagine shifting the whole function up or down, the derivative would not change. In other words, if the origin of the axes were not drawn on the left panel, you would still be able to determine the derivative of the function (how steep it is). Adding a constant, \(C\), to a function is exactly the same as shifting the function up or down, which does not change its derivative. Thus, when you know the derivative, you cannot know the value of \(C\), unless you are also told that the function must go through a specific point (a so-called initial condition).
In order to determine the derivative of a function, we used Equation A2.2.1. We now need to derive an equivalent prescription for determining the anti-derivative. Suppose that we have the two pieces of information required to determine \(F(x)\) completely, namely:
the function \(f(x)=\frac{dF}{dx}\) (its derivative).
the condition that \(F(x)\) must pass through a specific point, \(F(x_0)=F_0\).
Figure A2.3.1: Determining the anti-derivative, \(F(x)\), given the function \(f(x) = 2x\) and the initial condition that \(F(x)\) passes through the point \((x_{0}, F_{0}) = (1, 3)\).
The procedure for determining the anti-derivative \(F(x)\) is illustrated above in Figure A2.3.1. We start by drawing the point that we know the function \(F(x)\) must go through, \((x_0,F_0)\). We then choose a value of \(\Delta x\) and use the derivative, \(f(x)\), to calculate \(\Delta F_0\), the amount by which \(F(x)\) changes when \(x\) changes by \(\Delta x\). Using the derivative \(f(x)\) evaluated at \(x_0\), we have: \[\begin{aligned} \frac{\Delta F_0}{\Delta x} &\approx f(x_0)\;\;\;\; (\text{in the limit} \Delta x\to 0 )\\ \therefore \Delta F_0 &= f(x_0) \Delta x\end{aligned}\] We can then estimate the value of the function \(F_1=F(x_1)\) at the next point, \(x_1=x_0+\Delta x\), as illustrated by the black arrow in Figure A2.3.1 \[\begin{aligned} F_1&=F(x_1)\\ &=F(x+\Delta x) \\ &\approx F_0 + \Delta F_0\\ &\approx F_0+f(x_0)\Delta x\end{aligned}\] Now that we have determined the value of the function \(F(x)\) at \(x=x_1\), we can repeat the procedure to determine the value of the function \(F(x)\) at the next point, \(x_2=x_1+\Delta x\). Again, we use the derivative evaluated at \(x_1\), \(f(x_1)\), to determine \(\Delta F_1\), and add that to \(F_1\) to get \(F_2=F(x_2)\), as illustrated by the grey arrow in Figure A2.3.1: \[\begin{aligned} F_2&=F(x_1+\Delta x) \\ &\approx F_1+\Delta F_1\\ &\approx F_1+f(x_1)\Delta x\\ &\approx F_0+f(x_0)\Delta x+f(x_1)\Delta x\end{aligned}\] Using the summation notation, we can generalize the result and write the function \(F(x)\) evaluated at any point, \(x_N=x_0+N\Delta x\): \[\begin{aligned} F(x_N) \approx F_0+\sum_{i=1}^{i=N} f(x_{i-1}) \Delta x\end{aligned}\] The result above will become exactly correct in the limit \(\Delta x\to 0\):
\[F(x_N) = F(x_0)+\lim_{\Delta x\to 0}\sum_{i=1}^{i=N} f(x_{i-1}) \Delta x\]
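The stepping procedure illustrated in Figure A2.3.1 is straightforward to carry out numerically. The short Python sketch below (an addition for illustration, not part of the original text) reconstructs \(F(x)\) from \(f(x)=2x\) and the initial condition \(F(1)=3\) by repeatedly adding \(f(x_i)\Delta x\); making \(\Delta x\) smaller brings the result closer to the exact anti-derivative \(F(x)=x^2+2\).

```python
def antiderivative_steps(f, x0, F0, dx, n_steps):
    """Rebuild F(x) from its derivative f(x) and one known point (x0, F0),
    using the stepping rule F(x + dx) ~ F(x) + f(x) * dx."""
    xs, Fs = [x0], [F0]
    for _ in range(n_steps):
        Fs.append(Fs[-1] + f(xs[-1]) * dx)   # Delta F = f(x) * Delta x
        xs.append(xs[-1] + dx)
    return xs, Fs

f = lambda x: 2.0 * x                        # the known derivative
xs, Fs = antiderivative_steps(f, x0=1.0, F0=3.0, dx=0.001, n_steps=3000)
print(xs[-1], Fs[-1])   # x ~ 4.0, F ~ 17.997; the exact value is F(4) = 4**2 + 2 = 18
```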
Let us take a closer look at the sum. Each term in the sum is of the form \(f(x_{i-1})\Delta x\), and is illustrated in Figure A2.3.2 for the same case as in Figure A2.3.1 (that is, Figure A2.3.2 shows \(f(x)\) that we know, and Figure A2.3.1 shows \(F(x)\) that we are trying to find).
Figure A2.3.2: The function \(f(x) = 2x\) and illustration of the terms \(f(x_{0})∆x\) and \(f(x_{1})∆x\) as the area between the curve \(f(x)\) and the \(x\) axis when \(∆x → 0\).
As you can see, each term in the sum corresponds to the area of a rectangle between the function \(f(x)\) and the \(x\) axis (with a piece missing). In the limit where \(\Delta x\to 0\), the missing pieces (shown by the hashed areas in Figure A2.3.2) will vanish and \(f(x_i)\Delta x\) will become exactly the area between \(f(x)\) and the \(x\) axis over a length \(\Delta x\). The sum of the rectangular areas will thus approach the area between \(f(x)\) and the \(x\) axis between \(x_0\) and \(x_N\): \[\begin{aligned} \lim_{\Delta x\to 0}\sum_{i=1}^{i=N} f(x_{i-1}) \Delta x=\text{Area between f(x) and x axis from $x_0$ to $x_N$}\end{aligned}\]
Re-arranging Equation A2.3.1 gives us a prescription for determining the anti-derivative: \[\begin{aligned} F(x_N) - F(x_0)&=\lim_{\Delta x\to 0}\sum_{i=1}^{i=N} f(x_{i-1}) \Delta x\end{aligned}\] We see that if we determine the area between \(f(x)\) and the \(x\) axis from \(x_0\) to \(x_N\), we can obtain the difference between the values of the anti-derivative at two points, \(F(x_N)-F(x_0)\).
The difference between the anti-derivative, \(F(x)\), evaluated at two different values of \(x\) is called the integral of \(f(x)\) and has the following notation:
\[\int_{x_0}^{x_N}f(x) dx=F(x_N) - F(x_0)=\lim_{\Delta x\to 0}\sum_{i=1}^{i=N} f(x_{i-1}) \Delta x\]
As you can see, the integral has labels that specify the range over which we calculate the area between \(f(x)\) and the \(x\) axis. A common notation to express the difference \(F(x_N) - F(x_0)\) is to use brackets: \[\begin{aligned} \int_{x_0}^{x_N}f(x) dx=F(x_N) - F(x_0) =\big [ F(x) \big]_{x_0}^{x_N}\end{aligned}\]
Recall that we wrote the anti-derivative with the same \(\int\) symbol earlier: \[\begin{aligned} \int f(x) dx = F(x)\end{aligned}\] The symbol \(\int f(x) dx\) without the limits is called the indefinite integral. You can also see that when you take the (definite) integral (i.e. the difference between \(F(x)\) evaluated at two points), any constant that is added to \(F(x)\) will cancel. Physical quantities are always based on definite integrals, so when we write the constant \(C\) it is primarily for completeness and to emphasize that we have an indefinite integral.
As an example, let us determine the integral of \(f(x)=2x\) between \(x=1\) and \(x=4\), as well as the indefinite integral of \(f(x)\), which is the case that we illustrated in Figures A2.3.1 and A2.3.2. Using Equation A2.3.2, we have: \[\begin{aligned} \int_{x_0}^{x_N}f(x) dx&=\lim_{\Delta x\to 0}\sum_{i=1}^{i=N} f(x_{i-1}) \Delta x \\ &=\lim_{\Delta x\to 0}\sum_{i=1}^{i=N} 2x_{i-1} \Delta x \end{aligned}\] where we have: \[\begin{aligned} x_0 &=1 \\ x_N &=4 \\ \Delta x &= \frac{x_N-x_0}{N}\end{aligned}\] Note that \(N\) is the number of times we have \(\Delta x\) in the interval between \(x_0\) and \(x_N\). Thus, taking the limit of \(\Delta x\to 0\) is the same as taking the limit \(N\to\infty\). Let us illustrate the sum for the case where \(N=3\), and thus when \(\Delta x=1\), corresponding to the illustration in Figure A2.3.2: \[\begin{aligned} \sum_{i=1}^{i=N=3} 2x_{i-1} \Delta x &=2x_0\Delta x+2x_1\Delta x+2x_2\Delta x\\ &=2\Delta x (x_0+x_1+x_2) \\ &=2 \frac{x_3-x_0}{N}(x_0+x_1+x_2) \\ &=2 \frac{(4)-(1)}{(3)}(1+2+3) \\ &=12\end{aligned}\] where in the second line, we noticed that we could factor out the \(2\Delta x\) because it appears in each term. Since we only used 4 points, this is a pretty coarse approximation of the integral, and we expect it to be an underestimate (as the missing area represented by the hashed lines in Figure A2.3.2 is quite large).
If we repeat this for a larger value of N, \(N=6\) (\(\Delta x = 0.5\)), we should obtain a more accurate answer: \[\begin{aligned} \sum_{i=1}^{i=6} 2x_{i-1} \Delta x &=2 \frac{x_6-x_0}{N}(x_0+x_1+x_2+x_3+x_4+x_5)\\ &=2\frac{4-1}{6} (1+1.5+2+2.5+3+3.5)\\ &=13.5\end{aligned}\]
Writing this out again for the general case so that we can take the limit \(N\to\infty\), and factoring out the \(2\Delta x\): \[\begin{aligned} \sum_{i=1}^{i=N} 2x_{i-1} \Delta x &=2 \Delta x\sum_{i=1}^{i=N}x_{i-1}\\ &=2 \frac{x_N-x_0}{N}\sum_{i=1}^{i=N}x_{i-1}\end{aligned}\] Now, consider the combination: \[\begin{aligned} \frac{1}{N}\sum_{i=1}^{i=N}x_{i-1}\end{aligned}\] that appears above. This corresponds to the arithmetic average of the values from \(x_0\) to \(x_{N-1}\) (sum the values and divide by the number of values). In the limit where \(N\to \infty\), the value \(x_{N-1}\approx x_N\). The average value of \(x\) in the interval between \(x_0\) and \(x_N\) is simply given by the value of \(x\) at the midpoint of the interval: \[\begin{aligned} \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{i=N}x_{i-1}=\frac{1}{2}(x_N+x_0)\end{aligned}\] Putting everything together: \[\begin{aligned} \lim_{N\to\infty}\sum_{i=1}^{i=N} 2x_{i-1} \Delta x &=2 (x_N-x_0)\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{i=N}x_{i-1}\\ &=2 (x_N-x_0)\frac{1}{2}(x_N+x_0)\\ &=x_N^2 - x_0^2\\ &=(4)^2 - (1)^2 = 15\end{aligned}\] where in the last line, we substituted in the values of \(x_0=1\) and \(x_N=4\). Writing this as the integral: \[\begin{aligned} \int_{x_0}^{x_N}2x dx=F(x_N) - F(x_0)=x_N^2 - x_0^2\end{aligned}\] we can immediately identify the anti-derivative and the indefinite integral: \[\begin{aligned} F(x) &= x^2 +C \\ \int 2xdx&=x^2 +C\end{aligned}\] This is of course the result that we expected, and we can check our answer by taking the derivative of \(F(x)\): \[\begin{aligned} \frac{dF}{dx}=\frac{d}{dx}(x^2+C) = 2x\end{aligned}\] We have thus confirmed that \(F(x)=x^2+C\) is the anti-derivative of \(f(x)=2x\).
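These numerical values are easy to check by computer. The short Python sketch below (not part of the original text) evaluates the same left-hand sum \(\sum f(x_{i-1})\Delta x\) for \(f(x)=2x\) between 1 and 4; it reproduces 12 for \(N=3\) and 13.5 for \(N=6\), and approaches the exact value 15 as \(N\) grows.

```python
def left_riemann_sum(f, x0, xN, N):
    """Sum of f(x_{i-1}) * dx over N equal steps between x0 and xN."""
    dx = (xN - x0) / N
    return sum(f(x0 + i * dx) for i in range(N)) * dx

f = lambda x: 2.0 * x
for N in (3, 6, 100, 10000):
    print(N, left_riemann_sum(f, 1.0, 4.0, N))
# prints 12.0 for N = 3, 13.5 for N = 6, and values approaching 15 as N grows
```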
Exercise \(\PageIndex{1}\)
The quantity \(\int_{a}^{b}f(t)dt\) is equal to
the area between the function \(f(t)\) and the \(f\) axis between \(t=a\) and \(t=b\)
the sum of \(f(t)\Delta t\) in the limit \(\Delta t\to 0\) between \(t=a\) and \(t=b\)
the difference \(f(b) - f(a)\).
Table A2.3.1 below gives the anti-derivatives (indefinite integrals) for common functions. In all cases, \(x,\) is the independent variable, and all other variables should be thought of as constants:
Function, \(f(x)\) Anti-derivative, \(F(x)\)
\(f(x)=a\) \(F(x)=ax+C\)
\(f(x)=x^n\) \(F(x)=\frac{1}{n+1}x^{n+1}+C\)
\(f(x)=\frac{1}{x}\) \(F(x)=\ln(|x|)+C\)
\(f(x)=\sin(x)\) \(F(x)=-\cos(x)+C\)
\(f(x)=\cos(x)\) \(F(x)=\sin(x)+C\)
\(f(x)=\tan(x)\) \(F(x)=-\ln(|\cos(x)|)+C\)
\(f(x)=e^x\) \(F(x)=e^x+C\)
\(f(x)=\ln(x)\) \(F(x)=x\ln(x)-x+C\)
Table A2.3.1: Common indefinite integrals of functions.
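Each entry of Table A2.3.1 can be verified by differentiating the proposed anti-derivative and checking that the original function is recovered. The Python sketch below (an added check, not part of the original text) does this for a few of the rows using the sympy library.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
pairs = [
    (x**3, x**4 / 4),              # f(x) = x^n with n = 3
    (sp.sin(x), -sp.cos(x)),       # f(x) = sin(x)
    (1 / x, sp.log(x)),            # f(x) = 1/x (x > 0 here, so |x| = x)
    (sp.log(x), x * sp.log(x) - x) # f(x) = ln(x)
]
for f, F in pairs:
    assert sp.simplify(sp.diff(F, x) - f) == 0  # dF/dx recovers f(x)
print("all checked entries differentiate back to f(x)")
```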
Note that, in general, it is much more difficult to obtain the anti-derivative of a function than it is to take its derivative. A few common properties to help evaluate indefinite integrals are shown in Table A2.3.2 below.
Anti-derivative Equivalent anti-derivative
\(\int (f(x)+g(x)) dx\) \(\int f(x)dx+\int g(x) dx\) (sum)
\(\int (f(x)-g(x)) dx\) \(\int f(x)dx-\int g(x) dx\) (subtraction)
\(\int af(x) dx\) \(a\int f(x)dx\) (multiplication by constant)
\(\int f'(x)g(x) dx\) \(f(x)g(x)-\int f(x)g'(x) dx\) (integration by parts)
Table A2.3.2: Some properties of indefinite integrals.
Integrals are extremely useful in physics because they are related to sums. If we assume that our mathematician friends (or computers) can determine anti-derivatives for us, using integrals is not that complicated.
The key idea in physics is that integrals are a tool for easily performing sums. As we saw above, integrals correspond to the area underneath a curve, which is found by summing the (different) areas of an infinite number of infinitely small rectangles. In physics, it is often the case that we need to take the sum of an infinite number of small things that keep varying, just like the areas of those rectangles.
Consider, for example, a rod of length, \(L\), and total mass \(M\), as shown in Figure A2.3.3. If the rod is uniform in density, then if we cut it into, say, two equal pieces, those two pieces will weigh the same. We can define a "linear mass density", \(\mu\), for the rod, as the mass per unit length of the rod: \[\begin{aligned} \mu = \frac{M}{L}\end{aligned}\] The linear mass density has dimensions of mass over length and can be used to find the mass of any length of rod. For example, if the rod has a mass of \(M=5\text{kg}\) and a length of \(L=2\text{m}\), then the mass density is: \[\begin{aligned} \mu=\frac{M}{L}=\frac{(5\text{kg})}{(2\text{m})}=2.5\text{kg/m}\end{aligned}\] Knowing the mass density, we can now easily find the mass, \(m\), of a piece of rod that has a length of, say, \(l=10\text{cm}\). Using the mass density, the mass of the \(10\text{cm}\) rod is given by: \[\begin{aligned} m=\mu l=(2.5\text{kg/m})(0.1\text{m})=0.25\text{kg}\end{aligned}\] Now suppose that we have a rod of length \(L\) that is not uniform, as in Figure A2.3.3, and that does not have a constant linear mass density. Perhaps the rod gets wider and wider, or it has holes in it that make it non-uniform. Imagine that the mass density of the rod is instead given by a function, \(\mu(x)\), that depends on the position along the rod, where \(x\) is the distance measured from one side of the rod.
Figure A2.3.3: A rod with a varying linear density. To calculate the mass of the rod, we consider a small mass element \(∆m_{i}\) of length \(∆x\) at position \(x_{i}\). The total mass of the rod is found by summing the mass of the small mass elements.
Now, we cannot simply determine the mass of the rod by multiplying \(\mu(x)\) and \(L\), since we do not know which value of \(x\) to use. In fact, we have to use all of the values of \(x\), between \(x=0\) and \(x=L\).
The strategy is to divide the rod up into \(N\) pieces of length \(\Delta x\). If we label our pieces of rod with an index \(i\), we can say that the piece that is at position \(x_i\) has a tiny mass, \(\Delta m_i\). We assume that \(\Delta x\) is small enough so that \(\mu(x)\) can be taken as constant over the length of that tiny piece of rod. Then, the tiny piece of rod at \(x=x_i\), has a mass, \(\Delta m_i\), given by: \[\begin{aligned} \Delta m_i = \mu(x_i) \Delta x\end{aligned}\] where \(\mu(x_i)\) is evaluated at the position, \(x_i\), of our tiny piece of rod. The total mass, \(M\), of the rod is then the sum of the masses of the tiny rods, in the limit where \(\Delta x\to 0\): \[\begin{aligned} M &= \lim_{\Delta x\to 0}\sum_{i=1}^{i=N}\Delta m_i \\ &= \lim_{\Delta x\to 0}\sum_{i=1}^{i=N} \mu(x_i) \Delta x\end{aligned}\] But this is precisely the definition of the integral (Equation A2.3.1), which we can easily evaluate with an anti-derivative: \[\begin{aligned} M &=\lim_{\Delta x\to 0}\sum_{i=1}^{i=N} \mu(x_i) \Delta x \\ &= \int_0^L \mu(x) dx \\ &= G(L) - G(0)\end{aligned}\] where \(G(x)\) is the anti-derivative of \(\mu(x)\).
Suppose that the mass density is given by the function: \[\begin{aligned} \mu(x)=ax^3\end{aligned}\] with anti-derivative (Table A2.3.1): \[\begin{aligned} G(x)=a\frac{1}{4}x^4 + C\end{aligned}\] Let \(a=5\text{kg/m}^{4}\) and let's say that the length of the rod is \(L=0.5\text{m}\). The total mass of the rod is then: \[\begin{aligned} M&=\int_0^L \mu(x) dx \\ &=\int_0^L ax^3 dx \\ &= G(L)-G(0)\\ &=\left[ a\frac{1}{4}L^4 \right] - \left[ a\frac{1}{4}0^4 \right]\\ &=5\text{kg/m}^{4}\frac{1}{4}(0.5\text{m})^4 \\ &=78\text{g}\\\end{aligned}\]
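The same mass can be recovered numerically by summing \(\mu(x_i)\Delta x\) over many small pieces, exactly as in the sum that defined the integral. The short Python sketch below (not part of the original text) uses the values \(a=5\text{kg/m}^{4}\) and \(L=0.5\text{m}\) from the example above.

```python
def rod_mass(mu, L, N=100000):
    """Approximate M = integral of mu(x) dx from 0 to L by summing mu(x_i) * dx."""
    dx = L / N
    return sum(mu(i * dx) * dx for i in range(N))

a, L = 5.0, 0.5                 # kg/m^4 and m, as in the worked example
mu = lambda x: a * x**3         # linear mass density, in kg/m
print(rod_mass(mu, L))          # ~ 0.078125 kg = 78 g, matching a * L**4 / 4
```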
With a little practice, you can solve this type of problem without writing out the sum explicitly. Picture an infinitesimal piece of the rod of length \(dx\) at position \(x\). It will have an infinitesimal mass, \(dm\), given by: \[\begin{aligned} dm = \mu(x) dx\end{aligned}\] The total mass of the rod is then the sum (i.e. the integral) of the mass elements \[\begin{aligned} M = \int dm\end{aligned}\] and we really can think of the \(\int\) sign as a sum, when the things being summed are infinitesimally small. In the above equation, we still have not specified the range in \(x\) over which we want to take the sum; that is, we need some sort of index for the mass elements to make this a meaningful definite integral. Since we already know how to express \(dm\) in terms of \(dx\), we can substitute our expression for \(dm\) in terms of \(dx\): \[\begin{aligned} M = \int dm = \int_0^L \mu(x) dx\end{aligned}\] where we have made the integral definite by specifying the range over which to sum, since we can use \(x\) to "label" the mass elements.
One should note that coming up with the above integral is physics. Solving it is math. We will worry much more about writing out the integral than evaluating its value. Evaluating the integral can always be done by a mathematician friend or a computer, but determining which integral to write down is the physicist's job!
Ryan Martin et al.
July 2019, 39(7): 3767-3787. doi: 10.3934/dcds.2019153
Classification of linear skew-products of the complex plane and an affine route to fractalization
Núria Fagella , Àngel Jorba , Marc Jorba-Cuscó and Joan Carles Tatjer
Departament de Matemàtiques i Informàtica, Barcelona Graduate School of Mathematics (BGSMath), Universitat de Barcelona (UB), Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain
Received February 2018 Revised November 2018 Published April 2019
Fund Project: Work supported by the Maria de Maeztu Excellence Grant MDM-2014-0445 and the grant 2017 SGR 1374. N. Fagella has been partially supported by the grants MTM2014-52209-C2-2-P and MTM2017-86795-C3-3-P, A. Jorba, M. Jorba-Cuscó and J.C. Tatjer have been supported by the grant MTM2015-67724-P
Linear skew products of the complex plane,

$ \left. \begin{array}{rcl} \theta & \mapsto & \theta+\omega,\\ z & \mapsto & a(\theta)z, \end{array} \right\} $

where $ \theta\in {\mathbb T} $, $ z\in {\mathbb C} $, $ \frac{\omega}{2\pi} $ is irrational, and $ \theta\mapsto a(\theta) \in {\mathbb C}\setminus \{0\} $ is a smooth map, appear naturally when linearizing dynamics around an invariant curve of a quasi-periodically forced complex map. In this paper we study linear and topological equivalence classes of such maps through conjugacies which preserve the skewed structure, relating them to the Lyapunov exponent and the winding number of $ \theta\mapsto a(\theta) $. We analyze the transition between these classes by considering one parameter families of linear skew products. Finally, we show that, under suitable conditions, an affine variation of the maps above has a non-reducible invariant curve that undergoes a fractalization process when the parameter goes to a critical value. This phenomenon of fractalization of invariant curves is known to happen in nonlinear skew products, but it is remarkable that it also occurs in simple systems such as the ones we present.
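As a concrete illustration of the two invariants mentioned in the abstract, the short Python sketch below (not taken from the paper) estimates the Lyapunov exponent of the skew product as the circle average of $ \log|a(\theta)| $ (which, for an irrational rotation, equals the orbit average) and the winding number of $ \theta\mapsto a(\theta) $ as the net change of its argument over one turn. The sample fibre map $ a(\theta) = c + \mu e^{i\theta} $ and the values of $ c $ and $ \mu $ are arbitrary choices for the demonstration, not the paper's equation (3).

```python
import numpy as np

# Sample fibre map a(theta); an arbitrary choice for illustration, not the paper's (3)
c, mu = 1.0, 0.9
a = lambda theta: c + mu * np.exp(1j * theta)

theta = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
values = a(theta)

# Lyapunov exponent of z -> a(theta) z: circle average of log|a(theta)|
lyapunov = np.mean(np.log(np.abs(values)))

# Winding number of theta -> a(theta): net change of arg a(theta) over one turn / (2 pi)
winding = np.round((np.unwrap(np.angle(values))[-1] - np.angle(values[0])) / (2.0 * np.pi))

print(lyapunov, winding)  # here |c| > mu, so the curve does not encircle 0 and winding = 0
```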
Keywords: Reducibility, winding number, Lyapunov exponent, complex fibered maps, topological classification.
Mathematics Subject Classification: Primary: 37C60; Secondary: 30D05, 37D25.
Citation: Núria Fagella, Àngel Jorba, Marc Jorba-Cuscó, Joan Carles Tatjer. Classification of linear skew-products of the complex plane and an affine route to fractalization. Discrete & Continuous Dynamical Systems - A, 2019, 39 (7) : 3767-3787. doi: 10.3934/dcds.2019153
Figure 1. Invariant curve of (3) for c = 1. Plots for µ = 0.5, µ = 0.9, µ = 0.99 and µ = 0.999
Figure 2. Asymptotic growth of the invariant curve of (3) w.r.t. $\mu$ when $\mu\nearrow 1$, for $c = 1$. The horizontal axis shows $1-\mu$ and the symbols "+'' denote the computed values. The dotted line is the fitting function. Top: On the left, fitting $\|z_{\mu}\|_{\infty}$ by $1.54(1-\mu)^{-1/2}$. On the right, fitting of $\|z_{\mu}'\|_{\infty}$ by $0.41(1-\mu)^{-3/2}$. Bottom: On the left, fitting of the length of $z_{\mu}$ by $3.1(1-\mu)^{-3/2}$. On the right, fitting of $(z_{\mu}, 0)$ by $0.5(1-\mu)^{-1}$
A theoretical approach to understanding rumor propagation dynamics in a spatially heterogeneous environment
Linhe Zhu 1,, , Wenshan Liu 1,2, and Zhengdi Zhang 1,
School of Mathematical Sciences, Jiangsu University, Zhenjiang, 212013, China
School of Mathematical Sciences, Nanjing Normal University Nanjing, 210023, China
* Corresponding author: Linhe Zhu
Received March 2020 Revised July 2020 Published September 2020
Fund Project: The first author is supported by National Natural Science Foundation of China (Grant No.12002135), China Postdoctoral Science Foundation (Grant No.2019M661732), Natural Science Foundation of Jiangsu Province (Grant No.BK20190836) and Natural Science Research of Jiangsu Higher Education Institutions of China (Grant No.19KJB110001). The third author is supported by National Natural Science Foundation of China (Grant No.11872189)
Most of the previous work on rumor propagation focuses either on ordinary differential equations with only a temporal dimension or on partial differential equations (PDEs) with spatially independent parameters. Little attention has been given to rumor propagation models in a spatiotemporally heterogeneous environment. This paper is dedicated to investigating an SCIR reaction-diffusion rumor propagation model with a general nonlinear incidence rate in both heterogeneous and homogeneous environments. In the spatially heterogeneous case, the well-posedness of global solutions is established first. The basic reproduction number $ R_0 $ is introduced, which can be used to reveal the threshold-type dynamics of rumor propagation: if $ R_0 < 1 $, the rumor-free steady state is globally asymptotically stable, while if $ R_0 > 1 $, the rumor is uniformly persistent. In the spatially homogeneous case, after introducing a time delay, the stability properties are studied extensively. Finally, numerical simulations are presented to illustrate the validity of the theoretical analysis, and the influence of spatial heterogeneity on rumor propagation is further demonstrated.
Keywords: Spatial heterogeneity, Reaction-diffusion model, Basic reproduction number, Stability, Uniform persistence.
Mathematics Subject Classification: Primary: 35K57; Secondary: 92D25.
Citation: Linhe Zhu, Wenshan Liu, Zhengdi Zhang. A theoretical approach to understanding rumor propagation dynamics in a spatially heterogeneous environment. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020274
Figure 1. The asymptotic behavior of the solution of system (4)
Figure 2. The uniform persistence of rumor propagation
Figure 3. Projection diagram in the $ tx $-plane
Figure 4. Distribution of rumor collectors and rumor-infective users at $ t = 0.5 $ for different diffusion coefficient $ D = 0.001,1,5 $
Figure 5. Two incidence functions
Figure 6. Contour surfaces of $ \mathcal{R}^0 $ with consideration of $ \beta,\theta,A\in[0,1] $
Figure 7. (a) The density of susceptible users. (b) The density of collectors. (c) The density of infective users. (d) The rumor-free equilibrium point $ E_0 $ is globally asymptotically stable
Figure 8. (a) The density of susceptible users. (b) The density of collectors. (c) The density of infective users. (d) The rumor-prevailing equilibrium point $ E^\star $ is locally asymptotically stable
MinMax and enormous branches
I have to build an AI for a made-up game similar to chess. While researching a suitable approach, I came across the MinMax algorithm, but I'm not sure it will work with the given game dynamics.
The challenge is that we have far more permutations per turn than in chess because of these game rules.
Six pieces on the board, with different ranges.
On average, there are 8 possible moves per piece per turn.
The player can choose to move as many pieces as they like: for example none, all of them, or some number in between (whereas in chess you can only move one).
Actual questions:
Is it feasible to implement MinMax for the described game?
Can alpha-beta pruning and a refined evaluation function help (despite the large number of possible moves)?
If not, is there a suitable alternative?
game-ai logic
josebert
By similar to Chess do you mean 2-player, non-chance, perfect information, sequential (turn-based) games involving moving and capturing tokens? How big is the gameboard and what accounts for the higher number of branches? Is the game natively finite, or can it get "loopy" (potentially infinite loops)? – DukeZhou♦ Nov 30 '18 at 18:44
- The player can choose as many pieces to move as he likes. For example none, all of them, or some number in between. (Whereas in chess you can only move one.)
That quote specifically is the part that really causes the size of your legal action set to blow up. You have a combinatorial action space here. If each of your pieces has 8 legal moves, then that is:
8 legal moves for the first piece (or 9 if that count didn't already include the "do nothing" option)
for each of those, there are again 8 or 9 different choices for the second piece (leading to e.g. $8 \times 8 = 64$ possible combinations for just the first two pieces)
for each of those, again 8 choices for the third piece (leading to $64 \times 8 = 512$ possible combinations for just the first three pieces).
This blows up way too quickly, and there's really no hope of ever getting a decent player for this using any MiniMax-based algorithm (including things like alpha-beta pruning, principal variation search etc.).
In the kinds of games that you describe, you'll want to use algorithms that can exploit the "structure" of your action space. A raw enumeration of all possible combinations blows up quickly, but many algorithms can do reasonably well by re-phrasing the problem in such a way that you have more "depth" rather than "breadth". For example, instead of viewing a full combination of choices for all pieces as a single "action", you can treat the choices per piece as a separate "action".
Rather than making a single choice out of $8 \times 8 \times 8 \times \dots$ possibilities every turn, you want to have a search tree where your player makes one choice out of $8$ (for the first piece), followed immediately by another choice out of $8$ (for the second piece), etc. The opposing player only gets to make a choice after the current player has made choices for all pieces. With such a strategy, the breadth of your search tree will no longer be a problem, but the depth will become a problem. To address this, you'll additionally want to make sure that your methods can generalize across different depth levels.
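To make that re-phrasing concrete, below is a minimal, self-contained Python sketch; the state representation, `legal_moves`, and `evaluate` are toy stand-ins I invented, not your actual game. Every node fixes the move of a single piece, so each node has roughly 9 children instead of 9^6, and the opponent only acts once all six pieces have been assigned.

```python
NUM_PIECES = 6
MOVES_PER_PIECE = 9          # 8 moves + "do nothing" (assumed from the question)

def legal_moves(state, piece):
    # Toy stand-in: a real engine would generate these from the game rules.
    return range(MOVES_PER_PIECE)

def evaluate(state):
    # Toy heuristic stand-in; replace with a real evaluation function.
    return sum(state)

def apply_move(state, piece, move):
    new_state = list(state)
    new_state[piece] = move
    return tuple(new_state)

def per_piece_minimax(state, piece, plies_left, maximizing):
    """Each node fixes the move of ONE piece; the opponent only gets to act
    once all of the current player's pieces have been assigned."""
    if plies_left == 0:
        return evaluate(state)
    if piece == NUM_PIECES:                      # turn finished -> switch player
        return per_piece_minimax(state, 0, plies_left - 1, not maximizing)
    best = float("-inf") if maximizing else float("inf")
    for move in legal_moves(state, piece):       # ~9 branches per node, not 9**6
        value = per_piece_minimax(apply_move(state, piece, move),
                                  piece + 1, plies_left, maximizing)
        best = max(best, value) if maximizing else min(best, value)
    return best

print(per_piece_minimax((0,) * NUM_PIECES, 0, plies_left=1, maximizing=True))
```

Alpha-beta pruning and move ordering can be layered on top of this per-piece tree in the usual way, but as noted above they will not fully tame the underlying combinatorial growth on their own.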
A good place to look would be combinatorial versions of Monte-Carlo Tree Search, such as those described in:
https://project.dke.maastrichtuniversity.nl/games/files/msc/Roelofs_thesis.pdf
https://www.jair.org/index.php/jair/article/view/11053
(probably a few other publications by the author of that second link)
These algorithms are quite a bit more complicated than MiniMax, though; MiniMax is a very basic algorithm in comparison.
Dennis Soemers
MCTS is a simple sampling approach that traverses the given game tree but isn't able to reduce its size. Like the Minimax strategy, it will run into problems because of the large branching factor and doesn't provide a workaround. And "combinatorial games" is not an algorithm; the term describes a certain type of game. – Manuel Rodriguez Nov 30 '18 at 11:22
@ManuelRodriguez The combinatorial variants of MCTS which I provided references for can handle combinatorial action spaces better by generalizing observations across different parts of the search tree (similar intuition to RAVE-like enhancements for standard MCTS). I didn't use the term "combinatorial games" anywhere, so I also didn't imply anywhere that that would be an algorithm rather than a certain type of game. – Dennis Soemers Nov 30 '18 at 11:46
All games are algorithms, and "combinatorial game" is a fuzzy term that expands as the scope of CGT expands (all games are also combinatorial). For more info see Constraint Logic: A Uniform Framework for Modeling Computation as Games, and Playing Games with Algorithms: Algorithmic Combinatorial Game Theory (erikdemaine.org/papers/AlgGameTheory_GONC3/paper.pdf) (Demaine/Hearn) – DukeZhou♦ Nov 30 '18 at 18:49
A huge "branch depth" is a common problem in game AI. The best-practice method to overcome it are heuristics. Formalizing heuristics in game playing can be done with Domain specific languages. The assumption is, that a board game solver has certain commands like "pickup figure", "setsearch depth 17" or "search freefield". The board game solver is treated as a textadventure which provides a userinterface and allows to formalize all the heuristics in a convincingly way. From a performance perspective such a solver works similar to the minimax algorithm. He has to search in the game tree until he finds a solution. The difference is, that the search is fine granular like in a PDDL solver. Instead of occupying all the cpu cores with 100% the search in the game tree is declared as an art which follows rules.
In the cited paper, the Manhattan distance was used as an evaluation function. Partial evaluation is another promising approach: the idea is to divide the goal into subgoals and solve them separately.
Romein, John W., Henri E. Bal, and Dick Grune. "An Application Domain Specific Language for Describing Board Games." Parallel and Distributed Processing Techniques and Applications. Vol. 1. 1997.
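As a rough, self-contained illustration of the evaluation-function idea mentioned above (my own toy example, not code from the cited paper), a Manhattan-distance heuristic for a grid board could look like this:

```python
def manhattan(a, b):
    """Manhattan distance between two board squares given as (row, col)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def evaluate(own_pieces, enemy_pieces, target):
    """Toy evaluation: prefer positions where our pieces are closer to a
    hypothetical target square than the opponent's pieces are."""
    own = sum(manhattan(p, target) for p in own_pieces)
    enemy = sum(manhattan(p, target) for p in enemy_pieces)
    return enemy - own          # positive values are good for the current player

# Hypothetical positions on an 8x8 board
print(evaluate(own_pieces=[(0, 0), (2, 3)], enemy_pieces=[(7, 7)], target=(4, 4)))
```

Such a hand-written heuristic can also be organized along the lines of partial evaluation: score each subgoal separately and aggregate the partial scores.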
Thank you! Since I'm completely new to the subject, this raises another question: the evaluation function for MinMax is a heuristic approach to determine which states/conditions on the board are more favorable. Is this all it needs to cope with the huge branching factor? – josebert Nov 30 '18 at 9:04
As @josebert mentioned, MiniMax-style algorithms already use heuristic functions to evaluate states. There are some other ways in which they can also use heuristic functions (early pruning, move ordering, etc.), but unless you do that in an extremely aggressive fashion, it won't address a combinatorial explosion of the action space. It would likely degrade the performance of minimax too much (because heuristics can be inaccurate), and you'd honestly be better off with different styles of algorithms altogether. – Dennis Soemers Nov 30 '18 at 13:08
@DennisSoemers Perhaps we should distinguish between the minimax description in a university context and solving board games in reality. In a university context, the aim is to explain the minimax algorithm to the student; students are informed about the mathematical background. Describing Minimax as a "heuristic-aware" algorithm is optimistic. In a broader sense, the term heuristic is reserved for a strategy that reduces the state space drastically. – Manuel Rodriguez Dec 3 '18 at 21:18
Interlaboratory proficiency processing scheme in CSF aliquoting: implementation and assessment based on biomarkers of Alzheimer's disease
Piotr Lewczuk1, 2,
Amélie Gaignaux3,
Olga Kofanova3,
Natalia Ermann1,
Fay Betsou3,
Sebastian Brandner4,
Barbara Mroczko2,
Kaj Blennow5, 6,
Dominik Strapagiel7, 8,
Silvia Paciotti9,
Jonathan Vogelgsang10,
Michael H. Roehrl11,
Sandra Mendoza12,
Johannes Kornhuber1 and
Charlotte Teunissen13
Alzheimer's Research & Therapy 2018, 10:87
In this study, we tested the extent to which possible between-center differences in standardized operating procedures (SOPs) for biobanking of cerebrospinal fluid (CSF) samples influence the homogeneity of the resulting aliquots and, consequently, the concentrations of selected Alzheimer's disease biomarkers analyzed centrally.
Proficiency processing samples (PPSs), prepared by pooling of four individual CSF samples, were sent to 10 participating centers, which were asked to perform aliquoting of the PPSs into two secondary aliquots (SAs) under their local SOPs. The resulting SAs were shipped to the central laboratory, where the concentrations of amyloid beta (Aβ) 1–42, pTau181, and albumin were measured in one run with validated routine analytical methods. Total variability of the concentrations, and its within-center and between-center components, were analyzed with hierarchical regression models.
We observed neglectable variability in the concentrations of pTau181 and albumin across the centers and the aliquots. In contrast, the variability of the Aβ1–42 concentrations was much larger (overall coefficient of variation 31%), with 28% of the between-laboratory component and 10% of the within-laboratory (i.e., between-aliquot) component. We identified duration of the preparation of the aliquots and the centrifugation force as two potential confounders influencing within-center variability and biomarker concentrations, respectively.
Proficiency processing schemes provide objective evidence for the most critical preanalytical variables. Standardization of these variables may significantly enhance the quality of the collected biospecimens. Studies utilizing retrospective samples collected under different local SOPs need to consider such differences in the statistical evaluations of the data.
Laboratory standardization
Cerebrospinal fluid
A growing body of evidence supports the application of cerebrospinal fluid (CSF) biomarkers as diagnostic tools for Alzheimer's disease (AD) and other neurodegenerative disorders [1, 2]. Due to their physical–chemical properties, some of the AD CSF biomarkers are prone to undesired changes in ex-vivo human body fluid samples. It is known that hydrophobic molecules such as amyloid beta (Aβ) peptides, particularly Aβ1–42, adsorb to certain plastic surfaces [3–5], or deteriorate following repeated freezing/thawing cycles [6, 7], leading to artificially reduced concentrations. These phenomena are generally unsystematic and hence uncontrollable; for example, after the third freezing/thawing cycle the concentrations of Aβ1–42 significantly decrease in some individual CSF samples, but increase in other samples [6]. Therefore, carefully designed, consistently applied, and continuously controlled preanalytical sample handling standardized operating procedures (SOPs) are of extreme importance. Another dimension of the problem arises when biobanking and multicenter studies come into play. Generally speaking, two main scenarios are possible in such studies: either the samples (such as the CSF specimens) are collected, processed, and finally locally analyzed in each of the participating centers of a multicenter project; or, alternatively, they are locally collected, processed, and temporarily stored, until they are subsequently shipped to one central laboratory, where all of the analyses take place. The second scenario, for example, is a typical case for large CSF biomarker discovery and validation studies or clinical trials, in which samples are collected and stored in local repositories, and then sent to one central laboratory. If the samples are collected locally but measured centrally, the intercenter variability of the measurements is by definition eliminated, but the differences across the local collection and processing SOPs need to be critically addressed and controlled for. Certainly, preanalytical bias due to differences in processing methods can be minimized in prospective studies if SOP training, along with the material needed for sample processing (like test tubes, puncture needles, syringes), is offered to all of the participants before the beginning of sample collection. However, preanalytical bias is unavoidable in retrospective studies, where already stored samples are used from existing repositories.
The concept of the SOP proficiency schemes has been widely applied in nucleic acid extraction methods from different types of matrices [8], but to our best knowledge it has never been implemented in the context of CSF processing. Hence, in this study we attempted to test to which extent differences in the local biobanking processing SOPs influence (in)homogeneity of the resulting aliquots and, in consequence, as an outcome measure, the concentrations of centrally analyzed selected CSF biomarkers. We included three CSF biomarkers in our scheme—Aβ1–42, pTau181, and albumin, reasoning that Aβ1–42 is considered the most preanalytically sensitive biomarker while pTau181 is regarded as the most robust one of the four core CSF AD biomarkers (the two others being Aβ1–40 and total-Tau). For example, compared to total-Tau, pTau181 is less prone to adhesion to test-tube plastics [3] and shows less alteration of the concentrations following repetitive thawing/refreezing cycles of the sample [6]. Albumin, known to be one of the preanalytically most robust proteins in the CSF [9], was also added to the panel as a reference analyte.
Sample preparation and study protocol
The workflow for the sample preparation is presented in Fig. 1. Briefly, in the Laboratory for Clinical Neurochemistry and Neurochemical Dementia Diagnostics, Erlangen, Germany, CSF from four subjects was pooled, immediately after the lumbar punctures, into one portion of approximately 25 mL, which assured anonymization and nontraceability of the individual samples. This volume was then centrifuged and portioned into 25 primary samples (proficiency processing samples (PPSs)), of 1 mL each, which were immediately frozen at − 80 °C. Ten of these PPSs were then used for homogeneity testing in the Erlangen Laboratory, and 10 PPSs were sent on dry ice to the participating processing laboratories via the logistic unit of the Integrated BioBank of Luxembourg (IBBL). To keep the protocol consistent for all of the participants, the PPSs to be processed by the Erlangen Laboratory also underwent postal circulation in the interlaboratory processing scheme. The participants were asked to thaw the PPSs, and to prepare two secondary aliquots (SAs) strictly according to their local biobanking SOPs; the sole exception was that the resulting SA needed to be 500 μL, irrespective of the volume usually prepared by a participant. The resulting SAs were then frozen according to local procedures, and sent back to the laboratory in Erlangen on dry ice by standard logistics. Each participant was asked to provide the details of the local SOPs via a webpage maintained by the IBBL. The requested information included: storage conditions (temperature and duration) of the PPSs and the resulting SAs, time between thawing of the PPSs and freezing of the resulting SAs, centrifugation data (force, duration, and temperature), and type of secondary storage tubes used.
Fig. 1 Flow chart of the project. Aβ amyloid beta, CSF cerebrospinal fluid, IBBL Integrated BioBank of Luxembourg, SOP standardized operating procedure
Homogeneity testing
Intra-assay variation was tested by 10 repetitions of the measurements of each analyte of interest, and expressed as a corresponding coefficient of variation (CV). Homogeneity testing was performed with the 10 PPSs (1 mL each), stored locally in the Erlangen Laboratory. Briefly, these samples were handled in strictly the same way as the samples sent to the participating laboratories (Fig. 1), with the exception that they were neither sent out nor back (stages 5 and 7 of the protocol were omitted). From each of the 10 PPSs, two SAs (500 μL) were prepared and frozen at − 80 °C until they were assayed, mimicking the workflow for the PPS → SA preparation and the central analyses, followed in the intercenter scheme. Assays for homogeneity testing were those described in the next section.
Laboratory analyses
The 20 SAs (10 participants × 2 aliquots) were kept at − 80 °C from arrival at Erlangen until the analyses. Aβ1–42 was assayed in duplicate with an ELISA from IBL International (Hamburg, Germany), pTau181 was measured in duplicate with an ELISA from Fujirebio Europe (Ghent, Belgium), and albumin was analyzed with kinetic nephelometry on an Immage 800 nephelometer (Beckman Coulter), following the protocols provided by the vendors. All measurements were run on one ELISA plate (Aβ1–42 and pTau181) or in one analytical run (albumin).
For each analyte of interest, the variability and its components are reported as a set of four statistical metrics: the total unadjusted CV, the within-laboratory coefficient of variation, the between-laboratory coefficient of variation, and the intraclass correlation coefficient (ICC).
For the statistical modeling, the SAs were treated as level-1 units nested within PPSs (level-2 clusters). Mixed-effects variance-components models were used to decompose the total variability of a given analyte into the between-cluster (i.e., random intercept, ψ) and the within-cluster (i.e., residual, θ) variability. ICC, as a metric for the within-cluster agreement, was calculated as ICC = ψ / (ψ + θ). To enable direct comparison of the components of the variance across the three analytes and the two parts of the study, the variance components were normalized for the average concentration of a given analyte (μ):
$$ \text{Between-center coefficient of variation} = \frac{\sqrt{\psi}}{\mu} $$

$$ \text{Within-center coefficient of variation} = \frac{\sqrt{\theta}}{\mu} $$
Linear regression models were fitted to test whether the between-SA variability of an analyte's concentrations (defined as the absolute difference of the concentrations of an analyte in the two SAs prepared by a given center divided by the center-specific average of this analyte) depends on the explanatory variables characterizing the biobanking SOPs of the participants. Mixed-effects models were then fitted to test whether the concentrations of the analytes depend on the explanatory variables specific to the participants' SOPs. Pairwise correlations between continuous variables are presented as Spearman's rank correlation coefficients (ρ). For hypothesis testing, p < 0.05 was considered significant. All analyses were performed with Stata 14.2 (StataCorp, College Station, TX, USA).
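As an illustration of this decomposition outside Stata, the following Python sketch uses the balanced-design one-way ANOVA (method-of-moments) estimator rather than the mixed-effects fit applied in the study; the concentrations shown are invented example numbers, not study data.

```python
import numpy as np

def variance_components(values_by_lab):
    """Decompose total variability into between-lab (psi) and within-lab
    (theta) components for a balanced design: k labs x n aliquots per lab."""
    y = np.asarray(values_by_lab, dtype=float)        # shape (k, n)
    k, n = y.shape
    mu = y.mean()
    lab_means = y.mean(axis=1)

    ms_between = n * np.sum((lab_means - mu) ** 2) / (k - 1)
    ms_within = np.sum((y - lab_means[:, None]) ** 2) / (k * (n - 1))

    theta = ms_within                                 # within-lab (residual) variance
    psi = max((ms_between - ms_within) / n, 0.0)      # between-lab (random intercept) variance

    return {
        "total_cv": y.std(ddof=1) / mu,
        "between_cv": np.sqrt(psi) / mu,              # sqrt(psi) / mu
        "within_cv": np.sqrt(theta) / mu,             # sqrt(theta) / mu
        "icc": psi / (psi + theta),
    }

# Hypothetical Abeta1-42 concentrations (pg/mL): two aliquots from each of three labs
print(variance_components([[620.0, 640.0], [810.0, 800.0], [455.0, 470.0]]))
```

For unbalanced designs or models with covariates, a dedicated mixed-effects routine (as used here in Stata) remains the appropriate tool.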
CVs of the intra-assay imprecision of the measurements were 2.9%, 3.9%, and 3.5% for Aβ1–42, pTau181, and albumin, respectively. The results of the homogeneity analyses are presented in Table 1 (left columns) and Fig. 2. The pTau181 and albumin results were characterized by very low overall variability (CV = 3.2% and 4%, respectively), which was comparable to the intra-assay imprecision of the analytical methods used. In the case of Aβ1–42, a CV of 12% was observed, which is considerably higher compared to the method's intra-assay imprecision. The coefficients of between-cluster (i.e., between-PPS) variation were acceptably low for all three analytes (< 0.1% for Aβ1–42 and albumin, and 3% for pTau181). In contrast, the coefficient of within-PPS variation (i.e., variation between the SAs obtained from a given PPS) of Aβ1–42 (12%) turned out higher than those of pTau181 (0.8%) and albumin (4%). The ICCs of Aβ1–42 and albumin (< 0.01 in both cases) were much lower than the ICC of pTau181 (0.93). In the case of Aβ1–42, a low ICC derives from a relatively high within-cluster (i.e., between-SA) variability compared to the between-cluster (i.e., between-PPS) variability. In the case of albumin, taking into consideration its low total variability (CV = 4%), the low ICC should be treated as a neglectable nuisance.
Table 1. Overall coefficients of variation (CVs), parameters of variance-component models decomposing total variability into between-cluster and within-cluster variability, and corresponding intraclass correlation coefficients (ICCs)

            Intracenter scheme^a                      Intercenter scheme^b
            CV (%)^c  √ψ/μ (%)  √θ/μ (%)  ICC         CV (%)^c  √ψ/μ (%)  √θ/μ (%)  ICC
Aβ1–42      12        < 0.1     12        < 0.01      31        28        10        0.89
pTau181     3.2       3         0.8       0.93        –         –         7         0.11 (0.88)^d
Albumin     4         < 0.1     4         < 0.01      –         –         9         0.05 (0.92)^d

μ represents overall average concentration of a given biomarker in a given scheme
Aβ amyloid beta, PPS proficiency processing sample, SA secondary sample
^a In the intracenter scheme, between-cluster (random intercept) variability (ψ) was the variability of the results obtained from 10 PPSs, and within-cluster (residual) variability (θ) was the variability of the results obtained in two SAs prepared from each PPS
^b In the interlaboratory scheme, between-cluster (random intercept) variability (ψ) was the variability of the results obtained from 10 PPSs sent to the participating laboratories, and within-cluster (residual) variability (θ) was the variability of the results obtained in two SAs prepared in each laboratory from the PPS
^c Unadjusted total coefficient of variation of the results of the measurements of 20 SAs treated as 20 independent samples, irrespective of their origin from the PPSs
^d ICCs after exclusion of the two centers (numbers 7 and 8) with apparent failure in their standardized operating procedures
Fig. 2 Results of homogeneity testing for Aβ1–42 (a), pTau181 (b), and albumin (c). Individual concentrations obtained in aliquots prepared from 10 primary samples presented as filled circles; averages presented as hollow circles. Aβ amyloid beta
Interlaboratory processing variability
Ten laboratories participated in the intercenter testing; the results of this part of the study are presented in Table 1 (right columns) and Fig. 3. In the case of Aβ1–42, we observed considerably large overall variability (CV = 31%), much larger than in the case of the other two analytes, as well as much larger than the 12% CV of Aβ1–42 in the homogeneity study. The between-center component of this variability was even more evident, with the corresponding coefficient of Aβ1–42 exceeding more than 10 times the coefficients of the other two analytes. In contrast, the coefficients of the within-center variability of all three analytes (10%, 7%, and 9% for Aβ1–42, pTau181, and albumin, respectively) were comparable.
Fig. 3 Results of interlaboratory processing scheme, for analytes of interest: Aβ1–42 (a), pTau181 (b), and albumin (c). Concentrations obtained in aliquots prepared by a given laboratory from primary sample presented as filled circles; laboratory-specific averages presented as hollow circles. Aβ amyloid beta
Fig. 4 Correlation of variability between Aβ1–42 concentrations measured in two aliquots prepared by a given laboratory and duration of preparation of these aliquots. Variability expressed as absolute difference between concentrations in the two aliquots prepared by a given laboratory divided by average of these two concentrations. Aβ amyloid beta
A reasonably large ICC of Aβ1–42 (0.89) indicates better within-center than between-center agreement between the SAs. Much lower ICCs in the case of pTau181 (0.11) and albumin (0.05) are consequences of large within-center variability in two laboratories (numbers 7 and 8). This contributed significantly to high within-center variability and, correspondingly, to the low ICCs of these two analytes in the whole scheme. After exclusion of the results of these two centers from the statistical analysis, the within-center coefficients of variation of both pTau181 and albumin dropped to 2%, and the ICCs of pTau181 and albumin increased to 0.88 and 0.92, respectively, indicating an excellent within-center agreement.
Additional file 1: Table S1 presents details of the center-specific protocols, considered as potential confounders. Linear regression models were applied to test which of these confounders could explain between-aliquot (i.e., within-center) variability of the Aβ1–42 concentrations. Among the variables tested—storage duration and temperature of the PPSs, force, duration, and temperature of the centrifugation, duration of the preparation of the secondary aliquots, and duration and temperature of the SAs storage at the local biobanks—only the effect of the duration of the preparation of the secondary aliquots turned out to be significant, both unadjusted (p < 0.001) and adjusted for other explanatory variables (p = 0.042). In particular, the between-aliquot variability of the Aβ1–42 concentrations was not statistically significantly associated with its center-specific average concentration (p = 0.76; Additional file 2: Figure S1). Due to a large diversity of the secondary storage tubes used for the aliquoting (practically each participant used a different type of the storage tubes), it was impossible to quantify effects of the biobanking storage tubes. The correlation between the duration of the SAs preparation and the variability in Aβ1–42 concentrations between the SAs in nine laboratories (one participant did not report this metric) is presented in Fig. 4.
Finally, mixed-effects models were fitted to test whether the center-specific confounders affect the concentrations of the individual analytes. For all three of them, the effect of the centrifugation force, unadjusted for other covariates, was positive and either significant (pTau181, p = 0.001) or borderline insignificant (Aβ1–42, p = 0.087; albumin, p = 0.077). The effects of other variables, unadjusted for one another, were insignificant for all three analytes. Interestingly, although all PPSs reached the participants in deeply frozen status, we observed that the lowest concentrations of Aβ1–42, but neither pTau181 nor albumin, were measured in the SAs prepared in the two geographically most-distant centers (numbers 1 and 2, the only two participants from the USA), although the between-aliquot agreement of the results from these two centers was excellent. Further, in one center (number 4), the PPS was erroneously stored at + 4 °C for a prolonged time which, apparently, affected neither the concentrations nor the between-aliquot variability of any of the three analytes. Pairwise correlations of the average concentrations of the three analytes turned out insignificant (p > 0.25 for all three pairs after Bonferroni correction for multiple correlations; data not shown).
In this paper, we report the results of a proficiency processing scheme, evaluating variation between aliquots of CSF samples arising from the differences across local biobanking procedures. Whereas we observed neglectable variability in the concentrations of two analytes (albumin and pTau181) across the laboratories and the aliquots, the variability in Aβ1–42 concentrations in the aliquots prepared by the 10 participating laboratories reached 31%. By decomposition of the total variability into within-laboratory and between-laboratory components, we showed that in addition to the variability between aliquots prepared by different laboratories, the aliquots prepared within a given laboratory can also significantly differ from one another. Finally, we conclude that the duration of the sample processing is probably the most important factor contributing to this variability.
For each analyte of interest, the variability and its components are reported as a set of four statistical metrics: the total unadjusted coefficient of variation, the within-laboratory coefficient of variation, the between-laboratory coefficient of variation, and the intraclass correlation coefficient. The application of coefficients, instead of nonnormalized metrics (like, for example, standard deviations expressed in the units of measurements), enables a direct comparison of the variability and its components for quantities (the concentrations of the analytes), measured on different scales. We believe that such an approach could be also applied for other proficiency testing schemes, irrespective of the analytes tested, since it provides the most comprehensive way to interpret the results. Ideally, the CV, the within-laboratory and the between-laboratory coefficients of variation should be as close as possible to 0, but with the between-laboratory coefficient higher than the within-laboratory coefficient, which would result, in an ideal case, in the ICC as close as possible to 1. The higher the CV, the larger the total variability of the results, and if a CV exceeds some triggering threshold level (which perhaps should be defined taking into consideration factors such as the measurement's method imprecision) the total variability should be decomposed and analyzed closer. In contrast, in cases with a low overall CV, it does not make much sense, we believe, to analyze the components of the variability in more detail. For example, in this study, the within-PPS variability (i.e., the variability between two aliquots obtained from a given primary sample) of albumin in the intralaboratory part is several fold larger (4%) compared to its between-PPS component (< 0.1%). As a matter of fact, the whole variability of the albumin's concentration seems to result exclusively from its between-aliquot component, which, in turn, causes seemingly a very poor agreement between the aliquots (ICC < 0.01). However, considering the overall low variability of the albumin concentrations, this would be an overinterpretation; in this particular case it is reasonable to conclude that the different biobanking procedures do not generate significant variability. An entirely different issue is Aβ1–42 in the interlaboratory study, with a very high total CV (31%), much larger compared to the coefficients of the two other analytes in the intercenter study, as well as the coefficients of all three analytes in the intracenter study (≤ 12% for all analytes). In this case, majority of the total variance comes from the between-laboratory component (28%), with a minor part (10%) resulting from the within-laboratory (i.e., between-aliquot) variability. This pattern tells us that the biobanking SOPs are inhomogeneous across the laboratories and, so long as Aβ1–42 is the analyte of interest, the origin of the aliquots from particular repositories has to be taken into account in the statistical analysis. Indeed, if aliquots from centers number 1 and number 10 were sent for a hypothetical biomarker discovery project to a central laboratory, the fact that the samples were prepared under different SOPs would be enough to misinterpret the measurement results as being "normal" (samples from laboratory number 10) or "pathologic" (laboratory number 1), irrespective of the real status of the patients.
Interestingly, pTau181 and albumin showed low total variability (CVs ≤ 10%), but with an unexpected distribution of its components: there was on average much larger discrepancy between the aliquots generated by the same laboratory (7% and 9%) than the discrepancy across the laboratories (≤ 2.5%). Such distribution of the variability components results in a low between-aliquot agreement, as expressed by the low ICCs (0.11 and 0.05). This pattern is brought about by two outlying centers (numbers 7 and 8; Fig. 3) for which the concentrations of pTau181 and albumin on average fitted very well to the concentrations in the aliquots prepared by the remaining participants, but with large discrepancies between the particular aliquots. Indeed, exclusion of the results from these two centers reduced the overall within-laboratory variability by a factor of four, and increased the between-aliquot agreement (as expressed by the ICCs) 8–18 times (Table 1).
Both low within-laboratory and between-laboratory variability of the pTau181 and albumin concentrations in this study indicate the homogeneity of the PPSs sent to the participants, and also the preanalytical robustness of these two analytes. Hence, we suggest that CSF biobanks may perhaps consider measurements of pTau181 and/or albumin in a series of their aliquots resulting from one patient's primary sample as a control measure to test whether the local procedures fulfill homogeneity criteria.
We observed that the duration of the preparation of the secondary aliquots and the centrifugation force are the two major confounders contributing to the between-aliquot variability of Aβ1–42 concentrations, and to the concentrations of the biomarkers, respectively. Although these covariates were identified as major confounding factors influencing biomarker concentrations in other studies [10–13], we feel that it is premature to derive any conclusions on their role as confounders in biobanking protocols before future studies in a similar setting are completed.
This study is not without limitations. One of these is that the primary samples, sent to the participants, were already pretreated before shipment. First, they were prepared from a pool of four individual CSF samples; and, second, they needed to be frozen. Therefore, in this scheme one additional freezing/thawing cycle was applied compared to an everyday situation, in which a locally collected body fluid sample is normally not frozen before further processing. We believe, however, that at least three arguments justify the procedure as it was applied in our study: first, two freezing/thawing cycles do not bring about more variability in the concentrations of the CSF AD biomarkers than one cycle does [6, 7]; second, certain large-scale projects apply an intermediate freezing/thawing cycle before the aliquots are eventually stored in a biobank [14]; and, third (and perhaps crucial), it is not possible, in schemes like this one, to reduce the number of the freezing/thawing cycles to one, if processing items (samples) are supposed to reach distant laboratories in the most standardized conditions.
Finally, considering that this is probably the first study of this kind, we do not think we could give any kind of detailed recommendations regarding the between-center variability acceptance criteria or ways to improve the CSF biobanking SOPs. We may only speculate that future acceptance criteria should consider at least precision of the analytical methods and the values of the clinically relevant critical concentrations. The former issue is of pure statistical matter, and might be achieved by further decomposition of the total variability by introduction of one additional level in the hierarchical regression models, leading to the intra-assay imprecision (f.e., between-duplicate variability, L1) nested within secondary aliquots (L2) nested within centers (L3). The latter issue is much more complex, as it needs to consider which extent of error, particularly around the biomarkers' diagnosis-relevant decision levels (laboratory cutoff values), is acceptable in a given study. Similarly, in this single study the centrifugation force and the duration of the preparation of the secondary aliquots seem of relevance for the biobanking quality, but we believe that further studies are warranted to confirm these observations.
We believe that proficiency processing schemes, like these reported in the literature [8] as well as the one presented here, provide objective evidence for the most critical preanalytical variables. Standardization of these variables may significantly enhance the quality of the prospectively collected biospecimens and prevent from misinterpretations of the results from the retrospectively collected samples. For example, in our study the duration of the preparation of the aliquots from a primary CSF sample seems to be the most critical variable affecting the within-laboratory Aβ1–42 variability. As for the between-laboratory variability, centrifugation conditions appear to be a critical factor; however, further studies with a larger number of participants are necessary to confirm this finding. In the future, other confounders also need to be addressed; for example, the type of pipette tips and the technique of how a primary sample is pipetted to prepare secondary aliquots may definitely contribute to the intercenter inhomogeneity. Finally, the higher the number of participating laboratories in further schemes, the more reliable will be the elucidation of the impact of the most critical processing variables on analytes of interest [15]. For this reason, proficiency processing schemes are needed to support development of preanalytical CEN/ISO standards (http://www.spidia.eu/).
Abbreviations
Aβ: Amyloid beta
CSF: Cerebrospinal fluid
CV: Coefficient of variation
ICC: Intraclass correlation coefficient
ISO: International Standardization Organization
PPS: Proficiency processing sample
SA: Secondary sample
SOP: Standardized operating procedure
The research leading to these results has received support from the Innovative Medicines Initiative Joint Undertaking under EMIF grant agreement n° 115372, resources of which are composed of financial contribution from the European Union's Seventh Framework Programme (FP7/2007–2013) and EFPIA companies' in-kind contribution. This research was funded in part through the NIH/NCI Cancer Center Support Grant P30 CA008748. DS and participation in CSF proficiency testing was supported by the Polish Ministry of Science and Higher Education grant DIR/WK/2017/01. CT received grants from the European Commission, the Dutch Research Council (ZonMW), Association of Frontotemporal Dementia/Alzheimer's Drug Discovery Foundation, and Alzheimer Netherlands.
The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
PL, FB, CT, and JK contributed to study design, data collection, statistical analyses, and manuscript drafting. AG, OK, NE, and SB contributed to sample preparation, laboratory analyses, and data analyses. BM, KB, DS, SP, JV, MHR, and SM contributed to data collection. All authors read and approved the final manuscript.
This study does not use or report any samples or data of individual subjects, and hence ethical consent is not applicable.
PL received consultation and/or lecture honoraria from IBL International, Fujirebio Europe, AJ Roboscreen, and Roche. CT functioned in advisory boards of Fujirebio and Roche, received nonfinancial support in the form of research consumables from ADxNeurosciences and Euroimmun, and performed contract research or received grants from Probiodrug, Janssen Prevention Ca enter, Boehringer, Brainsonline, AxonNeurosciences, EIP farma, and Roche.
Additional file 1: Table S1. Details of the SOPs to prepare secondary aliquots (SAs) reported by the 10 participants. (DOCX 16 kb)
Additional file 2: Figure S1. Bland–Altman plot of differences between Aβ1–42 concentrations in two aliquots prepared by each participating center as a function of the center-specific average of Aβ1–42 concentrations. (PDF 86 kb)
Department of Psychiatry and Psychotherapy, Laboratory for Clinical Neurochemistry and Neurochemical Dementia Diagnostics, Universitätsklinikum Erlangen, and Friedrich-Alexander Universität Erlangen-Nürnberg, Schwabachanlage 6, 91054 Erlangen, Germany
Department of Neurodegeneration Diagnostics, Department of Biochemical Diagnostics, Medical University of Bialystok, University Hospital of Bialystok, Bialystok, Poland
Integrated BioBank of Luxembourg, Dudelange, Luxembourg
Department of Neurosurgery, Universitätsklinikum Erlangen, and Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany
Clinical Neurochemistry Laboratory, Sahlgrenska University Hospital, Mölndal, Sweden
Institute of Neuroscience and Physiology, Sahlgrenska Academy at University of Gothenburg, Mölndal, Sweden
Biobank Lab, Department of Molecular Biophysics, Faculty of Biology and Environmental Protection, University of Lodz, Lodz, Poland
BBMRI.pl Consortium, Wroclaw, Poland
Department of Experimental Medicine, University of Perugia, Perugia, Italy
Department of Psychiatry and Psychotherapy, University Medical Center Göttingen (UMG), Göttingen, Germany
Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
NYU Center for Biospecimen Research and Development (CBRD), New York, NY, USA
Neurochemistry Laboratory and Biobank, Department of Clinical Chemistry, VU University Medical Center, Amsterdam, The Netherlands
Lewczuk P, Riederer P, O'Bryant SE, Verbeek MM, Dubois B, Visser PJ, Jellinger KA, Engelborghs S, Ramirez A, Parnetti L, et al. Cerebrospinal fluid and blood biomarkers for neurodegenerative dementias: an update of the Consensus of the Task Force on Biological Markers in Psychiatry of the World Federation of Societies of Biological Psychiatry. World J Biol Psychiatry. 2018;19:244–328.
Dubois B, Feldman HH, Jacova C, Hampel H, Molinuevo JL, Blennow K, DeKosky ST, Gauthier S, Selkoe D, Bateman R, et al. Advancing research diagnostic criteria for Alzheimer's disease: the IWG-2 criteria. Lancet Neurol. 2014;13:614–29.
Lewczuk P, Beck G, Esselmann H, Bruckmoser R, Zimmermann R, Fiszer M, Bibl M, Maler JM, Kornhuber J, Wiltfang J. Effect of sample collection tubes on cerebrospinal fluid concentrations of tau proteins and amyloid beta peptides. Clin Chem. 2006;52:332–4.
Perret-Liaudet A, Pelpel M, Tholance Y, Dumont B, Vanderstichele H, Zorzi W, Elmoualij B, Schraen S, Moreaud O, Gabelle A, et al. Risk of Alzheimer's disease biological misdiagnosis linked to cerebrospinal collection tubes. J Alzheimers Dis. 2012;31:13–20.
Kofanova OA, Mommaerts K, Betsou F. Tube polypropylene: a neglected critical parameter for protein adsorption during biospecimen storage. Biopreserv Biobank. 2015;13:296–8.
Zimmermann R, Lelental N, Ganslandt O, Maler JM, Kornhuber J, Lewczuk P. Preanalytical sample handling and sample stability testing for the neurochemical dementia diagnostics. J Alzheimers Dis. 2011;25:739–45.
Vanderstichele H, Van Kerschaver E, Hesse C, Davidsson P, Buyse MA, Andreasen N, Minthon L, Wallin A, Blennow K, Vanmechelen E. Standardization of measurement of beta-amyloid(1-42) in cerebrospinal fluid and plasma. Amyloid. 2000;7:245–58.
Gaignaux A, Ashton G, Coppola D, De Souza Y, De Wilde A, Eliason J, Grizzle W, Guadagni F, Gunter E, Koppandi I, et al. A biospecimen proficiency testing program for biobank accreditation: four years of experience. Biopreserv Biobank. 2016;14:429–39.
Rosenling T, Stoop MP, Smolinska A, Muilwijk B, Coulier L, Shi S, Dane A, Christin C, Suits F, Horvatovich PL, et al. The impact of delayed storage on the measured proteome and metabolome of human cerebrospinal fluid. Clin Chem. 2011;57:1703–11.
Schoonenboom NS, Mulder C, Vanderstichele H, Van Elk EJ, Kok A, Van Kamp GJ, Scheltens P, Blankenstein MA. Effects of processing and storage conditions on amyloid beta (1-42) and tau concentrations in cerebrospinal fluid: implications for use in clinical practice. Clin Chem. 2005;51:189–95.
Kaiser E, Schonknecht P, Thomann PA, Hunt A, Schroder J. Influence of delayed CSF storage on concentrations of phospho-tau protein (181), total tau protein and beta-amyloid (1-42). Neurosci Lett. 2007;417:193–5.
Bjerke M, Portelius E, Minthon L, Wallin A, Anckarsater H, Anckarsater R, Andreasen N, Zetterberg H, Andreasson U, Blennow K. Confounding factors influencing amyloid beta concentration in cerebrospinal fluid. Int J Alzheimers Dis. 2010;2010. https://doi.org/10.4061/2010/986310.
Leitao MJ, Baldeiras I, Herukka SK, Pikkarainen M, Leinonen V, Simonsen AH, Perret-Liaudet A, Fourier A, Quadrio I, Veiga PM, de Oliveira CR. Chasing the effects of pre-analytical confounders—a multicenter study on CSF-AD biomarkers. Front Neurol. 2015;6:153.
Shaw LM, Vanderstichele H, Knapik-Czajka M, Clark CM, Aisen PS, Petersen RC, Blennow K, Soares H, Simon A, Lewczuk P, et al. Cerebrospinal fluid biomarker signature in Alzheimer's disease neuroimaging initiative subjects. Ann Neurol. 2009;65:403–13.
Betsou F, Bilbao R, Case J, Chuaqui R, Clements JA, De Souza Y, De Wilde A, Geiger J, Grizzle W, Guadagni F, et al. Standard PREanalytical Code version 3.0. Biopreserv Biobank. 2018. https://doi.org/10.1089/bio.2017.0109. | CommonCrawl |
Published: 29 September 2015
Cost effectiveness and resource allocation of Plasmodium falciparum malaria control in Myanmar: a modelling analysis of bed nets and community health workers
Tom L. Drake (1,2), Shwe Sin Kyaw (1), Myat Phone Kyaw (3), Frank M. Smithuis (2,4), Nicholas P. J. Day (1,2), Lisa J. White (1,2) & Yoel Lubell (1,2)
Funding for malaria control and elimination in Myanmar has increased markedly in recent years. While there are various malaria control tools currently available, two interventions receive the majority of malaria control funding in Myanmar: (1) insecticide-treated bed nets and (2) early diagnosis and treatment through malaria community health workers. This study aims to provide practical recommendations on how to maximize impact from investment in these interventions.
A simple decision tree is used to model intervention costs and effects in terms of years of life lost. The evaluation is from the perspective of the service provider and costs and effects are calculated in line with standard methodology. Sensitivity and scenario analysis are undertaken to identify key drivers of cost effectiveness. Standard cost effectiveness analysis is then extended via a spatially explicit resource allocation model.
Community health workers have the potential for high impact on malaria, particularly where there are few alternatives to access malaria treatment, but are relatively costly. Insecticide-treated bed nets are comparatively inexpensive and modestly effective in Myanmar, representing a low risk but modest return intervention. Unlike some healthcare interventions, bed nets and community health workers are not mutually exclusive nor are they necessarily at their most efficient when universally applied. Modelled resource allocation scenarios highlight that in this case there is no "one size fits all" cost effectiveness result. Health gains will be maximized by effective targeting of both interventions.
Malaria in Myanmar is important not only because of the health burden to the country's own population, but also because of the emergence of artemisinin-resistant Plasmodium falciparum parasites in the region [1–3]. The burden of malaria in Myanmar is spatially heterogeneous and seasonal. An estimated 37 % of the population live in areas broadly considered at high risk of malaria (>1 case per 1000 population) and a further 23 % live in areas of low malaria risk (0–1 cases per 1000 population) [4]. Funds for malaria control and elimination in Myanmar have surged in recent years, including the Myanmar-specific Three Millennium Development Goal (3MDG) fund and the Global Fund's Regional Artemisinin Initiative, a US$ 100 million fund of which US$ 40 million has been allocated to Myanmar. The financial resources available to Myanmar at this time are both unprecedented in size and potentially time limited. It is critical, therefore, that these resources are allocated efficiently, maximizing impact and improving financial sustainability.
While there are various malaria control tools currently available, two interventions receive the majority of malaria control funding in Myanmar (1) insecticide-treated bed nets (ITN), including long-lasting insecticide-treated nets and (2) early diagnosis and treatment through malaria community health workers (CHW). ITN are most effective against mosquitoes which are nocturnal, endophagic blood feeders whereas most species commonly found in Myanmar tend toward crepuscular and exophagic biting [5–7]. The evidence base for the cost effectiveness of ITN against malaria spread by the former type of mosquito is strong [8] and previous modelling analysis found that while changes in mosquito biting behaviour could reduce effectiveness, nevertheless ITN could remain a cost effective intervention [9]. Malaria CHW costs have been estimated in Cambodia [10], Nigeria [11] and across sub-Saharan Africa [12].
The malaria policy discourse in Myanmar is frequently framed as a choice between prioritizing universal coverage of either ITN or CHW. While ITN and CHW can be thought of as competing for limited resources, they are not mutually exclusive interventions and are in many senses complementary. It is also the case, however, that funding is not available for universal access to both interventions, nor has it been demonstrated that such scale-up would be an efficient use of scarce resources in all settings. The factors that determine the costs and effects of both interventions vary across the country, and context is important in understanding cost effectiveness. This study evaluates the costs and effects of these key malaria control interventions in Myanmar with an emphasis on sensitivity and scenario analysis rather than a generalized cost effectiveness result. Furthermore, targeted allocation of these resources is illustrated by an allocation model for a region of Myanmar.
Financial costs are included from the perspective of the National Malaria Control Programme or other malaria intervention funders. In this analysis ITN distribution is assumed to be conducted through a dedicated distribution campaign. The ITN cost comprises the procurement cost (c_p), direct distribution costs (c_d) and programme management (c_m). Cost data were obtained from the Three Millennium Development Goal (3MDG) fund, a funding organization in Myanmar, with cross-checking of components against private sector quotations. A distribution of two nets per household is assumed, with 10 % wastage (w) and a mean household size of 5.2 people. The primary time horizon is one year, and as such the per person ITN cost is annualized over the lifespan of the net (l), assumed to be three years, using a discount rate of 5 % (r) [13].
$$c_{ITN} = \frac{(c_p + c_d + c_m)(1 + w)}{r^{-1}\left(1 - (1 + r)^{-l}\right)}$$
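As a quick numerical illustration, the annualization above can be computed directly. The sketch below is written in R (the language the model was built in); the unit costs and the per-person scaling by two nets per household of 5.2 people are illustrative assumptions, not the study's actual cost inputs.

```r
# Sketch: annualized ITN cost per person covered, following the formula above.
# Unit costs are hypothetical placeholders (US$ per net).
annualized_itn_cost <- function(c_p, c_d, c_m, w = 0.10, r = 0.05, l = 3,
                                nets_per_hh = 2, hh_size = 5.2) {
  cost_per_net   <- (c_p + c_d + c_m) * (1 + w)            # procurement + distribution + management, with wastage
  annuity_factor <- (1 - (1 + r)^(-l)) / r                  # the r^-1 * (1 - (1 + r)^-l) term
  (cost_per_net / annuity_factor) * nets_per_hh / hh_size   # annual equivalent cost per person covered
}

annualized_itn_cost(c_p = 3.0, c_d = 1.0, c_m = 0.5)        # roughly US$ 0.70 per person per year
```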
CHW costs are derived from a separate detailed cost analysis currently under review. To briefly summarize, CHW costs are estimated using an ingredients-based micro-costing of six cost centres: patient services; training; monitoring and supervision; programme management; incentives; and overheads. For this cost effectiveness analysis the cost of treatment (c_ACT) is separated from the remaining CHW cost per person covered (c_CHW). In addition to intervention costs, the direct costs of diagnosis and treatment for malaria cases treated by the basic health system are included (c_ACT).
CHW are an extension of the health system, and their marginal utility will therefore depend on locally specific access to treatment. The model must define a common metric to quantify the effects of ITN and CHW. The model calculates the number of years of life lost (YLL) averted, a widely used metric for health impact, through treatment of cases or through cases directly averted by bed nets. In this case YLL are likely to be similar to disability adjusted life years, as the contribution of morbidity will be negligible compared with mortality. The model was developed in both R (version 3.1.2) and TreeAge (TreeAge Pro 2014, USA).
The probability tree (Fig. 1) traces an individual through a chronological series of event possibilities, beginning with an annual probability of contracting malaria (m), which is adjusted by the protective effect of ITN (p), if applicable. Individuals with malaria have a probability (a) of receiving treatment from a provider other than a CHW. If a CHW is available in the village, there is a probability (q) that a malaria case will seek treatment from the CHW, drawn both from those who would have received treatment elsewhere and from those who would not have received any treatment. Each case of malaria has a probability of death in the absence of treatment (μ) and a mean number of YLLs lost per death (d). Treatment is assumed to be with an ACT, and the direct reduction in mortality with ACT treatment (r_1) is assumed to be the same regardless of provider. The terminal payoffs are scaled by population (v) and give the net cost and net effects of each intervention arm for one village (or one township when applied in the resource allocation model, see below). Parameter values can be found in Table 1. For the purpose of this model only one provider is attended per person; individuals may seek treatment at a CHW instead of their previous provider. This is intended to reflect the greater marginal utility in areas with poor access to treatment, even when uptake at the CHW is equal.
Probability tree model of cost and impact for malaria community health workers and bed nets
Table 1 Parameter list and values for decision tree models
The model was developed as the simplest structure that incorporates the key relevant data and provides the desired output metrics of cost and years of life lost. The advantages of a simple model are ease of communication to end users, speed of development and flexibility of application.
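A minimal R sketch of one way to encode this tree is given below. The payoff structure and most default parameter values (a, μ, r_1, d and the unit costs) are assumptions for illustration only; the incidence of 25 cases per 500 people, the 30 % ITN protective effect and the 30 % CHW uptake echo figures quoted elsewhere in the text, but none of this reproduces the values of Table 1.

```r
# Sketch of expected annual cost and years of life lost (YLL) for one village under
# the four intervention arms. Parameter names follow the text; values and payoff
# structure are illustrative assumptions, not the authors' exact model.
village_outcomes <- function(pop = 500,
                             m  = 0.05,   # annual malaria risk (25 cases per 500 people)
                             itn = FALSE, chw = FALSE,
                             p  = 0.30,   # ITN protective effect
                             a  = 0.50,   # prob. of treatment from an existing provider (assumed)
                             q  = 0.30,   # prob. of attending the CHW where present
                             mu = 0.05,   # case fatality without treatment (assumed)
                             r1 = 0.95,   # relative mortality reduction with ACT (assumed)
                             d  = 30,     # YLL per malaria death (assumed)
                             c_act = 2, c_itn_pp = 1, c_chw_pp = 2) {  # unit costs in US$ (assumed)
  cases     <- pop * m * (1 - itn * p)          # cases after ITN protection
  p_chw     <- if (chw) q else 0                # share treated by the CHW (displaces other providers)
  p_other   <- a * (1 - p_chw)                  # share treated elsewhere
  treated   <- cases * (p_chw + p_other)
  untreated <- cases - treated
  yll  <- (untreated * mu + treated * mu * (1 - r1)) * d
  cost <- treated * c_act + itn * c_itn_pp * pop + chw * c_chw_pp * pop
  list(cases = cases, yll = yll, cost = cost)
}
```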
Bed nets and community health workers are not universally applied interventions, and a general estimate of intervention costs and effects misses important variation, particularly with respect to the sometimes extreme remoteness of different populations in Myanmar. Instead, intervention cost effectiveness is calculated in four illustrative accessibility or remoteness scenarios, whereby more remote settings are characterized by increased cost of programme delivery, increased CHW uptake and decreased baseline access to treatment (Table 2). Data are not available to support specific parameterizations for these assumptions, but the direction of the trends is intuitive and supported by policy makers at the national malaria control programme and programme managers at an affiliated non-governmental organization, Medical Action Myanmar. In addition to the scenario analysis, univariate sensitivity analysis is undertaken to identify key determinants of intervention cost effectiveness. Probabilistic sensitivity analysis (PSA) can be found in the supporting documentation (Additional File 1). Quantified and non-quantified costs and consequences are summarized in Table 3 to aid interpretation and to highlight potentially important factors which are not included in the quantitative analysis, as recommended for economic evaluations of public health interventions by Weatherly and colleagues [17].
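A univariate analysis of this kind can be run as a simple loop that varies one parameter at a time, as sketched below using the hypothetical village_outcomes() function introduced earlier; the parameter ranges are assumptions, not the uncertainty ranges of Table 1.

```r
# Sketch of a univariate sensitivity loop: vary one parameter at a time over an
# assumed range and record the CHW cost per YLL averted versus the null comparator.
base   <- list(m = 0.05, p = 0.30, a = 0.5, q = 0.30, mu = 0.05)
ranges <- list(a = c(0.2, 0.8), q = c(0.1, 0.95), mu = c(0.01, 0.10))

chw_cer <- sapply(names(ranges), function(par) {
  sapply(ranges[[par]], function(val) {
    args     <- base; args[[par]] <- val
    with_chw <- do.call(village_outcomes, c(args, list(chw = TRUE)))
    null_arm <- do.call(village_outcomes, args)
    (with_chw$cost - null_arm$cost) / (null_arm$yll - with_chw$yll)   # US$ per YLL averted
  })
})
chw_cer   # columns = parameter varied; rows = low / high end of each assumed range
```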
Table 2 Parameter values for four remoteness scenarios
Table 3 Cost-consequence summary of insecticide treated nets and malaria community health workers in Myanmar
Cost effectiveness ratios are calculated for each intervention against a common null comparator or "no additional intervention" baseline, which includes the number of YLLs expected in absence of intervention and the cost of treatment for patients who receive it. The marginal benefit of each in the presence of the other is not equal to the marginal benefit of each in isolation. A CHW in a village with good bed net coverage has lower impact than in the same village without bed net coverage because there are fewer cases to treat, and vice versa. For this reason the combined intervention arm is included explicitly as a model output rather than as a sum of separate interventions. Estimates are per year and reflect a village of 500 people with 25 malaria cases per year in absence of interventions.
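Using the same hypothetical function, the costs, effects and cost-effectiveness ratios of the three active arms against the common null comparator can be tabulated as follows; any numbers produced this way are illustrative only.

```r
# Sketch: cost-effectiveness ratios of ITN, CHW and ITN+CHW for one village,
# each against the "no additional intervention" baseline.
arms <- list(none = c(itn = FALSE, chw = FALSE),
             itn  = c(itn = TRUE,  chw = FALSE),
             chw  = c(itn = FALSE, chw = TRUE),
             both = c(itn = TRUE,  chw = TRUE))
out  <- lapply(arms, function(x) village_outcomes(itn = x["itn"], chw = x["chw"]))
base <- out$none
sapply(out[-1], function(x) (x$cost - base$cost) / (base$yll - x$yll))   # US$ per YLL averted
```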
An extension to standard cost effectiveness analysis, the second stage of this study applies a spatially explicit resource allocation model for a given budget. The model is applied to the Tier 1 or 'MARC' region of Myanmar, an area in the east of Myanmar identified as a priority area for malaria control. There are 52 townships in Tier 1 to which a fixed budget of US$ 10 million is allocated. Township specific data on population is from the 2014 census [18] and malaria incidence is based on routine health system surveillance records, currently managed by WHO Myanmar on behalf of the Ministry of Health (2013, unpublished). The malaria surveillance system in Myanmar is undergoing systemic improvements and data capture is not complete. All other parameter values are as reported in Table 1.
The allocation model uses the decision tree in Fig. 1 to calculate cost effectiveness ratios for all intervention options for each geographic patch, in this case a township. Once all scenario cost effectiveness ratios are calculated the model allocates the available budget starting with the most cost effective intervention. As the budget is allocated, the most cost effective intervention in a particular township may be replaced by a less cost effective, but more effective intervention. Dominated intervention scenarios, those where any increase in effect can be achieved by a more cost effective alternative, are excluded. The allocation process ceases when the remaining budget is less than the marginal cost of the next most cost effective intervention. It is worth noting that the optimal allocation of resources is not identified through sequential iteration and improvement of budget allocation options since the cost effectiveness ratios provide sufficient information to identify the allocation result directly. This is more accurate and computationally efficient than identification of a distribution of resources through iterative optimization or "brute-force" calculation of all or a large number of possible distribution scenarios. The resource allocation analysis is repeated to examine the impact of variations in bed net protective effectiveness, CHW uptake and cost sharing for integrated CHW programmes.
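The allocation step can be sketched as a greedy routine that ranks township-intervention options by their cost-effectiveness ratio and funds them until the budget is exhausted. The township data below are hypothetical, and this simplified version omits the replacement of a cheaper option by a more effective but less cost-effective one via incremental ratios, which the full model handles.

```r
# Sketch of greedy budget allocation by cost-effectiveness ratio (CER).
allocate_budget <- function(options, budget) {
  options$cer <- options$cost / options$yll_averted        # US$ per YLL averted
  options     <- options[order(options$cer), ]             # most cost-effective first
  funded      <- logical(nrow(options))
  for (i in seq_len(nrow(options))) {
    if (options$cost[i] <= budget) {
      budget    <- budget - options$cost[i]
      funded[i] <- TRUE
    }
  }
  options[funded, ]
}

# Hypothetical example: three townships with two options each
opts <- data.frame(township     = rep(c("A", "B", "C"), each = 2),
                   intervention = rep(c("ITN", "CHW"), times = 3),
                   cost         = c(60, 150, 40, 120, 80, 200) * 1e3,
                   yll_averted  = c(300, 500, 150, 450, 250, 550))
allocate_budget(opts, budget = 4e5)
```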
The cost effectiveness of malaria control in Myanmar is context dependent. CHW have greater potential effects, particularly in more remote settings, but are also more costly. In the scenario analysis, CHW in the easily accessible village setting avert 0.51 YLLs per year at a cost of US$ 556 (US$ 1089 per YLL averted). This rises in the very difficult to reach villages to 4.05 YLLs averted at a cost of US$ 2295 (US$ 567 per YLL averted), a higher cost but a more cost effective use of CHWs. Bed nets were consistently less costly and modestly effective. In the easily accessible village setting bed nets avert 1.24 YLLs at a cost of US$ 238 (US$ 193 per YLL averted), rising to 2.25 YLLs averted for US$ 750 (US$ 333 per YLL averted) in the very difficult to access setting. In the very difficult to access village setting, a combination of both bed nets and CHW gives the greatest impact of 5.08 YLLs averted, at a cost of US$ 3031 (US$ 597 per YLL averted). The above results are summarized in Table 4 and Fig. 2 and assume that CHW provide only malaria services (this assumption is relaxed in the resource allocation analysis).
Table 4 Costs and effects of malaria interventions in four remoteness scenarios
Costs and effects of malaria control in different accessibility scenarios. Circle indicates a dominated intervention. E easily accessible, M moderately accessible, D difficult to access, V very difficult to access
Sensitivity analysis
Univariate sensitivity analysis was conducted for the cost effectiveness of CHW (Fig. 3) and bed nets (Fig. 4) using the wide uncertainty ranges in Table 1. The key determinants of cost effectiveness for CHW are baseline access to treatment with an ACT and the likelihood that a person with malaria seeks treatment from the CHW. In reality these two factors may be related; low baseline access to treatment might be expected to increase treatment seeking at a CHW. Univariate sensitivity analysis treats these values as independent. The key determinants of bed net cost effectiveness are the untreated malaria mortality risk and the protective effect of the net. Changes in malaria incidence and mortality affect the magnitude of effects substantially but proportionally for all intervention options, and therefore do not affect intervention comparison.
Change in CHW cost effectiveness: univariate sensitivity analysis of all relevant parameters
Change in bed net cost effectiveness: univariate sensitivity analysis of all relevant parameters
Figure 5a presents an illustrative optimal allocation of an annual budget of US$ 10 million to CHW and ITN roll-out in the 52 townships of the MARC region, Myanmar. Almost half of the townships are allocated both CHW and ITN, 12 townships receive ITN only and 15 townships are allocated to provide standard health services without CHW or ITN. Figure 5b–d presents the scenario variations, where key assumptions are varied in order to observe the effect on resource allocation. Panel b assumes a low ITN protective effect of 5 %, rather than the default 30 %. Panel c presents the resource distribution assuming 95 % uptake of CHW by individuals with malaria, rather than 30 %. Panels b and c find that, at the margin, CHW rather than ITN should be prioritized. The specific townships receiving these marginal interventions are likely to be an artefact of population size and the residual budget amount at the end of the allocation process. Panel d presents a cost-sharing scenario, where the benefits of an integrated CHW programme are represented by the assumption that funds allocated for malaria control need only cover 50 % of the total programme cost. Notably, the allocation of both CHW and ITN to the majority of southern and western townships, and to the Kachin townships in the north, is robust to these scenario variations.
Township allocation of malaria interventions in the MARC region, Myanmar. Legends: Maps indicate allocation of US$ 10 million to bed nets and malaria community health workers in the MARC region, Myanmar. a Allocation using default parameter values detailed in Table 1. b Allocation assuming a lower bed net protective effect of 5 %. c Allocation assuming a higher uptake of community health workers; 95 % of malaria infections. d Allocation assuming 50 % cost-sharing for community health workers. For panels (b–d) all parameters other than the specified variation are the default values outlined in Table 1
Malaria intervention decisions in Myanmar are based on judgement supported by the limited available evidence. The average and incremental cost effectiveness ratios give decision makers a sense of "bang for buck" to inform these judgements, while the resource allocation modelling highlights the importance of targeting both interventions to where they can have the greatest impact. This study finds that CHW have the potential for high impact on malaria, particularly in difficult to access areas where the availability of other services may be low, provided CHW use is good. However, CHW are more costly and, if delivering only malaria services, are associated with higher cost-effectiveness ratios. ITN are a robustly cost effective intervention, but their total health impact is expected to be lower in Myanmar due to the biting habits of the main mosquito vector species. The annualization of the ITN cost over the lifespan of the net, conservatively assumed to be three years, means the comparative cost is lower. Although the cost of health gains is low with ITN, in the context of planning for malaria elimination more impactful interventions will need to be considered.
The cost effectiveness of both CHW and ITN is sensitive to the baseline availability of treatment, indicating that services will be most cost effective when targeted to areas with poor access to malaria diagnosis and treatment. The utilization of CHW is also very important, and investment in quality training, CHW supervision and community engagement may be important to implementing a cost effective CHW programme [19]. A further option available to planners seeking to improve the cost effectiveness of CHW programmes is to expand the package of services offered by CHW. This is already happening, and many CHW now also provide a basic health care package or additional services such as tuberculosis detection and treatment. In short, measures to improve the cost effectiveness of community health workers include expanding the scope of available services; strategies to improve the likelihood that community members seek treatment from the community health worker when they have fever; and targeting community health workers to where they will be most cost effective.
For several reasons the main analysis does not apply a cost effectiveness threshold. It is difficult to define an appropriate threshold for the cost per YLL or DALY averted; the budget context in Myanmar is complex, with modest NMCP funds being supplemented by international aid. Moreover, in the context of a drive towards elimination, all interventions will cease to appear cost effective as the malaria burden decreases (in the absence of a model for long-term benefits). The use of measures such as cost per DALY averted is, therefore, less relevant and highly uncertain [20, 21]. The most immediately relevant question is how to maximize impact from the malaria funds available in Myanmar, and for this no threshold is necessary.
An extension of standard cost effectiveness analysis to spatially (in this case township-wise) specific resource allocation modelling highlights the need for a paradigm shift in policy discussion from prioritizing universal coverage of the "most cost effective" intervention to targeting of both interventions and presents illustrative township specific recommendations. In this analysis, malaria burden and to a lesser extent population numbers determine the optimal distribution of resources. Future work will seek to include additional data specific to each township.
Part of the aim of this study is to formalize through a cost effectiveness framework the kind of intuitive judgements that many policy makers and influencers in Myanmar are discussing. There has been much debate regarding the various merits of bed nets and malaria CHW. This paper does not come down on either side of this debate but seeks to summarize the characteristics of each and highlight the importance of targeting both to areas where impact can be maximized.
This study has several limitations. The model does not include human population movement or malaria transmission dynamics. A malaria transmission model, incorporated into the cost effectiveness model, would be a useful extension. This would allow indirect effects to be incorporated into the analysis and would provide projections of the impact on malaria transmission going forward. The analysis does not include benefits to the patient beyond malaria impact, such as reduced costs of accessing care, nor are issues of service quality examined here. For CHW there is a strong interest in extending their ability to diagnose and treat other causes of illness, which would yield higher health gains than those accounted for here. The model considers malaria control in the general population and does not specifically include high-risk groups such as migrant or mobile populations. Resource allocation modelling is applied at the township level, whereas in Myanmar townships make decisions to allocate malaria interventions on a village-by-village basis. Finally, township variation here is characterized by population and malaria burden only; costs, baseline access to treatment and treatment-seeking behaviour are not assumed to vary between townships.
Tun KM, Imwong M, Lwin KM, Win AA, Hlaing TM, Hlaing T et al. Spread of artemisinin-resistant Plasmodium falciparum in Myanmar: a cross-sectional survey of the K13 molecular marker. Lancet Infect Dis. 2015;15:415–21.
Ashley EA, Dhorda M, Fairhurst RM, Amaratunga C, Lim P, Suon S et al. Spread of artemisinin resistance in Plasmodium falciparum malaria. N Engl J Med. 2014;371:411–23.
Takala-Harrison S, Jacob CG, Arze C, Cummings MP, Silva JC, Dondorp AM, et al. Independent emergence of artemisinin resistance mutations among Plasmodium falciparum in Southeast Asia. J Infect Dis. 2014.
WHO. World Malaria Report. Geneva: World Health Organization; 2013.
Smithuis FM, Kyaw MK, Phe UO, van der Broek I, Katterman N, Rogers C, et al. The effect of insecticide-treated bed nets on the incidence and prevalence of malaria in children in an area of unstable seasonal transmission in western Myanmar. Malar J. 2013;12:363.
Kongmee M, Achee NL, Lerdthusnee K, Bangs MJ, Chowpongpang S, Prabaripai A, et al. Seasonal abundance and distribution of Anopheles larvae in a riparian malaria endemic area of western Thailand. Southeast Asian J Trop Med Public Health. 2012;43:601–13.
Shi W, Zhou X, Zhang Y, Zhou X, Hu L, Wang X, et al. An investigation on malaria vectors in western part of China–Myanmar border. Zhongguo Ji Sheng Chong Xue Yu Ji Sheng Chong Bing Za Zhi. 2011;29:134–7.
White MT, Conteh L, Cibulskis R, Ghani AC. Costs and cost-effectiveness of malaria control interventions–a systematic review. Malar J. 2011;10:337.
Briet OJ, Chitnis N. Effects of changing mosquito host searching behaviour on the cost effectiveness of a mass distribution of long-lasting, insecticidal nets: a modelling study. Malar J. 2013;12:215.
Yeung S, Damme WV, Socheat D, White NJ, Mills A. Cost of increasing access to artemisinin combination therapy: the Cambodian experience. Malar J. 2008;7:84.
Onwujekwe O, Uzochukwu B, Ojukwu J, Dike N, Shu E. Feasibility of a community health worker strategy for providing near and appropriate treatment of malaria in southeast Nigeria: an analysis of activities, costs and outcomes. Acta Trop. 2007;101:95–105.
McCord GC, Liu A, Singh P: Deployment of community health workers across rural sub-Saharan Africa: financial considerations and operational assumptions. Bull World Health Organ. 2013; 91:244–53B.
Drummond MF, Sculpher MJ, Torrance GW. Methods for the economic evaluation of health care programs. Oxford: Oxford University Press; 2005.
Lubell Y, Staedke SG, Greenwood BM, Kamya MR, Molyneux M, Newton PN, et al. Likely health outcomes for untreated acute febrile illness in the tropics in decision and economic models; a Delphi survey. PLoS One. 2011;6:e17439.
Lindblade KA, Mwandama D, Mzilahowa T, Steinhardt L, Gimnig J, Shah M, et al. A cohort study of the effectiveness of insecticide-treated bed nets to prevent malaria in an area of moderate pyrethroid resistance, Malawi. Malar J. 2015;14:31.
Lengeler C. Insecticide-treated bed nets and curtains for preventing malaria. Cochrane Database Syst Rev. 2004;2:CD000363.
Weatherly H, Drummond M, Claxton K, Cookson R, Ferguson B, Godfrey C, et al. Methods for assessing the cost-effectiveness of public health interventions: key challenges and recommendations. Health Policy Amst Neth. 2009;93:85–92.
Myanmar Census [http://countryoffice.unfpa.org/myanmar/census/].
Kok MC, Dieleman M, Taegtmeyer M, Broerse JEW, Kane SS, Ormel H, et al. Which intervention design factors influence performance of community health workers in low- and middle-income countries? A systematic review. Health Policy Plan. 2014. doi:10.1093/heapol/czu126.
Lubell Y. Investment in malaria elimination: a leap of faith in need of direction. Lancet Glob Health. 2014;2:e63–4.
Drake T. Priority setting in global health: towards a minimum DALY value. Health Econ. 2014;23:248–52.
TD and YL conceived of the study. TD and SSK completed the costing sections. TD developed the model and undertook the analyses. FS, ND, LJ, MPK and YL provided critical feedback during several iterations of the analysis and manuscript. All authors read and approved the final manuscript.
The authors would like to acknowledge the support of the National Malaria Control Programme, the Department of Medical Research and the World Health Organization, Myanmar Country Office.
Compliance with ethical guidelines
Competing interests: The authors declare that they have no competing interests.
Funding statement: This work was supported by the Three Millennium Development Goal (3MDG) Fund, the Bill and Melinda Gates Foundation (BMGF) and the Wellcome Trust Major Overseas Programme in SE Asia (grant number 106698/Z/14/Z).
Mahidol-Oxford Tropical Medicine Research Unit, 420/6 Rajvithi Rd, Bangkok, 10400, Thailand
Tom L. Drake, Shwe Sin Kyaw, Nicholas P. J. Day, Lisa J. White & Yoel Lubell
Nuffield Department of Medicine, University of Oxford, Oxford, UK
Frank M. Smithuis
Department of Medical Research, Ministry of Health, Yangon, Myanmar
Myat Phone Kyaw
Medical Action Myanmar, Yangon, Myanmar
Frank M. Smithuis
Correspondence to Tom L. Drake.
Additional file 1. Cost effectiveness and resource allocation of malaria control in Myanmar: further sensitivity and scenario analyses. | CommonCrawl |
Chiral crossover characterized by Mott transition at finite temperature
Shijun Mao ,
School of Physics, Xi'an Jiaotong University, Xi'an 710049, China
We discuss the proper definition for the chiral crossover at finite temperature, based on Goldstone's theorem. Different from the commonly used maximum change in chiral condensate, we propose defining the crossover temperature using the Mott transition of pseudo-Goldstone bosons, which, by definition, guarantees Goldstone's theorem. We analytically and numerically demonstrate this property in the frame of a Pauli-Villars regularized NJL model. In an external magnetic field, we find that the Mott transition temperature shows an inverse magnetic catalysis effect.
Keywords: chiral crossover, Goldstone's theorem, Mott transition
[1] M. L.Goldberger and S. B.Treiman, Phys. Rev. 110, 1178 (1958) doi: 10.1103/PhysRev.110.1178
[2] M. Gell-Mann, R. Oakes, and B. Renner, Phys. Rev. 175, 2195 (1968) doi: 10.1103/PhysRev.175.2195
[3] P. Scior, L.Semkal, and D. Smith, arXiv: 1710.0614
[4] S. R. Sharpe, arXiv: 9811006
[5] H. T. Ding, P. Hegde, O. Kaczmarek et al., Phy. Rev. Lett. 123, 062002 (2019) doi: 10.1103/PhysRevLett.123.062002
[6] A. Bazavov et al. (HotQCD Collaboration), Phys. Lett. B 795, 15-21 (2019)
[7] J. Goldstone, Nuovo Cim. 19, 154-164 (1961) doi: 10.1007/BF02812722
[8] J. Goldstone, A. Salam, and S. Weinberg, Phys. Rev. 127, 965-970 (1962) doi: 10.1103/PhysRev.127.965
[9] N. F. Mott, Rev. Mod. Phys. 40, 677 (1968) doi: 10.1103/RevModPhys.40.677
[10] J. Huefner, S. Klevansky, and P. Rehberg, Nucl. Phys. A 606, 260 (1996)
[11] P. Costa, M. Ruivo, and Y. Kalinovsky, Phys. Lett. B 560, 171 (2003)
[12] Y. Nambu and G. Jona-Lasinio, Phys. Rev. 122, 345(1961) and 124, 246(1961)
[13] S. P. Klevansky, Rev. Mod. Phys. 64, 649 (1992) doi: 10.1103/RevModPhys.64.649
[14] M. K. Volkov, Phys. Part. Nucl. 24, 35 (1993)
[15] T. Hatsuda and T. Kunihiro, Phys. Rep. 247, 221 (1994) doi: 10.1016/0370-1573(94)90022-1
[16] M. Buballa, Phys. Rep. 407, 205 (2005)
[17] G. S. Bali, F. Bruckmann, G. Endrodi et al., JHEP 2012, 044 (2012)
[18] G. S. Bali, F. Bruckmann, G. Endrodi et al., Phys. Rev. D 86, 071502 (2012)
[20] Y. Hidaka and A. Yamatomo, Phys. Rev. D 87, 094502 (2013)
[21] V. Bornyakov, P. Buividovich, N. Cundy et al., Phys. Rev. D 90, 034501 (2014)
[22] M. D'Elia, F. Manigrasso, F. Negro et al., Phys. Rev. D 98, 054509 (2018)
[23] G. Endroedi, M. Giordano, S. D. Katz et al., arXiv: 1904.10296
[24] S. P. Klevansky and R. H. Lemmer, Phys. Rev. D 39, 3478 (1989)
[25] K. G. Klimenko, Theor. Math. Phys. 89, 1161 (1992)
[26] V. P. Gusynin, V. A. Miransky, and I. A. Shovkovy, Nucl. Phys. B 462, 249 (1996)
[27] V. A. Miransky and I. A. Shovkovy, Phys. Rep. 576, 1 (2015) doi: 10.1016/j.physrep.2015.02.003
[28] J. O. Anderson and W. R. Naylor, Rev. Mod. Phys. 88, 025001 (2016) doi: 10.1103/RevModPhys.88.025001
[29] F. Preis, A. Rebhan, and A. Schmitt, JHEP 1103, 033 (2011)
[30] K. Fukushima and Y. Hidaka, Phys. Rev. Lett. 110, 031601 (2013) doi: 10.1103/PhysRevLett.110.031601
[31] J. Y. Chao, P. C. Chu, and M. Huang, Phys. Rev. D 88, 054009 (2013)
[32] T. Kojo and N. Su, Phys. Lett. B 720, 192 (2013)
[33] F. Bruckmann, G. Endrodi, and T. G. Kovacs, JHEP 1304, 112 (2013)
[34] K. Kamikado and T. Kanazawa, JHEP 1403, 009 (2014)
[35] A. Ayala, M. Loewe, A. J. Mizher et al., Phys. Rev. D 90, 036001 (2014)
[36] A. Ayala, L. A. Hernandez, A. J. Mizher et al., Phys. Rev. D 89, 116017 (2014)
[37] R. L. S. Farias, K. P. Gomes, G. Krein et al., Phys. Rev. C 90, 025203 (2014)
[38] M .Ferreira, P. Costa, O. Lourenco et al., Phys. Rev. D 89, 116011 (2014)
[39] A. Ayala, C. A. Dominguez, L. A. Hernandez et al., Phys. Rev. D 92, 096011(2015); Phys. Lett. B 759, 99(2016)
[40] N. Mueller and J. M. Pawlowski, Phys. Rev. D 91, 116010 (2015)
[41] J. Braun, W. A.Mian, and S. Rechenberger, Phys. Lett. B 755, 265 (2016)
[42] S. J. Mao, Phys. Lett. B 758, 195(2016); Phys. Rev. D 94, 036007(2016); Phys. Rev. D 97, 011501(R)(2018).
[43] V. I. Ritus, Annals Phys. 69, 555 (1972) doi: 10.1016/0003-4916(72)90191-1
[44] C. N. Leung and S. Y. Wang, Nucl. Phys. B 747, 266 (2006)
[45] E. Elizalde, E. J. Ferrer, and V. de la Incera, Ann. Phys. (N.Y.) 295, 33 (2002) doi: 10.1006/aphy.2001.6203
[46] D. P. Menezes, M. B. Pinto, S. S. Avancini et al., Phys. Rev. C 79, 035807 (2009)
[47] E. J. Ferrer, V. L. Incera, J. P. Keith et al., Phys. Rev. C 82, 065802 (2010)
[48] K. Fukushima, D. E. Kharzeev, and H. J. Warringa, Nucl. Phys. A 836, 311 (2010)
[49] S. J. Mao, and Y. X. Wang, Phys. Rev. D 96, 034004 (2017)
[50] S. J. Mao, Phys. Rev. D 99, 056005 (2019)
[51] S. S. Avancini, R. L. S.Farias, and W. R. Tavares, Phys. Rev. D 99, 056009 (2019)
[52] J. Berges, D. U. Jungnickel, and C. Wetterich, Phys. Rev. D 59, 034010 (1999)
[53] J. Braun, B. Klein, H. J. Pirner et al., Phys. Rev. D 73, 074010 (2006)
Corresponding author: Shijun Mao, [email protected]
The change in chiral symmetry is one of the most important properties of quantum chromodynamics (QCD) in a hot and dense medium, which is essential for understanding the light hadrons at finite temperature and density [1-4]. In the chiral limit, the phase transition from chiral symmetry breaking in vacuum and at low temperature to its restoration at high temperature occurs at a critical temperature $ T_{\rm c} $, which has been reported in a recent lattice QCD simulation to be $ T_{\rm c} \simeq 132 $ MeV [5]. In a real case with non-vanishing current quark mass, the chiral symmetry restoration is no longer a genuine phase transition but rather a smooth crossover. Because the crossover occurs in a region and not at a point, the way to describe it with a fixed temperature is not unique. Considering the maximum fluctuations around a continuous phase transition in the chiral limit, the pseudo-critical temperature $ T_{\rm pc} $ to characterize the chiral crossover in the real case is normally defined by the maximum change in the chiral condensate, $ \partial^2\langle\bar\psi\psi\rangle/\partial T_{\rm pc}^2 = 0 $. From the lattice QCD simulation [6], this value is approximately $ T_{\rm pc}\simeq 156 $ MeV.
The mechanism for a continuous phase transition is spontaneous symmetry breaking. It is possible to define an order parameter that changes from a nonzero value to zero or vice versa when the phase transition occurs. Conversely, spontaneous breaking of a global symmetry manifests itself in Goldstone's theorem [7, 8]: whenever a global symmetry is spontaneously broken, massless fields, known as Goldstone bosons, emerge. Corresponding to the spontaneous chiral symmetry breaking, the order parameter is the chiral condensate, and the Goldstone modes are pions. If we take $ T_{\rm pc} $ defined above as the characteristic temperature of the chiral crossover, the problem is whether the chiral condensate at $ T_{\rm pc} $ is already small enough and the pseudo-Goldstone modes at $ T_{\rm pc} $ are already heavy enough to guarantee that the system is in chiral restoration phase. According to Goldstone's theorem, in the chiral breaking phase at low temperature, pions as pseudo-Goldstone modes should be in bound states, and in the chiral restoration phase at high temperature, pions should be in resonant states with nonzero width. The connection between the two states is the Mott transition at temperature $ T_m $ [9-11], where the decay process $ \pi\to q\bar q $ begins. It is clear that $ T_{\rm pc} = T_m $ in the chiral limit. In the real case, however, there is no guarantee of the coincidence of the two temperatures. In this case, Goldstone's theorem breaks down because pions may already be in resonant states with large mass at $ T<T_{\rm pc} $ or still be in bound states with small mass at $ T>T_{\rm pc} $. To remain consistent with Goldstone's theorem, we propose to pin down the chiral crossover using the Mott transition of the pseudo-Goldstone boson. Taking into account energy conservation for the decay process, the Mott transition temperature $ T_m $ is defined through the pion mass $ m_\pi(T) $ and quark mass $ m_q(T) $,
$ m_\pi(T_m) = 2m_q(T_m). $
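Numerically, once model (or lattice) output for $m_\pi(T)$ and $m_q(T)$ is tabulated on a temperature grid, $T_m$ is simply the crossing point of the two curves. The R sketch below uses purely illustrative stand-in curves for the two masses, not the NJL results of this paper.

```r
# Sketch: locate T_m from tabulated masses by finding where m_pi(T) - 2 m_q(T) changes sign.
Tgrid   <- seq(100, 200, by = 5)                       # temperature grid (MeV)
mq_tab  <- 300 / (1 + exp((Tgrid - 160) / 10))         # illustrative crossover-like quark mass
mpi_tab <- sqrt(134^2 + (Tgrid / 2)^2)                 # illustrative pion mass growth
diff_f  <- splinefun(Tgrid, mpi_tab - 2 * mq_tab)      # smooth interpolation of the difference
T_m     <- uniroot(diff_f, interval = range(Tgrid))$root
T_m                                                    # Mott temperature of the toy curves
```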
Because of the non-perturbative difficulty in QCD, we calculate the Mott transition temperature in an effective chiral model. One of the models that enables us to see directly how the dynamical mechanism of chiral symmetry breaking and restoration operates is the Nambu-Jona-Lasinio (NJL) model applied to quarks [12-16]. Within this model, the hadronic mass spectrum and the static properties of mesons can be obtained remarkably well. In this paper, we calculate in the model the chiral condensate in mean field approximation and the meson mass beyond mean field through random phase approximation (RPA), which has been proven to guarantee Goldstone's theorem in the chiral breaking phase.
In recent years, the investigation of chiral symmetry has been extended to include external electromagnetic fields. Owing to the dimension reduction of fermions in external magnetic fields, chiral symmetry breaking in vacuum is enhanced by the background magnetic field, according to both lattice QCD simulations [17-23] and effective models [24-28]. The surprise is in the behavior of the chiral crossover temperature $ T_{\rm pc} $: with increasing magnetic field it decreases in lattice QCD simulations [17-23], an effect called inverse magnetic catalysis, but increases in effective models at mean field level [24-28]. Many scenarios have been proposed to understand this qualitative difference between lattice QCD and effective models [29-42]. Note that, in previous works, the chiral crossover temperature is usually defined through the variation of the chiral condensate. In this work, we point out that, when the crossover temperature is instead defined by the Mott transition of pseudo-Goldstone bosons, an inverse magnetic catalysis effect appears, with the Mott temperature decreasing in magnetic fields. Throughout this paper, an increase (decrease) in a characteristic temperature with the magnetic field is called the (inverse) magnetic catalysis effect.
The magnetized two-flavor NJL model is defined through the Lagrangian density [12-16]
$ {\cal{L}} = \bar{\psi}\left({\rm i}\gamma_\mu D^\mu-m_0\right)\psi+\frac{G}{2}\left[\left(\bar\psi\psi\right)^2+\left(\bar\psi {\rm i}\gamma_5\tau\psi\right)^2\right], $
where the covariant derivative $ D^\mu = \partial^\mu+{\rm i}Q A^\mu $ couples quarks with electric charge $ Q = {\rm diag} (Q_u,Q_d) = {\rm diag} (2e/3, -e/3) $ to the external magnetic field $ {{B}} = B{{e}}_z $ through the potential $ A_\mu = (0,0,Bx,0) $, G is the coupling constant in scalar and pseudo-scalar channels, and $ m_0 $ is the current quark mass characterizing the explicit chiral symmetry breaking.
Using the Leung-Ritus-Wang method [43-48], the chiral condensate $ \langle\bar\psi\psi\rangle $ or the dynamical quark mass $ m_q = m_0-G\langle\bar\psi\psi\rangle $ at mean field level is controlled by the gap equation
$ m_q\left(1-GJ_1\right) = m_0 $
$ J_1 = N_{\rm c}\sum\limits_{f,n}\alpha_n {|Q_f B|\over 2\pi} \int {{\rm d} p_z\over 2\pi}{\tanh \dfrac{E_f} {2T}\over E_f}, $
where $ N_{\rm c} = 3 $ is the number of colors, which is trivial in the NJL model; $ \alpha_n = 2-\delta_{n0} $ is the spin degeneracy; T is the temperature of the quark system; and $ E_f = \sqrt{p^2_z+2 n |Q_f B|+m_q^2} $ is the quark energy with flavor $ f = u,d $, longitudinal momentum $ p_z $, and Landau energy level n.
Mesons in the model are treated as quantum fluctuations above the mean field and constructed through RPA [12-16]. In the chiral limit with $ m_0 = 0 $, the isospin triplet $ \pi_0 $ and $ \pi_\pm $ and isospin singlet $ \sigma $ are respectively the Goldstone modes and Higgs mode, corresponding to spontaneous chiral symmetry breaking with a vanishing magnetic field. Turning on the external magnetic field, only the neutral pion $ \pi_0 $ remains as the Goldstone mode.
With the RPA method, the meson propagator $ D_m $ can be expressed in terms of the meson polarization function or quark bubble $ \Pi_m $,
$ D_m(q) = \frac{G}{1-G\Pi_m(q)}. $
The meson mass $ m_m $ is defined as the pole of the propagator at zero momentum $ {{q}} = {\bf{0}} $,
$ 1-G\Pi_m(m_m, {\bf{0}}) = 0 $
$ \begin{aligned}[b] \Pi_m(q_0,{\bf{0}}) =& J_1-(q_0^2-\epsilon_m^2) J_2(q_0),\\ J_2(q_0) =& -N_{\rm c}\sum\limits_{f,n}\alpha_n \frac{|Q_f B|}{2\pi} \int \frac{{\rm d} p_z}{2\pi}{\tanh \dfrac{E_f} {2T}\over E_f (4 E_f^2-q_0^2)}, \end{aligned} $
$ \epsilon_{\pi_0} = 0 $ for the Goldstone mode, and $ \epsilon_\sigma = 2m_q $ for the Higgs mode. In a nonzero magnetic field, the three-dimensional quark momentum integration in the gap equation (3) and pole equation (6) becomes a one-dimensional momentum integration plus a summation over the discrete Landau levels.
In the chiral limit with vanishing current quark mass, by comparing the gap equation (3) for quark mass with the pole equation (6) for meson mass, we have the analytic solutions
$ m_{\pi_0} = 0,\ \ m_\sigma = 2m_{q} $
in the chiral breaking phase with $ m_q\neq 0 $ and
$ m_{\pi_0} = m_\sigma\neq 0 $
in the chiral restoration phase with $ m_q = 0 $. A direct consequence of these solutions is that the Mott transition temperature $ T_m $ defined by $ m_{\pi_0}(T_m) = 2m_q(T_m) $ coincides with the critical temperature $ T_{\rm c} $ defined by $ m_q(T_{\rm c}) = 0 $. The phase transition from chiral symmetry breaking to its restoration is a second order phase transition.
In the physical case with nonzero current quark mass, the chiral restoration becomes a smooth crossover. At low temperature, spontaneous chiral symmetry breaking dominates the system. Considering the fact that the explicit chiral symmetry breaking is slight, we can use $ m_0 $ expansion to solve the gap equation for quark mass and pole equation for meson mass. With the notations $ m_q = m^{cl}_q+\delta_q $ and $ m_{\pi_0} = m^{cl}_{\pi_0}+\delta_{\pi_0} $, where $ m_q^{cl} $ and $ m_{\pi_0}^{cl} = 0 $ are the quark and neutral pion masses in the chiral limit, respectively, and keeping only the linear term in $ \delta_q $ and quadratic term in $ \delta_{\pi_0} $ in the gap and pole equations, we have
$ \begin{aligned}[b] \delta_q &= -\frac{m_0}{m^{cl}_q}\, \frac{1}{G\, \dfrac{\partial J_1}{\partial m_q}\Big|_{cl}},\\ \delta_{\pi_0}^2 &= -\frac{m_0}{m^{cl}_q+\delta_q}\, \frac{1}{G\, J_2\big|_{cl}}. \end{aligned} $
It is obvious that, in the chiral limit with $ m_0 = 0 $, we have $ \delta_q = 0 $ and $ \delta_{\pi_0} = 0 $. The explicit chiral symmetry breaking with $ m_0 \neq 0 $ modifies the dynamical quark mass, and the Goldstone mode in the chiral limit becomes a pseudo-Goldstone mode with nonzero mass.
At high temperature, the quark dimension reduction under an external magnetic field causes an infrared ($ p_z\to 0 $) singularity of the quark bubble $ \Pi_m(m_m,{\bf{0}}) $ [49-51]. For the pseudo-Goldstone mode $ \pi_0 $, the infrared singularity of $ \Pi_{\pi_0}(m_{\pi_0},{\bf{0}}) $ occurs at the Mott transition temperature $ T_m $, where the mass $ m_{\pi_0} $ jumps up from $ m_{\pi_0}<2m_q $ to $ m_{\pi_0}>2m_q $. This indicates a sudden transition from a bound state to a resonant state [49, 51].
We next perform numerical calculations on the Mott transition temperature in both the chiral limit and real case. Because of the four-fermion interaction, the NJL model is not a renormalizable theory and requires regularization. To guarantee the law of causality in magnetic fields, we apply the Pauli-Villars regularization scheme, as explained in detail in Ref. [42]. The three parameters in the NJL model, namely the current quark mass $ m_0 $, coupling constant G, and Pauli-Villars mass parameter $ \Lambda $, are listed in Table 1 by fitting the chiral condensate $ \langle\bar\psi\psi\rangle $, pion mass $ m_\pi $, and pion decay constant $ f_\pi $ in a vacuum at $ T = B = 0 $. We take the current quark mass to be $ m_0 = 0 $ in the chiral limit and $ 6.4 $ MeV in the real world.
$m_0$/MeV | $G$/GeV$^{-2}$ | $\Lambda$/MeV | $\langle\bar\psi\psi\rangle$/MeV$^3$ | $m_\pi$/MeV | $f_\pi$/MeV
0 | 5.03 | 977.3 | $-230^3$ | 0 | 93
6.4 | 4.9 | 977.3 | $-230^3$ | 134 | 93
Table 1. NJL parameters in Pauli-Villars regularization.
We first discuss the temperature behavior of the phase structure for chiral symmetry at $ B = 0 $. Fig. 1 shows the quark mass and pion mass as functions of temperature in the chiral limit and the real world. For a vanishing magnetic field, the three pions are all Goldstone or pseudo-Goldstone modes. To clearly see the Mott transition temperature $ T_m $ and its difference from the pseudo-critical temperature $ T_{\rm pc} $, we plot $ 2m_q $ instead of $ m_q $ itself. In the chiral limit, the quark mass, which is proportional to the order parameter $ \langle\bar\psi\psi\rangle $, continuously decreases at low temperature, reaches zero at the critical temperature $ T_{\rm c} = 163 $ MeV, and remains zero at higher temperature. This denotes a second order chiral phase transition. Correspondingly, the Goldstone modes $ \pi $ remain massless in the chiral breaking phase and begin to have mass at $ T_{\rm c} $. It is clear that the Mott transition temperature defined by the threshold condition (1) is exactly the critical temperature $ T_m = T_{\rm c} = 163 $ MeV. Note that the critical temperature obtained here in the NJL model is higher by $ 20-30 $ MeV than the results of recent lattice QCD [5] and other effective models [52, 53]. In the real world, the quark mass continuously decreases, and the pion mass continuously increases throughout the entire temperature region. In this case, the chiral phase transition becomes a smooth crossover, and there is no strict definition for the crossover temperature. Generalizing the idea of maximum fluctuations around the second order phase transition in the chiral limit, the maximum change in the chiral condensate or dynamical quark mass is commonly used to identify the crossover, referring to the corresponding temperature as the pseudo-critical temperature $ T_{\rm pc} $ of the chiral crossover. From the definition $ \partial^2m_q/\partial T_{\rm pc}^2 = 0 $, we numerically have $ T_{\rm pc} = 162 $ MeV, which is close to the lattice QCD result of $ 156 $ MeV [6]. From the definition $ m_\pi(T_m) = 2m_q(T_m) $, denoted by the crossing point of the two dashed lines in Fig. 1, the Mott transition temperature is different from the pseudo-critical temperature: $ T_m = 174 $ MeV $ >T_{\rm pc} = 162 $ MeV. Therefore, pions as pseudo-Goldstone modes can still survive as bound states after the chiral crossover. This means that the definition of $ T_{\rm pc} $ explicitly violates Goldstone's theorem.
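With the schematic gap-equation sketch given earlier, the pseudo-critical temperature can be read off numerically as the inflection point of $m_q(T)$, i.e. the temperature of steepest descent; the numbers produced by that toy setup are not those of the paper.

```r
# Sketch: pseudo-critical temperature from the maximum slope of m_q(T),
# using the illustrative quark_mass() function defined above.
Tscan   <- seq(100, 260, by = 2)
mq_scan <- sapply(Tscan, quark_mass)
slope   <- diff(mq_scan) / diff(Tscan)
T_pc    <- Tscan[which.min(slope)]    # steepest descent, i.e. d^2 m_q / dT^2 = 0
T_pc
```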
Figure 1. (color online) Dynamical quark mass $ m_q $ and pion mass $ m_\pi $ as functions of temperature in the chiral limit (solid lines) with current quark mass $ m_0 = 0 $ and real world (dashed lines) with $ m_0 = 6.4 $ MeV.
In the chiral limit, chiral restoration is a genuine phase transition, and the phase transition temperature is unique from either the order parameter or Goldstone's theorem, with $ T_{\rm c} = T_m $. In the physical world, which is the focus of this paper, chiral restoration is a smooth crossover. The characteristic temperature $ T_{\rm pc} $ from the maximum change in order parameter and $ T_m $ from the Mott transition of pseudo-Goldstone bosons will not coincide with each other: $ T_{\rm pc} \neq T_m $. To guarantee Goldstone's theorem, we propose to define the crossover temperature as $ T_m $.
Because charged pions interact with the magnetic field, they are no longer pseudo-Goldstone modes, and only the neutral pion is the pseudo-Goldstone boson corresponding to spontaneous chiral symmetry breaking. Fig. 2 shows the pseudo-critical temperature $ T_{\rm pc} $ and Mott transition temperature $ T_m $ for the pseudo-Goldstone mode as functions of the magnetic field. While $ T_{\rm pc} $ is controlled by the magnetic catalysis, i.e., it increases with increasing magnetic field, the Mott transition temperature $ T_m $ clearly shows the inverse magnetic catalysis effect, decreasing throughout the entire magnetic field region. The physics to explain this difference is as follows: $ T_{\rm pc} $ is controlled by quarks, which are calculated at mean field level, but $ T_m $ is governed by mesons, which are treated as quantum fluctuations beyond mean field. It is the quantum fluctuation that changes the magnetic catalysis to inverse magnetic catalysis. This is consistent with the scenario of fluctuation induced inverse magnetic catalysis discussed in Refs. [30, 40-42]. When we consider the feedback effect from mesons to quarks by including the meson contribution to the thermodynamics of the system, $ \Omega = \Omega_{mf}+\Omega_M $, the suppressed chiral condensate at high temperature and decreasing pseudo-critical temperature $ T_{\rm pc} $ have been observed.
Figure 2. (color online) Pseudo-critical temperature $ T_{\rm pc} $ (solid line) and Mott transition temperature $ T_m $ (dashed line) for the pseudo-Goldstone mode as functions of magnetic field in the physical case with $ m_0 = 6.4 $ MeV.
Again, the result of $ T_{\rm pc}\neq T_m $ violates Goldstone's theorem in any magnetic field. In a weak magnetic field, with $ eB/m_\pi^2<7 $, where $ m_\pi $ is the pion mass in a vacuum at $ T = B = 0 $, $ T_m>T_{\rm pc} $, which leads to the survival of the neutral pion as a bound state in the chiral restoration phase. Conversely, in a strong magnetic field with $ eB/m_\pi^2 > 7 $, $ T_m < T_{\rm pc} $, which results in the disappearance of the neutral pion in the chiral breaking phase. In either case, Goldstone's theorem is significantly broken.
We investigate in this paper the chiral crossover at finite temperature and in external magnetic fields. In the physical world, chiral restoration is a smooth crossover because of the explicit chiral symmetry breaking. Different from the commonly used maximum change in chiral condensate, we propose defining the crossover temperature by the Mott transition of pseudo-Goldstone bosons. This, by definition, guarantees Goldstone's theorem for chiral symmetry. As an analytical example, we calculate the order parameter (dynamical quark mass) in mean field and Goldstone mode (pion mass) beyond mean field in the frame of a Pauli-Villars regularized NJL model. If we take the maximum change in chiral condensate to describe the chiral crossover, the pseudo-Goldstone mode will survive in the chiral restoration phase with weak magnetic fields and disappear in the chiral breaking phase with strong magnetic fields.
While we believe that the idea of choosing the Mott transition temperature $ T_m $ to characterize the chiral crossover is correct, the values of the temperatures $ T_m, T_{\rm pc} $, and $ T_{\rm c} $ we obtained here are model dependent. To precisely fix the Mott transition temperature, we need to use the direct result from lattice QCD simulations.
I am grateful for the hospitality of Professor Dirk H. Rischke of Frankfurt University. Part of this work was done while I was visiting his group as an EMMI visiting professor.
dissociation of a strong acid in water equation
A strong acid dissociates completely in water. When hydrochloric acid is added to water, every HCl molecule donates its proton to a water molecule, so the ionization is written with a one-way arrow: HCl(aq) + H2O(l) → H3O+(aq) + Cl−(aq). The proton protonates a water molecule to yield a hydronium ion (H3O+), which lowers the overall pH of the solution, and the ionization also produces chloride anions, Cl−(aq). Other strong acids, such as perchloric acid, HClO4(aq), behave the same way. The opposite of an acid is a base, also known as an alkali; a typical strong base is sodium hydroxide, the principal component of lye.

Acids that do not dissociate completely are called weak acids. The strength of an acid is expressed by its acidity constant Ka, commonly quoted in units of mol/L; most weak acids have Ka < 10^-2. For a weak acid, water acts as a base and the dissociation is an equilibrium: for acetic acid, for example, CH3CO2H + H2O ⇄ CH3CO2− + H3O+.

Pure water itself sits in a dynamic equilibrium, 2 H2O(l) ⇄ H3O+(aq) + OH−(aq): as one hydrogen ion reattaches to a hydroxide ion to form a water molecule, another water molecule dissociates to replace the ions in solution. Suppose enough strong acid is added to a beaker of water to raise the H3O+ concentration to 0.010 M. According to Le Chatelier's principle, this drives the autoionization equilibrium to the left, reducing the number of H3O+ and OH− ions contributed by water itself.

In a titration the pH is measured at each point with a glass electrode and a pH meter; when the solution is not buffered, the pH rises steeply on addition of a small amount of strong base. A pH indicator is a weak acid or weak base that changes colour in a transition range of roughly pKa ± 1. Ionization of a compound generally increases its solubility in water but decreases its lipophilicity, a fact exploited in the purification of weak acids and bases.

A few further points apply to more complicated systems. For species with several protonation sites, a measured K value is a macroconstant, the sum of the K values of the underlying micro-reactions (for example, the equilibrium of spermine can be described in terms of the Ka values of two tautomeric conjugate acids, with macroconstant K = KX + KY); micro-constants cannot be determined directly from pH, absorbance, fluorescence or NMR measurements alone, although other chemical shifts, such as those of 31P, can be measured, and the site of protonation is important for biological function. For oxyacids, adding an oxo group increases acidity because the negative charge of the conjugate base is delocalized over an additional oxygen atom, and the trend correlates with the oxidation state of the central atom. Finally, ions in aqueous solution tend to orient the surrounding water molecules, which orders the solution and decreases its entropy; the standard enthalpy change of dissociation can be determined by calorimetry or via the van 't Hoff equation, though the calorimetric method is preferable.
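As a small numerical sketch of the strong-acid pH arithmetic above (assuming complete dissociation and neglecting activity corrections and the autoionization of water; the variable names are only illustrative):

import math

c = 0.010            # analytical concentration of HCl in mol/L
h3o = c              # strong acid: essentially every HCl molecule donates its proton
pH = -math.log10(h3o)
print(pH)            # 2.0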
Why are $L$-functions a big deal?
I've been studying modular forms this semester and we did a lot of calculations of $L$-functions, e.g. $L$-functions of Dirichlet-characters and $L$-functions of cusp-forms.
But I somehow don't see why they are considered such a big deal. To me it sounds more like a fun fact to say "You know, the Riemann $\zeta$ has analytic continuation", and I don't even know what to say about $L$-functions of cusp forms.
So why are $L$-functions such a big thing in automorphic forms and analytic number theory?
complex-analysis number-theory analytic-number-theory modular-forms automorphic-forms
Steven
$\begingroup$ For one thing, $L$-functions are the connecting link between modular forms and elliptic curves that led to the formulation of the modularity conjecture. $\endgroup$ – Arthur Jul 13 '16 at 9:39
$\begingroup$ Strictly speaking, you can state the modularity conjecture in geometric terms, without L-functions (it's done in Mazur, Number Theory as gadfly, for example). But many different mathematical objects define L-functions, which allows us top compare them, and that may be a part of the answer. But anyhow, I'm waiting eagerly for the experts' answers on this question. $\endgroup$ – PseudoNeo Jul 13 '16 at 10:16
$\begingroup$ power series are more than helpful for studying additive functions, while Dirichlet series (with an Euler product = L-function) are more than helpful for studying multiplicative functions. almost everything that is related to the factorization and prime numbers can be represented as multiplicative functions, and multiplicative functions naturally occur in modular forms (and conversely) $\endgroup$ – reuns Jul 24 '16 at 14:58
$\begingroup$ Have you tried working through Dirichlet's theorem on primes in arithmetic progressions? This is likely the first place where L-functions can be used to prove something meaningful to a general number theory student. $\endgroup$ – lemiller Jul 25 '16 at 1:49
There's a lot one could say, but I'll try to be brief. Roughly the idea (just like with the zeta functions) is that L-functions provide a way to analytically study arithmetic objects. Specifically a lot of interesting data is encoded in the location of the zeroes and poles of L-functions, and because L-functions are analytic objects, you can now use analysis to study arithmetic. Here are some examples:
The fact that $\zeta(s)$ has a pole at $s=1$ implies the infinitude of primes.
(added) The Riemann hypotheses and generalizations, which are about location of nontrivial zeroes of zeta-/L-functions, have lots of implications, such as refined information about distribution of prime numbers.
The fact that Dirichlet L-functions do not have a zero at $s=1$ implies there are infinitely many primes in arithmetic progressions. Dirichlet introduced the notion of L-functions to prove this fact. (A small numerical sketch of this non-vanishing appears after this list.)
If $E : y^2 = x^3+ax+b$ is an elliptic curve and its $L$-function $L(s,E)$ (which, by modularity, is also the $L$-function of a modular form) does not have a zero at the central value $s=1$, then $y^2=x^3+ax+b$ has only finitely many rational solutions. This is the known direction of the Birch and Swinnerton-Dyer conjecture.
(added) In addition to knowing just locations of zeroes and poles of L-functions, the actual values of L-functions at special points contain further arithmetic information. For instance, if $\chi_K$ is the quadratic Dirichlet character associated to an imaginary quadratic field $K$, then the class number formula says $L(1,\chi_K)$ is essentially the class number of $K$. Similarly, the value of $L(1,E)$ in the previous example is conjecturally expressed in terms of the size of the Tate-Shafarevich group of $E$ and the number of rational points on $E$.
As mentioned in the comments, $L$-functions are also a convenient tool to associate different kinds of objects to each other, e.g., elliptic curves and modular forms, but are not strictly needed to do this.
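A quick numerical sanity check of the non-vanishing $L(1,\chi)\neq 0$ mentioned in the list above: for the non-principal Dirichlet character mod $4$ one has $L(1,\chi) = 1 - \frac{1}{3} + \frac{1}{5} - \cdots = \pi/4 \approx 0.785$, and a few lines of Python (purely illustrative; any computer algebra system does the same) reproduce this:

import math

def chi(n):                       # non-principal Dirichlet character mod 4
    return {1: 1, 3: -1}.get(n % 4, 0)

L1 = sum(chi(n) / n for n in range(1, 2_000_001))
print(L1, math.pi / 4)            # both are ~0.785398..., so L(1, chi) is visibly non-zero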
Nice $L$-functions will have at least meromorphic continuation to $\mathbb C$, Euler products, and certain bounds on their growth. For instance, L-functions of eigencusp forms and Dirichlet L-functions. These properties make $L$-functions nice analytic objects to work with. In particular, the Euler product provides a way to study global objects from local data (one finite set of data for each prime number $p$).
(added) See also this MathOverflow question.
Kimball
$\begingroup$ In complement to Kimball's statement about how L-functions provide a way to analytically study arithmetic objects, allow me to draw your attention to the significance of their "special values", see e.g. my answer to the question "Significance of the RH to algebraic number theory " posted by Vik78 on April 29. $\endgroup$ – nguyen quang do Jul 21 '16 at 7:51
$\begingroup$ @nguyenquangdo It might be more helpful if you provide a link to your answer (if you don't know how to do this, click on "share" at the bottom of your answer; see also "help" on the side of the comment bar). $\endgroup$ – Kimball Jul 21 '16 at 11:13
$\begingroup$ Sorry, here's the link math.stackexchange.com/a/1771242/30 $\endgroup$ – nguyen quang do Jul 21 '16 at 14:21
Development of the Idea of the Determinant
While I basically understand what a determinant is, I wonder how this idea was developed? What was the principal idea behind its origination? I would like to know this so that I can have a better conceptualization of the determinant.
soft-question math-history determinant
analysisj
$\begingroup$ +1 good question. You're in for a wilder ride than you probably thought. Determinants were arguably the first linear algebra concept to be invented -- before abstract vector spaces, before even matrices. There were just these strange expressions that kept popping up when quite different problems were solved the hard way. Sometime along the way someone noticed that they were all structured similarly and gave them a name. I hope you get some good historical references in answers. $\endgroup$ – Henning Makholm Nov 13 '11 at 2:32
$\begingroup$ Non-duplicate: where did determinant come from? which was closed as a duplicate of a non-historical intuition-of-determinants question. One of the answers before it was closed links to an interesting historical wall-of-text, though. $\endgroup$ – Henning Makholm Nov 13 '11 at 2:37
$\begingroup$ If you are looking for something involving the history of determinants (and matrices) you might start here: www-history.mcs.st-and.ac.uk/history/HistTopics/… $\endgroup$ – Joseph Malkevitch Nov 13 '11 at 15:33
$\begingroup$ I know of two very interesting things; A chinese mathematician (B.C.) recorded the determinant of a 3x3 system; see books.google.co.uk/books/about/… and also Euler or Lagrange wrote down a system of equations in a letter as 11 + 12 + ... +17 = x (say) then 21 + 22 + ... + 27 = y and ... and ...81 + 82 + ... + 87 = z where 11 does not represent 11 but the first sum of the first equation - in matrix notation, if the system was represented by a matrix A cont. $\endgroup$ – Adam Jan 25 '12 at 8:51
$\begingroup$ 11 would be written a_1,1 and in general ij is a_ij. He wrote down a condition like 11.22.33 + ... - 12.34.25 = 0 for the system to have a solution that's non-trivial, that is the determinant. $\endgroup$ – Adam Jan 25 '12 at 8:53
Let me tell you what I know in general.
The determinant was primarily introduced as a gauge to measure the existence of unique solutions to linear equations. It's like a litmus paper (which is used to test for acids and bases, but in this case it tests for the existence of unique solutions). If you doubt that one can measure the uniqueness of solutions, I have a pair of magic spectacles you may keep that enable you to visualize the determinant in a new geometric way (this in fact can be found in many standard books, but I wanted to write the quintessence here).
In response to your first question about origin, it dates back to the $3^{rd}$ century, when Chinese mathematicians used determinants in the book The Nine Chapters on the Mathematical Art (Chinese version and English version here). At first, when mathematicians used the concept of determinants, they didn't refer to matrices but merely to systems of linear equations, and treated the determinant as a property which tests for the existence of unique solutions to a system of linear equations. Later, with the development of matrix theory, determinants were absorbed into the theory of matrices.
If you consider a matrix $$\begin{pmatrix} a&b\\c&d \end{pmatrix} $$ the determinant is given by $ad-bc$. You may wonder what information $ad-bc$ carries within itself. All you need to do is to view things in a geometric manner when you are unable to visualize them in an algebraic way. Matrices are closely related to vectors: in linear algebra we represent vectors in terms of matrices, so a matrix can be seen as a collection of vectors. The example that I have in mind is: consider a billiard table. Before hitting the balls, you take a triangular frame and arrange all the balls (initially lying in an irregular manner) into a triangular shape. It's like arranging things in an organised manner, as human beings always try to do.
Let me explain the topic in greater detail.
If you look at the above figure, you can clearly see the coordinates. You can think of the columns of the matrix as vectors, with the entries in each column giving their Cartesian placement (the coordinate positions). Taking all of the above into consideration, you can see that if $ad-bc=0$ then the parallelogram (the rhombus in the figure) degenerates into a line segment, and in general a parallelepiped loses one dimension, so its volume becomes zero. Analogously, if the determinant is zero then the denominator in Cramer's rule vanishes, so the formulas for the solution break down and no unique solution exists.
Some of the important alternative notions about determinants: (stress on second one)
Determinants can in fact be thought of as a measure of the multiplicative change in the volume of a parallelepiped when it is subjected to a linear transformation.
And the main notion that answers the connection between the determinant and existence of solutions is that the determinant of a matrix is zero if and only if the column vectors of the matrix are linearly dependent. Thus, determinants can be used to characterize linearly dependent vectors. For example, given two vectors $v_1, v_2$ in $R^3$, a third vector $v_3$ lies in the plane spanned by the former two vectors exactly if the determinant of the $3$-by-$3$ matrix consisting of the three vectors is zero. So one needs to take the theory of multilinear forms into consideration. I can't express the entire theory, but to give you a short notion, the determinant is actually a multi-linear form in general. So in a deep sense it measures the manifestations of the things related to vectors.
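To make this last criterion concrete, here is a small numerical check (an illustrative snippet; the particular vectors are arbitrary):

import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, -1.0])
v3 = 2 * v1 + 3 * v2                                   # lies in the plane spanned by v1, v2
v4 = np.array([0.0, 0.0, 1.0])                         # does not lie in that plane

print(np.linalg.det(np.column_stack([v1, v2, v3])))    # ~0: the three columns are dependent
print(np.linalg.det(np.column_stack([v1, v2, v4])))    # 1.0: the three columns are independent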
And when determinants are negative, they have a role of orientation in geometric sense, which is another crucial point.
Some Beautiful References :
I have some suggestions for you. Apart from reading about matrix theory in Wikipedia page I suggest reading a very good article Making Determinants Less Weird by John Duggan. And I have come across another good article recently, which is here
P.S : I took much time to edit and post this, so users are kindly requested to post any suggestions, in case of down-votes.
IDOK
$\begingroup$ Is it sufficient or I need to add something ? $\endgroup$ – IDOK Feb 10 '12 at 16:00
$\begingroup$ "To your astonishment,..." seems rather presumptuous. You know how you felt, do you really feel the need to tell people how to feel about something which may or may not be a revelation to them? $\endgroup$ – Arturo Magidin Feb 10 '12 at 17:12
$\begingroup$ @ArturoMagidin : Ok, sorry sir, I have fixed it. Thank you. $\endgroup$ – IDOK Feb 10 '12 at 17:13
$\begingroup$ Hello! I've noticed that your link to "Making Determinants Less Weird" has broken - do you know an alternate location this resource may be found? $\endgroup$ – Aza Oct 17 '14 at 4:39
$\begingroup$ Sorry @Emrakul, I am not active on M.SE anymore. But a simple Google search, would have done. Anyway the link's here : dl.dropboxusercontent.com/u/17516137/RapidWeaverSite/resources/… $\endgroup$ – IDOK Dec 5 '14 at 9:13
The authoritative reference seems to be the books by Muir: Contributions to the history of determinants and A treatise on the theory of determinants. See also http://www-history.mcs.st-and.ac.uk/Extras/Muir_determinants.html.
There is also Miller, On the History of Determinants. Amer. Math. Monthly 37 (1930), no. 5, 216–219.
lhf
FIN 221 Ch 7 Practice Quiz
Sam_Williams638
Which of the following events would make it more likely that a company would choose to call its outstanding callable bonds?
Market interest rates decline sharply.
The company's bonds are downgraded.
Market interest rates rise sharply.
Inflation increases significantly.
The company's financial situation deteriorates significantly.
A 10-year Treasury bond has an 8% coupon, and an 8-year Treasury bond has a 10% coupon. Both bonds have the same yield to maturity. If the yields to maturity of both bonds increase by the same amount, which of the following statements is CORRECT?
The prices of both bonds will increase by the same amount.
The prices of both bonds will decrease by the same amount.
The prices of the two bonds will remain the same.
Both bonds will decline in price, but the 10-year bond will have a greater percentage decline in price than the 8-year bond.
One bond's price will increase, while the other bond's price decreases.
Which of the following statements is CORRECT?
All else equal, if a bond's yield to maturity increases, its price will fall.
All else equal, if a bond's yield to maturity increases, its current yield will fall.
If a bond's yield to maturity exceeds its coupon rate, the bond will sell at a premium over par.
If a bond's yield to maturity exceeds its coupon rate, the bond will sell at par.
If a bond's required rate of return exceeds its coupon rate, the bond will sell at a premium.
Which of the following statements is CORRECT?
Sinking fund provisions never require companies to retire their debt; they only establish "targets" for the company to reduce its debt over time.
Sinking fund provisions sometimes turn out to adversely affect bondholders, and this is most likely to occur if interest rates decline after the bond has been issued.
If interest rates have increased since a company issues bonds with a sinking fund, the company is less likely to retire the bonds by buying them back in the open market, as opposed to calling them in at the sinking fund call price.
A sinking fund provision makes a bond issue more risky to investors at the time of issuance.
Most sinking funds require the issuer to provide funds to a trustee, who saves the money so that it will be available to pay off bondholders when the bonds mature.
The Carter Company's bonds mature in 10 years, have a par value of $1,000, and pay an annual coupon of $80. The market interest rate for the bonds is 9%. What is the price of these bonds?
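A quick sketch of the present-value arithmetic behind this and the following bond questions (the helper function below is illustrative, not from the course materials; a financial calculator gives the same result):

def bond_price(face, coupon_rate, market_rate, years, freq=1):
    """Price of a plain-vanilla bond: PV of the coupons plus PV of the face value."""
    n = years * freq                      # number of coupon periods
    c = face * coupon_rate / freq         # coupon paid each period
    r = market_rate / freq                # discount rate per period
    return c * (1 - (1 + r) ** -n) / r + face * (1 + r) ** -n

# Carter Company: 10 years, $1,000 par, $80 annual coupon (8%), 9% market rate
print(round(bond_price(1000, 0.08, 0.09, 10), 2))      # ~935.82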
A 14-year, $1,000 face value corporate bond has an 8% semiannual coupon and sells for $1,075. The bond may be called in five years at a call price of $1,050. What are the bond's yields to maturity and call?
YTM = 14.29%; YTC = 14.09%
YTM = 3.57%; YTC = 3.52%
A 15-year, $1,000 face value bond with a 10% semiannual coupon has a nominal yield to maturity of 7.5%. The bond, which may be called after five years, has a nominal yield to call of 5.54%. What is the bond's call price?
A 10-year, $1,000 face value bond has an 8% annual coupon and a yield to maturity of 10%. If market interest rates remain at 10%, what will be the bond's price two years from today?
A 12-year, $1,000 face value corporate bond pays a 9% semiannual coupon. The bond has a nominal yield to maturity of 7%, and can be called in three years at a price of $1,045. What is the bond's nominal yield to call?
Recently, Ohio Hospitals Inc. filed for bankruptcy. The firm was reorganized as American Hospitals Inc., and the court permitted a new indenture on an outstanding bond issue to be put into effect. The issue has 10 years to maturity and an annual coupon rate of 10%. The new agreement allows the firm to pay no interest for 5 years. Then, interest payments will be resumed for the next 5 years. Finally, at maturity (Year 10), the principal plus the interest that was not paid during the first 5 years will be paid. However, no interest will be paid on the deferred interest. If the required annual return is 20%, what should the bonds sell for in the market today?
Co-occurrence simplicial complexes in mathematics: identifying the holes of knowledge
Vsevolod Salnikov1,
Daniele Cassese ORCID: orcid.org/0000-0002-2216-4562 1,2,3,
Renaud Lambiotte3 &
Nick S. Jones4
In the last years complex network tools have contributed to provide insights on the structure of research, through the study of collaboration, citation and co-occurrence networks. The network approach focuses on pairwise relationships, often compressing multidimensional data structures and inevitably losing information. In this paper we propose for the first time a simplicial complex approach to word co-occurrences, providing a natural framework for the study of higher-order relations in the space of scientific knowledge. Using topological methods we explore the conceptual landscape of mathematical research, focusing on homological holes, regions with low connectivity in the simplicial structure. We find that homological holes are ubiquitous, which suggests that they capture some essential feature of research practice in mathematics. k-dimensional holes die when every concept in the hole appears in an article together with at least k+1 other concepts in the hole, hence their death may be a sign of the creation of new knowledge, as we show with some examples. We find a positive relation between the size of a hole and the time it takes to be closed: larger holes may represent potential for important advances in the field because they separate conceptually distant areas. We provide a further description of the conceptual space by looking for the simplicial analogs of stars and explore the likelihood that edges in a star are also part of a homological cycle. We also show that authors' conceptual entropy is positively related to their contribution to homological holes, suggesting that polymaths tend to be on the frontier of research.
Co-occurrence networks capture relationships between words appearing in the same unit of text: each node is a word, or a group of words, and an edge is defined between two nodes if they appear in the same unit of text. Co-occurrence networks have been used, among other things, to study the structure of human languages (Ferrer-i-Cancho and Solé 2001), to detect influential text segments (Garg and Kumar 2018) and to identify authorship signature in temporal evolving networks (Akimushkin et al. 2017). Other applications include the study of co-citations of patents (Wang et al. 2011), articles (Lazer et al. 2009) and genes (Jenssen et al. 2001; Mullen and et al. 2014). Here we focus on the co-occurrences of concepts (theorems, lemmas, equations) in scientific articles to gain understanding in the structure of knowledge in Mathematics. Similar problems have been considered in scientometrics, even if previous works have limited their analysis to keywords, or words appearing in abstracts (Radhakrishnan et al. 2017; Zhang et al. 2012; Su and Lee 2010), and focused only on binary relations between words, as we clarify below.
The main novelty of our work is to study co-occurrences in a simplicial complex framework, using persistent homology to understand the conceptual landscape of mathematics. The adoption of a simplicial complex framework is motivated by the fact that concepts are inherently hierarchical, so simplicial complexes might seem a natural representation: often elementary conceptual units connect together to form nested sequences of higher-order concepts. A simplicial complex approach to model the semantic space of concepts was already suggested by (Chiang 2007), even if not in a topological data analysis framework (Patania et al. 2017b), while application of topological data analysis tools to visualisation of natural language can be found in (Jo et al. 2011; Wagner and et al. 2012; Sami and Farrahi 2017). Several reasons motivate the use of higher-order methods in this context. First, co-occurrence networks tend to be extremely dense in practice and require additional tools to filter the relations and sparsify the network to extract information (Serrano et al. 2009; Slater 2009). Second, in the original dataset, interactions are not pairwise and it is unclear if the constraints induced by a network framework, in terms of nodes and pairwise edges, do not obscure important structures in the system. By modelling co-occurrence relations as a simplicial complex, we thus go beyond the network description that reduces all the structural properties to pairwise interactions and their combinations, explicitly introducing higher-order relations. Note that this modelling approach, in particular the use of simplicial persistent homology, has found uses when the data is inherently multidimensional (Petri et al. 2013), with applications in neuroscience (Petri et al. 2014; Stolz et al. 2017), biology (Chan et al. 2013; Mamuye et al. 2016) to the study of contagion (Taylor and et al. 2015) and to coauthorship networks (Patania et al. 2017a; Carstens and Horadam 2013).
A second contribution of our work is the analysis of the full text of a large corpus of articles, which allows us to bypass the high-level categorisation provided by keywords but also to identify the use of methodological tools and to gain insight into mathematical praxis. However, the main purpose of this article is to use the resulting dataset of concepts and articles as a testbed in which to apply methods from topological data analysis, and to go beyond a standard network analysis.
The dataset analysed has been scraped from arXiv, and includes a total of 54177 articles from 01/1994 to 03/2007, of which 48240 in mathematics (math) and 5937 in mathematical physics (math-ph). We have limited the timeframe due to naming conventions in arXiv: since 03/2007 the subject is no longer part of the article identifier, so exporting it requires additional queries to the metadata. That is easily expandable, but we decided to limit the dataset at this moment for computational speed. The date is extracted from the article id, hence it refers to the submission date. Notice that some of the articles in the first years may have been written some years before 1991 (the date of arXiv's first article). In order to describe the mathematical content of articles from the LaTeX file we look at the occurrences of different concepts in the text. Clearly the choice of the concept set can influence the outcome: choosing the concepts manually by a small group of people would result in a strong bias towards the understanding and priorities of the individuals in the group. Thus we wanted to have something either globally accepted by the scientific community or at least created by a sufficiently large group of people. Another point in the selection of a good concepts list is the possibility to carry out a similar study for other disciplines, thus we chose to get it from some general, easily accessible source. Our strategy consisted in parsing a concepts list from Wikipedia, which includes 1612 equations, theorems and lemmas. Clearly these concepts are not homogeneous, meaning that some of them might represent extremely specific theorems, while others can be very general, like differential equation, but the same holds for any text processing, with different words having different frequencies. Our position on that is still to minimize the manually introduced bias: we consider that all concepts have similar weight and try to have as complete a set as possible. Moreover it is possible that two different names represent the same theorem due to historical reasons. For the moment we consider such synonyms as distinct entities, as the usage of one of them but not the other may reflect structural properties: for example a lemma might have different names depending on a (sub-)field of mathematics, and manually merging them is not correct.
As a next step, we combine both datasets. Of the whole concept list, 1067 concepts find a match in at least one article. Among the 54177 articles, 35018 contain at least one of the concepts in our list (30369 for mathematics and 4649 for mathematical physics), and we also take the list of authors to analyse their contribution to the conceptual space. We construct the binary (non-weighted) co-occurrence simplicial complex (defined more formally below) over the 1067 nodes by including a (k−1)-simplex for each article containing k concepts, provided its concept set is not fully included in the concept set of another article, that is we only keep facets of the simplicial complex. Whenever the concept sets of two articles intersect, their corresponding simplices share a face of dimension (n−1), where n is the number of concepts in the intersection. The corresponding network, namely the 1-skeleton of the co-occurrence simplicial complex (that is, we only look at faces of dimension 0 and 1), has 1067 nodes and 32707 unweighted edges. Figure 1 shows the network and simplicial (concept) degree distribution, where the simplicial degree of a concept is the number of facets (articles) it belongs to. The sum of all simplicial degrees is 42009, which means there are 39.37 papers per concept on average.
Dimension of maximal simplices and simplicial degree. On the lefthand side the distribution of the number of concepts per article. Here we compare the distribution of the entire dataset (grey) with that of the papers included in the simplicial complex, which shows that most of the articles with few concepts are not included as they are a subset of other articles. On the righthand side the simplicial and network degree distributions are shown
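A minimal sketch of the facet construction described above (the function and variable names are ours, not the authors' pipeline; a quadratic scan over the concept sets is enough at this scale):

def cooccurrence_facets(article_concepts):
    """article_concepts: dict mapping an article id to its set of matched concepts.
    Keep only concept sets not strictly contained in another article's set:
    these are the facets of the co-occurrence simplicial complex."""
    sets = [frozenset(s) for s in article_concepts.values() if s]
    return {s for s in sets if not any(s < t for t in sets)}

facets = cooccurrence_facets({
    "a1": {"Schur's lemma", "Spectral theorem"},
    "a2": {"Schur's lemma", "Spectral theorem", "Stone-von Neumann theorem"},
    "a3": {"Riemann hypothesis"},
})
print(facets)   # "a1" disappears: its concept set is a face of the 2-simplex coming from "a2"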
Simplicial complexes
A simplicial complex is a space obtained as the union of simple elements (nodes, edges, triangles, tetrahedra and higher dimensional polytopes). Its elements are called simplices, where a k-simplex is a set of k+1 distinct nodes, and its subsets of cardinality d+1, with d≤k, are called its d-faces. We say that two simplices intersect if they share a common face. More formally:
Let V be a set of vertices, then an n-dimensional simplex is a set of cardinality n+1 of distinct elements of V, {v0,v1,…,vn}, vi∈V. A simplicial complex is a collection K of simplices such that if σ∈K and τ⊂σ then τ∈K, so for every simplex in K all its faces are also in K. The k-skeleton of K is the union of all simplices in K up to dimension k.
Simplicial complexes can be seen as a generalisation of networks beyond pairwise interactions; they differ from hypergraphs in that all subsets of a simplex must also be simplices of the complex. As an illustrative example of how simplicial complexes capture higher-order interactions where networks fail to do so, consider that in a co-occurrence network it is not possible to distinguish between three concepts appearing in the same paper and three concepts appearing in three papers each containing two of the concepts: in a network both cases are represented by a triangle, while in a simplicial complex the first is a 2-simplex (a filled triangle) and the second is a cycle made of three 1-simplices (an empty triangle).
As for networks, also for simplicial complexes we can define simplicial measures that are the higher-order analogs of network ones; for example (Estrada and Ross 2018) defines several simplicial centrality measures, providing also the characterisation of some families of simplicial complexes. In this paper we use the simplicial analogs of stars to provide a further description of the concept space.
A simplicial star \(S_{l}^{k}\) consists of a central (k−1)-simplex that is a face of \(l\) different k-simplices, and there is no other simplex but their subsimplices.
\(S^{1}_{5}\) for example is the usual star, with a node in the core connected to 5 nodes, while for \(S^{3}_{5}\) the core is a triangle. Examples are reported in Fig. 2.
Simplicial stars. Example of a \(S^{2}_{3}\) (top) and a \(S^{3}_{3}\) (bottom) star. Red nodes are theorems, grey nodes lemmas, while red edges delimit the simplices in the core of the star
Persistent homology
Persistent homology is a method in topological data analysis (Carlsson 2009; Patania et al. 2017b), based on algebraic topology, that studies the shape of the data by finding holes of different dimensions in the dataset. Holes are topological invariants that can be seen as voids bounded by simplices of different dimension: in dimension 0 they are connected components, in dimension 1 loops (voids bounded by edges), in dimension 2 voids bounded by triangles and so on. Here we give a brief and intuitive explanation of the homology of simplicial complexes, for details on how to compute it we refer to (Edelsbrunner and Harer 2008; Horak et al. 2009; Otter et al. 2017).
Table 1 Dataset
For every k-simplex of a simplicial complex K, consider the simplicial analog of a path, a k-chain, simply the formal sum of adjacent k-simplices (where by adjacent we mean that they share one (k−1)-face) with coefficients in some algebraic ring R uniquely identifying the chain (for example a 1-chain is a formal sum of oriented edges). It is a common practice to consider Z or Zn for R, and negative coefficients change the orientation. Without any limitations and for the sake of simplicity one can imagine R=Z2, as it permits us to eliminate questions of orientation: in this case −1·S=S for any simplex S. If we consider a k-simplex, it is bounded by its (k−1)-faces, and we call the corresponding (k−1)-chain, equal to the sum of these faces with coefficients 1 and coherent orientations, the boundary of that simplex. The boundary of a general k-chain is defined as the sum of the boundaries of the simplices in the chain taken with corresponding coefficients. Consider the linear map on the space of k-chains, mapping each k-simplex to its boundary, the boundary operator
$$\partial_{k} : C_{k} (K) \rightarrow C_{k-1} (K) $$
defined on the vector space with basis given by the simplices of K. A k-cycle is defined as a k-chain without a boundary, hence it is an element of the kernel of ∂k, and a k-boundary is a k-chain which is the boundary of a (k+1)-chain, so it is an element of the image of ∂k+1, which is a subset of the kernel of ∂k as the boundary of a boundary is empty, or ∂k∂k+1=0.
So we have defined two interesting subspaces: the collection of k-cycles and the collection of k-boundaries, and we can also take the quotient space as the second is a subset of the first: what is left in the quotient space are those k-cycles that do not bound (k+1)-subcomplexes, and these are the k-dimensional voids. More precisely, as there can be more k-cycles around the same hole, the elements of the quotient space can be divided in homological classes, each identifying a hole. This quotient space is the kth homology of the simplicial complex
$$H_{k} (K) = \frac{\text{ker}(\partial_{k})}{\text{Im}(\partial_{k+1})} $$
and its dimension
$$\beta_{k} (K) = \text{dim ker}(\partial_{k}) - \text{dim Im}(\partial_{k+1}) $$
is the number of homology classes or k-dimensional voids in the simplicial complex, the k-th Betti number of the homology. For example the zeroth Betti number counts the number of connected components in the graph that constitutes the 1-skeleton of the simplicial complex, the first Betti number the number of loops, the second Betti number counts voids.
To gain some intuition, consider Fig. 3: {ab,ac,bc} and {ac,ad,cd} are two 1-chains, of which {ac,ad,cd} is the boundary of the (filled) triangle acd while {ab,ac,bc} is not the boundary of any 2-chain. So {ab,ac,bc} is a 1-dimensional homological cycle H1. On the other hand the 2-chain {bce,bcf,bef,cef} is the boundary of the (filled) tetrahedron bcef, hence it is not a 2-dimensional homological cycle. In Fig. 4 the same 2-chain is a 2-dimensional homological cycle H2 as there the tetrahedron bcef is not in the simplicial complex anymore (as we are now taking the 2-skeleton of the complex in Fig. 3), so {bce,bcf,bef,cef} is not the boundary of any higher order chain and its triangular elements bound a void.
Network (left) and simplicial complex (right) representation. The (maximal) 2-simplices are in light blue, 3-simplices in red and the 4-simplex is in green. Notice that in the network these are indistinguishable
2-skeleton of the 4-dimensional complex of Fig. 3. As can be seen, 1-dimensional holes are preserved, while 2-dimensional holes are not; in particular there are 3 H2-holes that were not in the original complex, namely those inside the two 3-simplices, and one inside the 4-simplex
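The hollow versus filled triangle of Figs. 3 and 4 can be checked directly from the boundary matrices; the short sketch below (our own illustration; real matrix ranks give the same Betti numbers as Z2 coefficients in this torsion-free example) computes β1 in both cases.

import numpy as np

# Triangle on vertices a, b, c; edge order ab, ac, bc (oriented by vertex order)
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])          # boundary map from 1-chains to 0-chains
d2 = np.array([[ 1],
               [-1],
               [ 1]])                  # boundary of the filled triangle abc

def betti(dim_Ck, rank_dk, rank_dk_plus_1):
    """dim ker(d_k) - dim Im(d_{k+1}) over a field."""
    return dim_Ck - rank_dk - rank_dk_plus_1

r1 = np.linalg.matrix_rank(d1)
print(betti(3, r1, 0))                              # hollow triangle: beta_1 = 1 (one loop)
print(betti(3, r1, np.linalg.matrix_rank(d2)))      # filled triangle: beta_1 = 0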
The homological features of complexes are usually studied on a filtration of the complex, that is a sequence of simplicial complexes starting at the empty complex and ending with the full complex, so that the complex at step n<m is embedded in the complex at m for all the steps. In this way it is possible to focus on the persistency of homological features: as the filtration evolves the shape of data changes, so birth and death of holes can be recorded. A hole is born at step s if it appears for the first time in the corresponding step of the filtration, and dies at t if after step t the hole disappears. The difference between birth and death of homological features is called persistence, and can be recorded by a barcode, a multiset of intervals bounded below (Carlsson et al. 2005) visualizing the lifetime of the feature and its location across the filtration: the endpoints of each interval are the steps of the filtration where the homological feature is born and dies (Horak et al. 2009). An alternative visualisation is provided by the persistence diagrams, which are built by constructing a peak function for each barcode, proportional to its length (Edelsbrunner et al. 2002; Stolz et al. 2017).
The way the filtration is built depends on the analysis that one wants to do on the data; a very common method on a weighted network is the weighted rank clique filtration (Petri et al. 2013). This is done by filtering for weights: after listing all edge weights wt in descending order, at every step t one takes the graph obtained by keeping all the edges whose weight is greater than or equal to wt. The simplicial complex at that step of the filtration is built by including every maximal k-clique of the graph as a (k−1)-simplex. The obtained simplicial complex is called a clique complex.
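For completeness, a compact sketch of this weight filtration using networkx (it assumes an edge attribute named "weight"; isolated nodes are ignored here, and the function name is ours):

import networkx as nx

def weight_rank_clique_filtration(G):
    """Yield (threshold, facets): for each distinct edge weight w, in descending
    order, the maximal cliques of the subgraph made of edges with weight >= w."""
    thresholds = sorted({d["weight"] for _, _, d in G.edges(data=True)}, reverse=True)
    for w in thresholds:
        H = nx.Graph()
        H.add_edges_from((u, v) for u, v, d in G.edges(data=True) if d["weight"] >= w)
        yield w, [frozenset(c) for c in nx.find_cliques(H)]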
In this paper we use a temporal filtration instead, as in (Pal et al. 2017). Using article dates we build a temporal filtration
$$\mathcal{F}_{0} \subseteq \mathcal{F}_{1} \subseteq \dots \subseteq \mathcal{F}_{T} $$
where 0 and T are the first and last dates in our dataset (the time step is one month) and, for each i, the co-occurrence complex \(\mathcal {F}_{i}\) contains the simplices of concepts of all articles up to date i, so that \(\mathcal {F}_{i} \subseteq \mathcal {F}_{j}\) for i<j. As every article is a simplex, we do not need to build a clique complex like in the weighted rank clique filtration.
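In code, the temporal filtration amounts to grouping the article simplices by submission month and accumulating them; the sketch below is only illustrative (the homology computation itself is done with javaplex, as described later).

from collections import defaultdict

def temporal_filtration(dated_articles):
    """dated_articles: iterable of (date, concept_set), with date e.g. a (year, month) pair.
    Returns a list of (date, simplices_so_far): the monthly filtration F_0, F_1, ..., F_T."""
    by_date = defaultdict(set)
    for date, concepts in dated_articles:
        by_date[date].add(frozenset(concepts))
    filtration, accumulated = [], set()
    for date in sorted(by_date):
        accumulated |= by_date[date]
        filtration.append((date, set(accumulated)))
    return filtration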
Reducing the computational burden
Computing persistent homology is very costly if there are large simplices in the simplicial complex, as for each simplex the computation requires to list all the possible subsimplices. For instance in our (rather small) dataset, there are already simplices with 37 vertices and the number of their (k-1)-subsimplices is \(\frac {37!} {(37-k)!\,k!}\), making it impossible to finish the task in reasonable time with standard tools. In order to reduce the computational burden, we put an upper bound on the dimension of simplices, that is we take the subcomplex that only includes simplices up to a maximum dimension dM=5. In other words, we compute the homology of the dM-skeleton of our simplicial complex K.
In the dM-skeleton of K all simplices of dimension d>dM are replaced by collections of their dM-faces, that is a complex of dimension dM made by glueing together \(\binom {d+1}{d_{M}+1}\) \(d_{M}\)-simplices along their (dM−1)-faces, such that for each (dM−1)-face there are d+1−dM simplices sharing that face. To make an illustrative example, if dM=2, the 2-skeleton of a 3-simplex is the collection of triangles in the boundary of the tetrahedron.
It is straightforward to show that the dM-skeleton of K, \(K^{d_{M}}\), is (dM−1)-homologically equivalent to K, in the sense that they have the same homology groups up to \(H_{(d_{M} - 1)}\). Moreover, for d≤dM, the d-chains group of the dM-skeleton coincides with the d-chains group of K, as the two complexes have the same simplices in every dimension d≤dM; it follows that the d-cycles groups also coincide (see Fig. 5). This implies that any map ∂d with d≤dM is the same on \(K^{d_{M}}\) and K, hence the set of d-boundaries with d≤dM−1 is the same on the two complexes. So \(H_{d} (K) = \frac {\text {ker}(\partial _{d})}{\text {Im}(\partial _{d+1})} = H_{d} (K^{d_{M}}) \) for d≤dM−1.
Chains, cycles and boundaries sets and their maps under the boundary operator for the d-skeleton of a complex K, with d=dM. Cd, Zd and Bd represent the collections of d-chains, d-cycles and d-boundaries respectively. Notice that in K we may well have Cd+1 and higher dimensional chains with corresponding boundary operators, while in \(K^{d_{M}}\) we start from \(C_{d_{M}}\) as the largest simplices we have are dM-dimensional
As a trivial example consider the 2-skeleton of the tetrahedron: it contains a homological cycle of dimension 2, as there is a void bounded by triangles inside the tetrahedron, but no homological cycle of dimension 1, as all its edges are in the boundary of some 2-simplex (and, being connected, it has a single connected component). An illustration of the 2-skeleton of a complex can be seen in Fig. 4.
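In practice the reduction acts on the facet list: every facet with more than dM+1 concepts is replaced by its dM-faces, as in the sketch below (names are ours).

from itertools import combinations

def skeleton(facets, d_max=5):
    """Replace each facet with more than d_max + 1 vertices by its d_max-faces;
    smaller facets are kept as they are. Homology up to dimension d_max - 1 is unchanged."""
    reduced = set()
    for facet in map(frozenset, facets):
        if len(facet) <= d_max + 1:
            reduced.add(facet)
        else:
            reduced.update(frozenset(f) for f in combinations(sorted(facet), d_max + 1))
    return reduced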
We focus on homological cycles of dimension 1 and 2, respectively two dimensional holes bounded by edges and three dimensional holes bounded by triangles. Persistent homology is computed using javaplex (Adams et al. 2014), and the algorithms implemented here are based on (Zomorodian and Carlsson 2005). Without going into detail, the computation of persistent homology is formulated as a matrix decomposition problem: the boundary operator ∂k has a standard matrix representation, Mk. The null space of Mk corresponds to cycles Zk and its range-space to boundaries Bk−1, so births and deaths of holes are detected when the rank and nullity of Mk change. In other words the algorithm allows to detect births and deaths of holes but not to localize them precisely, and for each hole it computes a representative cycle that is not necessarily the shortest cycle surrounding the hole. We use these cycle representatives for our analysis, so it is necessary to raise some caveats: when we refer to the size of the hole we are referring to the size of its representative, which can be safely considered as a proxy of the size of the hole, as the representative is a random cycle surrounding the hole. The analysis of hole killers is more problematic, as it may be that only some of the concepts in the representative cycle are actually part of the hole. Recall that a homological hole is a k-cycle which is not the boundary of any higher-order structure, which means that the concepts in a k-cycle only appear together in sets of at most k+1 elements (in the k-simplices making the cycle). A k-dimensional hole dies when all of its concepts appear in an article together with at least k+1 other concepts in the hole. Consider for example the H1 cycle on the left of Fig. 6: its concepts appear in the same paper in pairs and at most in pairs. Hence they are related but at the same time they are conceptually distant. This hole could be killed by an article including all of its concepts, or by a collection of articles that each include at least 3 of its concepts, covering all concepts in the hole. Clearly these articles can appear at different steps of the filtration, so that the hole progressively "shrinks" until all of its concepts are covered, and that step of the filtration is registered as the death of the hole. We use this information to detect potential hole killers: we check all articles appearing in the filtration when the hole dies, and we select those having an intersection with the cycle representative which is at least k+2. If there is more than one, we take the simplex with the largest intersection with the simplices in the cycle. So when we refer to a hole killer, we mean the last simplex that closes the representative cycle. By using this approach we are able to find hole killers only for a subset of representative cycles, and these representative cycles are those more likely to have a large intersection with the shortest cycle surrounding the hole, so we can use these representative cycles and their killers to illustrate some examples.
Example of a H1 cycle, a hole bounded by edges (left) and a H2 cycle, hole bounded by triangles (right). Nodes color: red for theorems, grey for lemmas and green for equations
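The selection rule for killer candidates described above can be sketched as follows (an illustration with our own names, not the code used for the analysis):

def candidate_killer(cycle_concepts, death_step_articles, k):
    """cycle_concepts: set of concepts in the representative k-cycle.
    death_step_articles: dict article_id -> concept set, for articles entering the
    filtration at the death step. Return the article with the largest overlap with
    the cycle, provided the overlap contains at least k + 2 concepts (else None)."""
    best, best_overlap = None, k + 1          # require strictly more than k + 1 shared concepts
    for article, concepts in death_step_articles.items():
        overlap = len(cycle_concepts & set(concepts))
        if overlap > best_overlap:
            best, best_overlap = article, overlap
    return best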
Figure 7 reports the barcodes for H1 and H2, where holes are ordered by their death time (that is, the step of the filtration at which they disappear), and Fig. 8 the corresponding persistence diagrams. The first thing to notice is that most holes persist up to the end of the filtration, that is up to 03/2007, meaning that there are several areas of low connectivity (both in H1 and in H2) in the conceptual space, while this did not emerge from the network analysis of our data. Moreover new holes are born continuously, along all the steps of the filtration. The fact that holes continue to be born at every point in time, and that most of them don't die, finds a possible interpretation in that the evolution of research in mathematics proceeds by connecting new conceptual areas in a cyclic way, and rarely do these concepts all contribute together to the production of scientific advances (which would kill the hole). So this suggests that the death of conceptual holes may be a sign of important advances in mathematics, such as the emergence of a new subfield.
Barcodes for H1 (top) and H2 (bottom), ordered by death time. The left and right endpoints of each segment represent the first and last step in the filtration where the hole appears
Persistence diagrams for H1 and H2. Persistence diagrams report the same information as barcodes, where each barcode is mapped to a peak function that is proportional to the life of the homological feature. The heatmap reports information on the length of the cycle and the coordinates are birth and death times
We investigate which are the most important concepts in H1 and H2 by counting the number of times each concept appears in a cycle, divided by the number of times it appears in different articles, to correct for the fact that very common concepts are also more likely to appear in cycles. The results are reported in Fig. 9; notice that most of the concepts are theorems, and that there are 5 concepts in common between the 20 most important of H1 and H2, even if their ordering is not preserved.
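A hedged sketch of this normalisation, assuming cycle representatives are stored as lists of concepts and articles as sets of concepts (both data layouts are our own assumption, not the paper's code):

from collections import Counter

def concept_importance(cycles, articles):
    # cycles: list of concept lists (representative cycles); articles: id -> set of concepts
    in_cycles = Counter(c for cycle in cycles for c in set(cycle))
    in_articles = Counter(c for concepts in articles.values() for c in concepts)
    scores = {c: in_cycles[c] / in_articles[c] for c in in_cycles}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)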
Concept importance in cycles. The 20 most important concepts appearing in H1 cycles (top) and H2 cycles (bottom). The bar lengths correspond to the number of appearances in cycles divided by the number of appearances in edges and triangles respectively
A clear feature emerging from both H1 and H2 is that longer cycles are more difficult to break. Considering only those cycles that die before the end of the filtration (otherwise they have infinite persistence), we compute the average persistence among all cycles of a given length. Figure 10 plots cycle length versus the average persistence of cycles of that length: despite some noise, a positive trend appears clearly. This suggests that concepts appearing in a long cycle that are not successive elements of the k-chain (and so do not appear in the same article) are farther apart in the conceptual space of mathematics than those appearing in shorter cycles. Notice that this relation is somewhat natural: the more concepts there are in a k-cycle, the more articles (each including at least k+2 of the concepts in the cycle) are needed in order to kill the hole.
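The average-persistence computation is a simple grouping; a minimal sketch, assuming each finite cycle is summarised by its length, birth step and death step (our own representation):

from collections import defaultdict

def mean_persistence_by_length(finite_cycles):
    # finite_cycles: iterable of (length, birth, death) for cycles dying before the filtration ends
    buckets = defaultdict(list)
    for length, birth, death in finite_cycles:
        buckets[length].append(death - birth)
    return {n: sum(v) / len(v) for n, v in sorted(buckets.items())}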
Longer cycles live longer. The plots report cycle size on the x-axis and cycle mean persistence on the y-axis for H1 (left) and H2 (right). In both cases the trend shows that longer cycles are more difficult to break
Looking at the distribution of killers' sizes, the largest simplex in H0 is made of 27 concepts, while in both H1 and H2 the largest has 38 concepts (it is actually the same article in both). Interestingly, and not surprisingly, these two articles are both surveys: the largest for H0 regards open questions in number theory (Waldschmidt 2004) and the largest for H1 is a survey on differential geometry (Yau 2006). The most frequent killer sizes for H0, H1 and H2 are respectively 4, 7 and 11.
To give a better idea of the meaning of hole deaths, let us consider some examples. The smallest cycle in H1 is an empty triangle; one such cycle is given by the three simplices {(Schur's lemma, Stone-Von Neumann Theorem), (Schur's lemma, Spectral Theorem), (Spectral Theorem, Stone-Von Neumann Theorem)}, which is killed by a 2-simplex when the three concepts appear together in the paper (Mantoiu et al. 2004). Another interesting example is a 5-step-long cycle in H1, {(Boltzmann equation, Alternate Interior Angles Theorem), (Boltzmann equation, Vlasov equation), (Inverse function theorem, Vlasov equation), (Arzelá - Ascoli theorem, Alternate Interior Angles Theorem), (Arzelá - Ascoli theorem, Inverse function theorem)}, which is killed by the 10-simplex whose nodes are {Blum's speedup theorem, Boltzmann equation, Alternate Interior Angles Theorem, Kramers theorem, Perpendicular axis theorem, Ordinary differential equation, Kronecker's theorem, Arzelá - Ascoli theorem, Navier - Stokes equations, Vlasov equation, Moreau's theorem} (Gottlieb 2000). The article, classified in arXiv as Probability, establishes the conditions for a family of n-particle Markov processes to propagate chaos, and shows its application to kinetic theory. We think this is another possible interpretation of killing holes: a theoretical result that has several applications, hence bridges related areas and closes a homological cycle.
Simplicial stars represent potentially interesting structures in the conceptual space, which can be visualised as small substructures attached to the 'surface' of a densely connected cluster, like receptors on the membrane of a cell. To grasp the intuition, consider partitioning the concepts of an Sk star between those in the core (the 0-faces of the core k-simplex) and those in the periphery (the 0-faces of the (k+1)-simplices that have the core as common face, excluding the core faces): by definition there can be no edge (or higher-order simplex) between any of the concepts in the periphery. This means that periphery nodes 'touch' the surface of a densely connected area, and each of them belongs to a different simplex lying on the surface, while nodes in the core are one step away from the surface. Considering only stars with at least two such simplices, we count 567 S2 stars and 284 S3 stars. We do not check for higher-order stars for computational reasons.
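A small sketch of how such cores and peripheries can be extracted, assuming each simplex is stored as a frozenset of concepts. This helper is our own illustration and omits the additional check that periphery concepts are pairwise non-adjacent:

from itertools import combinations
from collections import defaultdict

def find_stars(simplices, k, min_arms=2):
    # candidate S_k stars: a core k-simplex (k+1 concepts) shared by >= min_arms (k+1)-simplices
    cofaces = defaultdict(set)
    for s in simplices:
        if len(s) == k + 2:                              # a (k+1)-simplex
            for core in combinations(sorted(s), k + 1):
                cofaces[frozenset(core)].add(s)
    stars = []
    for core, arms in cofaces.items():
        if len(arms) >= min_arms:
            periphery = set().union(*arms) - core
            stars.append((core, periphery))
    return stars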
Figures 11 and 12 report the 20 most important concepts in the cores and peripheries of S3 and S2 stars respectively, adjusted for the number of times concepts appear in triangles (for cores) and tetrahedra (for peripheries). Notice that in both cases all but one of the concepts are theorems or conjectures. Figure 9 reports the ranking of the first 20 concepts in H1 and H2, where we adjust for the frequency of appearance of a concept in the simplices that constitute the cycles, hence edges and triangles respectively. In these two rankings too, the large majority of the concepts are theorems.
Concept importance in S3 stars. The 20 most important concepts appearing in the core (top) and the periphery (bottom) of S3 stars. The bars report the number of appearances in cores or peripheries divided by the number of appearances in triangles and tetrahedra respectively
Concept importance in S2 stars. The 20 most important concepts appearing in the core (top) and the periphery (bottom) of S2 stars. The bars report the number of appearances in cores or peripheries divided by the number of appearances in edges and triangles respectively
In order to check whether edges that appear in cycles are also likely to appear in stars, we compute the intersection between the set of edges in stars (differentiating between edges in the core and edges in the periphery) and those in cycles, and divide by the total number of edges in the corresponding cycles. As appearing in a star is a Bernoulli variable, we can easily compute the standard deviations of our estimated probabilities. Table 2 reports the results: it is interesting that edges in the cores of both S2 and S3 stars are more likely to appear in cycles of all dimensions than edges in the peripheries. It is particularly striking that edges in the peripheries of S3 stars never appear in any cycle. Moreover, edges in stars (both cores and peripheries) are more likely to appear in cycles than a random edge, except in one case: a random edge is more likely to be in an H3 cycle than an edge in the periphery of an S2 star.
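A minimal sketch of the estimate and its Bernoulli standard error, with an assumed representation of edges as frozensets of two concepts (our own convention):

import math

def edge_in_cycle_rate(edges, cycle_edges):
    # edges: collection of frozenset({concept_a, concept_b}); cycle_edges: set of such edges
    n = len(edges)
    hits = sum(1 for e in edges if e in cycle_edges)
    p = hits / n
    return p, math.sqrt(p * (1 - p) / n)     # estimated probability and its standard error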
Table 2 Probabilities and standard errors for edges in the cores and peripheries of stars and of a random edge to be in Hk
Authors' analysis
We use the conceptual content of articles to classify the activity of researchers, by constructing for each author an activity vector:
$$\mathbf{a}_{i} = \left(a^{1}_{i}, \dots, a^{N}_{i}\right) $$
where \(a^{c}_{i} \) represents the relative importance of concept c in the activity of author i, given by the number of articles of which i is one of the authors that contain concept c, divided by the total number of concepts the author used across different articles (counted with multiplicity), so that the entries of \(\mathbf{a}_{i}\) sum to one for every author. As in (Gurciullo et al. 2015), we can use this vector to map authors' research activity on the basis of the broadness of their contribution to the concept space, as captured by the entropy:
$$\lambda_{i} = -\sum_{c} a^{c}_{i} \ln a^{c}_{i} $$
λi≥0, and it is 0 only for those authors who do research about a single concept. We suggest a classification of authors based on their entropy level: we define author i as a specialist if λi<1, a polymath if λi>2, and mixed if 1≤λi≤2. The choice of the thresholds is arbitrary and made just for exposition's sake, so this classification is not to be taken literally. One caveat is that this estimator of an author's activity is not very informative for authors who published only one article. Also, an author with high entropy may be a specialist in one specific topic that has applications across disciplines, and hence have a diverse range of collaborations rather than being a polymath in a strict sense. Of course we cannot disentangle each author's contribution to a paper but, provided we are clear on the information conveyed by this measure of specialisation, we can use it to capture the relation between the conceptual breadth of research (whether it is carried out by a single author who really is a polymath or by a research group) and homological cycles.
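A minimal sketch of the activity vector, its entropy and the (arbitrary) thresholds above, assuming an author's publications are given as a list of concept sets:

import math
from collections import Counter

def author_profile(papers):
    # papers: list of concept sets, one per article authored by this author
    counts = Counter(c for concepts in papers for c in concepts)   # articles containing each concept
    total = sum(counts.values())
    activity = {c: n / total for c, n in counts.items()}           # entries sum to one
    entropy = -sum(a * math.log(a) for a in activity.values())
    label = "specialist" if entropy < 1 else "polymath" if entropy > 2 else "mixed"
    return activity, entropy, label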
While a positive relation between the number of concepts in the activity vector and its entropy is expected, we want to see how this relation compares with a null model. The null model is constructed as follows: for each concept in our list we add to an urn as many copies of it as the number of articles in which it appears. Then for each author we count how many concepts she used across all her publications, or equivalently we sum the number of concepts used in each paper she authored (for example, an author who published three papers using concept c1, one using concept c2 and one using concept c3 has a total count of 5). Call kM the maximum of these counts across all authors; then for each integer k∈[1,kM] we extract k concepts at random from the urn and compute the entropy of the extracted k-tuple, repeating the operation 1000 times for each k and averaging the entropy over the 1000 extractions. To compare with authors, we group them according to the number of concepts used, and compute the average entropy for each k. We estimate the relation \(\hat {e}\) between the number of concepts k and the average entropy using least squares, finding a logarithmic relation \(\hat {e}_{null} = A + B \log (1+k)\) for the null model, while for the data \(\hat {e}_{data} = A + B \log (1+k)/k\), as can be seen in Fig. 13. For the data (A,B) is (3.459,−5.257), and (0.170,0.863) for the null model, showing that, for large enough values of k, the fit for the data always lies below the null model and, more importantly, that the relation emerging from the data has a horizontal asymptote, while the null model does not. This tells us that there is an upper bound on conceptual entropy: even very prolific polymaths show some degree of specialisation, in the sense that as the number of articles increases they eventually stop broadening their research with new concepts, and tend to publish new research on concepts they have already explored.
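A hedged sketch of the urn procedure and of a least-squares fit of the null-model form (we read the extraction as sampling k balls from the urn without replacement; the function names are our own):

import math
import random
from collections import Counter
import numpy as np

def null_entropy(concept_article_counts, k, n_draws=1000, seed=0):
    # concept_article_counts: concept -> number of articles using it
    rng = random.Random(seed)
    urn = [c for c, n in concept_article_counts.items() for _ in range(n)]
    entropies = []
    for _ in range(n_draws):
        draw = Counter(rng.sample(urn, k))                 # k balls from the urn
        probs = [n / k for n in draw.values()]
        entropies.append(-sum(p * math.log(p) for p in probs))
    return sum(entropies) / n_draws

def fit_log(ks, mean_entropies):
    # least-squares fit of e(k) = A + B*log(1+k), the form found for the null model
    B, A = np.polyfit(np.log1p(ks), mean_entropies, 1)
    return A, B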
Authors' entropy versus null model. Here we compare the entropy of authors in our dataset with the average entropy of a null model. Entropy for authors is computed as the average over all authors with k concepts, while entropy for the null model is the average over 1000 random extractions of k concepts from our list. Average entropy for authors is always below the null model, and it is bounded above by a horizontal asymptote
To understand whether there is any relation between authors' profiles in terms of their specialisation and their contribution to homological cycles, we compute for each author i a measure of homological importance
$$h_{i} = \sum_{c \in C_{i}, k} \mathbf{1}_{H_{k}}(c) $$
where Ci is the set of concepts used by author i, and \(\mathbf {1}_{H_{k}}(c)\) is the indicator function, equal to 1 if concept c belongs to a homological cycle in Hk.
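A minimal sketch of this count, assuming the cycle concepts are available per dimension and the excluded most-frequent concepts are passed in explicitly (our own data layout):

def homological_importance(author_concepts, cycle_concepts, exclude=frozenset()):
    # author_concepts: set C_i; cycle_concepts: k -> set of concepts appearing in H_k cycles
    return sum(1
               for k, concepts in cycle_concepts.items()
               for c in author_concepts
               if c in concepts and c not in exclude)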
To correct for the fact that the most frequent concepts tend to appear in cycles more often, we exclude the 100 most frequent concepts from the computation of the homological importance. Even after removing the 100 most frequent concepts, 86.5% of authors contribute at least once to a homological cycle. This confirms that homological cycles are ubiquitous, and really constitute a feature of mathematical research. Figure 14 shows the scatterplot of authors' conceptual entropy and homological importance. It reveals a non-linear positive relation between the two, so more interdisciplinary authors contribute more to homological cycles, confirming the intuition that cycles are made of concepts that belong to different areas of mathematics which are largely unconnected to one another. Polymaths are often found on the boundary of these voids, surrounded by concepts belonging to different conceptual areas. In Fig. 15 we show the relation \(\hat {c}\) between average entropy and average importance in cycles (least-squares fit), and how it compares with the null model. The relation between average entropy \(\hat {\lambda }\) and average contribution to cycles is exponential both for the null model and for the data, with \(\hat {c} = A + B^{\hat {\lambda }}\); the null model always lies above the data, with (A,B)=(3.04,3.92) for the null model and (−4.50,3.05) for the data.
Author contribution to cycles. The plot shows that contribution to homological cycles in H1, H2 and H3 is positively correlated to the conceptual entropy
Authors' contribution to cycles versus null model. Here we compare the contribution to homological cycles as a function of authors' entropy in our dataset with the average contribution under the null model, the latter always lying above the fit for the authors' data
This work is a first attempt to explore the importance of homological holes in mathematics, and it is important to issue certain caveats here. It seems clear that our observations would not have been possible within a classical network analysis of co-occurrences, and we also think that hole deaths can be informative in capturing advances in the discipline. However, we do not claim to be describing the essence of mathematical practice here. If, on the one hand, by extracting concepts from the whole text of an article instead of just focusing on the keywords we avoid the bias of authors' own classification of their work, on the other hand it is possible that for some articles the conceptual content is not well captured by our approach: we cannot exclude that the set of concepts that best identify the content of an article has no exact match in our list, while those finding a match are poorly representative of the article. We believe that such cases are a minority, as our list is very comprehensive, but this is still an aspect to take into consideration when analysing our results, even if it is an issue that potentially arises every time one does text analysis, irrespective of the tools used to explore the data. In this direction, an important step would be to assess the robustness of our observations by purposely introducing errors in our data analysis, for instance by focusing only on a fraction of our list of identified concepts and keeping the rest unknown.
Overall, this paper suggests new directions for the study of co-occurrences by focusing on their higher-order properties, and there is substantial space for further development. The first and most urgent point, in our opinion, regards the necessity to validate the findings from the data against well-founded null models. Recently, null models for simplicial complexes have been developed (Young et al. 2017; Courtney and Bianconi 2016), which we did not use here for computational reasons, as producing samples from our relatively large dataset proved too challenging to manage with our resources. Another very important point is to devise methods and algorithms to localise homological holes precisely, as this could be very useful for specific applications and can still be considered an open problem. In this regard we think it would be pivotal to unify the study of the geometry and the topology of the structure. A geometrical approach was used in (Eckmann and Moses 2002), where curvature was found to be a good measure of thematic cohesion in the WWW. More recently, (Wu et al. 2015; Bianconi and Rahmede 2017) studied the emergence of geometry in growing simplicial complexes, without a pre-existing embedding space and metric. This approach could be naturally extended to our case, as we have no natural embedding space for concepts, and also because the co-occurrence simplicial complex that we study is dynamic by nature, with new simplices appearing at each time step and eventually gluing to existing simplices. Embedding concepts in a space could facilitate the localisation of holes; in this regard (Bianconi and Rahmede 2017) find that the natural embedding for their growing simplicial complexes is hyperbolic, and in their case the position of the incoming nodes depends on the position of the nodes of the face of the simplex to which the new simplex glues. Such a framework could be ideal for the purpose of precise hole identification in our case. Moreover, by adopting a growing simplicial complex model (generalized to allow for simplices of different sizes at each time step) and fitting it to our data, we could be able to predict the future evolution of the conceptual space of mathematics.
In this paper we have studied the topological structure of conceptual co-occurrences in mathematics articles, using data from arXiv. We modelled co-occurrences in a simplicial framework, focusing on higher-order relations between concepts and applying topological data analysis tools to explore the evolution of research in mathematics. We find that homological holes are ubiquitous in mathematics, appearing to reflect an intrinsic characteristic of how research evolves in the field: holes are likely to represent groups of concepts that are closely related but do not belong to a unitary subfield, and the death of a hole is either a sign that anticipates a potential advance in that conceptual area (for example a review trying to bridge the concepts and suggesting research lines), or an actual advance, that is, an article (or a set of articles) that unifies a subgroup of concepts in the cycle, for example a theoretical result with applications to different areas. Less interestingly, and we cannot exclude it since the only way to verify would be to read each of the papers killing a hole, a hole killer (especially one of very large size) could be a scarcely relevant article mentioning many concepts without providing any true contribution.
We also find that the higher the number of concepts in a hole, the longer it takes to die, hence the length of a hole is a good proxy of how distant its concepts are, in terms of their likelihood to appear together in an article. In this sense large holes can be seen as potential spaces for important advances in mathematics. Moreover, we further explore the structure of co-occurrences by looking at the simplicial analogues of stars in higher dimensions, which represent groups of concepts (those in the core of the star) that support and connect many otherwise unrelated concepts, and we find that concepts appearing in stars also tend to appear in holes more often than they would at random, suggesting that both structures lie at the frontier of mathematical research.
We also explore authors' conceptual profiles by ordering them on the basis of their conceptual entropy, so that we can differentiate between authors who tend to specialise and publish mostly about few concepts, and others who do research on a broad range of topics, whom we call polymaths. Comparing authors' profiles with a random model, we find that authors' entropy, as a function of how many concepts they use across different publications, is bounded above, while in the null model entropy keeps increasing for larger sets of random concepts. This is reasonable, and means that even the most prolific polymaths, even if they publish a large number of articles, still tend to specialise to some extent, instead of always doing research on new topics. Moreover, we find that polymaths contribute to homological holes more than specialists, so polymaths are often at the frontier of research.
Further work could be done using larger datasets, as it would be very interesting to explore the birth and death of holes over a longer time span, and to study simplicial co-occurrences in other disciplines, in order to see whether any differences appear in the way research evolves in different fields. Furthermore, conceptual spaces emerging from co-occurrence relations could be explored by adding a further dimension to the filtration: here we focus on a temporal filtration, disregarding the weights of simplices; this could be extended by filtering along time and weight using multidimensional persistence (Carlsson and Zomorodian 2009).
Adams, H, Tausz A, Vejdemo-Johansson M (2014) javaplex: A research software package for persistent (co)homology. In: Hong H Yap C (eds)Mathematical Software – ICMS 2014, 129–136.. Springer, Berlin, Heidelberg.
Akimushkin, C, Amancio DR, Oliveira ONJ (2017) Text authorship identified using the dynamics of word co-occurrence networks. PLoS ONE 12(1):1–101.
Bianconi, G, Rahmede C (2017) Emergent hyperbolic network geometry. Sci Rep 7(41974). https://doi.org/10.1038/srep41974.
Carlsson, G (2009) Topology and data. Bullettin AMS 46(2):255–308.
Carlsson, G, Zomorodian A (2009) The theory of multidimensional persistence. Discrete Comput Geom 42:71–93.
Carlsson, G, Zomorodian A, Collins A, Guibas LJ (2005) Persistence barcodes for shapes. Int J Shape Model 11:149–188.
Carstens, CJ, Horadam KJ (2013) Persistent homology of collaboration networks. Math Probl Eng 2013:815035. https://doi.org/10.1155/2013/815035.
Chan, JM, Carlsson G, Rabadan R (2013) Topology of viral evolution. PNAS 110(46):18566–18571.
Chiang, IJ (2007) Discover the semantic topology in high-dimensional data. Expert Syst Appl 33:256–262.
Courtney, O, Bianconi G (2016) Generalized network structures: The configuration model and the canonical ensemble of simplicial complexes. Phys Rev E 93:062311. https://doi.org/10.1103/PhysRevE.93.062311.
Eckmann, J-P, Moses E (2002) Curvature of co-links uncovers hidden thematic layers in the world wide web. PNAS 99(9):5825–5829.
Edelsbrunner, H, Harer J (2008) Persistent homology - a survey. Contemp Math 453(2):255–308.
Edelsbrunner, H, Letscher D, Zomorodian A (2002) Topological persistence and simplification. Discret Comput Geom 28(4):511–533.
Estrada, E, Ross G (2018) Centralities in simplicial complexes. applications to protein interaction networks. J Theor Biol 438:46–60.
Ferrer-i-Cancho, R, Solé RV (2001) The small world of human language. Proc R Soc Lond B 268:2261–2265. https://doi.org/10.1098/rspb.2001.1800.
Garg, M, Kumar M (2018) Identifying influential segments from word co-occurrence networks using ahp. Cogn Syst Res 47:23–41.
Gottlieb, AD (2000) Markov transitions and the propagation of chaos. ArXiv Math E-prints. math/0001076.
Gurciullo, S, et al. (2015) Complex politics: A quantitative semantic and topological analysis of uk house of commons debates. ArXiv E-prints. 1510.03797.
Horak, D, Maletic S, Rajkovic M (2009) Persistent homology of complex networks. J Stat Mech Theory Exp 3:3–34.
Jenssen, T-K, Laegreid A, Komorowski J, Hovig E (2001) A literature network of human genes for high-throughput analysis of gene expression. Nat Genet 28:21–28.
Jo, Y, Hopcroft JE, Lagoze C (2011) The web of topics: Discovering the topology of topic evolution in a corpus In: Proceedings of the 20th International Conference on World Wide Web, March 28 - April 01, 2011, Hyberabad, India, 256–266.. ACM, New York.
Lazer, D, Mergel I, Friedman A (2009) Co-citation of prominent social network articles in sociology journals: The evolving canon. Connections 29(1).
Mamuye, AL, Rucco M, Tesei L, Merelli E (2016) Persistent homology analysis of rna. Mol Based Math Biol 4:14–25.
Mantoiu, M, Purice R, Richard S (2004) Twisted crossed products and magnetic pseudodifferential operators. ArXiv Math Phys E-prints. math-ph/0403016.
Mullen, EK, et al. (2014) Gene co-citation networks associated with worker sterility in honey bees. BMC Syst Biol 8(38). https://doi.org/10.1186/1752-0509-8-38.
Otter, N, Porter MA, Tillmann U, Grindrod P, Harrington H (2017) A roadmap for the computation of persistent homology. EPJ Data Sci 6(17). https://doi.org/10.1140/epjds/s13688-017-0109-5.
Pal, S, Moore TJ, Ramanathan R, Swami A (2017) Comparative topological signatures of growing collaboration networks. In: Gonçalves B, Menezes RSR, Zlatic V (eds)Proceedings of the 8th Conference on Complex Networks CompleNet 2017, 16–27.. Stoneham: Butterworth-Heinemann, Cham.
Patania, A, Petri G, Vaccarino F (2017a) The shape of collaborations. EPJ Data Sci 6(18). https://doi.org/10.1140/epjds/s13688-017-0114-8.
Patania, A, Petri G, Vaccarino F (2017b) Topological analysis of data. EPJ Data Sci 6(7). https://doi.org/10.1140/epjds/s13688-017-0104-x.
Petri, G, Expert P, Turkheimer F, Carhart-Harris R, Nutt D, Hellyer PJ, Vaccarino F (2014) Homological scaffolds of brain functional networks. J R Soc Interface 11(20140873). https://doi.org/10.1098/rsif.2014.0873.
Petri, G, Scolamiero M, Donato I, Vaccarino F (2013) Topological strata of weighted complex networks. PLoS ONE 8(6). https://doi.org/10.1371/journal.pone.0066506.
Radhakrishnan, S, Erbis S, Isaacs JA, Kamarthi S (2017) Novel keyword co-occurrence network-based methods to foster systematic reviews of scientific literature. PLoS ONE 12(3):e0172778. https://doi.org/10.1371/journal.pone.0172778.
Sami, IR, Farrahi K (2017) A simplified topological representation of the text for local and global context In: Proceedings of the 2017 ACM on Multimedia Conference, Mountain View, California, USA, 1451–1456.. ACM, New York.
Serrano, MA, Boguña M, Vespignani A (2009) Extracting the multiscale backbone of complex weighted networks. PNAS 106:6483–6488.
Slater, PB (2009) A two-stage algorithm for extracting the multiscale backbone of complex weighted networks. PNAS 106(26). https://doi.org/10.1073/pnas.0904725106.
Stolz, BJ, Harrington HA, Porter MA (2017) Persistent homology of time-dependent functional networks constructed from coupled time series. Chaos 27(047410). https://doi.org/10.1063/1.4978997.
Su, H-N, Lee P-C (2010) Mapping knowledge structure by keyword co-occurrence: a first look at journal papers in technology foresight. Scientometrics 85:65–79. https://doi.org/10.1007/s11192-010-0259-8.
Taylor, D, et al. (2015) Topological data analysis of contagion maps for examining spreading processes on networks. Nat Commun 6(7723). https://doi.org/10.1038/ncomms8723.
Wang, X, Zhang X, Xu S (2011) Patent co-citation networks of fortune 500 companies. Scientometrics 88(3):761–770.
Yau, S-T (2006) Perspectives on geometric analysis. ArXiv Math e-prints. math/0602363.
Young, J-C, Petri G, Vaccarino F, Patania A (2017) Construction of and efficient sampling from the simplicial configuration model. Phys Rev E 96(3):032312. https://doi.org/10.1103/PhysRevE.96.032312.
Wagner, H, et al. (2012) Computational topology in text mining. In: Ferri M, Frosini P, Landi C, Cerri A, Di Fabio B (eds)Computational Topology in Image Context, 68–78.. Springer, Berlin, Heidelberg.
Waldschmidt, M (2004) Open diophantine problems. Mosc Math J 4(1):245–305.
Wu, Z, Menichetti G, Rahmede C, Bianconi G (2015) Emergent complex network geometry. Sci Rep 5(10073). https://doi.org/10.1038/srep10073.
Zhang, J, Xie J, Hou W, Tu X, Xu J, et al (2012) Mapping the knowledge structure of research on patient adherence: knowledge domain visualization based co-word analysis and social network analysis. PLoS ONE 7(4):e34497. https://doi.org/10.1371/journal.pone.0034497.
Zomorodian, A, Carlsson G (2005) Computing persistent homology. Discret Comput Geom 33:249–274.
The authors thank two anonymous reviewers for their constructive comments and Henry Adams, Oliver Vipond and Alexey Medvedev for useful discussions and suggestions.
Daniele Cassese wishes to thank support from FNRS (Belgium).
Data available upon request.
University of Namur and NaXys, Rempart de la Vierge, Namur, 5000, Belgium
Vsevolod Salnikov & Daniele Cassese
ICTEAM, University of Louvain, Av Georges Lemaître, Louvain-la-Neuve, 1348, Belgium
Daniele Cassese
Mathematical Institute, University of Oxford, Woodstock Road, Oxford, OX2 6GG, UK
Daniele Cassese & Renaud Lambiotte
Department of Mathematics, Imperial College, South Kensington Campus, London, SW7 2AZ, UK
Nick S. Jones
Vsevolod Salnikov
Renaud Lambiotte
All authors conceived the study; VS and DC performed the numerical simulations and created the Figures; All authors wrote and reviewed the manuscript. All authors read and approved the final manuscript.
Correspondence to Daniele Cassese.
Salnikov, V., Cassese, D., Lambiotte, R. et al. Co-occurrence simplicial complexes in mathematics: identifying the holes of knowledge. Appl Netw Sci 3, 37 (2018). https://doi.org/10.1007/s41109-018-0074-3
Topological data analysis
Special Issue of the 6th International Conference on Complex Networks and Their Applications | CommonCrawl |
Cost of electricity by source
Comparison of the levelized cost of electricity for some newly built renewable and fossil-fuel based power stations in euro per kWh (Germany, 2013)
Note: employed technologies and LCOE differ by country and change over time.
In electrical power generation, the distinct ways of generating electricity incur significantly different costs. Calculations of these costs at the point of connection to a load or to the electricity grid can be made. The cost is typically given per kilowatt-hour or megawatt-hour. It includes the initial capital, discount rate, as well as the costs of continuous operation, fuel, and maintenance. This type of calculation assists policy makers, researchers and others to guide discussions and decision making.
The levelized cost of electricity (LCOE) is a measure of a power source which attempts to compare different methods of electricity generation on a comparable basis. It is an economic assessment of the average total cost to build and operate a power-generating asset over its lifetime divided by the total power output of the asset over that lifetime. The LCOE can also be regarded as the cost at which electricity must be generated in order to break-even over the lifetime of the project.
While calculating costs, several internal cost factors have to be considered.[1] (Note the use of "costs," which is not the actual selling price, since this can be affected by a variety of factors such as subsidies and taxes):
Capital costs (including waste disposal and decommissioning costs for nuclear energy) - tend to be low for fossil fuel power stations; high for wind turbines, solar PV; very high for waste to energy, wave and tidal, solar thermal, and nuclear.
Fuel costs - high for fossil fuel and biomass sources, low for nuclear, and zero for many renewables.
Factors such as the costs of waste (and associated issues) and different insurance costs are not included in the following.
Works power, own use or parasitic load - that is, the portion of generated power actually used to run the station's pumps and fans - has to be allowed for.
To evaluate the total cost of production of electricity, the streams of costs are converted to a net present value using the time value of money. These costs are all brought together using discounted cash flow.[2][3]
Levelized cost of electricity
The levelized cost of electricity (LCOE), also known as Levelized Energy Cost (LEC), is the net present value of the unit-cost of electricity over the lifetime of a generating asset. It is often taken as a proxy for the average price that the generating asset must receive in a market to break even over its lifetime. It is a first-order economic assessment of the cost competitiveness of an electricity-generating system that incorporates all costs over its lifetime: initial investment, operations and maintenance, cost of fuel, cost of capital.
The levelized cost is that value for which an equal-valued fixed revenue delivered over the life of the asset's generating profile would cause the project to break even. This can be roughly calculated as the net present value of all costs over the lifetime of the asset divided by the total electricity output of the asset.[4]
The levelized cost of electricity (LCOE) is given by:
$$ \mathrm{LCOE} = \frac{\text{sum of costs over lifetime}}{\text{sum of electricity produced over lifetime}} = \frac{\sum_{t=1}^{n} \frac{I_t + M_t + F_t}{\left(1+r\right)^t}}{\sum_{t=1}^{n} \frac{E_t}{\left(1+r\right)^{t}}} $$
where:
I_t : investment expenditures in the year t
M_t : operations and maintenance expenditures in the year t
F_t : fuel expenditures in the year t
E_t : electricity generation in the year t
r : discount rate
n : expected lifetime of system or power station
Note: Some caution must be taken when using formulas for the levelized cost, as they often embody unseen assumptions, neglect effects like taxes, and may be specified in real or nominal levelized cost. For example, other versions of the above formula do not discount the electricity stream.
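As an illustration, a minimal Python sketch of the discounted-sum formula above; the cost and output streams in the example are made-up numbers, not data from any of the studies cited below:

def lcoe(invest, om, fuel, energy, r):
    # yearly streams indexed t = 1..n; discounted costs divided by discounted output
    n = len(energy)
    cost = sum((invest[t] + om[t] + fuel[t]) / (1 + r) ** (t + 1) for t in range(n))
    output = sum(energy[t] / (1 + r) ** (t + 1) for t in range(n))
    return cost / output

# toy example: 1000 of capital in year 1, then 20 years of operation with
# 50/yr O&M, no fuel, 300 MWh/yr, at a 5% discount rate
invest = [1000] + [0] * 20
om = [0] + [50] * 20
fuel = [0] * 21
energy = [0] + [300] * 20
print(round(lcoe(invest, om, fuel, energy, 0.05), 3))   # currency units per MWh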
Typically the LCOE is calculated over the design lifetime of a plant, which is usually 20 to 40 years, and given in units of currency per kilowatt-hour or per megawatt-hour, for example AUD/kWh, EUR/kWh or AUD/MWh (as tabulated below).[5] However, care should be taken in comparing different LCOE studies and the sources of the information, as the LCOE for a given energy source is highly dependent on the assumptions, financing terms and technological deployment analyzed.[6] In particular, the assumed capacity factor has a significant impact on the calculation of LCOE. Thus, a key requirement for the analysis is a clear statement of the applicability of the analysis based on justified assumptions.[6]
Many scholars, such as Paul Joskow, have described limits to the "levelized cost of electricity" metric for comparing new generating sources. In particular, LCOE ignores time effects associated with matching production to demand. This happens at two levels: (1) dispatchability, the ability of a generating system to come online, go offline, or ramp up or down, quickly as demand swings; and (2) the extent to which the availability profile matches or conflicts with the market demand profile. Thermally lethargic technologies like coal and nuclear are physically incapable of fast ramping. Capital intensive technologies such as wind, solar, and nuclear are economically disadvantaged unless generating at maximum availability since the LCOE is nearly all sunk-cost capital investment. Intermittent power sources, such as wind and solar, may incur extra costs associated with needing to have storage or backup generation available.[7] At the same time, intermittent sources can be competitive if they are available to produce when demand and prices are highest, such as solar during mid-day peaks seen in summertime load profiles.[6] Despite these time limitations, leveling costs is often a necessary prerequisite for making comparisons on an equal footing before demand profiles are considered, and the levelized-cost metric is widely used for comparing technologies at the margin, where grid implications of new generation can be neglected.
Avoided cost
The US Energy Information Administration has recommended that levelized costs of non-dispatchable sources such as wind or solar may be better compared to the avoided energy cost rather than to the LCOE of dispatchable sources such as fossil fuels or geothermal. This is because introduction of fluctuating power sources may or may not avoid capital and maintenance costs of backup dispatchable sources. The Levelized Avoided Cost of Energy (LACE) is the avoided cost from other sources divided by the annual output of the non-dispatchable source. However, the avoided cost is much harder to calculate accurately.[8][9]
Marginal cost of electricity
A more accurate economic assessment might be the marginal cost of electricity. This value works by comparing the added system cost of increasing electricity generation from one source versus that from other sources of electricity generation (see Merit Order).
External costs of energy sources
Typically pricing of electricity from various energy sources may not include all external costs - that is, the costs indirectly borne by society as a whole as a consequence of using that energy source.[10] These may include enabling costs, environmental impacts, usage lifespans, energy storage, recycling costs, or beyond-insurance accident effects.
The US Energy Information Administration predicts that coal and gas will continue to be used to deliver the majority of the world's electricity.[11] The resulting emissions and associated sea-level rise are expected to result in the evacuation of millions of homes in low-lying areas, and an annual cost of hundreds of billions of dollars' worth of property damage.[12][13][14][15][16][17][18]
Furthermore, with a number of island nations becoming slowly submerged underwater due to rising sea levels,[19] massive international climate litigation lawsuits against fossil fuel users are currently beginning in the International Court of Justice.[20][21]
An EU funded research study known as ExternE, or Externalities of Energy, undertaken over the period of 1995 to 2005 found that the cost of producing electricity from coal or oil would double over its present value, and the cost of electricity production from gas would increase by 30% if external costs such as damage to the environment and to human health, from the particulate matter, nitrogen oxides, chromium VI, river water alkalinity, mercury poisoning and arsenic emissions produced by these sources, were taken into account. It was estimated in the study that these external, downstream, fossil fuel costs amount up to 1%-2% of the EU's entire Gross Domestic Product (GDP), and this was before the external cost of global warming from these sources was even included.[22][23] Coal has the highest external cost in the EU, and global warming is the largest part of that cost.[10]
A means to address a part of the external costs of fossil fuel generation is carbon pricing - the method most favored by economists for reducing global-warming emissions. Carbon pricing charges those who emit carbon dioxide (CO2) for their emissions. That charge, called a 'carbon price', is the amount that must be paid for the right to emit one tonne of CO2 into the atmosphere.[24] Carbon pricing usually takes the form of a carbon tax or a requirement to purchase permits to emit (also called "allowances").
Depending on the assumptions about possible accidents and their probabilities, external costs for nuclear power vary significantly and can reach between 0.2 and 200 ct/kWh.[25] Furthermore, nuclear power works under an insurance framework that limits or structures accident liabilities in accordance with the Paris convention on nuclear third-party liability, the Brussels supplementary convention, and the Vienna convention on civil liability for nuclear damage,[26] and in the U.S. the Price-Anderson Act. It is often argued that this potential shortfall in liability represents an external cost not included in the cost of nuclear electricity; but the cost is small, amounting to about 0.1% of the levelized cost of electricity, according to a CBO study.[27]
These beyond-insurance costs for worst-case scenarios are not unique to nuclear power, as hydroelectric power plants are similarly not fully insured against a catastrophic event such as the Banqiao Dam disaster, where 11 million people lost their homes and from 30,000 to 200,000 people died, or large dam failures in general. As private insurers base dam insurance premiums on limited scenarios, major disaster insurance in this sector is likewise provided by the state.[28]
Because externalities are diffuse in their effect, external costs cannot be measured directly, but must be estimated. One approach to estimating the external costs of the environmental impact of electricity is the Methodological Convention of the Federal Environment Agency of Germany. That method arrives at external costs of electricity from lignite at 10.75 Eurocent/kWh, from hard coal 8.94 Eurocent/kWh, from natural gas 4.91 Eurocent/kWh, from photovoltaics 1.18 Eurocent/kWh, from wind 0.26 Eurocent/kWh and from hydro 0.18 Eurocent/kWh.[29] For nuclear energy the Federal Environment Agency indicates no value, as different studies have results that vary by a factor of 1,000. Given the huge uncertainty, it recommends evaluating nuclear energy with the cost of the next most unfavourable energy source.[30] Based on this recommendation, and with its own method, the Forum Ecological-social market economy arrives at external environmental costs of nuclear energy of 10.7 to 34 ct/kWh.[31]
Additional cost factors
Calculations often do not include wider system costs associated with each type of plant, such as long-distance transmission connections to grids, or balancing and reserve costs. Calculations do not include externalities such as health damage by coal plants, nor the effect of CO2 emissions on climate change, ocean acidification and eutrophication, or ocean current shifts. Decommissioning costs of nuclear plants are usually not included (the USA is an exception, because the cost of decommissioning is included in the price of electricity per the Nuclear Waste Policy Act), so the calculation is not full cost accounting. These types of items can be explicitly added as necessary depending on the purpose of the calculation. It has little relation to the actual price of power, but assists policy makers and others to guide discussions and decision making.
These are not minor factors but very significantly affect all responsible power decisions:
Comparisons of life-cycle greenhouse gas emissions show coal, for instance, to be radically higher in terms of GHGs than any alternative. Accordingly, in the analysis below, carbon captured coal is generally treated as a separate source rather than being averaged in with other coal.
Other environmental concerns with electricity generation include acid rain, ocean acidification and effect of coal extraction on watersheds.
Various human health concerns with electricity generation, including asthma and smog, now dominate decisions in developed nations that incur health care costs publicly. A Harvard University Medical School study estimates the US health costs of coal alone at between 300 and 500 billion US dollars annually.[32]
While cost per kWh of transmission varies drastically with distance, the long complex projects required to clear or even upgrade transmission routes make even attractive new supplies often uncompetitive with conservation measures (see below), because the timing of payoff must take the transmission upgrade into account.
The following table gives a selection of LCOEs from two major government reports from Australia.[33][34] These figures do not include any cost for the greenhouse gas emissions (such as under carbon tax or emissions trading scenarios) associated with the different technologies. Note also that the cost of wind and solar has dropped dramatically since 2006: for example, over the 5 years 2009–2014 solar costs fell by 75%, making them comparable to coal, and they are expected to continue dropping over the next 5 years by another 45% from 2014 prices.[35] Wind has been cheaper than coal since 2013, whereas coal and gas will only become less viable as subsidies may be withdrawn and there is the expectation that they will eventually have to pay the costs of pollution.
LCOE in AUD per MWh (2006)
Coal 28–38
Coal: IGCC + CCS 53–98
Coal: supercritical pulverized + CCS 64–106
Open-cycle Gas Turbine 101
Hot fractured rocks 89
Gas: combined cycle 37–54
Gas: combined cycle + CCS 53–93
Small Hydro power 55
Wind power: high capacity factor 63
Solar thermal 85
Biomass 88
Photovoltaics 120
The International Energy Agency and EDF estimated the following costs for 2011. For nuclear power they include the costs of new safety investments to upgrade the French nuclear fleet after the Fukushima Daiichi nuclear disaster; the cost of those investments is estimated at 4 €/MWh. Concerning solar power, the estimate of 293 €/MWh is for a large plant capable of producing in the range of 50–100 GWh/year located in a favorable location (such as in Southern Europe). For a small household plant that typically produces around 3 MWh/year, the cost is between 400 and 700 €/MWh, depending on the location. Solar power is currently by far the most expensive renewable source of electricity, although increasing efficiency and longer lifespans of photovoltaic panels together with reduced production costs could make this source of energy more competitive.
French LCOE in €/MWh (2011)
Cost in 2011
Hydro power 20
Nuclear (with State-covered insurance costs) 50
Natural gas turbines without CO2 capture 61
Onshore wind 69
Solar farms 293
In November 2013, the Fraunhofer Institute assessed the levelised generation costs for newly built power plants in the German electricity sector.[36] PV systems reached LCOE between 0.078 and 0.142 Euro/kWh in the third quarter of 2013, depending on the type of power plant (ground-mounted utility-scale or small rooftop solar PV) and average German insolation of 1000 to 1200 kWh/m² per year (GHI). There are no LCOE-figures available for electricity generated by recently built German nuclear power plants as none have been constructed since the late 1980s.
German LCOE in €/MWh (2013)
Cost range in 2013
Coal-fired power plants (brown coal) 38–53
Coal-fired power plant (hard coal) 63–80
CCGT power plants (cogeneration) 75–98
Onshore wind farms 45–107
Offshore wind power 119–194
PV systems 78–142
Biogas power plant 135–250
Source: Fraunhofer Institute- Levelized cost of electricity renewable energy technologies[36]
A 2010 study by the Japanese government (pre-Fukushima disaster), called the Energy White Paper, concluded that the cost per kilowatt-hour was ¥49 for solar, ¥10 to ¥14 for wind, and ¥5 or ¥6 for nuclear power. Masayoshi Son, an advocate for renewable energy, has however pointed out that the government estimates for nuclear power did not include the costs for reprocessing the fuel or disaster insurance liability. Son estimated that if these costs were included, the cost of nuclear power was about the same as wind power.[37][38][39]
The Institution of Engineers and Shipbuilders in Scotland commissioned a former Director of Operations of the British National Grid, Colin Gibson, to produce a report on generation levelised costs that for the first time would include some of the transmission costs as well as the generation costs. This was published in December 2011 and is available on the internet.[40] The institution seeks to encourage debate of the issue, and has taken the unusual step among compilers of such studies of publishing a spreadsheet of its data on the internet.[41]
On 27 February 2015 Vattenfall Vindkraft AS agreed to build the Horns Rev 3 offshore wind farm at a price of 10.31 Eurocent per kWh. This has been quoted as below 100 UK pounds per MWh.
In 2013 in the United Kingdom, for a new-build nuclear power plant (Hinkley Point C: completion 2023), a feed-in tariff of 92.50 pounds/MWh (around 142 USD/MWh) plus compensation for inflation with a running time of 35 years was agreed.[42][43]
DECC
More recent UK estimates are the Mott MacDonald study released by DECC in June 2010[44] and the Arup study for DECC published in 2011.[45]
UK LCOE in £/MWh (2010)
Cost range (£/MWh)[44]
Natural gas turbine, no CO2 capture 55 – 110
Natural gas turbines with CO2 capture 60 – 130
Biomass 60 – 120
New nuclear(a) 80 – 105
Onshore wind 80 – 110
Coal with CO2 capture 100 – 155
Solar farms 125 – 180
Offshore wind 150 – 210
Tidal power 155 – 390
(a) new nuclear power: guaranteed strike price of £92.50/MWh for Hinkley Point C in 2023[46][47]
In March 2010, a new report on UK levelised generation costs was published by Parsons Brinckerhoff.[48] It puts a range on each cost due to various uncertainties. Combined cycle gas turbines without CO2 capture are not directly comparable to the other low-carbon-emission generation technologies in the study. The assumptions used in this study are given in the report.
Energy Information Administration
The following data are from the Energy Information Administration's (EIA) Annual Energy Outlook released in 2015 (AEO2015), in dollars per megawatt-hour (2013 USD/MWh). These figures are estimates for plants going into service in 2020.[49] The LCOE below is calculated based on a 30-year cost recovery period using a real after-tax weighted average cost of capital (WACC) of 6.1%. For carbon-intensive technologies 3 percentage points are added to the WACC (this is approximately equivalent to a fee of $15 per metric ton of carbon dioxide).
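For illustration only, the hedged lcoe() sketch given earlier can be evaluated at the two discount rates to show the direction of the effect; the cost streams are the same made-up numbers as before, not EIA inputs:

# illustrative only: reuses the toy streams from the earlier lcoe() sketch
base = lcoe(invest, om, fuel, energy, 0.061)     # 6.1% real after-tax WACC
carbon = lcoe(invest, om, fuel, energy, 0.091)   # +3 percentage points for carbon-intensive plants
print(round(base, 3), round(carbon, 3))          # the higher WACC raises the LCOE of capital-heavy plants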
Since 2010, the US Energy Information Administration (EIA) has published the Annual Energy Outlook (AEO), with yearly LCOE projections for future utility-scale facilities to be commissioned in about five years' time. In 2015, the EIA was criticized by the Advanced Energy Economy (AEE) Institute after the release of the AEO 2015 report for "consistently underestimat[ing] the growth rate of renewable energy, leading to 'misperceptions' about the performance of these resources in the marketplace". AEE points out that the average power purchase agreement (PPA) for wind power was already at $24/MWh in 2013. Likewise, PPA agreements for utility-scale solar PV are seen at current levels of $50–$75/MWh.[50] These figures contrast strongly with EIA's estimated LCOE of $125/MWh (or $114/MWh including subsidies) for solar PV in 2020.[51]
Projected LCOE in the U.S. by 2020 (as of 2015)
Power generating technology: Minimum / Average / Maximum (2013 USD/MWh)
Conventional Coal 87.1 95.1 119
IGCC (Integrated Coal-Gasification Combined Cycle) 106.1 115.7 136.1
IGCC with CCS 132.9 144.4 160.4
Natural Gas-fired:
NG[A]: Conventional Combined Cycle 70.4 75.2 85.5
NG[A]: Advanced Combined Cycle 68.6 72.6 81.7
NG[A]: Advanced CC with CCS 93.3 100.2 110.8
NG[A]: Conventional Combustion Turbine 107.3 141.5 156.4
NG[A]: Advanced Combustion Turbine 94.6 113.5 126.8
Advanced Nuclear 91.8 95.2 101
Geothermal 43.8 47.8 52.1
Biomass 90 100.5 117.4
Wind onshore 65.6 73.6 81.6
Wind offshore 169.5 196.9 269.8
Solar PV 97.8 125.3 193.3
Solar Thermal 174.4 239.7 382.5
Hydro 69.3 83.5 107.2
[A] Natural Gas
The electricity sources which had the most decrease in estimated costs over the period 2010 to 2015 were solar photovoltaic (down 68%), onshore wind (down 51%) and advanced nuclear (down 20%).
For utility-scale generation put into service in 2040, the EIA estimated in 2015 that there would be further reductions in the constant-dollar cost of solar thermal (down 18%), solar photovoltaic (down 15%), offshore wind (down 11%), and advanced nuclear (down 7%). The cost of onshore wind was expected to rise slightly (up 2%) by 2040, while natural gas combined cycle electricity was expected to increase 9% to 10% over the period.[51]
Historical summary of EIA's LCOE projections by release year, 2010–2015 (estimates in $/MWh for conventional coal, NG combined cycle and other technologies; the per-year rows of the original table are not recoverable from the extracted text)
Note: Projected LCOE are adjusted for inflation and calculated on constant dollars based on two years prior to the release year of the estimate.
Estimates are given without any subsidies. Transmission costs for non-dispatchable sources are on average much higher.
NREL OpenEI (2014)
OpenEI, sponsored jointly by the US DOE and the National Renewable Energy Laboratory (NREL), has compiled a historical cost-of-generation database[57] covering a wide variety of generation sources. Because the data is open source it may be subject to frequent revision.
LCOE from OpenEI DB as of June, 2015
Plant Type: Max / Median / Min (USD/MWh)
Wind, onshore 80 40
Wind, offshore 200 100
Solar PV 250 110 60
Solar CSP 220 100
Geothermal Hydrothermal&& 100 50
Blind Geothermal&& 100
Enhanced Geothermal 130 80
Small Hydropower&& 140
Hydropower&& 100 70 30
Ocean&& 250 240 230
Biopower 110 90
Distributed Generation 130 70 10
Fuel Cell 160 100
Natural Gas Combined Cycle 80 50
Natural Gas Combustion Turbine 200 140
Coal, pulverized, scrubbed 150 60
Coal, pulverized, unscrubbed^^ 40
Coal, integrated gasification, combined cycle 170 100
Nuclear 130 90
&& = Data from 2011
^^ = Data from 2008
All other Data from 2014
Only Median value = only one data point.
Only Max + Min value = Only two data points.
California Energy Commission (2007)
A draft report of LECs used by the California Energy Commission is available.[58] From this report, the price per MWh for a municipal energy source is shown here:
California levelized energy costs for different generation technologies in US dollars per megawatt hour (2007)
Cost (US$/MWh)
Advanced Nuclear 67
Gas 87–346
Geothermal 67
Hydro power 48–86
Wind power 60
Solar 116–312
Biomass 47–117
Fuel Cell 86–111
Wave Power 611
Note that the above figures incorporate tax breaks for the various forms of power plants. Subsidies range from 0% (for Coal) to 14% (for nuclear) to over 100% (for solar).
Lazard (2014)
In the summer of 2014, the investment bank Lazard, headquartered in New York, published a study on the current electricity production costs of photovoltaics in the US compared to conventional power generators. The best large-scale photovoltaic power plants can produce electricity at 60 USD per MWh. The average value for such large power plants is currently 72 USD per MWh, with an upper limit of 86 USD per MWh. In comparison, coal-fired plants are between 66 and 151 USD per MWh, and nuclear power is at 124 USD per MWh. Small photovoltaic power plants on roofs of houses are still at 126–265 USD per MWh, but avoid electricity transport costs. Onshore wind turbines are at 37–81 USD per MWh. One drawback the study notes is the volatility of solar and wind power; one solution it suggests is battery storage, which has so far remained expensive.[59][60]
Below is the complete list of LCOEs by source from the investment bank Lazard.[59]
Plant Type: Low / High (USD/MWh)
Solar PV-Rooftop Residential 180 265
Solar PV-Rooftop C&I 126 177
Solar PV-Crystalline Utility Scale 72 86
Solar PV-Thin Film Utility Scale 72 86
Solar Thermal with Storage 118 130
Microturbine 102 135
Geothermal 89 142
Biomass Direct 87 116
Wind 37 81
Energy Efficiency 0 50
Battery Storage 265 324
Diesel Generator 297 332
Gas Peaking 179 230
IGCC 102 171
Nuclear 92 132
Coal 66 151
Gas Combined Cycle 61 87
Under a power purchase agreement signed in the United States in July 2015, for a period of 20 years, solar power will be paid 3.87 US cents per kilowatt-hour (38.7 USD/MWh). The solar plant producing this power is located in Nevada and has a capacity of 100 MW.[61]
Other studies and analysis
Nuclear Energy Agency (2012)
In November 2012, the OECD Nuclear Energy Agency published a report with the title System effects in low carbon energy systems.[62] In this report NEA looks at the interactions of dispatchable energy technologies (fossil and nuclear) and variable renewables (solar and wind) in terms of their effects on electricity systems. These grid-level system costs differ from the levelized cost of electricity metric that scholars like Paul Joskow have criticised as incomplete, as they also include costs related to the electricity grid, such as extending and reinforcing transport and distribution grids, connecting new capacity to the grid, and the additional costs of providing back-up capacity for balancing the grid. NEA calculated these costs for a number of OECD countries with different levels of penetration for each energy source.[62] This report has been criticized for its adequacy and its methodology.[63][64] The Swedish Royal Institute of Technology (KTH) in Stockholm published a report in response, finding "several question marks concerning the calculation methods".[65]:5 While the grid-level system cost in the 2012 OECD-NEA report is calculated to be $17.70 per MWh for 10% onshore wind in Finland, the Swedish Royal Institute of Technology concludes in its analysis that these costs are rather $0 to $3.70 per MWh (79% to 100% less than NEA's calculation), as they are either much smaller or already included in the market.[65]:23–24
Estimated Grid-Level Systems Cost, 2012 (USD/MWh)[62]:8
Penetration level: 10% / 30% for each technology, in the order Nuclear, Coal, Gas, Onshore wind, Offshore wind, Solar.
Backup costs (adequacy): 0.00/0.00, 0.04/0.04, 0.00/0.00, 5.61/6.14, 2.10/6.85, 0.00/10.45
Balancing costs: 0.16/0.10, 0.00/0.00, 0.00/0.00, 2.00/5.00, 2.00/5.00, 2.00/5.00
Grid connection: 1.56/1.56, 1.03/1.03, 0.51/0.51, 6.50/6.50, 15.24/15.24, 10.05/10.05
Grid reinforcement & extension: 0.00/0.00, 0.00/0.00, 0.00/0.00, 2.20/2.20, 1.18/1.18, 2.77/2.77
Total grid-level system costs: 1.72/1.66, 1.07/1.07, 0.51/0.51, 16.31/19.84, 20.52/28.27, 14.82/28.27
Brookings Institution (2014)
In 2014, the Brookings Institution published The Net Benefits of Low and No-Carbon Electricity Technologies which states, after performing an energy and emissions cost analysis, that "The net benefits of new nuclear, hydro, and natural gas combined cycle plants far outweigh the net benefits of new wind or solar plants", with the most cost effective low carbon power technology being determined to be nuclear power.[66][67]
Comparison of different studies (2004–2009)
Several studies compared the levelized cost of nuclear and fossil power generation. These include studies from the Royal Academy of Engineering (UK 2004), University of Chicago (US 2004), Canadian Energy Research Institute (CAN 2004), the United Kingdom Department of Trade and Industry (UK 2006), the European Commission (BEL 2008), the House of Lords Select Committee on Economic Affairs (UK 2008) and MIT (US 2009).
Analysis from different sources (2009)
European PV LCOE range projection 2010–2020 (in €-cts/kWh)[68]
Price history of silicon PV cells since 1977
Photovoltaic prices have fallen from $76.67 per watt in 1977 to an estimated $0.30 per watt in 2015, for crystalline silicon solar cells.[69][70] This is seen as evidence supporting Swanson's law, an observation similar to the famous Moore's Law, that states that solar cell prices fall 20% for every doubling of industry capacity.
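As a rough consistency check of the two price points and the 20% learning rate quoted above, the short calculation below (plain Python; the framing as a learning curve is the standard reading of Swanson's law) estimates how many doublings of cumulative capacity that price decline would imply.

```python
import math

# Swanson's law: module price falls about 20% per doubling of cumulative capacity.
p_1977 = 76.67        # USD per watt in 1977
p_2015 = 0.30         # USD per watt, 2015 estimate
learning_rate = 0.20

# price = p_1977 * (1 - learning_rate) ** doublings  =>  solve for doublings
doublings = math.log(p_2015 / p_1977) / math.log(1.0 - learning_rate)
print(f"Implied doublings of cumulative capacity: {doublings:.1f}")  # about 25
```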
By 2011, the price of PV modules per MW had fallen by 60% since 2008, according to Bloomberg New Energy Finance estimates, putting solar power for the first time on a competitive footing with the retail price of electricity in some sunny countries; an alternative and consistent price decline figure of 75% from 2007 to 2012 has also been published,[71] though it is unclear whether these figures are specific to the United States or generally global. The levelised cost of electricity (LCOE) from PV is competitive with conventional electricity sources in an expanding list of geographic regions,[6] particularly when the time of generation is included, as electricity is worth more during the day than at night.[72] There has been fierce competition in the supply chain, and further improvements in the levelised cost of energy for solar lie ahead, posing a growing threat to the dominance of fossil fuel generation sources in the next few years.[73] As time progresses, renewable energy technologies generally get cheaper,[74][75] while fossil fuels generally get more expensive:
The less solar power costs, the more favorably it compares to conventional power, and the more attractive it becomes to utilities and energy users around the globe. Utility-scale solar power can now be delivered in California at prices well below $100/MWh ($0.10/kWh), less than most other peak generators, even those running on low-cost natural gas. Lower solar module costs also stimulate demand from consumer markets where the cost of solar compares very favourably to retail electric rates.[76]
In the year 2015, First Solar agreed to supply solar power at 3.87 cents/kWh levelised price from its 100 MW Playa Solar 2 project which is far cheaper than the electricity sale price from conventional electricity generation plants.[77]
It is now evident that, given a carbon price of $50/ton, which would raise the price of coal-fired power by 5c/kWh, solar PV, Wind, and Nuclear will be cost-competitive in most locations. The declining price of PV has been reflected in rapidly growing installations, totalling about 23 GW in 2011. Although some consolidation is likely in 2012, due to support cuts in the large markets of Germany and Italy, strong growth seems likely to continue for the rest of the decade. Already, by one estimate, total investment in renewables for 2011 exceeded investment in carbon-based electricity generation.[78]
In the case of self-consumption, payback time is calculated based on how much electricity is not bought from the grid. Additionally, using PV solar power to charge DC batteries, as used in Plug-in Hybrid Electric Vehicles and Electric Vehicles, leads to greater efficiencies, but higher costs. Traditionally, DC electricity generated from solar PV must be converted to AC for buildings, at an average 10% loss during the conversion. Inverter technology is rapidly improving and current equipment has reached over 96% efficiency for small scale residential installations, while commercial scale three-phase equipment can reach well above 98% efficiency. However, an additional efficiency loss occurs in the transition back to DC for battery-driven devices and vehicles. Using various interest rates and energy price changes, present values of these savings were calculated to range from $2,057.13 to $8,213.64 (analysis from 2009).[79]
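The present-value figures quoted above come from discounting a stream of avoided electricity purchases over the life of the system. The sketch below illustrates that type of calculation; the annual savings, price escalation, discount rate, and lifetime are arbitrary assumptions, not the parameters of the cited 2009 analysis.

```python
# Present value of avoided electricity purchases from a self-consumed PV system.
# Illustrative inputs only; not the parameters of the cited 2009 analysis.

def present_value(annual_savings, escalation, discount_rate, years):
    """Discounted value of savings that grow with the electricity price."""
    pv = 0.0
    for year in range(1, years + 1):
        cash_flow = annual_savings * (1.0 + escalation) ** (year - 1)
        pv += cash_flow / (1.0 + discount_rate) ** year
    return pv

# e.g. $400/yr of avoided purchases, 3% price escalation, 6% discount, 25 years
print(round(present_value(400.0, 0.03, 0.06, 25), 2))
```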
NREL projection: the LCOE of U.S. wind power will decline by 25% from 2012 to 2030.[80]
Estimated cost per MWh for wind power in Denmark
In 2004, wind energy cost a fifth of what it did in the 1980s, and some expected that downward trend to continue as larger multi-megawatt turbines were mass-produced.[81] As of 2012 capital costs for wind turbines are substantially lower than 2008–2010 but are still above 2002 levels.[82] A 2011 report from the American Wind Energy Association stated, "Wind's costs have dropped over the past two years, in the range of 5 to 6 cents per kilowatt-hour recently.... about 2 cents cheaper than coal-fired electricity, and more projects were financed through debt arrangements than tax equity structures last year.... winning more mainstream acceptance from Wall Street's banks.... Equipment makers can also deliver products in the same year that they are ordered instead of waiting up to three years as was the case in previous cycles.... 5,600 MW of new installed capacity is under construction in the United States, more than double the number at this point in 2010. Thirty-five percent of all new power generation built in the United States since 2005 has come from wind, more than new gas and coal plants combined, as power providers are increasingly enticed to wind as a convenient hedge against unpredictable commodity price moves."[83]
This cost has been further reduced as wind turbine technology has improved. There are now longer and lighter wind turbine blades, improvements in turbine performance and increased power generation efficiency. Also, wind project capital and maintenance costs have continued to decline.[84] For example, the wind industry in the USA is now able to produce more power at lower cost by using taller wind turbines with longer blades, capturing the faster winds at higher elevations. This has opened up new opportunities in Indiana, Michigan, and Ohio. The price of power from wind turbines built 300 feet to 400 feet above the ground can now compete with conventional fossil fuels like coal. Prices have fallen to about 4 cents per kilowatt-hour in some cases and utilities have been increasing the amount of wind energy in their portfolios, saying it is their cheapest option.[85]
Electricity pricing
Comparisons of life-cycle greenhouse gas emissions
Economics of new nuclear power plants
Intermittent energy source
National Grid Reserve Service
Nuclear power in France
List of thermal power station failures
Calculating the cost of the UK Transmission network: cost per kWh of transmission
List of countries by electricity production from renewable sources
List of U.S. states by electricity production from renewable sources
Environmental concerns with electricity generation
Grid parity
Nuclear Power: Still Not Viable without Subsidies. February 2011. By Doug Koplow. Union of Concerned Scientists.
Levelized Cost of New Electricity Generating Technologies. Institute for Energy Research.
Economic Value of U.S. Fossil Fuel Electricity Health Impacts. United States Environmental Protection Agency.
The Hidden Costs of Electricity: Comparing the Hidden Costs of Power Generation Fuels. Civil Society Institute.
^ A Review of Electricity Unit Cost Estimates Working Paper, December 2006 - Updated May 2007
^ Nuclear Energy Agency/International Energy Agency/Organization for Economic Cooperation and Development Projected Costs of Generating Electricity (2005 Update)
^ K. Branker, M. J.M. Pathak, J. M. Pearce, doi:10.1016/j.rser.2011.07.104 A Review of Solar Photovoltaic Levelized Cost of Electricity, Renewable and Sustainable Energy Reviews 15, pp.4470-4482 (2011). Open access
^ a b c d Open access
^ Comparing the Costs of Intermittent and Dispatchable Electricity-Generating Technologies", by Paul Joskow, Massachusetts Institute of Technology, September 2011
^ US Energy Information Administration, Levelized cost of new generation resources, 28 January 2013.
^ Levelized Cost and Levelized Avoided Cost of New Generation Resources in the Annual Energy Outlook 2015- US Energy Information Administration
^ a b "Subsidies and costs of EU energy. Project number: DESNL14583" Pages: 52. EcoFys, 10 October 2014. Accessed: 20 October 2014. Size: 70 pages in 2MB.
^ International Energy Outlook: Electricity "Although coal-fired generation increases by an annual average of only 1.9 percent, it remains the largest source of electricity generation through 2035. In 2008, coal-fired generation accounted for 40 percent of world electricity supply; in 2035, its share decreases to 37 percent, as renewables, natural gas, and nuclear power all are expected to advance strongly during the projection and displace the need for coal-fired-generation in many parts of the world. World net coal-fired generation grows by 67 percent, from 7.7 trillion kilowatthours in 2008 to 12.9 trillion kilowatthours in 2035."
^ The economic impact of global warming
^ Climate change threatens Australia's coastal lifestyle, report warns | Environment | The Guardian
^ Tufts Civil Engineer Predicts Boston's Rising Sea Levels Could Cause Billions Of Dollars In Damage
^ Rising Sea Levels' cost on Boston
^ Tufts University slide 28, note projected Bangladesh evacuation
^ The Hidden costs of Fossil fuels
^ Rising Sea Level
^ Five nations under threat from climate change
^ Tiny Pacific nation takes on Australia
^ See you in court: the rising tide of international climate litigation
^ New research reveals the real costs of electricity in Europe
^ ExternE-Pol, External costs of current and advanced electricity systems, associated with emissions from the operation of power plants and with the rest of the energy chain, final technical report. See figure 9, 9b and figure 11
^ IPCC, Glossary A-D: "Climate price", in IPCC AR4 SYR 2007.
^ Viktor Wesselak, Thomas Schabbach, Thomas Link, Joachim Fischer: Regenerative Energietechnik. Springer 2013, ISBN 978-3-642-24165-9, p. 27.
^ Publications: Vienna Convention on Civil Liability for Nuclear Damage. International Atomic Energy Agency.
^ Nuclear Power's Role in Generating Electricity Congressional Budget Office, May 2008.
^ Availability of Dam Insurance 1999
^ Methodenkonvention 2.0 zur Schätzung von Umweltkosten B, Anhang B: Best-Practice-Kostensätze für Luftschadstoffe, Verkehr, Strom -und Wärmeerzeugung (PDF; 886 kB). Studie des Umweltbundesamtes (2012). Abgerufen am 23. Oktober 2013.
^ Ökonomische Bewertung von Umweltschäden METHODENKONVENTION 2.0 ZUR SCHÄTZUNG VON UMWELTKOSTEN (PDF; 799 kB), S. 27-29. Studie des Umweltbundesamtes (2012). Abgerufen am 23. Oktober 2013.
^ Externe Kosten der Atomenergie und Reformvorschläge zum Atomhaftungsrecht (PDF; 862 kB), 9/2012. Forum Ökologisch-Soziale Marktwirtschaft e.V. im Auftrag von Greenpeace Energy eG und dem Bundesverband Windenergie e.V. Abgerufen am 23. Oktober 2013.
^ Graham, P. The heat is on: the future of energy in Australia CSIRO, 2006
^ Switkowski, Z. Uranium Mining, Processing and Nuclear Energy Review UMPNER taskforce, Australian Government, 2006
^ The Climate Council The global renewable energy boom: how Australia is missing out, 2014
^ Johnston, Eric, "Son's quest for sun, wind has nuclear interests wary", Japan Times, 12 July 2011, p. 3.
^ Bird, Winifred, "Powering Japan's future", Japan Times, 24 July 2011, p. 7.
^ Johnston, Eric, "Current nuclear debate to set nation's course for decades", Japan Times, 23 September 2011, p. 1.
^ Electricity Market Reform – Delivery Plan Department of Energy and Climate Change, December 2013
^ Carsten Volkery: Kooperation mit China: Großbritannien baut erstes Atomkraftwerk seit Jahrzehnten, In: Spiegel Online vom 21. Oktober 2013.
^ U.S. Energy Information Administration (EIA) - Source
^ a b c US Energy Information Administration, Levelized cost and levelized avoided cost of new generation resources in the Annual Energy Outlook 2015, 14 April 2015
^ US Energy Information Administration, 2016 Levelized cost of new generation resources in the Annual Energy Outlook 2010, 26 April 2010
^ US Energy Information Administration, Levelized cost of new generation resources in the Annual Energy Outlook 2011, 26 April 2011
^ US Energy Information Administration, Levelized cost of new generation resources in the Annual Energy Outlook 2012, 12 July 2012
^ US Energy Information Administration, Levelized cost of new generation resources in the Annual Energy Outlook 2013, 28 Jan. 2013
^ US Energy Information Administration, Levelized cost and levelized avoided cost of new generation resources in the Annual Energy Outlook 2014, 17 April 2014
^ OpenEI Transparent Cost Database. Accessed 06/19/2015.
^ a b LAZARD'S LEVELIZED COST OF ENERGY ANALYSIS November 2014
^ Solar and Wind Outshine Fossil Fuels November 2014
^ Buffett strikes cheapest electricity price in US with Nevada solar farm July 2015
^ a b c
^ environmentalresearchweb.org - Nuclear and renewables: back-up and grid costs
^ VTT Technical Research Centre of Finland - Note for wind energy grid level system costs published by NEA 2012 report
^ Economist magazine article "Sun, wind and drain Wind and solar power are even more expensive than is commonly thought Jul 26th 2014"
^ THE NET BENEFITS OF LOW AND NO-CARBON ELECTRICITY TECHNOLOGIES. MAY 2014, Charles Frank PDF
^ Utilities' Honest Assessment of Solar in the Electricity Supply
^ Renewable energy costs drop in '09 Reuters, November 23, 2009.
^ Solar Power 50% Cheaper By Year End – Analysis Reuters, November 24, 2009.
^ Converting Solar Energy into the PHEV Battery "VerdeL3C.com", May 2009
^ Lantz, E.; Hand, M. and Wiser, R. (13–17 May 2012) "The Past and Future Cost of Wind Energy," National Renewable Energy Laboratory conference paper no. 6A20-54526, p. 4
^ Helming, Troy (2004) "Uncle Sam's New Year's Resolution" ArizonaEnergy.org
^ Salerno, E., AWEA Director of Industry and Data Analysis, as quoted in Shahan, Z. (2011) Cost of Wind Power – Kicks Coal's Butt, Better than Natural Gas (& Could Power Your EV for $0.70/gallon)" CleanTechnica.com
Math Insight
Solutions to elementary derivative problems
Suggested background
Elementary derivative problems
The following is a set of solutions to the elementary derivative problems.
$\diff{h}{p}$ is
negative for $-3.5 < p < -2$, $0 < p < 1$, and $1 < p < 2$.
positive for $p < -3.5$, $-2 < p < 0$, and $p > 2$.
zero for $p=-2$, $p=0$, and $p=2$.
undefined for $p=-3.5$ and $p=1$.
Critical points are all the points where $\diff{h}{p}$ is zero or undefined. Critical points are $p=-3.5$, $p=-2$, $p=0$, $p=1$, and $p=2$.
$\diffn{h}{p}{2}$ is
negative for $-1 < p < 1$
positive for $-3.5 < p < -1$ and $p > 1$.
zero for $p < -3.5$ and $p=-1$.
The inflection points are the point in the domain of the function where $\diffn{h}{p}{2}$ changes sign. They are at $p=-1$ and $p=1$.
$r'(z)$
is negative for $z < -2$, $-2 < z < -1$, $-1 < z < -1/2$, and $1/2 < z < 2$.
positive for $-1/2 < z < 1/2$, $2 < z < 3$, and $z > 3$.
zero for $z=-1/2$, $z=1/2$, and $z=2$.
undefined for $z=-2$, $z=-1$, and $z=3$.
Critical points are $z=-2$, $z=-1$, $z=-1/2$, $z=1/2$, $z=2$, and $z=3$.
$r''(z)$ is
negative for $z < -2$ and $0 < z < 1$.
positive for $-2 < z < -1$, $-1 < z < 0$, $1 < z < 3$, and $z > 3$
zero for $z=0$ and $z=1$.
undefined for $z=-2$, $z=-1$, and $z=3$
Inflection points are $z=-2$, $z=0$, and $z=1$.
The critical points are labeled by the red open circles, the inflection points by the dark green open diamonds. The intervals where the derivative is positive and negative are indicated by the thin and thick purple lines labeled "increasing" and "decreasing," respectively. The intervals where the second derivative is positive and negative are indicated by the thin and thick blue lines labeled "concave up" and "concave down," respectively. The derivative is graphed by the green curve.
The critical points are labeled by the red open circles, the inflection points by the dark green open diamonds. The intervals where the derivative is positive and negative are indicated by the thin and thick purple lines, respectively. The intervals where the second derivative is positive and negative are indicated by the thin and thick blue line, respectively. The derivative is graphed by the green curve.
The critical points are labeled by the red open circles, and the derivative is not defined at those points. Also, along the horizontal line shown in red, the derivative is zero, so these are critical points as well. The second derivative is zero everywhere it is defined, which is everywhere except at the red circles. The intervals where the derivative is positive and negative are indicated by the thin and thick purple lines, respectively. The derivative is graphed by the green curve, which is constant along intervals and jumps between those intervals. The second derivative, as shown by the thick orange line, is zero everywhere it is defined. Although not shown by the thick orange line, it is not defined at the points above or below the red circles.
The linear approximation is the tangent line at $z=3$. Since $g(3)=5(3)^2-3+22 = 64$, $g'(z)=10z-1$, $g'(3)=10(3)-1 = 29$, the linear approximation is \begin{align*} L(z) = 64+29(z-3). \end{align*}
The tangent line is the same as the linear approximation of the previous problem. It is \begin{align*} L(z) = 64+29(z-3). \end{align*}
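As a quick cross-check of this result (not part of the original solution), one can let a computer algebra system differentiate $g(z)=5z^2-z+22$, read off from the computation above, and rebuild the tangent line; the sketch below assumes SymPy is available.

```python
import sympy as sp

z = sp.symbols('z')
g = 5*z**2 - z + 22            # the function from the problem

# Tangent line (= linear approximation) at z = 3
L = g.subs(z, 3) + sp.diff(g, z).subs(z, 3) * (z - 3)
print(g.subs(z, 3), sp.diff(g, z).subs(z, 3))  # 64 and 29, as computed above
print(sp.expand(L))                            # 29*z - 23, i.e. 64 + 29*(z - 3)
```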
The linear approximation is the tangent line. By the product rule, \begin{align*} \diff{k}{q} &= \diff{q}{q}e^{-q} + q \diff{e^{-q}}{q}\\ &= 1 e^{-q} + q (-e^{-q})\\ &= (1-q)e^{-q} \end{align*} The linear approximation for $k$ around $q=a$ is \begin{align*} L(q) &= k(a) + k'(a)(q-a)\\ &= ae^{-a} + (1-a)e^{-a}(q-a) \end{align*}
This answer looks uglier than the original equation! However, the ugliness is only in its dependence on $a$, which is a fixed number. It depends on $q$ only linearly.
Using the chain rule, the derivative of $m$ is \begin{align*} m'(y) = 2ye^{y^2}. \end{align*} The equation for the tangent line at $y=b$ is \begin{align*} L(y) &= m(b) + m'(b)(y-b)\\ &= e^{b^2} + 2be^{b^2}(y-b). \end{align*} The dependence on $b$ isn't pretty, but the dependence on $y$ is simple.
By the product rule \begin{align*} x'(t) &= (t^3)'\ln(t) + t^3 (\ln(t))'\\ &= 3t^2 \ln(t) + t^3 \frac{1}{t}\\ &= 3t^2 \ln(t) + t^2. \end{align*}
The slope of the tangent line at $t=2$ is $x'(2) = 3 \cdot 2^2 \ln(2) + 2^2 = 12 \ln(2)+4$. At the point $t=\bigstar$, the slope is $x'(\bigstar) = 3 \bigstar^2 \ln(\bigstar) + \bigstar^2$.
$\displaystyle\diff{y}{z} = a \frac{1}{az} = \frac{1}{z}$.
$\displaystyle\diffn{y}{z}{2} = \diff{}{z} \frac{1}{z} = -\frac{1}{z^2}.$
$z'(y)=bce^{by}$
$z'(0)=bc$
$z'(1/b) = bce^{b/b} = bce^{1}= bce$
$z''(y) = b^2ce^{by}$
$z''(0)=b^2c$
$z''(1/b)= b^2ce^{b/b} = b^2ce^{1}=b^2ce$
$h'(s)= (a^2+b^2)2se^{s^2}=2(a^2+b^2)se^{s^2}$
$h'(1)= 2(a^2+b^2)e^{1}= 2(a^2+b^2)e$
$h''(s) =2(a^2+b^2)(e^{s^2}+2s^2e^{s^2})= 2(a^2+b^2)(1+2s^2)e^{s^2}$
$h''(1) = 2(a^2+b^2)(1+2)e^{1}= 6(a^2+b^2)e$
\begin{align*} s'(u) &= \frac{(1-u) (1+u)' - (1+u)(1-u)'}{(1-u)^2}\\ &= \frac{(1-u) 1 - (1+u)(-1)}{(1-u)^2}\\ &= \frac{2}{(1-u)^2} \end{align*}
\begin{align*} g'(x) &= (x^2)'e^{-x} + x^2 (e^{-x})'\\ &= 2x e^{-x} - x^2e^{-x}\\ &= (2x -x^2)e^{-x} \end{align*}
\begin{align*} f'(x) &= (x^n)'e^{-x} + x^n(e^{-x})'\\ &= n x^{n-1}e^{-x} -x^ne^{-x}\\ &= (nx^{n-1} - x^n)e^{-x} \end{align*}
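The three derivatives above can also be verified symbolically. The check below again assumes SymPy (which the original solutions do not use); each printed difference simplifies to zero, confirming the simplified forms.

```python
import sympy as sp

u, x, n = sp.symbols('u x n')

s = (1 + u) / (1 - u)
g = x**2 * sp.exp(-x)
f = x**n * sp.exp(-x)

print(sp.simplify(sp.diff(s, u) - 2/(1 - u)**2))                     # 0
print(sp.simplify(sp.diff(g, x) - (2*x - x**2)*sp.exp(-x)))          # 0
print(sp.simplify(sp.diff(f, x) - (n*x**(n-1) - x**n)*sp.exp(-x)))   # 0
```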
The idea of the derivative of a function
Derivatives of polynomials
Derivatives of more general power functions
A refresher on the quotient rule
A refresher on the product rule
A refresher on the chain rule
Related rates
Intermediate Value Theorem, location of roots
Derivatives of transcendental functions
More similar pages
Nykamp DQ, "Solutions to elementary derivative problems." From Math Insight. http://mathinsight.org/derivative_elementary_problem_solutions
Keywords: derivative, ordinary derivative
Solutions to elementary derivative problems by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us. | CommonCrawl |
LETTER | Open | Published: 14 November 2014
The spatial density gradient of galactic cosmic rays and its solar cycle variation observed with the Global Muon Detector Network
Masayoshi Kozai1,
Kazuoki Munakata1,
Chihiro Kato1,
Takao Kuwabara2,
John W Bieber2,
Paul Evenson2,
Marlos Rockenbach3,
Alisson Dal Lago4,
Nelson J Schuch3,
Munetoshi Tokumaru5,
Marcus L Duldig6,
John E Humble6,
Ismail Sabbah7,
Hala K Al Jassar8,
Madan M Sharma8 &
Jozsef Kóta9
Earth, Planets and Space, volume 66, Article number: 151 (2014)
The Erratum to this article has been published in Earth, Planets and Space 2016 68:38
We derive the long-term variation of the three-dimensional (3D) anisotropy of approximately 60 GV galactic cosmic rays (GCRs) from the data observed with the Global Muon Detector Network (GMDN) on an hourly basis and compare it with the variation deduced from a conventional analysis of the data recorded by a single muon detector at Nagoya in Japan. The conventional analysis uses a north-south (NS) component responsive to slightly higher rigidity (approximately 80 GV) GCRs and an ecliptic component responsive to the same rigidity as the GMDN. In contrast, the GMDN provides all components at the same rigidity simultaneously. It is confirmed that the temporal variations of the 3D anisotropy vectors including the NS component derived from two analyses are fairly consistent with each other as far as the yearly mean value is concerned. We particularly compare the NS anisotropies deduced from two analyses statistically by analyzing the distributions of the NS anisotropy on hourly and daily bases. It is found that the hourly mean NS anisotropy observed by Nagoya shows a larger spread than the daily mean due to the local time-dependent contribution from the ecliptic anisotropy. The NS anisotropy derived from the GMDN, on the other hand, shows similar distribution on both the daily and hourly bases, indicating that the NS anisotropy is successfully observed by the GMDN, free from the contribution of the ecliptic anisotropy. By analyzing the NS anisotropy deduced from neutron monitor (NM) data responding to lower rigidity (approximately 17 GV) GCRs, we qualitatively confirm the rigidity dependence of the NS anisotropy in which the GMDN has an intermediate rigidity response between NMs and Nagoya. From the 3D anisotropy vector (corrected for the solar wind convection and the Compton-Getting effect arising from the Earth's orbital motion around the Sun), we deduce the variation of each modulation parameter, i.e., the radial and latitudinal density gradients and the parallel mean free path for the pitch angle scattering of GCRs in the turbulent interplanetary magnetic field. We show the derived density gradient and mean free path varying with the solar activity and magnetic cycles.
A solar disturbance propagating away from the Sun affects the population of galactic cosmic rays (GCRs) in a number of ways. Using Parker's transport equation (Parker 1965) of GCRs in the heliosphere, we can infer the large-scale spatial gradient of GCR density by measuring the anisotropy of the high-energy GCR intensity. This is influenced by magnetic structures such as interplanetary shocks and magnetic flux ropes in the interplanetary coronal mass ejections (ICMEs). Only a global network of detectors can measure the dynamic variation of the first-order anisotropy accurately and separately from the temporal variation of the GCR density. The Global Muon Detector Network (GMDN) started operation measuring the three-dimensional (3D) anisotropy on an hourly basis with two-hemisphere observations using a pair of muon detectors (MDs) at Nagoya (Japan) and Hobart (Australia) in 1992. In 2001, another small detector at São Martinho (Brazil) was added to the network to fill a gap in directional coverage over the Atlantic and Europe. The current GMDN consisting of four multidirectional muon detectors was completed in 2006 by expanding the São Martinho detector and installing a new detector in Kuwait. Since then, the temporal variations of the anisotropy and density gradient in association with the ICME and corotating interaction regions have been analyzed on an hourly basis using the observations with the GMDN (Rockenbach et al. 2014; Okazaki et al. 2008; Kuwabara et al. 2004; Kuwabara et al. 2009).
Solar cycle variations of the interplanetary magnetic field (IMF) and solar wind parameters also alter the global distribution of GCR density in the heliosphere and cause long-term variations of the 3D anisotropy of the GCR intensity at the Earth. The 'drift model' of cosmic ray transport in the heliosphere, for instance, predicts a bidirectional latitude gradient of the GCR density, pointing in opposite directions on opposite sides of the heliospheric current sheet (HCS) (Kóta and Jokipii 1982). The predicted spatial distribution of the GCR density has a minimum along the HCS in the 'positive' polarity period of the solar polar magnetic field (also referred as A>0 epoch), when the IMF directs away from (toward) the Sun in the northern (southern) hemisphere, while the distribution has the local maximum on the HCS in the 'negative' period (A<0 epoch) with the opposite field orientation in each hemisphere. The field orientation reverses every 11 years around the period of maximum solar activity. The 3D anisotropy of GCR intensity consists of two components: one lying in the ecliptic plane and the other pointing normal to the ecliptic plane. The ecliptic component can be observed as the solar diurnal anisotropy (the first harmonic vector of the solar diurnal variation) of GCR intensity, while the normal component can be measured as the north-south (NS) anisotropy responsible for the difference between intensities recorded by north- and south-viewing detectors or the sidereal diurnal anisotropy. By analyzing the solar diurnal variation and the NS anisotropy of the GCR intensity recorded by neutron monitors (NMs), Bieber and Chen (1991) and Chen and Bieber (1993) derived the solar cycle variations of 3D anisotropy and modulation parameters on a yearly basis. On the other hand, Munakata et al. (2014) derived the long-term variation of the 3D anisotropy from the long-term record of the GCR intensity observed with a single multidirectional MD at Nagoya in Japan. By comparing the anisotropy derived from the MD data with that from the NM data, they examined the rigidity dependence of the anisotropy and its solar cycle variation.
Accurate observation of the NS anisotropy normal to the ecliptic plane is also crucial for obtaining a reliable 3D anisotropy. This component has been derived from NM and MD data in two different ways. Chen and Bieber (1993) derived this component anisotropy from the difference between count rates in a pair of NMs which are located near the north and south geomagnetic poles and observing intensities of GCRs arriving from the north and south pole orientations, respectively. The NS anisotropy derived in this way is very sensitive to the stability of operations of two independent detectors and can be easily affected by unexpected changes of instrumental and/or environmental origins. Due to the 23.4° inclination of Earth's rotation axis from the ecliptic normal, the NS anisotropy normal to the ecliptic plane can be also observed as a diurnal variation of count rate in sidereal time with the maximum phase at approximately 06:00 or approximately 18:00 local sidereal time (Swinson 1969). A possible drawback of deriving the NS anisotropy from the sidereal diurnal variation is that the expected amplitude of the sidereal diurnal variation is roughly ten times smaller than that of the solar diurnal variation. The small signal in sidereal time can be easily influenced by the solar diurnal anisotropy changing during a year. Another difficulty is that one can obtain only the yearly mean anisotropy, because the influence from the solar diurnal variation, even if it is stationary throughout a year, can be canceled in sidereal time only when the diurnal variation is averaged over at least 1 year. This makes it difficult to deduce a reliable error of the yearly mean anisotropy. Mori and Nagashima (1979) proposed another way to derive the NS anisotropy from the 'GG-component' of a multidirectional MD at Nagoya in Japan. The GG-component is a difference combination between intensities recorded in the north- and south-viewing directional channels corresponding to 56° north and 14° south asymptotic latitudes in free space at their median rigidity, approximately 80 GV, designed to measure the NS anisotropy free from atmospheric temperature effect (Nagashima et al. 1972). The NS anisotropy depends on the polarity of the magnetic field. Based on this fact, Laurenza et al. (2003) showed that the GG-component can be used for deriving reliable sector polarity of the IMF which is defined as away (toward) when the IMF directs away from (toward) the Sun. By using a global network of four multidirectional MDs which is able to observe the NS anisotropy on an hourly basis, Okazaki et al. (2008) reported for the first time that the NS anisotropy deduced from the GG-component is consistent with the anisotropy observed with the global network for a year during the solar activity minimum period.
Analyses of the diurnal variation observed with a single detector, however, can give a correct anisotropy only when the anisotropy is stationary at least over 1 day and may not work if the anisotropy changes dynamically within a day. The GG-component also needs to be averaged over 1 day to cancel the influence of the ecliptic components which have components parallel to the rotation axis of the Earth and contribute to the NS difference measured by the GG-component. Additionally, the directional channels of the Nagoya MD have an angular distribution biased toward the northern hemisphere, while the GMDN has a global angular distribution. It is important, therefore, to examine whether the long-term variation of the 3D anisotropy derived from the conventional analysis of the observed diurnal variation and the GG-component is consistent with the anisotropy observed by the GMDN which is capable of accurately measuring anisotropy with better time resolution. In this paper, we analyze the 3D anisotropy observed with the GMDN over 22 years between 1992 and 2013 and compare it with the anisotropy observed with the Nagoya multidirectional MD, especially focusing on the NS anisotropy for which the GG-component has been the only reliable measurement at the 50 to 100 GV region. Based on the difference of the response rigidities between the GMDN (approximately 60 GV) and the GG-component (approximately 80 GV), we also discuss the rigidity dependence of the NS anisotropy.
We analyze the pressure-corrected hourly count rate $I_{i,j}(t)$ of recorded muons in the jth directional channel of the ith detector in the GMDN at universal time t and derive three components $\left (\xi ^{\text {GEO}}_{x}(t), \xi ^{\text {GEO}}_{y}(t), \xi ^{\text {GEO}}_{z}(t)\right)$ of the first-order anisotropy in the geographic (GEO) coordinate system by best fitting the following model function to $I_{i,j}(t)$.
$$\begin{array}{@{}rcl@{}} I^{fit}_{i,j}(t) = I^{0}_{i,j}(t) &+&\xi^{\text{GEO}}_{x}(t)\left(c^{1}_{1 i,j} \cos \omega t_{i} - s^{1}_{1 i,j} \sin \omega t_{i}\right) \\ &+&\xi^{\text{GEO}}_{y}(t)\left(s^{1}_{1 i,j} \cos \omega t_{i} + c^{1}_{1 i,j} \sin \omega t_{i}\right) \\ &+&\xi^{\text{GEO}}_{z}(t) c^{0}_{1 i,j} \end{array} $$
(1)
where $I^{0}_{i,j}(t)$ is a parameter representing the contributions from the omnidirectional intensity and the atmospheric temperature effect; t i is the local time at the ith detector; $c^{1}_{1 i,j}$ , $s^{1}_{1 i,j}$ , and $c^{0}_{1 i,j}$ are the coupling coefficients; and ω=π/12. The coupling coefficients are calculated by integrating the response function of atmospheric muons to the primary cosmic rays (Murakami et al. 1979) for primary rigidity, detective solid angle, and detection area with weights of the asymptotic orbit by assuming a rigidity-independent anisotropy with the upper limiting rigidity set at 105 GV, far above the maximum rigidity of the response.
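A minimal sketch of how this best fit can be carried out is shown below: Equation 1 is linear in the three anisotropy components, so ordinary least squares suffices. The coupling coefficients, local times, and count rates are random placeholders rather than real GMDN response functions, and the per-channel term $I^{0}_{i,j}(t)$ is collapsed into a single free offset for brevity; this is not the analysis code used for the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder geometry for a network with N directional channels.
# In the real analysis these come from the muon response functions.
N = 60
c1 = rng.normal(0.3, 0.1, N)          # coupling coefficients c^1_{1 i,j}
s1 = rng.normal(0.3, 0.1, N)          # coupling coefficients s^1_{1 i,j}
c0 = rng.normal(0.5, 0.1, N)          # coupling coefficients c^0_{1 i,j}
local_time = rng.uniform(0, 24, N)    # local time t_i of each detector (hours)
I_obs = rng.normal(1.0, 0.001, N)     # pressure-corrected relative count rates

omega = np.pi / 12.0
wt = omega * local_time

# Design matrix of Equation 1: columns multiply (I0, xi_x, xi_y, xi_z).
A = np.column_stack([
    np.ones(N),
    c1 * np.cos(wt) - s1 * np.sin(wt),
    s1 * np.cos(wt) + c1 * np.sin(wt),
    c0,
])

params, *_ = np.linalg.lstsq(A, I_obs, rcond=None)
I0, xi_x, xi_y, xi_z = params
print(xi_x, xi_y, xi_z)
```

In the actual analysis this kind of fit is repeated for every universal-time hour, which is what yields the anisotropy vector on an hourly basis.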
In deriving the anisotropy vector ξ, we additionally apply an analysis method developed to remove the influence of atmospheric temperature variations from the derived anisotropy (see Okazaki et al. 2008). Elimination of the temperature effect from the MD data is of particular importance in analyzing the long-term temporal variation of ξ. The deduced anisotropy is averaged over each IMF sector in every month designated as away (toward) if the daily polarity of the Stanford mean magnetic field of the Sun (Wilcox Solar Observatory), shifted 5 days later for a rough correction for the solar wind transit time between the Sun and the Earth, is positive (negative).
We also derive the anisotropy from observations by a single multidirectional MD at Nagoya (hereafter Nagoya MD) which is a component detector of the GMDN. By using the coupling coefficients, we deduce the equatorial component $\left ({\xi }^{\text {GEO}}_{x}, {\xi }^{\text {GEO}}_{y}\right)$ of ξ from the mean diurnal variation of the hourly counting rate in each IMF sector in every month. On the other hand, we derive the normal component to the equatorial plane, ${\xi }^{\text {GEO}}_{z}$ , from the GG-component averaged over each IMF sector by using the coupling coefficient in every month (Munakata et al. 2014). The GG-component is a difference combination between intensities recorded in the north- and south-viewing channels and has long been used as a good measure of the NS anisotropy (Mori and Nagashima 1979; Nagashima et al. 1972; Laurenza et al. 2003).
The anisotropy vector $\left ({\xi }^{\text {GEO}}_{x}, {\xi }^{\text {GEO}}_{y}, {\xi }^{\text {GEO}}_{z}\right)$ in three dimensions derived from the GMDN and Nagoya data is transformed to the geocentric solar ecliptic (GSE) coordinate system, in which the z-component corresponds to the NS component normal to the ecliptic plane, and corrected for the solar wind convection anisotropy using the solar wind velocity in the 'omnitape' (King and Papitashvili 2005; NASA 2014) data by the Space Physics Data Facility at the Goddard Space Flight Center and for the Compton-Getting anisotropy arising from the Earth's 30 km/s orbital motion around the Sun. In the corrections, we set the power law index of the GCR energy spectrum to be −2.7. We then obtain the ecliptic plane component of the anisotropy consisting of components parallel (ξ ∥) and perpendicular (ξ ⊥) to the IMF as obtained from the omnitape data and NS anisotropy (ξ z ) normal to the plane in each IMF sector in every month. We finally obtain the monthly mean three components of the anisotropy in the solar wind frame as
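For scale, the size of the Compton-Getting correction mentioned here can be estimated from the standard expression $(2+\gamma)\,v/c$, where $\gamma$ is the magnitude of the power-law index; this formula is textbook material rather than something stated in the text, so the lines below are only an order-of-magnitude check.

```python
# Order-of-magnitude estimate of the Compton-Getting anisotropy from
# Earth's 30 km/s orbital motion, using xi_CG = (2 + gamma) * v / c.
gamma = 2.7           # magnitude of the power-law index used in the analysis
v = 30.0e3            # orbital speed of the Earth in m/s
c = 2.998e8           # speed of light in m/s

xi_cg = (2.0 + gamma) * v / c
print(f"xi_CG = {xi_cg:.2e}  (about {100 * xi_cg:.3f} percent)")
```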
$$\begin{array}{@{}rcl@{}} \xi_{\|} = \left(\xi_{\|}^{T} + \xi_{\|}^{A}\right)/2 \end{array} $$
(2a)
$$\begin{array}{@{}rcl@{}} \xi_{\bot} = \left(\xi_{\bot}^{T} + \xi_{\bot}^{A}\right)/2 \end{array} $$
(2b)
$$\begin{array}{@{}rcl@{}} \xi_{z} = \left({\xi_{z}^{T}} - {\xi_{z}^{A}}\right)/2 \end{array} $$
(2c)
where $\xi _{\|}^{T}\left (\xi _{\|}^{A}\right)$ and $\xi _{\bot }^{T}\left (\xi _{\bot }^{A}\right)$ are parallel and perpendicular components in the ecliptic plane averaged over the toward (away) sector, while ${\xi _{z}^{T}}\left ({\xi _{z}^{A}}\right)$ is the NS anisotropy in the toward (away) sector. We assume that the anisotropy vector, when averaged over 1 month exceeding a solar rotation period, is symmetrical with respect to the HCS which is regarded to coincide with the solar equatorial plane on the average. Because of this assumption, the NS anisotropy is directed oppositely, with the same magnitude, above and below the HCS as defined in Equation 2c.
Solar cycle variation of the 3D anisotropy
Figure 1a,b,c shows the temporal variations of the yearly mean ξ ∥, ξ ⊥, and ξ z as defined in Equation 2c, respectively. Each panel shows that the temporal variations of the anisotropy components derived from the GMDN (solid circle) and Nagoya (open circle) data are fairly consistent with each other as far as the year-to-year variation is concerned. We can see that the solar cycle variation of ξ ∥ has two components. One is a 22-year variation resulting in a slightly larger ξ ∥ in A<0 epoch (2001 to 2011) than in A>0 epoch (1992 to 1998) as reported by Chen and Bieber (1993). The other is a variation correlated with cosψ, shown with ξ ∥ by open squares in Figure 1a, where ψ is the IMF spiral angle derived from omnitape data. ξ z deduced from the GMDN (solid circles), on the other hand, shows an 11-year cycle with minima in 1998 and 2007 around the solar activity minima, while ξ ⊥ shows no solar cycle variation.
Long-term variations of three components of the anisotropy vector in the solar wind frame. Each panel displays the yearly mean (a) ξ ∥ (on the left vertical axis), (b) ξ ⊥, and (c) ξ z as defined in Equation 2c, each as a function of year on the horizontal axis. Solid and open circles in each panel represent anisotropies derived from the GMDN and Nagoya data, respectively, while open squares in (a) display cosψ on the right vertical axis. In each panel, yearly mean value and its error are deduced from the average and dispersion of monthly mean values. Gray vertical stripes indicate periods when the polarity reversal of the solar polar magnetic field (referred as A>0 or A<0 in (b)) is in progress.
Comparison between the NS anisotropies observed with the GMDN and the GG-component
We now focus on the NS anisotropy which cannot be detected by a single-directional channel separately from GCR density variations. Figure 2 shows histograms of hourly (a and b) and daily (c and d) mean $\xi ^{\text {GEO}}_{z}$ observed by the GG-component (a and c) and GMDN (b and d) in 2006 to 2013, which are classified according to the IMF sectors designated as toward (blue histograms) if B x >B y and away (red histograms) if B x <B y by using the GSE-x, y components (B x , B y ) of the IMF vector in the omnitape data. The blue and red vertical dashed lines represent averages of the blue and red histograms, respectively. We define ' T/A separation' following Okazaki et al. (2008) as
$$(T - A)/\sqrt{\sigma_{T}\sigma_{A}} $$
where T (A) and σ T (σ A ) are the average and standard errors of each histogram in the toward (away) sector, respectively. Table 1 lists T−A, $\sqrt {\sigma _{T}\sigma _{A}}$ , T/A separation, and 'success rate' (Mori and Nagashima 1979; Laurenza et al. 2003). The success rate is a ratio of the number of hours (days) when the sign of the observed $\xi ^{\text {GEO}}_{z}$ is positive (negative) in the toward (away) IMF sector to the total number of hours (days) and is introduced as a parameter indicating to what extent we can infer the IMF sector polarity from the sign of the observed ξ z . Although we use the success rate together with T/A separation for the following comparison, it is noted that a low success rate does not necessarily imply anything wrong in the observed ξ z . The IMF sector polarity sensed by high-energy GCRs should be regarded as the polarity averaged over a spatial scale comparable to the Larmor radii of GCRs which span approximately 0.1 AU. It is natural to expect that the IMF polarity averaged over such a large scale does not always follow the single-point measurement of the polarity by a satellite. In Table 1, it is seen that the daily mean $\xi ^{\text {GEO}}_{z}$ by the GMDN shows smaller T/A separation and success rate than $\xi ^{\text {GEO}}_{z}$ deduced from the GG-component, while the hourly $\xi ^{\text {GEO}}_{z}$ by GMDN has a larger T/A separation and success rate than the GG-component which has significantly larger dispersion (Figure 2a), partly due to the contribution from diurnal anisotropy as suggested by Okazaki et al. (2008) from their analysis of 1-year data between March 2006 and March 2007.
Histograms of the NS anisotropy. Each panel displays the histograms of $\xi _{z}^{\text {GEO}}$ on (a, b) hourly and (c, d, e) daily bases derived from the (a, c) Nagoya GG-component, (b, d) GMDN, and (e) NM (Thule-McMurdo) data in 2006 to 2013. Blue and red histograms in each panel represent distributions of $\xi _{z}^{\text {GEO}}$ in toward and away IMF sectors, respectively, while blue and red vertical dashed lines represent averages of the blue and red histograms, respectively.
Table 1 T − A , ${\sqrt {\sigma _{T}\sigma _{A}}}$ , T / A separation, and success rate
We also examine the rigidity dependence of the NS anisotropy by analyzing NM data from 2006 to 2013. NMs have median responses to approximately 17 GV GCRs, while the GMDN and GG-component have median responses to approximately 60 GV and approximately 80 GV, respectively. Chen and Bieber (1993) derived the NS anisotropy ξ z in Equation 2c from the ratio (R) of the daily mean counting rate recorded by the Thule NM to that recorded by the McMurdo NM as
$$ \xi_{z} = \frac{b}{2}\frac{R^{T} - R^{A}}{R^{T} + R^{A}} $$
where R T (R A) is the R averaged over toward (away) sectors in every month and b is a constant calculated from coupling coefficients. We define the daily mean NS anisotropy by NMs as
$$ \xi^{\text{GEO}}_{z} = \frac{c}{2}\frac{R}{R^{T} + R^{A}} $$
where c is a coupling coefficient calculated on the same assumption as adopted in our analysis of the GMDN and Nagoya MD data. The T/A separation and success rate of this $\xi ^{\text {GEO}}_{z}$ represents those parameters for approximately 17 GV GCRs. The result of this analysis is presented in Figure 2e and Table 1. It is seen that the T/A separation of the NS anisotropy by NMs is significantly smaller mainly due to the small T−A, i.e., the NS anisotropy is significantly smaller than that obtained from the GMDN and GG-component. The NS anisotropy is smallest in NM data at approximately 17 GV and largest in the GG-component at approximately 80 GV, with the anisotropy in the GMDN at approximately 60 GV in between, suggesting that the NS anisotropy increases with increasing rigidity (Munakata et al. 2014).
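Both diagnostics used in Table 1 (the T/A separation defined above and the success rate) are straightforward to compute once each hourly or daily value of $\xi_z^{\text{GEO}}$ has been tagged with an IMF sector. The sketch below shows one possible implementation with NumPy on synthetic data; it is not the code used for the paper, and it takes the standard error of the mean for $\sigma_T$ and $\sigma_A$.

```python
import numpy as np

def ta_separation_and_success(xi_z, sector):
    """xi_z: array of NS-anisotropy values; sector: array of 'T'/'A' labels."""
    toward = xi_z[sector == 'T']
    away = xi_z[sector == 'A']
    T, A = toward.mean(), away.mean()
    # standard errors of the two sector means
    sigma_T = toward.std(ddof=1) / np.sqrt(toward.size)
    sigma_A = away.std(ddof=1) / np.sqrt(away.size)
    separation = (T - A) / np.sqrt(sigma_T * sigma_A)
    # success rate: sign of xi_z matches the sector polarity convention
    hits = np.sum((sector == 'T') & (xi_z > 0)) + np.sum((sector == 'A') & (xi_z < 0))
    return separation, hits / xi_z.size

rng = np.random.default_rng(1)
sector = rng.choice(['T', 'A'], size=5000)
xi_z = rng.normal(0.0, 0.05, size=5000) + np.where(sector == 'T', 0.03, -0.03)
print(ta_separation_and_success(xi_z, sector))
```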
Solar cycle variation of modulation parameters
Following the analyses by Chen and Bieber (1993), we derive modulation parameters, i.e., the density gradient and the mean free path of the pitch angle scattering. By assuming that the longitudinal gradient is zero in our analysis based on the anisotropy averaged over 1 month which is longer than the solar rotation period, ξ ∥, ξ ⊥, and ξ z obtained in Equations 2a, 2b, and 2c are related with the modulation parameters as
$$\begin{array}{@{}rcl@{}} \xi_{\|} &=& \lambda_{\|} G_{r} \cos \psi \end{array} $$
$$\begin{array}{@{}rcl@{}} \xi_{\bot} &=& \lambda_{\bot} G_{r} \sin \psi - R_{L} G_{z} \end{array} $$
$$\begin{array}{@{}rcl@{}} \xi_{z} &=& R_{L} G_{r} \sin \psi + \lambda_{\bot} G_{z} \end{array} $$
where R L is the Larmor radius of GCRs in the IMF and G z , G r , λ ∥, and λ ⊥ are the latitudinal and radial density gradients and the mean free paths of the pitch angle scattering parallel and perpendicular to the IMF. From Equations 5a, 5b, and 5c, we deduce the modulation parameters as
$$\begin{array}{*{20}l} &{}G_{z} = \left(\alpha \xi_{\|} \tan \psi - \xi_{\bot} \right)/R_{L} \end{array} $$
$$\begin{array}{*{20}l} &{}G_{r} = \left\{\! \xi_{z}\! +\! \sqrt{{\xi_{z}^{2}}\! +\! 4 \alpha \xi_{\|} \tan\! \psi \left(\xi_{\bot}\! - \!\alpha \xi_{\|} \tan\! \psi \right)}\! \right\}\!/\!\left(2 R_{L}\! \sin \psi \right) \end{array} $$
$$\begin{array}{*{20}l} &{}\lambda_{\|} = \xi_{\|}/\left(G_{r} \cos \psi \right) \end{array} $$
where α=λ ⊥/λ ∥, assumed to be 0.01 and constant as adopted by Chen and Bieber (1993). G z is converted to the bidirectional latitudinal gradient as
$$\begin{array}{@{}rcl@{}} G_{|z|} = -\text{sgn}(A) G_{z} \end{array} $$
where A represents the polarity of the solar dipole magnetic moment and
$$\begin{array}{@{}rcl@{}} \text{sgn}(A) &=& +1, \;\text{for}\; A>0 \;\text{epoch}, \\ &=& -1, \;\text{for}\; A<0 \;\text{epoch}. \end{array} $$
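Equations 6a, 6b, 6c, and 7 translate directly into a few lines of code once monthly values of the anisotropy components, the spiral angle, and the Larmor radius are in hand. The sketch below evaluates them for illustrative inputs only; the numbers are not actual GMDN values, and $R_L$ is simply taken near the approximately 0.1 AU scale quoted earlier.

```python
import numpy as np

def modulation_parameters(xi_par, xi_perp, xi_z, psi, R_L, A_sign, alpha=0.01):
    """Latitudinal/radial gradients and parallel mean free path (Eqs. 6a-6c, 7)."""
    G_z = (alpha * xi_par * np.tan(psi) - xi_perp) / R_L
    num = xi_z + np.sqrt(xi_z**2 + 4 * alpha * xi_par * np.tan(psi)
                         * (xi_perp - alpha * xi_par * np.tan(psi)))
    G_r = num / (2 * R_L * np.sin(psi))
    lam_par = xi_par / (G_r * np.cos(psi))
    G_abs_z = -A_sign * G_z        # bidirectional latitudinal gradient, Eq. 7
    return G_abs_z, G_r, lam_par

# Illustrative inputs: anisotropy components in %, psi = 45 deg, R_L in AU,
# so the gradients come out in %/AU and the mean free path in AU.
psi = np.deg2rad(45.0)
print(modulation_parameters(xi_par=0.4, xi_perp=0.1, xi_z=0.05,
                            psi=psi, R_L=0.1, A_sign=+1))
```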
Figure 3a,b,c displays temporal variations of modulation parameters G |z|, G r , and λ ∥, respectively, obtained from the GMDN (solid circle) and Nagoya (open circle) data. The variations with 22-year and 11-year solar cycles are clearly seen in this figure. First, G |z| is positive in A>0 epoch indicating the local minimum of the density on the HCS, while it is negative in A<0 epoch indicating the maximum in accord with the prediction of the drift model by Kóta and Jokipii (1983). Second, significant 11-year variations are seen in both G r and λ ∥ which change in a clear anti-correlation.
Long-term variations of modulation parameters derived from the 3D anisotropy in the solar wind frame. Each panel displays the yearly mean (a) G |z|, (b) G r , and (c) λ ∥ as a function of year. Solid and open circles in each panel represent parameters derived from the GMDN and Nagoya data, respectively. In each panel, yearly mean value and its error are deduced from the average and dispersion of monthly mean values. Gray vertical stripes indicate periods when the polarity reversal of the solar polar magnetic field (referred as A>0 or A<0 in (c)) is in progress.
Summary and discussions
We analyzed the 3D anisotropy of GCR intensity observed by the GMDN and Nagoya MD in 1992 to 2013. Our analysis of the GMDN data gives the anisotropy on an hourly basis with better time resolution than the traditional analyses of the diurnal and NS anisotropies observed by a single detector such as the Nagoya MD. We confirmed that the 3D anisotropy and the modulation parameters derived from the GMDN and Nagoya MD data are fairly consistent with each other as far as the yearly mean value is concerned. This fact is important particularly for the NS anisotropy derived from the GMDN data, because the GG-component has been the only reliable reference to the NS anisotropy in the rigidity region between 50 and 100 GV.
By analyzing the distribution of the NS anisotropy separately in toward and away IMF sectors, we compared the T/A separations and success rates deduced from the GMDN and Nagoya MD data on hourly and daily bases. It is confirmed that the daily mean NS anisotropy observed by the GG-component shows slightly better T/A separation and success rate than the daily mean anisotropy by the GMDN, while the hourly mean NS anisotropy by the GG-component shows a large spread due to the local time-dependent contribution from the ecliptic anisotropy. The NS anisotropy by the GMDN, on the other hand, shows similar success rate on both daily and hourly bases, indicating that the NS anisotropy is successfully observed by the GMDN, free from the contribution of the ecliptic anisotropy.
In addition to the better time resolution, the new analysis method developed by Okazaki et al. (2008) for the GMDN data also has an advantage of providing the 3D anisotropy, including the NS component, from a single best-fit calculation for intensities recorded by four detectors. In contrast, the conventional method using a single MD requires the derivation of the NS anisotropy from the north- and south-viewing channels, separately from the derivation of the diurnal anisotropy using all directional channels.
By comparing the NS anisotropy derived from NM data with those from the GMDN and GG-component data, we find that the NS anisotropy increases with increasing rigidity and that the difference between the T/A separations and success rates of the GMDN and GG-component data is partly due to this rigidity dependence. Yasue (1980) analyzed the GG-component together with the sidereal diurnal variation observed by MDs and NMs and derived a power law-type rigidity spectrum of the average NS anisotropy with a positive power law index of approximately 0.3. By analyzing long-term variations of the 3D anisotropies observed with the Nagoya MD and NMs on a yearly basis, Munakata et al. (2014) confirmed that the perpendicular component including the NS anisotropy increases with GCR rigidity. If this is the case, the magnitude of the NS anisotropy increases with rigidity, and the T/A separation and success rate will also increase if the dispersion remains similarly independent of rigidity. This is in agreement with our results in Table 1, showing that T−A increases with rigidity while $\sqrt {\sigma _{T}\sigma _{A}}$ is almost constant on a daily basis. Three GCR observations responsive to different rigidities, the GG-component (approximately 80 GV), the GMDN (approximately 60 GV), and NMs (approximately 17 GV), are capable of observing the NS anisotropy on a daily basis, and their cross-calibration allows us to obtain information about the rigidity dependence of the NS anisotropy.
We confirmed that the solar cycle variations of the yearly mean solar modulation parameters derived from the GMDN and Nagoya data are consistent with each other. The bidirectional latitudinal gradient G |z| shows a clear 22-year variation being positive (negative) in A>0 (A<0) epochs indicating the local minimum (maximum) of the GCR density on the HCS, in accord with the prediction of the drift model (Kóta and Jokipii 1983). On the other hand, significant 11-year solar cycle variations are seen in G r and λ ∥, respectively. The ecliptic component of the anisotropy ξ ∥ parallel to the IMF shows a 22-year variation being slightly larger in A<0 epoch (2001 to 2011) than in A>0 epoch (1992 to 1998) as reported by Chen and Bieber (1993). This variation of ξ ∥ is responsible for the well-known 22-year variation of the phase of the diurnal variation (Thambyahpillai and Elliot 1953). We find that the variation of ξ ∥ also shows a correlation with the cosψ which is governed by the solar wind velocity. This is reasonable because ξ ∥ is proportional to cosψ as given in Equation 5a. Figure 4 shows yearly variation of ξ ∥/ cosψ, i.e., λ ∥ G r by the GMDN. In this figure, the 22-year variation is seen more clearly than in Figure 1 showing ξ ∥ (Chen and Bieber 1993). For an accurate analysis of the solar cycle variation of the anisotropy, therefore, it is necessary to correct the observed anisotropy for cosψ and the solar wind velocity which varies without any clear 11-year or 22-year periodicities.
Long-term variation of λ ∥ G r derived from the GMDN data. Yearly mean value and its error are deduced from the average and dispersion of monthly mean values. Gray vertical stripes indicate periods when the polarity reversal of the solar polar magnetic field (referred as A>0 or A<0) is in progress.
The 22-year variation of λ ∥ G r seems mainly due to the variation of λ ∥ in Figure 3 which is larger in A<0 epoch (2001 to 2011) than in A>0 epoch (1992 to 1998). However, the solar magnetic field was unusually weak around this last solar minimum (2009), which resulted in a record-high GCR flux (Mewaldt et al. 2010). The larger mean free path is more likely the result of the weaker solar minimum than a polarity issue.
Bieber, JW, Chen J (1991) Cosmic-ray diurnal anisotropy, 1936-1988: implications for drift and modulation theories. Astrophys J 372: 301–313.
Chen, J, Bieber JW (1993) Cosmic-ray anisotropies and gradients in three dimensions. Astrophys J 405: 375–389.
King, JH, Papitashvili NE (2005) Solar wind spatial scales in and comparisons of hourly wind and ace plasma and magnetic field data. J Geophys Res110: A02104: 1–8.
Kóta, J, Jokipii JR (1982) Cosmic rays near the heliospheric current sheet. Geopys Res Lett 9: 656–659.
Kóta, J, Jokipii JR (1983) Effects of drift on the transport of cosmic rays. VI. A three-dimensional model including diffusion. Astrophys J 265: 573–581.
Kuwabara, T, Munakata K, Yasue S, Kato C, Akahane S, Koyama M, Bieber JW, Evenson P, Pyle R, Fujii Z, Tokumaru M, Kojima M, Marubashi K, Duldig ML, Humble JE, Silva MR, Trivedi NB, Gonzalez WD, Schuch NJ (2004) Geometry of an interplanetary CME on October 29, 2003 deduced from cosmic rays. Geophys Res Lett31: L19803: 1–5.
Kuwabara, T, Bieber JW, Evenson P, Munakata K, Yasue S, Kato C, Fushishita A, Tokumaru M, Duldig ML, Humble JE, Silva MR, Lago AD, Schuch NJ (2009) Determination of interplanetary coronal mass ejection geometry and orientation from ground-based observations of galactic cosmic rays. J Geophys Res114: A05109: 1–10.
Laurenza, M, Storini M, Moreno G, Fujii Z (2003) Interplanetary magnetic field polarities inferred from the north-south cosmic ray anisotropy. J Geophys Res 108: 1069–1075.
Mewaldt, RA, Davis AJ, Lave KA, Leske RA, Stone EC, Wiedenbeck ME, Binns WR, Christian ER, Cummings AC, de Nolfo GA, Israel MH, Labrador AW, von Rosenvinge TT (2010) Record-setting cosmic-ray intensities in 2009 and 2010. Astrophys J Lett 723: 1–6.
Mori, S, Nagashima K (1979) Inference of sector polarity of the interplanetary magnetic field from the cosmic ray north-south asymmetry. Planet Space Sci 27: 39–46.
Murakami, K, Nagashima K, Sagisaka S, Mishima Y, Inoue A (1979) Response functions for cosmic-ray muons at various depths underground. IL Nuovo Cimento2C: 635–651.
Munakata, K, Kozai M, Kato C (2014) Long term variation of the solar diurnal anisotropy of galactic cosmic rays observed with the Nagoya multi-directional muon detector. Astrophys J791: 22: 1–16.
Nagashima, K, Fujimoto K, Fujii Z, Ueno H, Kondo I (1972) Three-dimensional cosmic ray anisotropy in interplanetary space. Rep Ionos Space Res Jpn 26: 31–68.
NASA (2014) The "omnitape" data accessed on February 10, 2014. http://omniweb.gsfc.nasa.gov/.
Okazaki, Y, Fushishita A, Narumi T, Kato C, Yasue S, Kuwabara T, Bieber JW, Evenson P, Silva MRD, Lago AD, Schuch NJ, Fujii Z, Duldig ML, Humble JE, Sabbah I, Kóta J, Munakata K (2008) Drift effects and the cosmic ray density gradient in a solar rotation period: first observation with the Global Muon Detector Network (GMDN). Astrophys J681: 693–707.
Parker, EN (1965) The passage of energetic charged particles through interplanetary space. Planet Space Sci 13: 9–49.
Rockenbach, M, Lago AD, Schuch NJ, Munakata K, Kuwabara T, Oliveira AG, Echer E, Braga CR, Mendonca RRS, Kato C, Kozai M, Tokumaru M, Bieber JW, Evenson P, Duldig ML, Humble JE, Jassar HKA, Sharma MM, Sabbah I (2014) Global muon detector network used for space weather applications. Space Sci Rev 182: 1–18.
Swinson, DB (1969) "Sidereal" cosmic-ray diurnal variations. J Geophys Res 74: 5591–5598.
Thambyahpillai, T, Elliot H (1953) World-wide changes in the phase of the cosmic-ray solar daily variation. Nature 171: 918–920.
Yasue, S (1980) North-south anisotropy and radial density gradient of galactic cosmic rays. J Geomag Geoelectr 32: 617–635.
This work is supported in part by the joint research programs of the Solar-Terrestrial Environment Laboratory (STEL), Nagoya University and the Institute for Cosmic Ray Research (ICRR), University of Tokyo. The observations with the Nagoya multidirectional muon detector are maintained by Nagoya University. CNPq, CAPES, INPE and UFSM support upgrade and maintenance of the São Martinho muon detector. The Bartol Research Institute neutron monitor program, which operates Thule and McMurdo neutron monitors, is supported by National Science Foundation grant ATM-0000315. Wilcox Solar Observatory data used in this study was obtained via the website http://wso.stanford.edu at 2013:06:24_22:12:55 PDT courtesy of J.T. Hoeksema. The Wilcox Solar Observatory is currently supported by NASA. JK thanks STEL and Shinshu University for the hospitality during his stay as a visiting professor of STEL.
Physics Department, Shinshu University, Matsumoto, 390-8621, Nagano, Japan: Masayoshi Kozai, Kazuoki Munakata & Chihiro Kato
Bartol Research Institute, Department of Physics and Astronomy, University of Delaware, Newark, 19716, DE, USA: Takao Kuwabara, John W Bieber & Paul Evenson
Southern Regional Space Research Center (CRS/INPE), Santa Maria RS, P.O. Box 5021, 97110-970, Brazil: Marlos Rockenbach & Nelson J Schuch
National Institute for Space Research (INPE), São José dos Campos SP, 12227-010, Brazil: Alisson Dal Lago
Solar Terrestrial Environment Laboratory, Nagoya University, Nagoya, 464-8601, Aichi, Japan: Munetoshi Tokumaru
School of Physical Sciences, University of Tasmania, Hobart, 7001, Tasmania, Australia: Marcus L Duldig & John E Humble
Department of Natural Sciences, College of Health Sciences, Public Authority for Applied Education and Training, Kuwait City, 72853, Kuwait: Ismail Sabbah
Physics Department, Kuwait University, Kuwait City, 13060, Kuwait: Hala K Al Jassar & Madan M Sharma
Lunar and Planetary Laboratory, University of Arizona, Tucson, 85721, AZ, USA: Jozsef Kóta
Correspondence to Masayoshi Kozai.
MK, the corresponding author of this paper, performed all analyses presented in this paper. KM evaluated the data and discussed the analysis results. CK made the GMDN data available for this paper. TK made the GMDN data and the neutron monitor data available for this paper. JWB and PE made the neutron monitor data available for this paper. MR kept the São Martinho muon detector in operation. ADL hosted the São Martinho muon detector, assisted with its installation, and performed its general maintenance. NJS contributed to establishing the observation with the São Martinho muon detector and kept it in operation. MT kept the Nagoya muon detector in operation. MLD and JEH hosted the Hobart muon detector, assisted with its installation, performed its general maintenance, discussed the results, and helped improve the English of this paper. IS established the observation with the Kuwait muon detector. HKAJ and MMS kept the Kuwait muon detector in operation. JK discussed the analysis results and worked on improving the English of this paper. All authors read and approved the final manuscript.
Diurnal anisotropy
North-south anisotropy
Heliospheric modulation of galactic cosmic rays
Solar cycle variation of the cosmic ray density gradient | CommonCrawl |
https://doi.org/10.1364/OE.449426
Robust coherent control in three-level quantum systems using composite pulses
Hang Xu,1 Xue-Ke Song,1,2 Dong Wang,1,3 and Liu Ye1
1School of Physics and Optoelectronics Engineering, Anhui University, Hefei 230601, China
Dong Wang https://orcid.org/0000-0002-0545-6205
Hang Xu, Xue-Ke Song, Dong Wang, and Liu Ye, "Robust coherent control in three-level quantum systems using composite pulses," Opt. Express 30, 3125-3137 (2022)
Quantum Optics
Physical optics
Pulse shaping
Quantum information processing
Quantum state engineering
Two level systems
Original Manuscript: November 23, 2021
Revised Manuscript: December 19, 2021
Manuscript Accepted: January 5, 2022
Here, we focus on using composite pulses to realize high-robustness and high-fidelity coherent control in three-level quantum systems. We design the dynamic parameters (Rabi frequency and detuning) for three-level Hamiltonians for high-fidelity quantum state control using five well-known coherent control techniques including a composite adiabatic passage (CAP). Furthermore, we compare their performance against the Rabi frequency and systematic errors, and accordingly show that the CAP is the most robust against them. It features a broad range of high efficiencies above 99.9%. Thus, it provides an accurate approach for manipulating the evolution of quantum states in three-level quantum systems.
Accurate initialization and manipulation of quantum states have generated extensive interest in the fields of quantum optics, quantum control, and quantum simulations [1,2]. In addition, they have widespread successful applications in atomic, molecular, and optical physics, chemistry, and many other aspects [3–5]. To achieve these targets, there are numerous control methods that can be employed, such as resonance excitation (RE) [6], adiabatic passages (APs), shortcuts to adiabaticity (STAs) [7–11], dynamical correction technique [12,13], reinforcement learning [14–17], frequency tuning method [18], and composite schemes [19–21]. RE uses $\pi$ or $\pi /2$ resonant pulses to realize rapid quantum state control. APs, including rapid adiabatic passages [22], piecewise adiabatic passages [23], and stimulated Raman adiabatic passages [24,25], can cause quantum systems to evolve adiabatically along an eigenstate of the Hamiltonian, leading to efficient and robust population transfer. STAs, including counter-diabatic (CD) shortcuts [26–31] and Lewis-Riesenfeld invariants (LRIs) [32–36], can accelerate a quantum adiabatic process to reproduce the same final population.
Composite pulses (CPs) are well-studied protocols for realizing specific coherent quantum control using a sequence of pulses. Their advantage is the control of the phases of pulses to produce desired results, instead of the control of the amplitudes of pulses. Two typical types of CPs are composite adiabatic passage (CAP) and universal composite pulses (UCPs). A CAP is a combination of an adiabatic scheme and a CP, which can eliminate experimental errors by appropriately selecting the phases of the pulse sequences; a UCP is a more general composite technology, without specific requirements for pulses. Both are regarded as powerful tools for precise quantum state manipulations in various quantum systems [37–40]. For example, in 2011, Torosov et al. [37] proposed a CAP to optimize the defect that an AP cannot achieve complete population inversion in two-level systems. In 2014, Genov et al. [38] used a UCP to realize robust high-fidelity population inversion in two-state quantum systems, and experimentally demonstrated its effectiveness and universality in a $\rm {\Pr ^{3 +}}$:$\rm {Y_2}Si{O_5}$ crystal. In 2018, Torosov et al. [39] presented three classes of symmetric broadband CP sequences, which compensate the imperfections in the pulse area to an arbitrary high order. In 2021, Shi et al. [40] proposed a method to realize robust general single-qubit gates using CPs in a three-level system.
Imperfect experimental implementations and conditions generate experimental control errors and parameter fluctuations of the Hamiltonian. For example, atoms adopting different Rabi frequencies induced by different positions and fields [41] cause the original Hamiltonian elements to present small global shifts. In addition, atoms or ions in the nonuniform spatial distribution of an external laser, a microwave, or the radiofrequency field produce fluctuations of the Rabi frequency. These errors affect the accuracy and efficiency of quantum state control. A CAP is a useful method for suppressing these experimental errors, because it is insensitive to the pulse shape or pulse area. In 2013, Schraft et al. [42] experimentally demonstrated the robustness and efficiency advantages of a CAP even under weak adiabatic conditions. In 2018, Bruns et al. [43] experimentally demonstrated the improvement in the transfer efficiency and robustness of composite stimulated Raman adiabatic passages compared to those of conventional and repeated stimulated Raman adiabatic passages. In 2021, Torosov et al. [44] compared six well-known techniques for coherent control of two-state quantum systems with respect to various sources of errors, and showed that a CAP is more resilient to experimental errors than the other methods.
Here, we establish a method to use CPs to realize complete and robust coherent control in three-level quantum systems. The CPs change a quantum state using CP sequences, with the relative phases serving as the control parameters, without dependence on the specific shapes of the pulses. More importantly, we compare the performance of a CAP with RE, an AP, a CD, and an LRI with respect to common experimental errors, and find that the CAP is the most robust against these errors among all considered methods. Moreover, it can maintain the transition probability at ultrahigh efficiencies above $99.9\%$ over broad ranges of the error parameters. This offers an effective route for achieving high-fidelity coherent control of three-level quantum systems using CAPs.
We consider here arrays of optical wells, where the dipole force of a red detuned laser field is used to store neutral atoms in each of the foci of a set of microlenses [45]. Three in-line dipole wells are modeled as three harmonic potentials, and we assume that each well initially holds either zero or one neutral atom. The Hamiltonian, in the basis $\{\left |L\right \rangle,\left |C\right \rangle, \left |R\right \rangle \}$, is expressed as
(1)$$\begin{aligned}{H_0}(t)= \left( {\begin{array}{*{20}{c}} K & { - \sqrt 2 J} & 0\\ { - \sqrt 2 J} & 0 & { - \sqrt 2 J}\\ 0 & { - \sqrt 2 J} & K \end{array}} \right), \end{aligned}$$
where $\left |L\right \rangle =\left (\begin {array}{c} 1 \\ 0 \\ 0 \end {array} \right )$, $\left |C\right \rangle =\left (\begin {array}{c} 0 \\ 1 \\ 0 \end {array} \right )$, and $\left |R\right \rangle =\left (\begin {array}{c} 0 \\ 0 \\ 1 \end {array} \right )$, and they represent the minimal channel basis for the left, central, and right wavefunctions in a triple well, respectively. $K$ plays the role of the bias of the outer wells with respect to the central one and $J$ is the coupling coefficient between adjacent wells [46]. The three-level Hamiltonian also describes other physical systems, e.g., two bosons in two wells [46,47], three coupled waveguides [48,49], and a three-level atom under appropriate laser interactions [50].
Here, our objective is to manipulate the evolution of an atom wavefunction from the central well to the external wells with the same probability amplitude, i.e., $\left |C\right \rangle \rightarrow 1/\sqrt {2}\left (\left |L\right \rangle +\left |R\right \rangle \right )$. This is analogous to a beam splitter, a crucial device in many optical experiments and measurement systems, which splits a beam of light in two (see Fig. 1). For this purpose, we can rewrite the Hamiltonian, ${H_0}(t)$, in the basis of $\{\left |C\right \rangle,\left |\Phi _+\right \rangle, \left |\Phi _-\right \rangle \}$, as
(2)$$\begin{aligned}{H_0}^{\prime} (t)= \frac{1}{2}\left( {\begin{array}{*{20}{c}} { - K} & { - 4J} & 0\\ { - 4J} & K & 0\\ 0 & 0 & K \end{array}} \right) \end{aligned}$$
where $\left |\Phi _\pm \right \rangle =1/\sqrt {2}\left (\left |L\right \rangle \pm \left |R\right \rangle \right )$, and the term proportional to the identity matrix, $\frac {1}{2}K$, is omitted. If the system starts in the state $\left |\Phi _-\right \rangle$, the Hamiltonian ${H_0}^{\prime } (t)$ keeps it in this state at all times; it never evolves into $\left |C\right \rangle$, $\left |\Phi _+\right \rangle$, or any superposition of them. In this case, the state $\left |\Phi _-\right \rangle$ is therefore decoupled from the subspace spanned by $\left |C\right \rangle$ and $\left |\Phi _+\right \rangle$, and the Hamiltonian restricted to the subspace $\{\left |C\right \rangle,\left |\Phi _+\right \rangle \}$ reduces to
(3)$$\begin{aligned} H(t) = \frac{1}{2}\left( {\begin{array}{*{20}{c}} { - \Delta } & \Omega \\ \Omega & \Delta \end{array}} \right), \end{aligned}$$
where $\Delta = K$ is the detuning and $\Omega = - 4J$ is the Rabi frequency in terms of the standard notation for two-level Hamiltonians in quantum optics. Thus, the objective is transformed into realizing population inversion in two-level quantum systems.
Fig. 1. Illustration of 1:2 beam splitter.
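As a quick sanity check of this reduction, the basis change can be verified numerically. The following is a minimal sketch (our illustration, not part of the original work), assuming plain NumPy and arbitrary test values for $J$ and $K$; it rotates $H_0$ of Eq. (1) into the $\{\left |C\right \rangle,\left |\Phi _+\right \rangle, \left |\Phi _-\right \rangle \}$ basis and confirms that $\left |\Phi _-\right \rangle$ decouples, leaving the two-level block of Eq. (3) with $\Delta = K$ and $\Omega = -4J$.

```python
# Sketch only: J, K are illustrative test numbers, not values from the paper.
import numpy as np

J, K = 0.7, 0.3
H0 = np.array([[K,             -np.sqrt(2)*J, 0.0],
               [-np.sqrt(2)*J,  0.0,         -np.sqrt(2)*J],
               [0.0,           -np.sqrt(2)*J, K]])          # Eq. (1), basis {|L>,|C>,|R>}

s = 1/np.sqrt(2)
C    = np.array([0.0, 1.0, 0.0])
PhiP = s*np.array([1.0, 0.0,  1.0])    # |Phi+> = (|L>+|R>)/sqrt(2)
PhiM = s*np.array([1.0, 0.0, -1.0])    # |Phi-> = (|L>-|R>)/sqrt(2)
B    = np.column_stack([C, PhiP, PhiM])

H0p = B.T @ H0 @ B                     # H0 in the {|C>,|Phi+>,|Phi->} basis
print(np.round(H0p, 10))               # |Phi-> row/column contain only the diagonal K: decoupled

# drop the constant K/2 shift and compare with the 2x2 block of Eq. (3)
H2 = H0p[:2, :2] - 0.5*K*np.eye(2)
Delta, Omega = K, -4*J
print(np.allclose(H2, 0.5*np.array([[-Delta, Omega], [Omega, Delta]])))  # True
```

Since B is orthogonal, B.T acts as its inverse, so this is exactly the basis rotation used to obtain Eq. (2).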
3. Quantum state control using five coherent techniques
3.1 RE
First, we show how to achieve the above aim using RE. RE requires the frequency of the external driving field to equal the Bohr transition frequency, i.e., $\Delta = 0$. The time-dependent transition probability can then be written analytically as
(4)$$P(t) = \frac{1}{2}\left[ {1 - \cos (A)} \right],$$
where $A(t) = \int _{{t_0}}^{t} {\Omega (t')dt'}$ is the pulse area. When the total pulse area is an odd multiple of $\pi$, we have $P=1$ and complete population inversion is achieved, regardless of the pulse shape. As an example, we take a Gaussian pulse [51], expressed as
(5)$$\Omega (t) = \sqrt \pi {e^{ - {{(t/T)}^{2}}}}/T,$$
where $\sqrt \pi /T$ is the peak Rabi frequency and $T$ is the pulse width; the pulse is plotted in Fig. 2(a). It satisfies $A=\int _{ - \infty }^{ + \infty } {\Omega (t')dt'} = \pi$. In practice, the pulse duration cannot be infinite; truncating to $t \in [ -3T,3T]$ is typically sufficient to realize population inversion, as shown in Fig. 2(b).
Fig. 2. Time dependence of parameters $\Omega$ and population $P(t)$ of state $\left | {{\Phi _ + }} \right \rangle$ of RE method: (a) $\Omega$ (blue, solid line), (b) Population $P(t)$ (red, solid line).
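For readers who want to reproduce Fig. 2 numerically, the sketch below (our illustration; it assumes NumPy/SciPy, $\hbar = 1$, and $T = 1$ in arbitrary units) propagates the two-level Schrödinger equation for the resonant Gaussian pulse of Eq. (5) with piecewise-constant matrix exponentials and prints the final population of $\left | {{\Phi _ + }} \right \rangle$, which should be essentially 1 since the truncated pulse area is very close to $\pi$.

```python
# Sketch only: hbar = 1, T = 1 in arbitrary units.
import numpy as np
from scipy.linalg import expm

T = 1.0
ts = np.linspace(-3*T, 3*T, 4001)
dt = ts[1] - ts[0]

def H(t):
    Omega = np.sqrt(np.pi) * np.exp(-(t/T)**2) / T    # Eq. (5)
    Delta = 0.0                                       # resonance condition of RE
    return 0.5 * np.array([[-Delta, Omega], [Omega, Delta]])

psi = np.array([1.0 + 0j, 0.0])                       # start in |C>
for t in ts[:-1]:
    psi = expm(-1j * H(t + dt/2) * dt) @ psi          # midpoint rule, piecewise constant

print("final population of |Phi+>:", abs(psi[1])**2)  # ~1 (truncated pulse area ~ pi)
```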
3.2 AP
AP is an important method for realizing population inversion in two-state systems. It requires that the parameters of the Hamiltonian change slowly enough. If the system starts in an eigenstate of the Hamiltonian, it evolves adiabatically along this eigenstate. Specifically, if we take a linearly chirped Gaussian pulse,
(6)$$\Delta = \alpha \frac{t}{T},\;\;\;\;{\rm{ }}\Omega = {\Omega _0}{e^{ - {{(t/T)}^{2}}}},$$
the adiabatic condition is [44]
(7)$$\sqrt 2 {\Omega _0} > \alpha \gg \frac{2}{T},$$
where ${\Omega _0}$ is the peak Rabi frequency and $\alpha$ is the chirp rate. Here we adopt
(8)$$\Delta = \frac{5}{T}\frac{t}{T},\;\;\;\;{\rm{ }}\Omega = \frac{5{\sqrt \pi }}{T}{e^{ - {{(t/T)}^{2}}}},$$
which are plotted in Fig. 3(a). The corresponding population of the state $\left | {{\Phi _ + }} \right \rangle$ is depicted in Fig. 3(b).
Fig. 3. Time dependence of parameters $\Omega$ and population $P(t)$ of state $\left | {{\Phi _ + }} \right \rangle$ of AP method: (a) $\Delta$ (red, dashed line) and $\Omega$ (blue, solid line), (b) Population $P(t)$ (red, solid line).
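A corresponding numerical sketch for the AP case is given below (our illustration, assuming NumPy/SciPy, $\hbar = 1$, $T = 1$). It first checks the adiabatic condition of Eq. (7) for the parameters of Eq. (8) and then propagates the chirped Gaussian pulse, showing near-complete (though in general not exact) inversion.

```python
# Sketch only: hbar = 1, T = 1 in arbitrary units.
import numpy as np
from scipy.linalg import expm

T = 1.0
Omega0 = 5*np.sqrt(np.pi)/T            # peak Rabi frequency of Eq. (8)
alpha  = 5/T                           # chirp rate, Delta = alpha*t/T
print("sqrt(2)*Omega0, alpha, 2/T:", np.sqrt(2)*Omega0, alpha, 2/T)  # Eq. (7)

def H(t):
    Delta = alpha * t / T
    Omega = Omega0 * np.exp(-(t/T)**2)
    return 0.5 * np.array([[-Delta, Omega], [Omega, Delta]])

ts = np.linspace(-3*T, 3*T, 6001)
dt = ts[1] - ts[0]
psi = np.array([1.0 + 0j, 0.0])        # |C>
for t in ts[:-1]:
    psi = expm(-1j * H(t + dt/2) * dt) @ psi

print("final population of |Phi+>:", abs(psi[1])**2)  # close to, but not exactly, 1
```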
3.3 CD control
CD control adds a CD Hamiltonian $H_c$ to the original Hamiltonian $H(t)$, so that the system exactly follows the instantaneous eigenstates of the original Hamiltonian, as in ideal adiabatic evolution. Based on the theory of Berry [27], the CD Hamiltonian is written as
(9)$${H_c} = i\hbar \sum_ \pm {\left( {\left| {{\partial _t}{\phi _ \pm }} \right\rangle \left\langle {{\phi _ \pm }} \right| - \left\langle {{{\phi _ \pm }}} \mathrel{\left | {\vphantom {{{\phi _ \pm }} {{\partial _t}{\phi _ \pm }}}} \right. } {{{\partial _t}{\phi _ \pm }}} \right\rangle \left| {{\phi _ \pm }} \right\rangle \left\langle {{\phi _ \pm }} \right|} \right)} ,$$
(10)$$\begin{aligned}\left| {{\phi _ + }} \right\rangle = \left( {\begin{array}{*{20}{c}} {\sin \gamma }\\ {\cos \gamma } \end{array}} \;\;\;{\rm{ }}\right),\left| {{\phi _ - }} \right\rangle = \left( {\begin{array}{*{20}{c}} {\cos \gamma }\\ { - \sin \gamma } \end{array}} \right) \end{aligned}$$
are the eigenstates of the original Hamiltonian $H(t)$, with $\gamma = \frac {1}{2}\arctan \left ( \Omega /\Delta \right )$ being the mixing angle. Thus, we obtain
(11)$$\begin{aligned}{H_c} = \frac{1}{2}\left( {\begin{array}{*{20}{c}} 0 & {i{\Omega _c}}\\ { - i{\Omega _c}} & 0 \end{array}} \right), \end{aligned}$$
where ${\Omega _c} = 2\dot \gamma$, in which the dot represents the derivative with respect to time. Here, we adopt a chirped Gaussian pulse,
(12)$$\Delta = \frac{2}{T}\frac{t}{T},\;\;\;\;{\rm{ }}\Omega = \frac{{\sqrt \pi }}{T}{e^{ - {{(t/T)}^{2}}}},$$
based on which the CD term is calculated as
(13)$${\Omega _c} ={-} 2\sqrt \pi {e^{{{(t/T)}^{2}}}}\left[ {2{{(t/T)}^{2}} + 1} \right]/\left[ {4{{(t/T)}^{2}}T{e^{2{{(t/T)}^{2}}}} + \pi T} \right].$$
The time dependence of the Rabi frequency, detuning, and CD term is plotted in Fig. 4(a). The evolution governed by the Hamiltonian $H' = H + {H_c}$ achieves accurate population transfer for this system, as shown in Fig. 4(b).
Fig. 4. Time dependence of parameters $\Delta$, $\Omega$, and population $P(t)$ of state $\left | {{\Phi _ + }} \right \rangle$ of CD method: (a) $\Delta$ (red, dashed line), $\Omega$ (blue, solid line), and CD term ${\Omega _c}$ (green, dashed-dotted line), (b) Population $P(t)$ (red, solid line).
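The CD scheme can be checked with the sketch below (our illustration, assuming NumPy/SciPy, $\hbar = 1$, $T = 1$). Instead of hard-coding Eq. (13), it evaluates ${\Omega _c} = 2\dot \gamma = (\dot \Omega \Delta - \Omega \dot \Delta )/(\Omega ^{2} + \Delta ^{2})$ directly from the pulses of Eq. (12) and propagates $H' = H + H_c$, which should transfer $\left |C\right \rangle$ to $\left | {{\Phi _ + }} \right \rangle$ essentially perfectly.

```python
# Sketch only: hbar = 1, T = 1 in arbitrary units.
import numpy as np
from scipy.linalg import expm

T = 1.0

def pulses(t):
    Delta  = 2*t/T**2                                  # Eq. (12)
    Omega  = np.sqrt(np.pi)/T * np.exp(-(t/T)**2)
    dDelta = 2/T**2
    dOmega = -2*t/T**2 * Omega
    Omega_c = (dOmega*Delta - Omega*dDelta) / (Omega**2 + Delta**2)  # 2*d(gamma)/dt
    return Delta, Omega, Omega_c

def Hprime(t):
    Delta, Omega, Omega_c = pulses(t)
    H  = 0.5*np.array([[-Delta, Omega], [Omega, Delta]], dtype=complex)
    Hc = 0.5*np.array([[0, 1j*Omega_c], [-1j*Omega_c, 0]])           # Eq. (11)
    return H + Hc

ts = np.linspace(-3*T, 3*T, 6001)
dt = ts[1] - ts[0]
psi = np.array([1.0 + 0j, 0.0])                        # |C>
for t in ts[:-1]:
    psi = expm(-1j * Hprime(t + dt/2) * dt) @ psi

print("final population of |Phi+>:", abs(psi[1])**2)   # ~1
```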
3.4 LRIs
We consider an LRI for comparison. Using the concept of invariants [32], the dynamics of the system is first designed by choosing a time-dependent invariant, and the Hamiltonian elements are then obtained by inverting the time-dependent Schrödinger equation. Specifically, for the Hamiltonian $H(t)$, there exists an invariant
(14)$$\begin{aligned}I = \frac{1}{2}\left( {\begin{array}{*{20}{c}} {\cos \theta } & {\sin \theta {e^{ - i\beta }}}\\ {\sin \theta {e^{i\beta }}} & { - \cos \theta } \end{array}} \right) \end{aligned}$$
that satisfies the dynamical equation,
(15)$$\frac{{dI}}{{dt}} = \frac{1}{{i\hbar }}\left[ {I,H} \right] + \frac{{\partial I}}{{\partial t}} = 0.$$
By solving this equation, the following constraint conditions are obtained:
(16)$$\dot \theta ={-} \Omega \sin \beta ,\;\;\;\;{\rm{ }}\dot \beta {\rm{ = }} - \Omega \cot\theta \cos\beta - \Delta .$$
Using eigenstates ${\left | {{\varphi _n}(t)} \right \rangle }$ of this invariant, the solution of Schrödinger equation $i\hbar \partial \left | {\psi (t)} \right \rangle /\partial t=H(t)\left | {\psi (t)} \right \rangle$ and the propagator [32] can be written as
(17)$$\begin{aligned}\begin{array}{l} \left| {\psi (t)} \right\rangle = \sum_ {j={\pm}} {{C_j}{e^{i{\eta _j}(t)}}\left| {{\varphi _j}(t)} \right\rangle } ,\\ U(t,{t_0}) = \sum_ {j={\pm}} {{e^{i{\eta _j}(t)}}\left| {{\varphi _j}(t)} \right\rangle \left\langle {{\varphi _j}({t_0})} \right|}, \end{array} \end{aligned}$$
(18)$$\begin{aligned}&\left| {{\varphi _ + }(t)} \right\rangle = \left( {\begin{array}{c} {{e^{ - i\beta /2}}\cos \frac{\theta }{2}}\\ {{e^{i\beta /2}}\sin \frac{\theta }{2}} \end{array}} \right), \\&\left| {{\varphi _ - }(t)} \right\rangle = \left( {\begin{array}{c} { - {e^{ - i\beta /2}}\sin \frac{\theta }{2}}\\ {{e^{i\beta /2}}\cos \frac{\theta }{2}} \end{array}} \right), \end{aligned}$$
${{C_j}}$ are constants and ${\eta _j}(t)$ are the Lewis–Riesenfeld phases, where
(19)$${{\dot \eta }_j}(t) = \frac{1}{\hbar }\left\langle {{\varphi _j}(t)} \right|i\hbar \frac{\partial }{{\partial t}} - H\left| {{\varphi _j}(t)} \right\rangle .$$
This yields
(20)$${{\dot \eta }_ + }(t) ={-} {{\dot \eta }_ - }(t) = \frac{{\dot \theta \cos\beta }}{{2\sin \theta \sin \beta }}.$$
To realize population inversion along eigenstate $\left | {{\varphi _ + }(t)} \right \rangle$, $\theta (t)$ can be chosen as
(21)$$\theta (t) = 3\pi {\left( {\frac{t}{{6T}} + \frac{1}{2}} \right)^{2}} - 2\pi {\left( {\frac{t}{{6T}} + \frac{1}{2}} \right)^{3}},$$
where $t \in [ - 3T,3T]$. Here, we use the simple Fourier series type of Ansatz,
(22)$${\eta _ + }(t) ={-} \theta - n\sin [2\theta ],$$
where $n$ is a freely chosen parameter. Using Eq. (20), we obtain
(23)$$\beta (t) ={-} {\rm{arccot}}[2(1 + 2n \cdot \cos 2\theta )\sin\theta ].$$
Once $\theta (t)$ and $\beta (t)$ are determined, the Rabi frequency $\Omega$ and detuning $\Delta$ leading to rapid population inversion are found from Eq. (16). When an error representing a weak disturbance to the system occurs, the Hamiltonian becomes $H_\delta =H+\delta V$, where $\delta$ is an unknown time-independent parameter quantifying the error in the description of the model and $V$ is the error Hamiltonian. For example, $V = \Omega {\sigma _x}/2$ means that the error affects only the Rabi frequency $\Omega$, which is called the Rabi frequency error. In contrast, $V = H$ means that the error affects the detuning $\Delta$ and the Rabi frequency $\Omega$ of the Hamiltonian simultaneously, which is called the systematic error. These errors decrease the accuracy and efficiency of quantum state control.
Therefore, we should design different robust schemes against various errors. To this end, we define the control error sensitivity as
(24)$${q_s} ={-} \frac{{{\partial ^{2}}P({t_f})}}{2{\partial {\delta ^{2}}}}{|_{\delta = 0}},$$
where $P({t_f})$ is the transition probability at the final moment. When ${q_s} = 0$ is satisfied, the error sensitivity is zero, suggesting that the optimal LRI shortcuts are robust against such types of errors. The Rabi frequencies and detunings for the optimal LRI against the Rabi frequency error and the systematic error are plotted in Fig. 5(a), and the corresponding population of state, $\left |\Phi _+\right \rangle$, is shown in Fig. 5(b).
Fig. 5. Time dependence of parameters $\Delta$, $\Omega$, and population $P(t)$ of state $\left | {{\Phi _ + }} \right \rangle$ of LRI method. (a) n = $-$0.5, $\Delta$ (red, dashed line) and $\Omega$ (blue, solid line); n = 0.125, $\Delta$ (yellow, dashed line) and $\Omega$ (purple, solid line), (b) Population $P(t)$ (red, solid line). Note that the time dependence of the population is the same for all n, because n affects only the phases.
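The inverse-engineering steps of this subsection can be condensed into the following sketch (our illustration, assuming NumPy/SciPy, $\hbar = 1$, $T = 1$, and the free parameter $n$ of Eq. (22)). It evaluates $\theta (t)$ and $\beta (t)$ from Eqs. (21)-(23), obtains $\Omega (t)$ and $\Delta (t)$ by inverting Eq. (16) (with derivatives taken by finite differences), and propagates the resulting Hamiltonian to confirm population inversion; as noted in the caption of Fig. 5, the final population does not depend on $n$.

```python
# Sketch only: hbar = 1, T = 1 in arbitrary units.
import numpy as np
from scipy.linalg import expm

T, n = 1.0, -0.5                                   # n = -0.5 is the choice optimized against the Rabi-frequency error

def theta(t):
    s = t/(6*T) + 0.5
    return 3*np.pi*s**2 - 2*np.pi*s**3             # Eq. (21)

def beta(t):
    x = 2*(1 + 2*n*np.cos(2*theta(t)))*np.sin(theta(t))
    return -np.arctan2(1.0, x)                     # -arccot(x), Eq. (23)

def controls(t, h=1e-6):
    dtheta = (theta(t+h) - theta(t-h))/(2*h)
    dbeta  = (beta(t+h)  - beta(t-h)) /(2*h)
    b, th  = beta(t), theta(t)
    Omega  = -dtheta/np.sin(b)                     # invert Eq. (16)
    Delta  = -Omega*np.cos(b)/np.tan(th) - dbeta
    return Delta, Omega

ts = np.linspace(-3*T, 3*T, 6001)
dt = ts[1] - ts[0]
psi = np.array([1.0 + 0j, 0.0])                    # |C>
for t in ts[:-1]:
    Delta, Omega = controls(t + dt/2)              # midpoints avoid the theta = 0, pi endpoints
    H = 0.5*np.array([[-Delta, Omega], [Omega, Delta]])
    psi = expm(-1j*H*dt) @ psi

print("final population of |Phi+>:", abs(psi[1])**2)   # close to 1
```

Setting n = 0.125 instead reproduces the pulse pair optimized against the systematic error; the final population is unchanged, only the intermediate controls differ.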
3.5 CAP
Finally, we explicitly explain how to use a CAP to produce a specific coherent change in the quantum system. For this purpose, we take a sequence of pulses with the same shape, width, and detuning, but different phases. For the Hamiltonian $H(t)$, the propagator $U$ can be expressed in terms of the Cayley–Klein parameters $a$ and $b$,
(25)$$\begin{aligned}U = \left( {\begin{array}{*{20}{c}} a & b\\ { - {b^{*}}} & {{a^{*}}} \end{array}} \right), \end{aligned}$$
where ${\left | a \right |^{2}} + {\left | b \right |^{2}} = 1$. In particular, the transition probability from the initial state $\left |C\right \rangle$ to the target state $\left |\Phi _+\right \rangle$ is $P=|U_{21}|^{2}=|b|^{2}$, and vice versa. The probability remaining in the initial state is $|a|^{2}$, which should be very small at the end of the transfer process. When a constant phase $\phi$ is added to the driving field $\Omega$ of Eq. (3) while the detuning remains unchanged, the Hamiltonian $H(t)$ changes to
(26)$$\begin{aligned}H(\phi ) = \frac{1}{2}\left( {\begin{array}{*{20}{c}} { - \Delta } & {\Omega {e^{ - i\phi }}}\\ {\Omega {e^{i\phi }}} & \Delta \end{array}} \right). \end{aligned}$$
Thus, the propagator $U$ is
(27)$$\begin{aligned} U(\phi ) = \left( {\begin{array}{*{20}{c}} a & {b{e^{ - i\phi }}}\\ { - {b^{*}}{e^{i\phi }}} & {{a^{*}}} \end{array}} \right).\end{aligned}$$
For a composite sequence of $N$ identical pulses, the total propagator is expressed as
(28)$${U^{(N)}} = U({\phi _N})U({\phi _{N - 1}}) \cdots U({\phi _2})U({\phi _1}),$$
where the phases ${\phi _k}$ are arbitrary control parameters. For simplicity, the first phase is set to zero because the global phase can be removed. To further reduce the number of constraint equations, symmetric phase sequences are applied, i.e., ${\phi _{N + 1 - k}} = {\phi _k}$, where $N$ is the total number of pulses. If the time dependence of the Hamiltonian $H$ satisfies the constraints
(29)$$\Delta (t) ={-} \Delta ( - t),\;\;\;\;{\rm{ }}\Omega (t) = \Omega ( - t),$$
then the parameter $a$ of the propagator $U$ is real, i.e., $a \in R$. For a three-pulse sequence, only the phase of the second pulse needs to be adjusted. Here, we choose the sequence of phases as $(0,{\phi _2},0)$. From Eqs. (27) and (28), we obtain
(30)$$U_{11}^{(3)} = {a^{3}} - {a\left| b \right|^{2}}(1+2\cos{\phi _2}).$$
To realize a high-fidelity population transfer, the phases should be chosen so as to minimize $U_{11}^{(3)}$. To this end, we need only choose ${\phi _2} = 2\pi /3$, which makes the second term on the right-hand side of Eq. (30) vanish and leaves $U_{11}^{(3)}={a^{3}}$, which is of third order in the small quantity $a$. Thus, even if a single pulse does not achieve complete population transfer, i.e., ${U_{11}} \ne 0$, this deviation is reduced exponentially by the composite technique, approaching perfect transfer. For a five-pulse sequence with phases $(0,{\phi _2},{\phi _3},{\phi _2},0)$, we can similarly obtain
(31)$$\begin{aligned} \begin{array}{c} U_{11}^{(5)} = {a^{5}} + a{\left| b \right|^{4}}[2\cos({\phi _2} - {\phi _3}) + 2\cos(2{\phi _2} - {\phi _3}) + 1]\\ - 2{a^{3}}{\left| b \right|^{2}}[2\cos{\phi _2} + \cos ({\phi _2} - {\phi _3}) + \cos{\phi _3} + 1]. \end{array} \end{aligned}$$
Again, we choose ${\phi _2} = 4\pi /5,{\phi _3} = 2\pi /5$; then only the highest-order term in $a$ survives, i.e., $U_{11}^{(5)} = {a^{5}}$. Similarly, we can obtain the phases for a seven-pulse sequence: ${\phi _2} = 6\pi /7,{\phi _3} = 4\pi /7,$ and ${\phi _4} = 8\pi /7$. In fact, for any CP sequence there is a general formula for the selection of phases [37]. It is worth noting that the choice of phases is not unique, because the constraint equations (30) and (31) are nonlinear functions of the phases. For any set of pulses, if $\{ {\phi _k}\} _2^{N - 1}$ is a solution, then $\{ 2\pi - {\phi _k}\} _2^{N - 1}$ is also a solution. In addition, for a CAP with many pulses there are often more than two independent solutions beyond $\{ {\phi _k}\} _2^{N - 1}$ and $\{ 2\pi - {\phi _k}\} _2^{N - 1}$; e.g., $(0,2\pi /5,6\pi /5,2\pi /5,0)$ is also a solution for a five-pulse sequence. For an $N$-pulse sequence, the transition probability can be written as
(32)$$P = 1 - {a^{2N}}.$$
If $N$ is sufficiently large, the fidelity of the prepared quantum state approaches $1$ arbitrarily closely. In practice, $N=5$ or $7$ may be sufficient for the fidelity requirements of quantum information processing.
Here, to realize population inversion, an odd number of pulses is required. For simplicity, we take a five-pulse sequence as an example. For a single pulse, the Rabi frequency and the detuning are a Gaussian pulse and a linear chirp, respectively, i.e.,
(33)$$\Delta = \frac{2}{{T}}\frac{t}{T},\;\;\;\;{\rm{ }}\Omega = \frac{{\sqrt \pi }}{T}{e^{ - {{(t/T)}^{2}}}},$$
which are shown in Fig. 6(a). They satisfy the constraint conditions in Eq. (29), with $t \in [ - 3T,3T]$. As depicted in Fig. 6(b), a single pulse cannot achieve complete population transfer; its transition probability is 92.32%. The transition probability increases with the number of pulses in the sequence, and the five-pulse sequence achieves essentially complete population transfer with a transition probability of 99.99%.
Fig. 6. Time dependence of parameters $\Delta$, $\Omega$, and population $P(t)$ of state $\left | {{\Phi _ + }} \right \rangle$ of CAP method. In the protocol, we employ a five-pulse sequence to construct the CAP, in which all pulses have the same Rabi frequencies and detunings. $\Delta$ (red, dashed line) and $\Omega$ (blue, solid line) of a single pulse are plotted in (a). (b) Population $P(t)$ of the CAP with the five-pulse sequence (red, solid line), where the duration of each pulse is 6T.
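The five-pulse CAP itself is straightforward to reproduce numerically. The sketch below (our illustration, assuming NumPy/SciPy, $\hbar = 1$, $T = 1$) computes the single-pulse propagator for the chirped Gaussian of Eq. (33), applies the phase shifts of Eq. (27), composes the sequence according to Eq. (28) with phases $(0,4\pi /5,2\pi /5,4\pi /5,0)$, and compares the single-pulse and composite transition probabilities, which should be close to the 92.32% and 99.99% quoted above.

```python
# Sketch only: hbar = 1, T = 1 in arbitrary units.
import numpy as np
from scipy.linalg import expm

T = 1.0
ts = np.linspace(-3*T, 3*T, 6001)
dt = ts[1] - ts[0]

def H(t):
    Delta = 2*t/T**2                               # Eq. (33)
    Omega = np.sqrt(np.pi)/T * np.exp(-(t/T)**2)
    return 0.5*np.array([[-Delta, Omega], [Omega, Delta]], dtype=complex)

U = np.eye(2, dtype=complex)                       # single-pulse propagator
for t in ts[:-1]:
    U = expm(-1j*H(t + dt/2)*dt) @ U

def U_phase(phi):                                  # Eq. (27): shift the pulse phase by phi
    P = np.diag([1.0, np.exp(1j*phi)])
    return P @ U @ P.conj().T

phases = [0, 4*np.pi/5, 2*np.pi/5, 4*np.pi/5, 0]   # five-pulse CAP
U5 = np.eye(2, dtype=complex)
for phi in phases:                                 # Eq. (28): later pulses multiply from the left
    U5 = U_phase(phi) @ U5

print("single-pulse P:", abs(U[1, 0])**2)          # ~0.92 (cf. the 92.32% quoted above)
print("five-pulse  P:", abs(U5[1, 0])**2)          # ~0.9999
```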
4. Robustness against different experimental errors
Here, we discuss how the effectiveness of population inversion is affected by the Rabi frequency error and the systematic error for the quantum control protocols discussed above.
First, the error originating from the Rabi frequency is considered. For RE, we can write the relationship between the transition probability and the error coefficient analytically as $P({t_f}) = \frac {1}{2}\left [ {1 + \cos (\delta \pi )} \right ]$. For a CD, the errors of both the original Hamiltonian $H$ and the additional Hamiltonian ${H_c}$ should be considered, i.e., $(\Omega + i{\Omega _c} ) \to (1 + \delta )(\Omega + i{\Omega _c})$. For an LRI, the choice of parameters can be further optimized. When $n=-0.5$, ${q_s} = 0$ is satisfied; thus, the Rabi frequency and detuning for the optimal LRI scheme with respect to the Rabi frequency error are obtained from Eq. (16) (see Fig. 5). In Fig. 7, we compare the accuracies of the five methods by plotting the transition probability as a function of the error parameter $\delta$. We find that in the vicinity of $\delta =0$, the RE, CD, LRI and CAP techniques behave very well, with the transition probability being 1. However, as $|\delta |$ increases, their accuracies are affected. The CAP method outperforms the other techniques, followed by the LRI and then the CD method. The RE method is sensitive to variations in the Rabi frequency. The AP method cannot achieve complete population transfer when $\delta =0$, but it improves as $\delta$ increases. The CAP method maintains the probability at ultrahigh efficiencies above $99.9\%$ over broad ranges of $\delta$, showing its ultrahigh robustness against the Rabi frequency error.
Fig. 7. Transition probability $P({t_f})$ versus Rabi frequency error parameter $\delta$ (dimensionless parameter): RE (black, dashed line), CD (yellow, solid line), LRI (blue, dashed line), AP (green, dashed-dotted line), and CAP (red, solid line).
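The robustness curves of Fig. 7 can be sampled with a short scan such as the one below (our illustration, assuming NumPy/SciPy, $\hbar = 1$, $T = 1$). It rescales the Rabi frequency as $\Omega \to (1+\delta )\Omega$ inside the single-pulse propagator, composes the five-pulse CAP, and prints the result next to the analytic RE value $P({t_f}) = \frac {1}{2}[1 + \cos (\delta \pi )]$; the CAP values should remain much flatter in $\delta$ than the RE ones.

```python
# Sketch only: hbar = 1, T = 1 in arbitrary units.
import numpy as np
from scipy.linalg import expm

T = 1.0
ts = np.linspace(-3*T, 3*T, 3001)
dt = ts[1] - ts[0]
phases = [0, 4*np.pi/5, 2*np.pi/5, 4*np.pi/5, 0]

def cap_probability(delta):
    U = np.eye(2, dtype=complex)
    for t in ts[:-1]:
        tm = t + dt/2
        Delta = 2*tm/T**2
        Omega = (1 + delta)*np.sqrt(np.pi)/T*np.exp(-(tm/T)**2)   # scaled Rabi frequency
        H = 0.5*np.array([[-Delta, Omega], [Omega, Delta]], dtype=complex)
        U = expm(-1j*H*dt) @ U
    U5 = np.eye(2, dtype=complex)
    for phi in phases:
        P = np.diag([1.0, np.exp(1j*phi)])
        U5 = P @ U @ P.conj().T @ U5
    return abs(U5[1, 0])**2

for delta in (-0.3, -0.1, 0.0, 0.1, 0.3):
    print(f"delta={delta:+.1f}  CAP={cap_probability(delta):.6f}  "
          f"RE={(1 + np.cos(delta*np.pi))/2:.6f}")
```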
In the following, the effect of the systematic error on quantum state control is considered. For RE, the systematic error coincides with the Rabi frequency error, since $\Delta=0$. For the CD method, the error acts on both the original Hamiltonian $H$ and the additional Hamiltonian ${H_c}$, i.e., $(H + {H_c}) \to (1 + \delta )(H + {H_c})$. The optimal LRI scheme with respect to the systematic error is obtained when $n=0.125$, and the Rabi frequency and detuning are again determined using Eq. (16) (see Fig. 5). In Fig. 8, we observe that the effect of the systematic error is similar to that of the Rabi frequency error. The LRI and CD methods show robustness against the systematic error. As before, the CAP technique outperforms its competitors and features a broad range of high efficiencies.
Fig. 8. Transition probability $P({t_f})$ versus systematic error parameter $\delta$ (dimensionless parameter): RE (black, dashed line), CD (yellow, solid line), LRI (blue, dashed line), AP (green, dashed-dotted line), and CAP (red, solid line).
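For the systematic error the same scan applies with a single change: the whole single-pulse Hamiltonian is rescaled, $H \to (1+\delta )H$, before the CAP sequence is composed. A minimal sketch (our illustration, same assumptions as the previous snippet) is given below.

```python
# Sketch only: hbar = 1, T = 1 in arbitrary units; only the Hamiltonian scaling differs
# from the Rabi-frequency-error scan.
import numpy as np
from scipy.linalg import expm

T = 1.0
ts = np.linspace(-3*T, 3*T, 3001)
dt = ts[1] - ts[0]
phases = [0, 4*np.pi/5, 2*np.pi/5, 4*np.pi/5, 0]

def cap_probability_systematic(delta):
    U = np.eye(2, dtype=complex)
    for t in ts[:-1]:
        tm = t + dt/2
        Delta = 2*tm/T**2
        Omega = np.sqrt(np.pi)/T*np.exp(-(tm/T)**2)
        H = (1 + delta)*0.5*np.array([[-Delta, Omega], [Omega, Delta]], dtype=complex)
        U = expm(-1j*H*dt) @ U
    U5 = np.eye(2, dtype=complex)
    for phi in phases:
        P = np.diag([1.0, np.exp(1j*phi)])
        U5 = P @ U @ P.conj().T @ U5
    return abs(U5[1, 0])**2

for delta in (-0.3, -0.1, 0.0, 0.1, 0.3):
    print(f"delta={delta:+.1f}  CAP(systematic)={cap_probability_systematic(delta):.6f}")
```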
In summary, we show that CPs can be used to achieve complete and robust quantum state engineering in three-level quantum systems. A CAP combines the robustness of an AP with the high fidelity of RE by using a sequence of pulses to produce the desired change in the quantum state. For coherent control, RE and the CAP do not depend on the specific shapes of the pulses; however, RE is sensitive to the pulse area and the detuning. Unlike the CD and LRI methods, a CAP is constructed by controlling the phases of the pulse sequence, and thus has more freedom in the design of the Hamiltonian parameters. By comparing the sensitivities of the CAP, RE, AP, CD, and LRI techniques to the Rabi frequency and systematic errors, we find that the LRI method requires different choices of the Rabi frequency and detuning to be robust against the different experimental errors, whereas the CAP shows ultrahigh robustness against both errors, with a broad range of efficiencies above $99.9\%$, without any change of the Hamiltonian parameters. All these features make the CAP a promising alternative for coherent control of three-level quantum systems.
Natural Science Foundation of Anhui Province (2008085QA43); National Natural Science Foundation of China (12004006, 12075001, 12175001).
Data produced by numerical simulations in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
1. P. Král, I. Thanopulos, and M. Shapiro, "Colloquium: Coherently controlled adiabatic passage," Rev. Mod. Phys. 79(1), 53–77 (2007). [CrossRef]
2. H. J. Kimble, "The quantum internet," Nature 453(7198), 1023–1030 (2008). [CrossRef]
3. K. Bergmann, H. Theuer, and B. W. Shore, "Coherent population transfer among quantum states of atoms and molecules," Rev. Mod. Phys. 70(3), 1003–1025 (1998). [CrossRef]
4. N. V. Vitanov, T. Halfmann, B. W. Shore, and K. Bergmann, "Laser-induced population transfer by adiabatic passage techniques," Annu. Rev. Phys. Chem. 52(1), 763–809 (2001). [CrossRef]
5. M. Saffman, T. G. Walker, and K. Mølmer, "Quantum information with Rydberg atoms," Rev. Mod. Phys. 82(3), 2313–2363 (2010). [CrossRef]
6. G. R. Feng, G. F. Xu, and G. L. Long, "Experimental realization of nonadiabatic holonomic quantum computation," Phys. Rev. Lett. 110(19), 190501 (2013). [CrossRef]
7. A. Ruschhaupt, X. Chen, D. Alonso, and J. G. Muga, "Optimally robust shortcuts to population inversion in two-level quantum systems," New J. Phys. 14(9), 093040 (2012). [CrossRef]
8. Y.-H. Chen, Y. Xia, Q.-Q. Chen, and J. Song, "Efficient shortcuts to adiabatic passage for fast population transfer in multiparticle systems," Phys. Rev. A 89(3), 033856 (2014). [CrossRef]
9. X.-K. Song, H. Zhang, Q. Ai, J. Qiu, and F.-G. Deng, "Shortcuts to adiabatic holonomic quantum computation in decoherence-free subspace with transitionless quantum driving algorithm," New J. Phys. 18(2), 023001 (2016). [CrossRef]
10. J.-L. Wu, Y. Wang, J.-X. Han, C. Wang, S.-L. Su, Y. Xia, Y. Jiang, and J. Song, "Two-path interference for enantiomer-selective state transfer of chiral molecules," Phys. Rev. Applied 13(4), 044021 (2020). [CrossRef]
11. L. Q. Qiu, H. Li, Z. K. Han, W. Zheng, X. P. Yang, Y. Q. Dong, S. Q. Song, D. Lan, X. S. Tan, and Y. Yu, "Experimental realization of noncyclic geometric gates with shortcut to adiabaticity in a superconducting circuit," Appl. Phys. Lett. 118(25), 254002 (2021). [CrossRef]
12. Z.-C. He and Z.-Y. Xue, "Robust nonadiabatic holonomic quantum gates on decoherence-protected qubits," Appl. Phys. Lett. 119(10), 104001 (2021). [CrossRef]
13. S. Li and Z.-Y. Xue, "Dynamically corrected nonadiabatic holonomic quantum gates," Phys. Rev. Applied 16(4), 044005 (2021). [CrossRef]
14. E. Zahedinejad, J. Ghosh, and B. C. Sanders, "Designing high-fidelity single-shot three-qubit gates: A machine-learning approach," Phys. Rev. Applied 6(5), 054005 (2016). [CrossRef]
15. Y.-B. Sheng and L. Zhou, "Distributed secure quantum machine learning," Sci. Bull. 62(14), 1025–1029 (2017). [CrossRef]
16. M. Bukov, A. G. R. Day, D. Sels, P. Weinberg, A. Polkovnikov, and P. Mehta, "Reinforcement learning in different phases of quantum control," Phys. Rev. X 8, 031086 (2018). [CrossRef]
17. Z. T. Wang, Y. Ashida, and M. Ueda, "Deep reinforcement learning control of quantum cartpoles," Phys. Rev. Lett. 125(10), 100401 (2020). [CrossRef]
18. X.-S. Xu, H. Zhang, X.-Y. Kong, M. Wang, and G. L. Long, "Frequency-tuning-induced state transfer in optical microcavities," Photon. Research 8(4), 490–496 (2020). [CrossRef]
19. B. T. Torosov and N. V. Vitanov, "Smooth composite pulses for high-fidelity quantum information processing," Phys. Rev. A 83(5), 053420 (2011). [CrossRef]
20. X. Wang, L. S. Bishop, J. P. Kestner, E. Barnes, K. Sun, and S. D. Sarma, "Composite pulses for robust universal control of singlet–triplet qubits," Nature Commun. 3(1), 997 (2012). [CrossRef]
21. G. T. Genov, D. Schraft, N. V. Vitanov, and T. Halfmann, "Arbitrarily accurate pulse sequences for robust dynamical decoupling," Phys. Rev. Lett. 118(13), 133202 (2017). [CrossRef]
22. A. A. Rangelov, N. V. Vitanov, L. P. Yatsenko, B. W. Shore, T. Halfmann, and K. Bergmann, "Stark-shift-chirped rapid-adiabatic-passage technique among three states," Phys. Rev. A 72(5), 053403 (2005). [CrossRef]
23. E. A. Shapiro, V. Milner, C. Menzel-Jones, and M. Shapiro, "Piecewise adiabatic passage with a series of femtosecond pulses," Phys. Rev. Lett. 99(3), 033002 (2007). [CrossRef]
24. N. V. Vitanov, A. A. Rangelov, B. W. Shore, and K. Bergmann, "Stimulated Raman adiabatic passage in physics, chemistry, and beyond," Rev. Mod. Phys. 89(1), 015006 (2017). [CrossRef]
25. D. Y. Li, W. Zheng, J. Chu, X. P. Yang, S. Q. Song, Z. K. Han, Y. Q. Dong, Z. M. Wang, X. M. Yu, D. Lan, J. Zhao, S. X. Li, X. S. Tan, and Y. Yu, "Coherent state transfer between superconducting qubits via stimulated Raman adiabatic passage," Appl. Phys. Lett. 118(10), 104003 (2021). [CrossRef]
26. M. Demirplak and S. A. Rice, "Adiabatic population transfer with control fields," J. Phys. Chem. A 107(46), 9937–9945 (2003). [CrossRef]
27. M. V. Berry, "Transitionless quantum driving," J. Phys. A 42(36), 365303 (2009). [CrossRef]
28. X. Chen, I. Lizuain, A. Ruschhaupt, D. Guéry-Odelin, and J. G. Muga, "Shortcut to adiabatic passage in two- and three-level atoms," Phys. Rev. Lett. 105(12), 123003 (2010). [CrossRef]
29. X.-K. Song, Q. Ai, J. Qiu, and F.-G. Deng, "Physically feasible three-level transitionless quantum driving with multiple Schrödinger dynamics," Phys. Rev. A 93(5), 052324 (2016). [CrossRef]
30. H. Zhang, X.-K. Song, Q. Ai, H. Wang, G.-J. Yang, and F.-G. Deng, "Fast and robust quantum control for multimode interactions using shortcuts to adiabaticity," Opt. Express 27(5), 7384–7392 (2019). [CrossRef]
31. Y.-H. Chen, W. Qin, X. Wang, A. Miranowicz, and F. Nori, "Shortcuts to adiabaticity for the quantum Rabi model: Efficient generation of giant entangled cat states via parametric amplification," Phys. Rev. Lett. 126(2), 023602 (2021). [CrossRef]
32. H. R. Lewis and W. B. Riesenfeld, "An exact quantum theory of the time-dependent harmonic oscillator and of a charged particle in a time-dependent electromagnetic field," J. Math. Phys. 10(8), 1458–1473 (1969). [CrossRef]
33. X. Chen, A. Ruschhaupt, S. Schmidt, A. del Campo, D. Guéry-Odelin, and J. G. Muga, "Fast optimal frictionless atom cooling in harmonic traps: Shortcut to adiabaticity," Phys. Rev. Lett. 104(6), 063002 (2010). [CrossRef]
34. X.-T. Yu, Q. Zhang, Y. Ban, and X. Chen, "Fast and robust control of two interacting spins," Phys. Rev. A 97(6), 062317 (2018). [CrossRef]
35. X.-K. Song, F. Meng, B.-J. Liu, D. Wang, L. Ye, and M.-H. Yung, "Robust stimulated Raman shortcut-to-adiabatic passage with invariant-based optimal control," Opt. Express 29(6), 7998–8014 (2021). [CrossRef]
36. Z. K. Han, Y. Q. Dong, X. P. Yang, S. Q. Song, L. Q. Qiu, W. Zheng, J. W. Xu, T. Q. Huang, Z. M. Wang, D. Lan, X. S. Tan, and Y. Yu, "Realization of invariant-based shortcuts to population inversion with a superconducting circuit," Appl. Phys. Lett. 118(22), 224003 (2021). [CrossRef]
37. B. T. Torosov, S. Guérin, and N. V. Vitanov, "High-fidelity adiabatic passage by composite sequences of chirped pulses," Phys. Rev. Lett. 106(23), 233001 (2011). [CrossRef]
38. G. T. Genov, D. Schraft, T. Halfmann, and N. V. Vitanov, "Correction of arbitrary field errors in population inversion of quantum systems by universal composite pulses," Phys. Rev. Lett. 113(4), 043001 (2014). [CrossRef]
39. B. T. Torosov and N. V. Vitanov, "Arbitrarily accurate twin composite π-pulse sequences," Phys. Rev. A 97(4), 043408 (2018). [CrossRef]
40. Z.-C. Shi, H.-N. Wu, L.-T. Shen, J. Song, Y. Xia, X. X. Yi, and S.-B. Zheng, "Robust single-qubit gates by composite pulses in three-level systems," Phys. Rev. A 103(5), 052612 (2021). [CrossRef]
41. Y.-X. Du, Z.-T. Liang, Y.-C. Li, X.-X. Yue, Q.-X. Lv, W. Huang, X. Chen, H. Yan, and S.-L. Zhu, "Experimental realization of stimulated Raman shortcut-to-adiabatic passage with cold atoms," Nat. Commun. 7(1), 12479 (2016). [CrossRef]
42. D. Schraft, T. Halfmann, G. T. Genov, and N. V. Vitanov, "Experimental demonstration of composite adiabatic passage," Phys. Rev. A 88(6), 063406 (2013). [CrossRef]
43. A. Bruns, G. T. Genov, M. Hain, N. V. Vitanov, and T. Halfmann, "Experimental demonstration of composite stimulated Raman adiabatic passage," Phys. Rev. A 98(5), 053413 (2018). [CrossRef]
44. B. T. Torosov, B. W. Shore, and N. V. Vitanov, "Coherent control techniques for two-state quantum systems: A comparative study," Phys. Rev. A 103(3), 033110 (2021). [CrossRef]
45. K. Eckert, M. Lewenstein, R. Corbalan, G. Birkl, W. Ertmer, and J. Mompart, "Three-level atom optics via the tunneling interaction," Phys. Rev. A 70(2), 023606 (2004). [CrossRef]
46. S. Martínez-Garaot, E. Torrontegui, X. Chen, and J. G. Muga, "Shortcuts to adiabaticity in three-level systems using Lie transforms," Phys. Rev. A 89(5), 053408 (2014). [CrossRef]
47. T. Opatrný and K. Mølmer, "Partial suppression of nonadiabatic transitions," New J. Phys. 16(1), 015025 (2014). [CrossRef]
48. A. A. Rangelov and N. V. Vitanov, "Achromatic multiple beam splitting by adiabatic passage in optical waveguides," Phys. Rev. A 85(5), 055803 (2012). [CrossRef]
49. K.-H. Chien, C.-S. Yeih, and S.-Y. Tseng, "Mode conversion/splitting in multimode waveguides based on invariant engineering," J. Lightwave Technol. 31(21), 3387–3394 (2013). [CrossRef]
50. M. Ornigotti, G. D. Valle, T. T. Fernandez, A. Coppa, V. Foglietti, P. Laporta, and S. Longhi, "Visualization of two-photon Rabi oscillations in evanescently coupled optical waveguides," J. Phys. B: At., Mol. Opt. Phys. 41(8), 085402 (2008). [CrossRef]
51. G. S. Vasilev and N. V. Vitanov, "Coherent excitation of a two-state system by a linearly chirped Gaussian pulse," J. Chem. Phys. 123(17), 174106 (2005). [CrossRef]
| CommonCrawl |
Tori in the Cremona groups
arxiv.org. math. Cornell University, 2012. No. arXiv:1207.5205v3.
Popov V.
We classify up to conjugacy the subgroups of certain types in the full, in the affine, and in the special affine Cremona groups. We prove that the normalizers of these subgroups are algebraic. As an application, we obtain new results in the Linearization Problem generalizing to disconnected groups Bialynicki-Birula's results of 1966-67. We prove "fusion theorems" for n-dimensional tori in the affine and in the special affine Cremona groups of rank n. In the final section we introduce and discuss the notions of Jordan decomposition and torsion prime numbers for the Cremona groups.
Keywords: Cremona group, torus, conjugacy, linearizability
Popov V. Izvestiya. Mathematics. 2013. Vol. 77. No. 4. P. 742-771.
We classify up to conjugacy the subgroups of certain types in the full, affine, and special affine Cremona groups. We prove that the normalizers of these subgroups are algebraic. As an application, we obtain new results in the linearization problem by generalizing Białynicki-Birula's results of 1966-67 to disconnected groups. We prove fusion theorems for n-dimensional tori in the affine and in the special affine Cremona groups of rank n and introduce and discuss the notions of Jordan decomposition and torsion prime numbers for the Cremona groups.
Problems for the problem session
Popov V. Electronic preprint server. CIRM. Centro Internazionale per la Ricerca Matematica, 2012. No number.
Some problems on the structure of the Cremona groups formulated (with comments) by the author at the International conference Birational and Affine Geometry, Levico Terme (Trento), 29.10.12--03.11.12
Added: Jan 9, 2013
Rationality of the quotient of ℙ2 by finite group of automorphisms over arbitrary field of characteristic zero
Andrey S. Trepalin. Central European Journal of Mathematics. 2014. Vol. 12. No. 2. P. 229-239.
Let $\bbk$ be a field of characteristic zero and $G$ be a finite group of automorphisms of projective plane over $\bbk$. Castelnuovo's criterion implies that the quotient of projective plane by $G$ is rational if the field $\bbk$ is algebraically closed. In this paper we prove that $\mathbb{P}^2_{\bbk} / G$ is rational for an arbitrary field $\bbk$ of characteristic zero.
On stable conjugacy of finite subgroups of the plane Cremona group, II
Yuri Prokhorov. arxiv.org. math. Cornell University, 2013
We prove that, except for a few cases, stable linearizability of finite subgroups of the plane Cremona group implies linearizability.
Jordan groups and automorphism groups of algebraic varieties
Vladimir L. Popov. arxiv.org. math. Cornell University, 2013. No. 1307.5522.
This is an expanded version of my talk at the workshop ``Groups of Automorphisms in Birational and Affine Geometry'', October 29–November 3, 2012, Levico Terme, Italy. The first section is focused on Jordan groups in abstract setting, the second on that in the settings of automorphisms groups and groups of birational self-maps of algebraic varieties. The appendix is an expanded version of my notes on open problems posted on the site of this workshop. It contains formulations of some open problems and the relevant comments.
Cremona Groups and the Icosahedron
Cheltsov I., Shramov K. CRC Press, 2015.
Cremona Groups and the Icosahedron focuses on the Cremona groups of ranks 2 and 3 and describes the beautiful appearances of the icosahedral group A5 in them. The book surveys known facts about surfaces with an action of A5, explores A5-equivariant geometry of the quintic del Pezzo threefold V5, and gives a proof of its A5-birational rigidity.
The authors explicitly describe many interesting A5-invariant subvarieties of V5, including A5-orbits, low-degree curves, invariant anticanonical K3 surfaces, and a mildly singular surface of general type that is a degree five cover of the diagonal Clebsch cubic surface. They also present two birational selfmaps of V5 that commute with A5-action and use them to determine the whole group of A5-birational automorphisms. As a result of this study, they produce three non-conjugate icosahedral subgroups in the Cremona group of rank 3, one of them arising from the threefold V5.
This book presents up-to-date tools for studying birational geometry of higher-dimensional varieties. In particular, it provides readers with a deep understanding of the biregular and birational geometry of V5.
p-elementary subgroups of the Cremona group of rank 3
Prokhorov Y. In bk.: Classification of Algebraic Varieties. Zürich: European Mathematical Society Publishing house, 2010. P. 327-338.
For the subgroups of the Cremona group $\mathrm{Cr}_3(\mathbb C)$ having the form $(\boldsymbol{\mu}_p)^s$, where $p$ is prime, we obtain an upper bound for $s$. Our bound is sharp if $p\ge 17$. | CommonCrawl |
A richly interactive exploratory data analysis and visualization tool using electronic medical records
Chih-Wei Huang1,2,
Richard Lu1,2,
Usman Iqbal1,2,
Shen-Hsien Lin1,2,
Phung Anh (Alex) Nguyen1,2,
Hsuan-Chia Yang2,3,
Chun-Fu Wang4,
Jianping Li4,
Kwan-Liu Ma4,
Yu-Chuan (Jack) Li1,2,5 &
Wen-Shan Jian2,6,7
BMC Medical Informatics and Decision Making volume 15, Article number: 92 (2015)
Electronic medical records (EMRs) contain vast amounts of data that is of great interest to physicians, clinical researchers, and medical policy makers. As the size, complexity, and accessibility of EMRs grow, the ability to extract meaningful information from them has become an increasingly important problem to solve.
We develop a standardized data analysis process to support cohort studies focused on a particular disease. We use an interactive divide-and-conquer approach to classify patients into groups that are relatively uniform internally. It is an iterative process enabling the user to divide the data into homogeneous subsets that can be visually examined, compared, and refined. The final visualization is driven by the transformed data, and user feedback is directed to the corresponding operators, completing the iterative loop. The output is shown in a Sankey diagram-style timeline, a particular kind of flow diagram for showing factors' states and transitions over time.
This paper presented a visually rich, interactive web-based application, which could enable researchers to study any cohorts over time by using EMR data. The resulting visualizations help uncover hidden information in the data, compare differences between patient groups, determine critical factors that influence a particular disease, and help direct further analyses. We introduced and demonstrated this tool by using EMRs of 14,567 Chronic Kidney Disease (CKD) patients.
We developed a visual mining system to support exploratory data analysis of multi-dimensional categorical EMR data. Using CKD as a model disease, a cohort was assembled by automated correlational analysis and human-curated visual evaluation. Visualization methods such as the Sankey diagram can reveal useful knowledge about a particular disease cohort and the trajectories of the disease over time.
Electronic medical records (EMRs) are now widespread and collecting vast amounts of data about patients and metadata about how healthcare is delivered. These small datacenters have the potential to enable a range of health quality improvements that would not be possible with paper-based records [1]. However, the large amounts of data inside EMRs come with one large problem: how to condense the data so that it is easily understandable to a human. The volume, variety, and veracity of clinical data present a real challenge for non-technical users such as physicians and researchers who wish to view the data. Without a way to quickly summarize the data in a human-understandable way, the insights contained within EMRs will remain locked inside.
Many EMRs are also not flexible enough to accommodate the information needs of different types of users. For instance, clinicians often try to combine data from different information systems in order to piece together an accurate context for the medical problems of the patient who is in the room with them. Clinical researchers, however, may be primarily interested in finding population level outcomes or differences between cohorts. Administrators use EMR data to inform healthcare policy, while patients who use EMRs may be interested in comparing their health to their peers or tracking their own health over time [2]. Unfortunately, little support exists in current EMR systems for any of these common use cases, which hampers informed decision-making.
Visual analytics, also known as data visualization, holds the potential to address the information overload that is becoming more and more prevalent. Visual analytics is the science of analytical reasoning facilitated by advanced interactive visual interfaces [3, 4]. It can play a fundamental role in all IT-enabled healthcare transformation but particularly in healthcare delivery process improvement. Interactive visual approaches are valuable as they move beyond traditional static reports and indicators to mapping, exploration, discovery, and sense-making of complex data. Visual analytics techniques combine concepts from data mining, machine learning, human-computer interaction, and human cognition. In healthcare, data visualization has already been used in the areas of patient education, symptom evolution, patient cohort analysis, EHR data and design, and patient care plans. This enables decision makers to obtain ideas for care process data, see patterns, spot trends, and identify outliers, all of which aid user comprehension, memory, and decision making [5].
Our objective is to create a visually interactive exploratory data analysis tool that can be used to graphically show disease-disease associations over time. That is, the tool presents how a cohort of patients with one chronic disease may go on to develop other diseases over time. The study used chronic kidney disease (CKD) as the prototype chronic disease, but users can easily adapt the software tool to visualize a different disease. In previous studies, we verified that such a system can significantly raise the efficiency and performance of practicing physicians and clinical researchers who wish to use EMRs for their research projects [6, 7]. Expected cohort trajectories are of great interest in clinical research. Our main task, then, is to identify underlying chronic diseases and explore what happens to patients over time after diagnosis and what comorbidities they develop.
The system is designed based on the data transformations that are required to perform longitudinal cohort studies. The transformed data are connected by a sequence of adjustable operators. The output results are shown in a Sankey diagram–style timeline, which is a particular kind of flow diagram for showing factors' states and transitions over time. The visualization is driven by the transformed data, and user feedback is directed back to the corresponding operators, completing the iterative process.
The data transformation steps behind the visual analysis process are illustrated in Fig. 1. The transformation order follows the analysis process from raw patient records to the final visualization. Assume that there are N patients and M unique factors. As the top-most chart shows, the raw sequence of a patient can be treated as a discrete trajectory with non-uniformly distributed records along the time axis. We define the patient trajectories as \( P = \{p_1, \dots, p_n, \dots, p_N\} \) and the set of factors as \( F = \{f_1, \dots, f_m, \dots, f_M\} \). A patient trajectory is an ordered sequence of \( K_n \) records: \( p_n = (r_{n,1}, \dots, r_{n,k}, \dots, r_{n,K_n}) \), where each record consists of a factor set and a timestamp: \( r_{n,k} = (F_{n,k}, t_{n,k}) \), \( F_{n,k} \subset F \). Note that the timestamp of each record is relative and not necessarily the actual record date. In the cohort study, we are interested in the temporal and population-level patterns over the course of CKD. Therefore, it makes more sense to align each patient trajectory by the number of days before and after the patient's CKD diagnosis.
Data transformation processes. The data transformation steps behind the visual analysis process followed the analysis process from the raw patient records to the final visualization
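To make the notation above concrete, the sketch below shows one minimal way such trajectories could be represented in Python (the system's back end is written in Python, but this particular schema and its field names are illustrative assumptions, not the system's actual data model):

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List

# A single record r_{n,k}: the set of factors observed for a patient at a
# relative timestamp (days before/after the index CKD diagnosis).
@dataclass(frozen=True)
class Record:
    timestamp: float            # t_{n,k}, relative to the index diagnosis
    factors: FrozenSet[str]     # F_{n,k} ⊆ F, e.g. {"HTN", "DM"}

# A patient trajectory p_n is an ordered sequence of records.
@dataclass
class Trajectory:
    patient_id: str
    records: List[Record] = field(default_factory=list)

# Example: a patient seen 400 days before and 30 days after diagnosis.
p = Trajectory("patient-001", [
    Record(-400.0, frozenset({"HTN"})),
    Record(30.0, frozenset({"HTN", "DM"})),
])
```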
When the user specifies the time windows \( T = (t_1, \dots, t_l, \dots, t_L) \), the patient trajectories are partitioned based on their timestamps. Records in the same time window are merged into one:
$$ r'_{n,l} = \left(F'_{n,l}, t_l\right) $$
$$ F'_{n,l} = \bigcup_{i \in I} F_{n,i} $$
$$ I = \left\{ k \mid t_l \le t_{n,k} < t_{l+1} \right\} $$
The end results are patient trajectories regulated in time, \( p'_n = (r'_{n,1}, \dots, r'_{n,l}, \dots, r'_{n,L_n}) \), where the timestamps are regulated by the time windows, and each record's factor set represents all the factors observed on that patient within the time window. When the user requests patient clustering, the patients at each time window are clustered based on a certain similarity measure and become a set of cohorts: \( C_l = \{c_{l,1}, \dots, c_{l,h}, \dots, c_{l,H_l}\} \), where \( C_l \subset P \) represents the set of \( H_l \) cohorts at time window \( t_l \).
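A minimal sketch of this time-window regularization, assuming each trajectory is stored as a list of (timestamp, factor-set) pairs; the function and variable names are illustrative, not the system's actual API:

```python
from typing import Dict, FrozenSet, List, Tuple

RawRecord = Tuple[float, FrozenSet[str]]   # (t_{n,k}, F_{n,k})

def regularize(trajectory: List[RawRecord], windows: List[float]) -> Dict[float, FrozenSet[str]]:
    """Merge all records falling in [t_l, t_{l+1}) into one record per window,
    i.e. F'_{n,l} is the union of F_{n,i} for t_l <= t_{n,i} < t_{l+1}."""
    merged: Dict[float, set] = {}
    for t, factors in trajectory:
        for l in range(len(windows) - 1):
            if windows[l] <= t < windows[l + 1]:
                merged.setdefault(windows[l], set()).update(factors)
                break
    return {t_l: frozenset(f) for t_l, f in merged.items()}

# Example: 2-year / 1-year / 2-year windows around the first CKD diagnosis (day 0).
windows = [-730.0, 0.0, 365.0, 1095.0]
traj = [(-400.0, frozenset({"HTN"})), (-20.0, frozenset({"HTN", "DM"})), (200.0, frozenset({"HD"}))]
print(regularize(traj, windows))
# -> {-730.0: {'HTN', 'DM'}, 0.0: {'HD'}}   (set ordering may vary)
```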
We define the cohort trajectory network as \( G = (V, E) \), where each node \( v_{l,h} \in V \), with \( v_{l,h} = c_{l,h} \), represents a cohort at a time window, and each edge \( e_{l,i,j} \in E \), defined by \( v_{l,i} \to v_{l+1,j} \) with \( |c_{l,i} \cap c_{l+1,j}| > 0 \), represents the association between two cohorts at consecutive time windows whose members overlap. The network G is used to drive the visualization at the end of the process.
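The cohort trajectory network can be assembled directly from the per-window cohort memberships. The sketch below is one possible construction, with illustrative data structures (cohorts stored as sets of patient IDs), not the system's actual code:

```python
from typing import Dict, List, Set, Tuple

Cohorts = Dict[str, Set[str]]   # cohort label -> set of patient IDs, for one time window

def build_network(cohorts_per_window: List[Cohorts]):
    """Nodes are (window index, cohort label); an edge links cohorts in
    consecutive windows whenever their patient sets overlap."""
    nodes: List[Tuple[int, str]] = []
    edges: List[Tuple[Tuple[int, str], Tuple[int, str], int]] = []
    for l, cohorts in enumerate(cohorts_per_window):
        nodes.extend((l, label) for label in cohorts)
        if l + 1 < len(cohorts_per_window):
            for label_i, members_i in cohorts.items():
                for label_j, members_j in cohorts_per_window[l + 1].items():
                    overlap = len(members_i & members_j)
                    if overlap > 0:
                        edges.append(((l, label_i), (l + 1, label_j), overlap))
    return nodes, edges

# Example with two windows and a handful of patients.
w0 = {"HTN": {"p1", "p2"}, "DM": {"p3"}}
w1 = {"HTN|DM": {"p1", "p3"}, "HTN": {"p2"}}
print(build_network([w0, w1])[1])
# [((0, 'HTN'), (1, 'HTN|DM'), 1), ((0, 'HTN'), (1, 'HTN'), 1), ((0, 'DM'), (1, 'HTN|DM'), 1)]
```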
Data & control flow
As shown in Fig. 2, data flows through a sequence of operators, which are adjustable and associated with different interactions by the user. The interaction workflow is designed from the user's point of view and implements the sequence of transformations described above.
The data and control flow of the visual analysis process from the user's perspective. The data flows through a sequence of operators, which are adjustable and associated with different interactions by the user
Once the user specifies the important factors for the study, the system scans the raw patient trajectories record by record and filters and aggregates the factors accordingly. Similarly, the time windows defined by the user also change the way the system partitions and aggregates the trajectories over time. The two operators, cluster nodes and filter edges, implement multiple techniques to support the analysis tasks of finding cohorts and filtering associations, respectively. It is important to note that there is no once-and-for-all operation for any analysis task. Each cluster or filter operator has its strengths and limitations, which is why each must be employed carefully.
(1) Frequency-based Cohort Clustering: Frequency-based clustering allows one to follow basic intuition and see the "main idea" of the data. Cohorts with higher cardinalities are preserved while minor ones are considered less important and merged. Our system allows the user to specify a threshold x for the cardinality, and it merges cohorts of sizes less than the threshold into the "others" group.
$$ \mathrm{cluster}\left(C_l\right) = \begin{cases} c_{l,h} & \text{if } \left|c_{l,h}\right| \ge x \\ \text{others} & \text{if } \left|c_{l,h}\right| < x \end{cases} $$
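A minimal sketch of this frequency-based merge, assuming the cohorts of one time window are given as a mapping from comorbidity label to patient set; the function name and the "others" label follow the description above but are otherwise illustrative:

```python
from typing import Dict, Set

def merge_small_cohorts(cohorts: Dict[str, Set[str]], x: int) -> Dict[str, Set[str]]:
    """Keep cohorts with |c| >= x; pool the remaining patients into a single
    'others' group, as in the frequency-based clustering operator."""
    merged: Dict[str, Set[str]] = {}
    others: Set[str] = set()
    for label, members in cohorts.items():
        if len(members) >= x:
            merged[label] = members
        else:
            others |= members
    if others:
        merged["others"] = others
    return merged

cohorts = {"HTN": {"p1", "p2", "p3"}, "DM": {"p4"}, "HTN|DM": {"p5", "p6"}}
print(merge_small_cohorts(cohorts, x=2))
# -> {'HTN': {...3 patients...}, 'HTN|DM': {...2 patients...}, 'others': {'p4'}}
```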
(2) Hierarchical Cohort Clustering: Given a time window, each patient is characterized by the comorbidity of factors within the window. We consider the similarity between two unique comorbidities as the set relation of their factors. For example, two sets of factors {f1} and {f1, f2} are partially overlapped by the common factor f1. In consideration of such similarity, we apply hierarchical clustering to extract cohorts with similar comorbidities.
The resulting clusters are hierarchical and the user can specify the desired number of clusters. With more clusters one is able to describe the characteristics of each cohort more accurately, but more clusters introduce more nodes, more associations, and thus higher visual complexity. On the other hand, fewer clusters create less visual complexity at the expense of potentially overlooking some essential but smaller structures.
Given the set of factors \( s_i = F'_{i,l} \) at a time window \( t_l \) for a patient \( p_i \), we define the similarity between two patients with the Ochiai coefficient [8], which is a variation of cosine similarity between sets:
$$ \mathrm{similarity} = \frac{\left| s_1 \cap s_2 \right|}{\sqrt{\left| s_1 \right| \left| s_2 \right|}} $$
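As an illustration (not the system's actual implementation), the sketch below converts the Ochiai similarity into a distance (1 − similarity) and feeds it to SciPy's agglomerative clustering; the choice of average linkage is an assumption:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def ochiai(s1: frozenset, s2: frozenset) -> float:
    """Ochiai coefficient between two factor sets."""
    if not s1 or not s2:
        return 0.0
    return len(s1 & s2) / np.sqrt(len(s1) * len(s2))

# Factor sets of five patients within one time window.
patients = [frozenset({"HTN"}), frozenset({"HTN", "DM"}), frozenset({"DM"}),
            frozenset({"HTN", "DM", "CHF"}), frozenset({"SLE"})]

# Pairwise distance matrix: 1 - Ochiai similarity.
n = len(patients)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = 1.0 - ochiai(patients[i], patients[j])
        dist[i, j] = dist[j, i] = d

Z = linkage(squareform(dist), method="average")   # hierarchical clustering
labels = fcluster(Z, t=2, criterion="maxclust")   # user-specified number of clusters
print(labels)   # e.g. the four HTN/DM patients in one cluster, the SLE patient alone
```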
(3) Variance-based Association Filtering: The importance of an association lies in how confidently we can make an inference from it. We can extract the statistically important associations by ranking and filtering their variances. Our system demonstrates this capability by adopting one particular type of variance, which is defined as the outcome entropy of the associated cohort. Such entropy can be calculated from the conditional probabilities of the different outcomes of the given cohort:
$$ pb\left(p \in c_{l+1,j} \,\middle|\, p \in c_{l,i}\right) = pb\left(c_{l+1,j} \mid c_{l,i}\right) = \frac{\left| c_{l,i} \cap c_{l+1,j} \right|}{\left| c_{l,i} \right|} $$
$$ \mathrm{entropy}\left(e_{l,i,j}\right) = -\sum_k pb\left(c_{l+1,k} \mid c_{l,i}\right) \log pb\left(c_{l+1,k} \mid c_{l,i}\right) $$
We can see that the entropy is minimized when the patients in a cohort at the current time window all go to another cohort at the next window. In contrast, it is maximized when the probabilities of patients who go to other cohorts are uniformly distributed. Our system allows filtering important associations by adjusting the entropy threshold. When the threshold is high, all associations are shown in spite of their variance; in the extreme case when the threshold is zero, only the associations of zero entropy will be displayed; in other words, it only visualizes the associations between fully overlapped cohorts.
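A minimal sketch of this variance-based filter, assuming each edge is characterized by its source cohort's patient set and the patient sets of the candidate destination cohorts in the next window; the function names are illustrative:

```python
import math
from typing import Dict, Set

def outcome_entropy(source: Set[str], destinations: Dict[str, Set[str]]) -> float:
    """Shannon entropy of where the members of `source` go in the next window.
    Zero means every member flows to a single destination cohort."""
    h = 0.0
    for dest in destinations.values():
        p = len(source & dest) / len(source)
        if p > 0:
            h -= p * math.log(p)
    return h

def keep_edge(source: Set[str], destinations: Dict[str, Set[str]], threshold: float) -> bool:
    """Keep an edge only if its source cohort's outcome entropy is within the threshold."""
    return outcome_entropy(source, destinations) <= threshold

# All members of this cohort flow to the same outcome -> entropy 0, kept even at threshold 0.
src = {"p1", "p2", "p3"}
dests = {"Death": {"p1", "p2", "p3"}, "PD": {"p9"}}
print(outcome_entropy(src, dests), keep_edge(src, dests, threshold=0.0))   # 0.0 True
```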
Visualization design
Our system visualizes the cohort trajectories network model that we discussed in the previous section and summarizes it. The user can use it to assess important features such as cohort comorbidity, cohort distributions, and their associations across time windows, etc. We design the visual encoding and the optimization strategies in a way to maximize the legibility of the presentation.
(1) Visual Encoding: We encode the dimensions of the visual space similarly to OutFlow, where the x-axis encodes the time information and the y-axis is used for laying out the categories (comorbidities) [9]. We also visualize the associations between the cohorts as ribbons.
The visualization must convey the characteristics of both the cohorts and the associations. It is common to encode cardinality to the nodes and edges [10, 11] as such information allows the user to assess the frequency-based distribution. Our system encodes cardinality as the nodes' or edges' height. Each cohort is labeled to show its dominant characteristics. It lists the common factors shared by all patients in this group. If there are factors not shared by the entire group, we indicate it by appending an asterisk to the label. In addition, we map colors to unique comorbidities and assign each node its corresponding color. The edge color is determined by the two nodes it connects, and we use gradients for smooth transitions.
The visual encoding of our system is tailored for the CKD cohort study; however, it can be easily changed to display other relevant information. For example, instead of showing the cardinality, the edge can encode other statistical measurements that reveal set relations [12].
(2) Optimization: The overlap between cohorts could be complex and thus increase the number of edges as well as the number of edge crossings. It could impact the legibility of the visualization. Since the y-axis is nominal and the ordering between the categories is flexible, we can arrange the node's vertical positions to reduce the amount of crisscrossing and thus resolve visual clutter.
The algorithm we apply to minimize edge crossing is modified from an existing library and is a heuristic iterative relaxation method [13]. The algorithm sweeps back and forth along the x-axis and adjusts the node vertical positions based on two objectives: (1) minimize the edge length, and (2) resolve node overlaps. It utilizes simulated annealing, so the process ends in a predictable time. The result is an approximation but the algorithm allows us to get reasonable results in an interactive rate.
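For illustration only, the sketch below is a heavily simplified, hypothetical version of such a relaxation — no simulated-annealing schedule, and overlap resolution applied globally rather than per time-window column — intended only to convey the two objectives described above:

```python
def relax(positions, heights, edges, iterations=50):
    """Crude layout relaxation sketch. positions maps node -> y, heights maps
    node -> node height, edges is a list of (u, v, weight). Each iteration
    pulls nodes toward the weighted mean of their neighbours (shorter edges)
    and then pushes vertically overlapping nodes apart."""
    for _ in range(iterations):
        # 1) move each node toward the weighted barycenter of its neighbours
        for node in positions:
            num = den = 0.0
            for u, v, w in edges:
                if u == node:
                    num, den = num + w * positions[v], den + w
                elif v == node:
                    num, den = num + w * positions[u], den + w
            if den > 0:
                positions[node] += 0.5 * (num / den - positions[node])
        # 2) separate nodes whose vertical extents overlap
        ordered = sorted(positions, key=positions.get)
        for a, b in zip(ordered, ordered[1:]):
            gap = positions[b] - positions[a] - 0.5 * (heights[a] + heights[b])
            if gap < 0:
                positions[a] += gap / 2.0
                positions[b] -= gap / 2.0
    return positions

pos = {"A": 0.0, "B": 5.0, "C": 1.0}
print(relax(pos, {"A": 2.0, "B": 2.0, "C": 2.0}, [("A", "C", 3), ("B", "C", 1)]))
```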
In addition, the z-ordering (front to back of the screen) of the edges should be considered as well in order to maximize legibility [11]. We choose to place smaller edges on top of the larger ones to reveal the outliers.
Interaction methods
The system interface consists of two views: trajectory view and summary view. The trajectory view is time-based and displays an overview of patient trajectories that the user can interact directly with. It also highlights the trajectories of selected patients. Summary view presents the characteristics of the selected patient group. For example, it shows the distributions of gender, age, and factors, etc. It is also interactive and provides additional functions such as querying by patient metadata information.
Most data items (patients, factors, etc.) in the system are selectable, and the system automatically searches for related items and highlights such associations with visual links. For example, the user can select a cluster of patients by clicking on a node or an edge in the trajectory view. The patients selected are highlighted as red regions in each node and link. The highlighted regions also encode the cardinality as heights so it shows the proportion of the patients selected comparing to others. In the meantime, the highlighted edges reveal the paths traveled by the selected patients. In addition, the user can also select a factor, and all patients having this factor will be highlighted. This enables the user to observe the global distribution of a particular factor.
Pilot study
The original data source for this paper is from Taiwan's National Health Insurance Research Database (NHIRD), a longitudinal database which contains International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) codes for disease identification as well as procedure codes. The database contains health information for one million people over 13 years (1998–2011). We extracted 14,567 CKD patients who had eleven common comorbidities.
Preparing to visualize clinical data involves a series of logical steps [4, 14]. The first step in the data visualization process is selecting the patient cohorts. Figure 3 shows the visualization with only 17 observed factors. The x-axis shows the patients' conditions over the timeline before and after the CKD diagnosis; the y-axis presents the arrangement of trajectories for each CKD patient, aggregated together by shared comorbidity clusters. However, the resulting visualization was too difficult to interpret and understand. The tool would be more useful if it provided selection and aggregation functions so that users could focus on their target patient groups.
Time course of 14,567 CKD patients clustered by comorbidities. 14,567 CKD patients clustered according to comorbidities on the timeline. The x-axis shows the timeline covering 12 years before and after each patient's CKD diagnosis, while the y-axis presents the clusters of trajectories for each CKD patient
Another challenge in the cohort identification process is to standardize the large diversity and inhomogeneity of comorbidities in the database [4]. Because high-dimensional data such as electronic medical records lower the homogeneity between data items, we used a divide-and-conquer approach to classify patients into groups that are relatively uniform internally. Figure 4 shows an overview of this process.
Classifying patients into uniform cohorts. The flow shows an overview of the data analysis process in this study. The visual analysis process was based on the CKD research dataset
A factor is a general term used to describe a single criterion that is used to separate patients into cohorts. The factors are derived from diseases and procedures and are the fundamental elements that characterize a patient in our system. In the CKD cohort study, there are tens of thousands of disease and procedure codes that one could use to separate CKD patients. Defining the right set of factors is not a trivial task because including unnecessary factors that are either redundant or irrelevant to the analysis objectives increases the computational cost and jeopardizes the interpretability of the visualization. Our system is flexible enough to allow the user to define a set of factors by selecting independent ICD-9 codes or aggregating correlated ones based on the user's domain knowledge. In this study, we worked with nephrologists to define 17 related criteria that users can visually explore concerning chronic kidney disease (Table 1). The 17 factors represent the most related diseases and procedures that follow a diagnosis of CKD.
Table 1 Factor association rules
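Factor definition amounts to mapping raw ICD-9 codes onto a small set of named factors. The sketch below shows one way such mapping rules could be expressed in code; the specific ICD-9 prefixes shown are hypothetical placeholders, not the actual curated rules of Table 1:

```python
# Each factor is defined by a list of ICD-9 code prefixes (illustrative only;
# the actual rules were curated with nephrologists, cf. Table 1).
FACTOR_RULES = {
    "DM":  ["250"],         # hypothetical prefix for diabetes mellitus codes
    "HTN": ["401", "402"],  # hypothetical prefixes for hypertension codes
}

def codes_to_factors(icd9_codes):
    """Aggregate a record's raw ICD-9 codes into the defined factor set."""
    factors = set()
    for code in icd9_codes:
        for factor, prefixes in FACTOR_RULES.items():
            if any(code.startswith(p) for p in prefixes):
                factors.add(factor)
    return frozenset(factors)

print(codes_to_factors(["250.00", "401.9"]))   # frozenset({'DM', 'HTN'})
```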
Time windows
Visualizing EMR data over time also requires the ability to change the granularity of the x-axis (time). For example, in CKD there are several stages in its natural history. Within each stage, CKD can be relatively stable, but there is inhomogeneity between CKD patients at different stages. Therefore, we use time windows to refer to the time duration or interval (e.g., 1 month, 1 year, 2 years, etc.). Deciding on a time granularity is a manual process that is often best judged by humans [3]. The results are patient trajectories partitioned over time, which accentuates the differences between cohorts.
While patient comorbidities within each time window are expected to be stable, comorbidities are not stable over the entire population across all time windows.
Our system handles this problem by using clustering methods which make clear the underlying comorbidity distributions within each patient group. The end results are cohorts that have reliable distributions of comorbidities.
Visual examination
Once the time windows are defined and the cohorts are extracted, the quality of the visualization can be evaluated by examining the associations between cohorts. For instance, the user might want to examine how cohorts merge or diverge over time. Our system reveals not just associations that would otherwise be impossible for a person to notice, but also allows users to interact with the underlying data immediately to facilitate "what-if" scenarios. Sometimes, however, the quantity or variance of the associations can be large and thus lead to visual clutter problems. Therefore, our system also allows the user to rank and filter the associations based on their statistical importance. This way, the user can limit the exploration to changes that are both visually apparent and statistically significant. At any step of the visual analysis process, the user can go back and change the settings for factors, time windows, patient clustering, and comorbidity association filtering. For example, if the user wants to explore the temporal patterns in finer detail and examine whether there are local, short-term patterns, the user can add more time windows to the context; on the other hand, if two or more stages exhibit indistinguishable patterns, the user might want to merge those time windows as they do not convey extra information. The user can also change the parameters to refine how patients are grouped or how associations are filtered. This iterative process continues until the user obtains a satisfactory result.
We use the CKD as a model chronic disease to demonstrate the analysis process, but the process can be applied to the study of other diseases as well. For example, if the user wants to study the clinical trajectories of diabetics, the user can define a list of factors related to diabetes. Then the user can apply the same process to set up time windows, cluster patients, and explore cohort trajectories.
Ethical approval
This type of study did not require Institutional Review Board review in accordance with the policy of the National Health Research Institutes, which provides the large computerized de-identified dataset (http://nhird.nhri.org.tw/en/).
Exploring cohort structures
In this study, we build an exploratory data analysis tool that depicts the trajectories of 14,567 CKD patients' comorbidities over time. We partition the records into multiple 2-year time windows. Researchers often have different factors-of-interest for different windows of CKD. In the pre-CKD stage, they are interested in common diseases such as hypertension, diabetes; for end-stage CKD factors, they are interested in critical procedures such as dialysis, renal transplantation, or patient death. We filter the factors of interest according to each time window.
Since there are too many comorbidities to visualize clearly as shown in Fig. 3, we apply frequency-based cohort clustering to extract the dominant cohorts. As Fig. 5 shows, the trajectories are simplified where larger cohorts are kept and smaller ones are merged into a single "others" group (light green for others without CKD and light orange for others with CKD). From the overviews, we can learn about the prevalence of different comorbidities and their proportions in the population. For example, we can see from Fig. 5 that the number of patients with a single disease such as hypertension (HTN) (brown) and diabetes (DM) (dark blue) shrinks as the time approaches year 0, which means that patients start to exhibit other diseases. The user can lower the threshold to reveal smaller sized cohorts as shown in Fig. 6.
Frequency-based Cohort Clustering: Sankey Diagrams for CKD Cohort Sizes of < 250. The trajectories were simplified: larger cohorts were kept and smaller ones were merged into a single "others" group. Light green denotes others without CKD and light orange others with CKD
The system visualization displayed with the threshold adjusted to 150. The user can raise or lower the threshold to reveal cohorts of different sizes
Exploring associated relationships
Another goal of exploratory data analysis is to uncover unexpected associations between two variables. In this study, we demonstrate exploring the associations between hemodialysis (HD) in early stages of CKD and other diseases and procedures. More specifically, we want to identify the driving factors that may lead to hemodialysis and the downstream consequences.
First, we divide CKD patients according to CKD severity: (1) pre-CKD: before the patient's first CKD diagnosis, (2) first-year-of-CKD, and (3) post-CKD: a year after the patient's first CKD diagnosis. Second, we filter CKD patients according to pre-determined criteria that nephrologists determined to be clinically important. For the first-year-of-CKD stage, we focus on which patients will go on to require hemodialysis; for the post-CKD stage, we watch other common diseases and procedures related to CKD patients: death, peritoneal dialysis (PD), and renal transplant (RTPL); for the pre-CKD stage, we watch all 17 diseases/procedures. As a result, there are 835 unique combinations at the pre-CKD stage, two at the first-year-of-CKD stage, and nine at the post-CKD stage.
Since there are only a total of 11 CKD/disease or CKD/procedure combinations for the first-year-of-CKD stage and the post-CKD patients, we can visualize their clinical courses without any simplification processes. However, there are too many combinations at the pre-CKD stage to be visualized directly. For simplicity, we first group them into one single cluster and focus on the last two time windows. As Fig. 7a shows, we find that 70.2 % of the patients who took hemodialysis in the first year of CKD did not develop any other diseases or procedures related to CKD, while the rest of them either required peritoneal dialysis (PD) or renal transplantation (RTPL), or died. Some of the patients who were not on hemodialysis in the first year also died; however, the mortality rate seems lower. We also notice that more than half of the patients who did not require hemodialysis in the first year are not associated with any of the post-CKD factors of interest. This means they were either in stable condition after the first year or their subsequent treatments were not recorded.
Exploring causal relationships (12,960 patients). a 70.2 % of the patients who took HD in the first year of CKD did not develop any other factors, while the rest either took PD or RTPL, or died. b After filtering the low-confidence associations, the remaining associations cover only 17.4 % of the population. c Hierarchical clustering is performed on the patients at the pre-CKD stage, generating ten groups of similar patients. d When we highlighted the group that had systemic lupus erythematosus (SLE) as a common factor, we found that none of them took the more serious procedures such as renal transplantation or died. Note: three groups are labelled "*" because these groups have no common factor shared by all members
To see stronger associations between the pre-CKD factors and HD during the first-year-of-CKD stage, we must filter out the associations that are not helpful. For example, if a group of similar patients is associated with both the "CKD" and "CKD|HD" clusters, it is hard to tell whether this combination of factors will lead to hemodialysis or not. We can rule out all those low-confidence associations by filtering according to the variance of their associations. We set a strict threshold of 0.0 for the variance so that an association is kept only when it is 100 % confident. After the filtering, 32.6 % of the 835 unique combinations are removed because their associations with the first-year-of-CKD stage are not confident. Figure 7b shows that the remaining associations cover only 17.4 % of the population. This means the pre-defined 17 factors might not be good explanatory variables to discriminate patients taking or not taking hemodialysis in the first year of CKD.
Next, we perform hierarchical clustering on the patients at the pre-CKD stage and generate ten groups of similar patients, as shown in Fig. 7c. Note there are three groups labeled "*", which seems confusing at first as they could have been merged into one group. In fact, the three groups have different factor distributions. They are labeled "*" because none of the groups have a common factor shared by all members in the group. To avoid confusion, the user can assign a custom label to describe the nature of the group. When we select and highlight the group who has a common factor of systemic lupus erythematosus (SLE), we find that none of them required more serious procedures such as renal transplantation or died. Figure 7d is a zoom-in view showing the structure of the selected "SLE,*" group. We also notice that the proportion of patients requiring hemodialysis in the first year of CKD in the "SLE,*" group (3.14 %) is one-third of the proportion in the entire population (9.54 %).
Our system is web-based (http://sankey.ic-hit.net/) and was tested with a commodity desktop machine (CPU: 2.66 GHz Quad-Core, Memory: 8GB 1066 MHz DDR3) as the application server and another desktop machine as the client. Most of the back end programs are written in Python, and the front end programs are written in Javascript and HTML5.
The system caches the transformed data after each operation in the data control flow (as shown in Fig. 2) to reduce unnecessary processing time and improve user-end responsiveness. There are four major types of user interactions: defining factors, partitioning time windows, merging patients, and filtering associations. The first two interactions usually happen at the beginning of a study and occasionally during major revisions. The other two interaction types are much more frequent in the analysis process. Caching the less frequently updated results helps us reduce unnecessary processing time.
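One simple way to realize this caching pattern, sketched here with Python's functools.lru_cache and illustrative function and parameter names (the system's actual cache implementation is not described at this level of detail):

```python
import functools

# Cache the expensive, rarely-changing operators (factor aggregation,
# time-window partitioning) so that the frequent downstream operations
# (patient clustering, edge filtering) can reuse their results.
@functools.lru_cache(maxsize=8)
def partition_into_windows(dataset_version: str, windows: tuple):
    # ... placeholder for an expensive scan over all patient records ...
    return f"partitioned[{dataset_version}]@{windows}"

# The first call computes and caches; repeated calls with the same time
# windows return immediately, keeping the interface responsive.
partition_into_windows("ckd-v1", (-730, 0, 365, 1095))
partition_into_windows("ckd-v1", (-730, 0, 365, 1095))   # served from cache
```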
We measure the time elapsed for each process using the system timer. For 14,567 patients and 6,031,579 records, it takes 6 min to filter and aggregate factors of the entire data set, and 25 s to partition the data set into three time windows. However, such operations are performed only a few times throughout the analysis and thus do not require an immediate response. More frequently performed operations such as clustering patients or filtering associations only take 5 s per time window on average.
We present a system to visually analyze the comorbidities associated with CKD by using a large-scale database containing 14,567 patients. We visualize the results using a Sankey diagram to help practicing physicians and clinical researchers investigate the outcome of this complex disease based on comorbidities or procedures that these patients have.
Building a visually interactive exploratory data analysis tool is not without several challenges. First, direct visualization of all the patients can easily lead to overplotting. Second, in this dataset, there exist tens of thousands of risk factors pertinent to CKD patients. It is not apparent how to best discriminate and visualize these factors to bring out structures of interest in the data. After all, one of the main goals of data visualization is to bring out unexpected patterns in the data, which is best achieved by unsupervised machine learning methods. Figure 3 shows an unfiltered visualization of CKD and 17 associated comorbidities and procedures. As the figure shows, the visualization is too complex to comprehend. It would be useful to select, aggregate, and visualize factors associated with patient groups. We have developed an interactive visualization system to support such operations.
Temporal visualization
Time-series information is traditionally of particular interest when analyzing EMRs [15]. Much prior work has suggested presenting patient history longitudinally [16–18]. Real-world data usually has prohibitively high visual complexity due to its high dimensionality or high variance. Thus, several simplification methods have been proposed. Bui et al. suggested using folder as well as non-linear spacing [19]. In the V-model project, Park et al. compressed the causality relationship along a linear timescale to an ordinal representation to carry more contextual information about the event [20]. In addition to abstracting time to use the horizontal screen real estate more efficiently, there are methods to save vertical real estate. Bade et al. implemented a level-of-detail technique that presents data in five different forms based on its source and the row height available [21]. Our method simplifies the visual complexity of patient trajectories by aggregating records over time, clustering patients, and filtering associations between cohorts.
Query-based visual analytics
In many real-world cases, the user can narrow down the scope and reduce the complexity of the data by querying based on his or her domain knowledge. Systems of this kind allow the user to specify the pattern of interest and can enhance the analytic process with advanced interfaces [22, 23]. However, it is not always easy to translate an analysis task into proper queries [24]. For temporal event queries, Wang et al. proposed an interactive system to support querying with higher-level semantics such as precursor, co-occurring, and aftereffect events [25]. Their system outputs visually oriented summary information to show the prevalence of the events as well as to allow comparison between multiple groups of events [26]. For overview-specific tasks, Wongsuphasawat et al. proposed LifeFlow, a novel visualization that simplifies and aggregates temporal event sequences into a tree-based visual summary [27]. Monroe et al. improved the usability of the system by integrating interval-based events and developing a set of user-driven simplification techniques in conjunction with a metric for measuring visual complexity [13, 28]. Wongsuphasawat et al. also extended LifeFlow into a Sankey diagram-based visualization, which reveals the alternative paths of events and helps the user understand the evolution of patient symptoms and other related factors [29].
In spite of their effectiveness in guided or well-informed analysis, query-based systems fall short for exploratory analysis where the user may not have a well-defined hypothesis and simply wants to explore and learn the data.
Exploring inhomogeneous data
High-dimensional data items are less homogeneous and harder to compare with each other. It is harder to associate, rank, or filter those items meaningfully. Some have proposed that data be sliced and diced by dimension or item and separated into homogeneous subsets [4]. It has been proven that, by carefully selecting projection methods, a system can incorporate multiple heterogeneous genetic data and identify meaningful clusters of patients [30]. Our work is an example of the slice-and-dice concept, where we partition the record time into multiple dimensions and group patients within each time window.
We would like to investigate the possibility of using more sophisticated feature extraction methods in future work. In this work, we define the factors by hand with domain knowledge and group the patients based on the factors by a simple set similarity metric or a frequency-based metric. However, the combinations of factors are noisy and the variance within each cluster is usually high. Furthermore, there are still thousands of unused factors that may provide additional insights. Such problems could potentially be addressed with the help of correspondence analysis.
More optimizations can also be made to enhance the visual rendering of information as well. First, for conveying the association between the clusters, in this work we only visualize the cardinality of the association and filter them by variance. There are other measures of proportionality available which can help evaluate the association of comorbidities [31]. We would like to study each method's role and effectiveness by conducting different analysis tasks. Second, for conveying and comparing the nature of each cluster, in this work we only present such information as text that shows the dominant factors of the cluster and indicate uncertainty. However, the underlying differences are non-binary and high-dimensional. Getting the system to effectively extract and present the subtle differences between the clusters could be the key to improving visual pattern depiction.
Finally, it is possible to improve the computational performance by parallel data processing. Some of the steps in the analysis process are easily parallelizable while others, such as patient clustering, are not. We also intend to investigate more advanced database structures for efficient data management.
In this study, we develop a visual mining system to support exploratory data analysis of multi-dimensional categorical EMR data. Using CKD as a model disease, a CKD cohort was assembled by automated correlational analysis and human-curated visual evaluation. Our system also shows relevant comorbidities that CKD patients develop over time.
All of this information is combined to produce a Sankey diagram that reveals useful but non-obvious knowledge about the CKD cohort and the expected trajectories of the disease over 13 years. Furthermore, the various parameters governing cohort selection, comorbidity selection, and temporal features are all adjustable by the user and require no programming knowledge.
Finally, the analysis process is generalizable to any other disease that a user wishes to follow over time and can work with different clustering and filtering algorithms.
EMRs:
Electronic medical records
NHIRD:
National Health Insurance Research Database
ICD-9-CM:
International Classification of Disease, Ninth Revision, Clinical Modification
CVD:
Cardiovascular disease
CHF:
Congestive heart failure
CAD:
Coronary artery disease
GN:
Glomerulonephritis
HTN:
Hypertension
PD:
Peritoneal dialysis
RTPL:
Renal transplant
PKD:
Polycystic kidney disease
SLE:
Systemic lupus erythematosus
Miller RH, Sim I. Physicians' use of electronic medical records: barriers and solutions. Health Aff. 2004;23(2):116–26.
Caban JJ, Gotz D. Visual analytics in healthcare–opportunities and research challenges. J Am Med Inform Assoc. 2015;22(2):260–2.
Cook KA, Thomas JJ. Illuminating the path: The research and development agenda for visual analytic. Richland: Pacific Northwest National Laboratory (PNNL); 2005.
Lex A, Schulz H, Streit M, Partl C, Schmalstieg D. VisBricks: multiform visualization of large, inhomogeneous data. IEEE Trans Vis Comput Graph. 2011;17(12):2291–300.
Basole RC, Braunstein ML, Kumar V, Park H, Kahng M, Chau DH, et al. Understanding variations in pediatric asthma care processes in the emergency department using visual analytics. J Am Med Inform Assoc. 2015;22(2):318–23. doi:10.1093/jamia/ocu016.
Jianping Li C-FW, Kwan-Liu M. Design considerations for visualizing large EMR data, EHRVis - visualizing electronic health record data. Paris: EEE VIS 2014 WORKSHO; 2014.
Huang C-W, Syed-Abdul S, Jian W-S, Iqbal U, Nguyen P-AA, Lee P, et al. A novel tool for visualizing chronic kidney disease associated polymorbidity: a 13-year cohort study in Taiwan. J Am Med Inform Assoc. 2015;22(2):290–8.
Ochiai A. Zoogeographic studies on the soleoid fishes found in Japan and its neighbouring regions. Bull Jpn Soc Sci Fish. 1957;22(9):526–30.
Wongsuphasawat K, Gotz D. Outflow: visualizing patient flow by symptoms and outcome, IEEE VisWeek workshop on visual analytics in healthcare. Providence: IEEE; 2011. p. 2011.
Kosara R, Bendix F, Hauser H. Parallel sets: Interactive exploration and visual analysis of categorical data. IEEE Trans Vis Comput Graph. 2006;12(4):558–68.
Riehmann P, Hanfler M, Froehlich B. Interactive sankey diagrams, Information visualization, 2005 INFOVIS 2005 IEEE symposium on: 2005. Minneapolis: IEEE; 2005. p. 233–40.
Ellis G, Dix A. A taxonomy of clutter reduction for information visualisation. IEEE Trans Vis Comput Graph. 2007;13(6):1216–23.
Monroe M, Wongsuphasawat K, Plaisant C, Shneiderman B, Millstein J, Gold S. Exploring point and interval event patterns: display methods and interactive visual query. Univ Maryland Tech Rep. 2012.
Jensen PB, Jensen LJ, Brunak S. Mining electronic health records: towards better research applications and clinical care. Nat Rev Genet. 2012;13(6):395–405.
Tufte ER, Graves-Morris P. The visual display of quantitative information. Cheshire: Graphics press; 1983.
Cousins SB, Kahn MG. The visual display of temporal information. Artif Intell Med. 1991;3(6):341–57.
Plaisant C, Mushlin R, Snyder A, Li J, Heller D, Shneiderman B. LifeLines: using visualization to enhance navigation and analysis of patient records, Proceedings of the AMIA symposium: 1998. Bethesda: American Medical Informatics Association; 1998. p. 76.
Nair V, Kaduskar M, Bhaskaran P, Bhaumik S, Lee H. Preserving narratives in electronic health records, Bioinformatics and biomedicine (BIBM), 2011 IEEE international conference on: 2011. Atlanta: IEEE; 2011. p. 418–21.
Bui AA, Aberle DR, Kangarloo H. TimeLine: visualizing integrated patient records. IEEE Trans Inf Technol Biomed. 2007;11(4):462–73.
Park H, Choi J. V-model: a new innovative model to chronologically visualize narrative clinical texts, Proceedings of the SIGCHI conference on human factors in computing systems: 2012. New York: ACM; 2012. p. 453–62.
Bade R, Schlechtweg S, Miksch S. Connecting time-oriented data and information to a coherent interactive visualization, Proceedings of the SIGCHI conference on human factors in computing systems: 2004. New York: ACM; 2004. p. 105–12.
Hochheiser H, Shneiderman B. Dynamic query tools for time series data sets: timebox widgets for interactive exploration. Inf Vis. 2004;3(1):1–18.
Fails JA, Karlson A, Shahamat L, Shneiderman B. A visual interface for multivariate temporal data: finding patterns of events across multiple histories, Visual analytics science and technology, 2006 IEEE symposium on: 2006. Baltimore: IEEE; 2006. p. 167–74.
Jin J, Szekely P. Interactive querying of temporal data using a comic strip metaphor, Visual analytics science and technology (VAST), 2010 IEEE symposium on: 2010. Salt Lake: IEEE; 2010. p. 163–70.
Wang TD, Plaisant C, Quinn AJ, Stanchak R, Murphy S, Shneiderman B. Aligning temporal data by sentinel events: discovering patterns in electronic health records, Proceedings of the SIGCHI conference on human factors in computing systems: 2008. New York: ACM; 2008. p. 457–66.
Wang TD, Plaisant C, Shneiderman B, Spring N, Roseman D, Marchand G, et al. Temporal summaries: supporting temporal categorical searching, aggregation and comparison. IEEE Trans Vis Comput Graph. 2009;15(6):1049–56.
Wongsuphasawat K, Guerra Gómez JA, Plaisant C, Wang TD, Taieb-Maimon M, Shneiderman B. LifeFlow: visualizing an overview of event sequences, Proceedings of the SIGCHI conference on human factors in computing systems: 201. New York: ACM; 2011. p. 1747–56.
Monroe M, Lan R, Lee H, Plaisant C, Shneiderman B. Temporal event sequence simplification. IEEE Trans Vis Comput Graph. 2013;19(12):2227–36.
Wongsuphasawat K, Gotz D. Exploring flow, factors, and outcomes of temporal event sequences with the outflow visualization. IEEE Trans Vis Comput Graph. 2012;18(12):2659–68.
Turkay C, Lex A, Streit M, Pfister H, Hauser H. Characterizing cancer subtypes using dual analysis in caleydo stratomex. IEEE Trans Vis Comput Graph. 2014;34(2):38.
Piringer H, Buchetics M. Exploring proportions: comparative visualization of categorical data, Visual analytics science and technology (VAST), 2011 IEEE conference on: 2011. Providence: IEEE; 2011. p. 295–6.
This research is sponsored in part by the U.S. National Science Foundation and UC Davis RISE program, Taiwan Ministry of Science and Technology, grant number MOST 103-2221-E-038-016, MOST 104-2221-E-038-013 and Health and welfare surcharge of tobacco products, grant number MOHW104-TDU-B-212-124-001.
Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
Chih-Wei Huang, Richard Lu, Usman Iqbal, Shen-Hsien Lin, Phung Anh (Alex) Nguyen & Yu-Chuan (Jack) Li
International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei, Taiwan
Chih-Wei Huang, Richard Lu, Usman Iqbal, Shen-Hsien Lin, Phung Anh (Alex) Nguyen, Hsuan-Chia Yang, Yu-Chuan (Jack) Li & Wen-Shan Jian
Institute of Biomedical Informatics, National Yang Ming University, Taipei, Taiwan
Hsuan-Chia Yang
Department of Computer Science, University of California-Davis, Davis, CA, USA
Chun-Fu Wang, Jianping Li & Kwan-Liu Ma
Department of Dermatology, Wan-Fang Hospital, Taipei, Taiwan
Yu-Chuan (Jack) Li
School of Health Care Administration, Taipei Medical University, Taipei, Taiwan
Wen-Shan Jian
Faculty of Health Sciences, Macau University of Science and Technology, Macau, China
Correspondence to Kwan-Liu Ma, Yu-Chuan (Jack) Li or Wen-Shan Jian.
C-W.H., C-F.W., J.L., K-L.M. and YC L. invented and developed the methodology. Y-C.L., W-S.J. and P-A.A.N. obtained the data. W-S.J., K-L.M. and Y-C.L. organized and validated the data. C-W.H., C-F.W., J.L., K-L.M. and Y-C.L. reviewed the methods. C-F.W. and J.L. wrote software, implemented the method and developed the software. W-S.J. and S-H.L. gave technical support. H-C.Y. performed biological validation. C-W.H., C-F.W., R.L. and U.I. wrote the manuscript.
Chih-Wei Huang and Chun-Fu Wang contributed equally to this work.
Huang, CW., Lu, R., Iqbal, U. et al. A richly interactive exploratory data analysis and visualization tool using electronic medical records. BMC Med Inform Decis Mak 15, 92 (2015). https://doi.org/10.1186/s12911-015-0218-7
Accepted: 02 November 2015
Time Window
Chronic Kidney Disease Patient | CommonCrawl |
Microstructure Control and Performance Evolution of Aluminum Alloy 7075 by Nano-Treating
Min Zuo1,2,
Maximilian Sokoluk ORCID: orcid.org/0000-0002-1770-62242,3,
Chezheng Cao2,3,
Jie Yuan2,3,
Shiqi Zheng ORCID: orcid.org/0000-0003-0823-57552,3 &
Xiaochun Li2,3
Scientific Reports volume 9, Article number: 10671 (2019)
Nano-treating is a novel concept wherein a low percentage of nanoparticles is used for microstructural control and property tuning in metals and alloys. The nano-treating of AA7075 was investigated to control its microstructure and improve its structural stability for high performance. After treatment with TiC nanoparticles, the grains were significantly refined from coarse dendrites of hundreds of micrometers to fine equiaxial ones smaller than 20 μm. After T6 heat treatment, the grains, with an average size of 18.5 μm, remained almost unchanged, demonstrating an excellent thermal stability. It was found that, besides restricting grain growth through pinning of grain boundaries, TiC nanoparticles served as both an effective nucleation agent for primary grains and an effective secondary-phase modifier in AA7075. Furthermore, the mechanical properties of nano-treated AA7075 were improved over those of the pure alloy. Thus, nano-treating provides a new method to enhance the performance of aluminum alloys for numerous applications.
Recently, 7000-series Al alloys, especially Al alloy 7075 (AA7075), have drawn considerable attention due to their exceptionally high strength-to-weight ratios in structural components for the aerospace and automotive industries1,2. However, some long-standing problems still exist, hindering their widespread application. For example, the mechanical properties of these alloys experience a noticeable reduction at elevated temperatures due to grain and precipitate coarsening3,4. Hot cracking often occurs during the solidification of 7000-series Al alloys due to their wide solidification ranges5. Moreover, a significant number of intermetallic compounds and eutectic phases composed of Zn, Mg, Cu, and Al are usually distributed along grain boundaries, resulting in a remarkable reduction in mechanical properties after heat treatment and hot deformation processes6,7. Thus, it is important to address these problems to improve the performance and extend the application space for these alloys.
Many studies have been conducted to control the microstructure of AA7075 in order to optimize its performance for wider applications. Various techniques have been attempted, such as severe plastic deformation8,9,10, grain refinement11,12, and so on. With the addition of boron, the grains in Al-Zn-Mg-Cu alloys produced by strain-induced melt activation (SIMA) were significantly refined but still coarser than 60 μm13. Ebrahimi et al.14 reported that the addition of Ti could refine the grain size of the as-cast Zn-rich Al-Zn-Mg-Cu alloy from 859 μm to 46 μm, and Al3Ti particles could effectively pin dislocations and grain boundaries during solution treatment. The combination of Sc and Zr in Al-Zn-Mg-Cu alloys is an effective route towards improving their recrystallization resistance (and thus thermal stability), attributed to the formation of the intermetallic compound Al3(Zr, Sc)15. Rogal et al.16 found that with Sc and Zr additions, the grains in AA7075 could be refined to less than 30 μm. After T61 or T62 heat treatment, the grains in AA7075ScZr grew coarser than those in the as-cast alloy. Wen et al.17 also reported that, with the addition of Zr, the grain size in the Al-Zn-Mg-Cu alloy still grew after a prolonged homogenization process.
Recently, nanotechnological methods, such as using a low percentage of nanoparticles to modify alloys (i.e. nano-treating), have gained increasing attention in metallurgy due to their revolutionary capabilities, such as microstructural control and property tuning18,19,20,21,22,23. The resultant properties of the nano-treated alloys would be highly dependent on the type, size, and distribution of the modified precipitate particles24.
TiC nanoparticles (NPs) are attractive reinforcements for Al alloys due to their high melting point, high stiffness, high hardness, good thermal stability, and low thermal expansion coefficient. Furthermore, TiC is an excellent heterogeneous nucleation agent for Al due to a similar face centered cubic (FCC) crystal structure and small lattice mismatch with Al25. In this paper, a nano-treatment approach with a low percentage of TiC nanoparticles was applied to process and modify AA7075. The paper also reports the effect of this approach on the solidification rate and the microstructural control mechanism in nano-treated AA7075. The novel nano-treatment method would open significant new opportunities for microstructural control and property enhancement of many, if not all, metals and alloys.
Microstructures of as-cast AA7075 before and after nano-treating
Figure 1 shows the typical microstructures of pure AA7075 alloys solidified in the copper wedge mold. It is well known that the cooling rate has a significant influence on the microstructural characteristics of metallic alloys, which in turn affect their mechanical properties. As indicated by the arrows in Fig. 1, the microstructures of these alloys are composed of dendrites that are sensitive to the cooling rate. When the cooling rate decreases from 197.5 K/s to 14.8 K/s, the grains become rather coarse, growing from 68.1 μm ± 14.1 μm to 626.7 μm ± 166.8 μm in length. In the sample cooled at 14.8 K/s, plenty of evenly developed dendrites were observed, as shown in Fig. 1(e).
Typical microstructures of pure AA7075 alloys with different cooling rates. (a) 197.5 K/s; (b) 87.8 K/s; (c) 29.5 K/s; (d) 20.3 K/s; (e,f) 14.8 K/s.
In order to investigate the nano-treating effect of TiC on AA7075, a sample containing 1 vol.% of TiC nanoparticles was fabricated. The typical microstructure of this sample is illustrated in Fig. 2. By statistical analysis, it was determined that the microstructure of these nano-treated alloys was composed of fine equiaxed grains with a mean size of less than 20 μm. For the sample with a solidification rate of 14.8 K/s, the average size of Al grains was only 17.5 μm ± 3.0 μm, as clearly indicated in Fig. 2(e). In comparison, the coarse dendrites with the same cooling rate in the pure alloy measured up to 626.7 μm. Based on this observation, it can be concluded that the grain size of AA7075 can be effectively refined by TiC nanoparticles.
Typical microstructures of nano-treated AA7075 with TiC NPs under varied cooling rates. (a) 197.5 K/s; (b) 87.8 K/s; (c) 29.5 K/s; (d) 20.3 K/s; (e,f) 14.8 K/s.
Microstructures of heat-treated AA7075 before and after nano-treating
It is well known that a fine grain structure can enhance the mechanical properties of bulk Al alloys8. AA7075 is a typical precipitation-hardening Al alloy; the microstructural characteristics of heat-treated AA7075 alloys with and without nano-treating are shown in Fig. 3 for the bulk cast sample with a solidification rate of 14.8 K/s. It can be seen that after heat treatment the difference in grain sizes between the basic AA7075 and nano-treated AA7075 alloys becomes even greater. As clearly indicated by the arrows in Fig. 3(a), the dendrites in the pure alloy grow much coarser, with sizes of up to hundreds of micrometers. In comparison, the microstructure of heat-treated AA7075 fabricated by nano-treatment shows an average grain size of 18.5 ± 4.0 µm, which is quite similar to that of the as-cast nano-treated sample, indicating a superior thermal stability of grain size with nano-treatment. The detailed average grain sizes and standard deviations in AA7075 alloys (according to different cooling rates) are illustrated in Fig. 4. Based on this thermal stability study, it can be speculated that nano-treatment provides new opportunities for high-performance and thermally stable Al alloys.
Typical microstructures of AA7075 (sample with cooling rate as 14.8 K/s) after T6 heat treatment. (a,b) Pure sample; (c,d) nano-treated sample.
The variation in average grain sizes of AA7075 alloys before and after nano-treating. The embedded image is the partial enlargement of mean grain sizes of nano-treated AA7075, indicating a superior thermal stability of grain size before and after heat treatment. However, the grains in basic AA7075 alloys grow even coarser after heat treatment as clearly illustrated in Fig. 3(a,b).
Mechanical properties of AA7075 before and after nano-treating
With the excellent microstructural refinement achieved through nano-treatment, the mechanical properties of AA7075 were also improved. The tensile data for the series of AA7075 alloys can be found in Supplementary Fig. S1. As shown in Fig. 5, the Vickers hardness of the as-cast nano-treated samples was enhanced from 110 HV to about 130 HV, and the samples with finer grains have relatively higher hardness. After heat treatment, the hardness of the nano-treated AA7075 alloys increased to about 180 HV. However, the hardness of the pure alloy appears to be more sensitive to the grain size, which is directly influenced by the cooling rate, as shown in Fig. 4. When the grains grew coarser with decreasing cooling rate, the hardness of these samples gradually decreased from over 180 HV to about 165 HV. In contrast, the hardness of the nano-treated samples after heat treatment increased to about 188 HV and remained stable across the different cooling rates, which suggests that the nano-treated samples were thermally stable and insensitive to cooling rate.
Variations in Vickers Hardness of AA7075 alloys with different cooling rates.
Ramakoteswara Rao et al.26 synthesized 7075 composite alloys reinforced with 2 to 10 wt.% of TiC particles through stir casting. They reported that the best Vickers hardness obtained was 115.9 HV, for as-cast AA7075 with 8 wt.% of TiC particles. Wu et al.27 studied the influence of the form, distribution state, and content of in-situ TiC particles on the microstructure and mechanical properties of TiC-AA7075 composites, and found that the highest hardness achieved was 108 HB, for composites containing 8 wt.% TiC. It is worth noting that, through highly efficient nano-treating, AA7075 with better mechanical properties can be obtained at a much lower TiC NP content.
Mechanism behind the nano-treatment of AA7075
Figure 6(a,b) shows typical FESEM images of the Al-TiC master nanocomposite prepared by a molten salt-assisted processing method. The TiC NPs were well incorporated into the aluminum matrix and existed mainly in the form of micrometer-scale clusters. The partial enlargement shows that the TiC NPs were rather well dispersed within the nanoparticle-rich areas, which would further enhance the nano-treatment effect on metallic alloys. Figure 6(c) shows the XRD patterns of nano-treated AA7075 alloys before and after heat treatment and the corresponding standard diffraction peaks of TiCx (x = 0.957). Due to carbon vacancies28,29, the C/Ti atomic ratio in TiC was not 1:1; instead, it usually falls within a wide x range from 0.49 to 0.98 without a change in crystal structure. According to the XRD patterns illustrated in Fig. 6, AA7075 treated with TiC NPs is mainly composed of three phases: Al, Mg(Al,Cu,Zn)2, and TiC. The diffraction lines of the nano-treated AA7075 alloys before and after heat treatment both exhibit peaks at 36.41°, 41.68°, 61.21°, and 72.85°, corresponding to the (111), (200), (220), and (311) planes of FCC TiC, respectively; this provides evidence for the existence of TiC particles in AA7075. Furthermore, the diffraction peaks of TiC can also be detected in the heat-treated sample, meaning that TiC is stable in this system at elevated temperatures.
(a,b) FESEM characterizations of Al–TiC master alloy prepared by a molten salt-assisted processing method. (c) XRD patterns of AA7075 alloys before and after nano-treating with TiC NPs.
In order to investigate the influence of nano-treatment on Mg(Al,Cu,Zn)2 in detail, FESEM characterization of the secondary phases in AA7075 alloys was performed. As clearly illustrated in Fig. 7(a), the coarse secondary phases tend to precipitate along grain boundaries, especially at their triple junctions, which can cause solidification cracks and lead to the failure of metallic alloys. With the presence of TiC NPs, the secondary phase is segmented, resulting in finer and shorter features with a uniform distribution. In combination with this encapsulation attachment structure, the low lattice mismatch of 5.6% between the \((\overline{1}11)_{\mathrm{TiC}}\) and \((1\overline{2}10)_{\mathrm{MgZn_2}}\) planes could further promote attachment of the nanoparticles to the secondary phases and effectively modify the eutectic compounds30. Through effective nano-treatment with TiC NPs, equiaxed Al grains and fine divorced eutectic features were obtained in AA7075, suggesting a significant improvement in mechanical properties.
FESEM characterizations of secondary Mg(Al,Cu,Zn)2 phases in AA7075 alloys before and after nano-treatment. (a) Secondary phase in pure AA7075 alloys (partial enlargement illustrated in the upper right corner); (b) modified secondary phase in AA7075 by TiC NPs; (c) the X-ray images for elements Al, Ti, Zn and Mg in (b); (d,e) modification features of secondary phases by TiC NPs.
To obtain more structural characteristics of nano-treated AA7075, the fracture surfaces of AA7075 alloys were analyzed by FESEM, as shown in Fig. 8. For the pure alloy, the fracture surfaces both before and after heat treatment consist of a large number of cleavage facets. As indicated in Fig. 8(a,b), the coarse Al grains and the Mg(Al,Cu,Zn)2 intermetallic compounds that precipitated along the Al grain boundaries can be clearly observed. Furthermore, some microscopic porosities were detected between the coarse dendrites, which might be caused by the continuous eutectic phases hindering melt flow during solidification. After heat treatment, besides many dimples, coarse penetrating cracks appeared, which caused the sample to fail. In comparison, the microstructure of nano-treated AA7075 was much finer and more homogeneous, and many fine dimples were clearly observed in Fig. 8(c–f). The enlarged images in Fig. 8(d,f) show that the secondary phases precipitated in the presence of uniformly dispersed TiC nanoparticles, which significantly modified their appearance. With the significant refinement of the aluminum grains and modification of the secondary phase, the mechanical properties of nano-treated AA7075 are markedly improved31.
FESEM images of the fracture surfaces of AA7075 alloys with a cooling rate of 14.8 K/s. (a,b) AA7075 before and after heat treatment. Cracks and microscopic porosities were clearly observed and can lead to the failure of the alloys; (c,d) nano-treated AA7075 (as-cast); (e,f) nano-treated AA7075 after heat treatment. With nano-treatment by TiC NPs, AA7075 alloys with a dense microstructure were obtained.
De Cicco et al.32 reported a nucleation catalysis effect of nanoscale inoculants in aluminum alloys and found that undercooling was reduced owing to effective nucleation on the nanoparticles' surfaces. Based on the different levels of undercooling caused by nanoparticles of various sizes, they suggested that nanoparticle encapsulation might occur during the adsorption of the initial crystal layer, which could further promote the refining effect of the nanoparticles on aluminum grains. Meanwhile, effective grain boundary pinning by nanoparticles, owing to their strong bonding with the matrix, further restricts the growth of aluminum grains33.
Therefore, the solidification process of nano-treated AA7075 with TiC NPs can be proposed as follows; the corresponding schematic diagram is shown in Fig. 9. TiC NPs can act as heterogeneous nucleation sites for primary Al grains due to the small lattice mismatch between Al and TiC34,35, as illustrated in Supplementary Fig. S2. According to ref.36, the growth restriction factor Q in the free-growth model is a function of the refiner addition level, the chemical composition of the alloy, and the solidification rate. Due to the high concentration and rather uniform distribution of TiC NPs in the Al-TiC master nanocomposite, the effective quantity of TiC introduced into the AA7075 alloys is expected to be rather high, which enhances the growth restriction effect of the nanoparticles. During continued solidification, some TiC NPs and other alloying elements are pushed to the solidification fronts, and the TiC NPs pushed to the grain boundaries act as physical barriers that effectively pin grain growth and provide a considerable growth restriction factor. An encapsulation structure is then formed by nanoparticles on the outside of the eutectic phases due to their high surface activity. By means of the restriction effect of the encapsulation geometry and the specific interface matching between TiC NPs and secondary phases, the eutectic phases of the AA7075 alloys are modified into finer, well-dispersed divorced features, as shown in Fig. 7. Furthermore, many TiC NPs are embedded in the secondary phases, as clearly illustrated in Fig. 8(d,f). Based on these results, the microstructure of nano-treated AA7075 is significantly improved, with the aluminum dendrites refined into fine equiaxed grains (less than 20 μm) and the continuous lamellar eutectic phases modified into fine divorced features. After heat treatment, the grain sizes of nano-treated AA7075 remain similar to those of the as-cast samples, which is attributed to the excellent pinning effect of the homogeneously distributed TiC NPs. As a result, the mechanical properties of nano-treated AA7075, with its dense microstructure, are correspondingly improved owing to the excellent refinement of its microstructure37,38.
The schematic diagrams of solidification processes of AA7075 alloys. (a) Pure alloy; (b) nano-treated alloy with TiC NPs.
In summary, a novel nano-treatment approach was applied for microstructural control and performance improvement of AA7075. The effect of cooling rate on the nano-treatment of AA7075 was also investigated.
When nano-treated with TiC NPs, the coarse dendritic grains in the as-cast samples were refined to small equiaxed grains smaller than 20 µm in size. During T6 heat treatment, the aluminum grains remained almost unchanged, indicating that nano-treating imparted exceptional thermal stability to AA7075.
In contrast to the pure alloy, the grain size of nano-treated AA7075 was only slightly affected by the solidification rate. At a solidification rate of 14.8 K/s, the average grain size of nano-treated AA7075 was about 17.5 μm.
The hardness of the nano-treated alloys was enhanced from 110 HV to about 130 HV. The samples with finer grains showed higher hardness. After T6 heat treatment, the hardness of nano-treated AA7075 alloys significantly increased to about 188 HV.
It is believed that, besides providing a growth restriction effect through pinning at grain boundaries, TiC nanoparticles also serve as effective heterogeneous nucleation sites and excellent modifiers of the secondary phases, optimizing the microstructure and thereby improving the mechanical properties of AA7075.
Commercially pure Al and high-purity TiC nanoparticles (40–60 nm, US Research Nanomaterials, Inc.) were used to fabricate Al-TiC master nanocomposites by a molten salt-assisted processing method39,40,41. Commercially pure Zn, Mg, Cu, and Cr ingots were added into the Al melt to prepare basic AA7075 alloys, whose chemical composition is listed in Table 1 (all compositions are in wt.% unless otherwise stated).
Table 1 Chemical composition of AA7075 (wt.%).
Nano-treatment of AA7075 alloys was carried out as follows. Under argon gas protection, about 200 g of the base alloy was re-melted in a graphite crucible with an inner diameter of 100 mm and a height of 150 mm in an electrical resistance furnace at 830 °C for 30 min. Subsequently, the Al-6 vol.% TiC master nanocomposite was added to the melt, followed by mechanical stirring for 10 min until it melted and a uniform dispersion was obtained. Finally, the melt was poured into a copper wedge mold. The cooling rate, R (in K/s), corresponding to the local thickness, d (in mm), of the viewing zone in the mold was calculated using Eq. (1)42. From the bottom to the top of the bulk casting samples, the microstructures were studied at five locations, and their corresponding cooling rates were determined to be 197.5 K/s, 87.8 K/s, 29.5 K/s, 20.3 K/s, and 14.8 K/s, respectively.
$$R\approx \frac{1000\ \mathrm{K\,mm^{2}\,s^{-1}}}{(d/2)^{2}}$$
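As a quick illustration of Eq. (1), the short Python sketch below evaluates the relation; the thickness values are not reported in the paper and are simply back-calculated from the five quoted cooling rates for demonstration.

# Illustrative sketch of Eq. (1): R ~ (1000 K*mm^2/s) / (d/2)^2.
# The thicknesses d below are hypothetical, back-calculated from the
# reported cooling rates; they are not values given in the paper.
def cooling_rate(d_mm):
    """Approximate cooling rate (K/s) for a local mold thickness d (mm)."""
    return 1000.0 / (d_mm / 2.0) ** 2

for d in (4.50, 6.75, 11.64, 14.04, 16.44):  # assumed thicknesses, mm
    print(f"d = {d:5.2f} mm  ->  R = {cooling_rate(d):6.1f} K/s")

Run as written, this reproduces the five cooling rates quoted above (197.5, 87.8, 29.5, 20.3 and 14.8 K/s).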
The AA7075 samples were heat treated following the T6 procedure, which included solution treatment at 460–480 °C for 1 h, followed by water quenching and then artificial aging at 120 °C for 19 h. Metallographic specimens were cut from the midsection of each cast sample and then mechanically ground, polished, and low-angle ion milled using a Precision Ion Polishing System (PIPS) for 1.5 h to expose typical microstructures and embedded nanoparticles. For the ion milling process, the accelerating voltage and milling angle were 4 kV and 4°, respectively. For HRTEM analysis, a sample of approximately 50 nm in thickness was obtained from nano-treated AA7075 (as-cast) using a Focused Ion Beam (FIB, Zeiss 1540 XB CrossBeam Workstation) and studied with a Titan 80–200 aberration-corrected S/TEM (FEI). In order to evaluate the grain size of the alloys, samples were etched for 15–25 s using a mixture of 1.0 g NaOH, 4.0 g KMnO4 and 100 ml deionized water. The microstructural characterizations of the AA7075 samples were carried out with an optical microscope and a field emission scanning electron microscope (FEI Nova 600) equipped with a focused ion beam system. The average grain size of aluminum was obtained from FESEM images using the image analysis software ImageJ. The Vickers hardness tests were conducted with a microhardness tester (LM800 AT) under 200 gf with a dwell time of 10 s. Across the different cooling rates, 20 specimens of pure AA7075 and nano-treated AA7075 alloys, before and after heat treatment, were tested. By statistical analysis of ten indentation points per specimen, the Vickers hardness values of the series of AA7075 alloys were obtained.
Panigrahi, S. K. & Jayaganthan, R. Development of ultrafine grained high strength age hardenable Al 7075 alloy by cryorolling. Mater. Des. 32, 3150–3160 (2011).
Peng, J. F. et al. Study on the damage evolution of torsional fretting fatigue in a 7075 aluminum alloy. Wear 402–403, 160–168 (2018).
Ezatpour, H. R., Haddad-Sabzevar, M., Sajjadi, S. A. & Huang, Y. Z. Investigation of microstructure and mechanical properties of Al6061-nanocomposite fabricated by stir casting. Mater. Des. 55, 921–928 (2014).
Ezatpour, H. R., Chaichi, A. & Sajjadi, S. A. The effect of Al2O3-nanocomposites as the reinforcement additive on the hot deformation behavior of 7075 aluminum alloy. Mater. Des. 88, 1049–1056 (2015).
Cheng, C. M., Chou, C. P., Lee, I. K. & Lin, H. Y. Hot cracking of welds on heat treatable aluminium alloys. Sci. Technol. Weld. Join. 10, 344–352 (2005).
Seyed Ebrahimi, S. H. & Emamy, M. Effects of Al-5Ti-1B and Al-5Zr master alloys on the structure, hardness and tensile properties of a highly alloyed aluminum alloy. Mater. Des. 31, 200–209 (2010).
Dong, J., Cui, J. Z., Yu, F. X., Zhao, Z. H. & Zhuo, Y. B. A new way to cast high-alloyed Al-Zn-Mg-Cu-Zr for super-high strength and toughness. J. Mater. Process. Technol. 171, 399–404 (2006).
Panigrahi, S. K. & Jayaganthan, R. Effect of ageing on microstructure and mechanical properties of bulk, cryorolled, and room temperature rolled Al 7075 alloy. J. Alloy Compd. 509, 9609–9616 (2011).
Ilieva, M. & Radev, R. Effect of severe plastic deformation by ECAP on corrosion behavior of aluminium alloy AA 7075. Arch. Mater. Sci. Eng. 81, 55–61 (2016).
Moghaddam, M., Zarei-Hanzaki, A., Pishbin, M. H., Shafieizad, A. H. & Oliveira, V. B. Characterization of the microstructure, texture and mechanical properties of 7075 aluminum alloy in early stage of severe plastic deformation. Mater. Charact. 119, 137–147 (2016).
Hotea, V., Juhasz, J. & Cadar, F. Grain refinement of 7075Al alloy microstructures by inoculation with Al-Ti-B master alloy. 2017 IOP Conf. Ser.: Mater. Sci. Eng. 200, 012029 (2017).
Chen, X. H., Yan, H. & Jie, X. P. Effects of Ti addition on microstructure and mechanical properties of 7075 alloy. Int. J. Cast Metal Res. 28, 151–157 (2015).
Alipour, M. et al. Effects of pre-deformation and heat treatment conditions in the SIMA process on properties of an Al-Zn-Mg-Cu alloy modified by Al-8B grain refiner. Mater. Sci. Eng. A 528, 4482–4490 (2011).
Seyed Ebrahimi, S. H., Aghazadeh, J., Dehghani, K., Emamy, M. & Zangeneh, S. The effect of Al-5Ti-1B on the microstructure, hardness and tensile properties of a new Zn rich aluminium alloy. Mater. Sci. Eng. A 636, 421–429 (2015).
Senkov, O. N., Shagiev, M. R., Senkova, S. V. & Miracle, D. B. Precipitation of Al3(Sc,Zr) particles in an Al-Zn-Mg-Cu-Sc-Zr alloy during conventional solution heat treatment and its effect on tensile properties. Acta Mater. 56, 3723–3738 (2008).
Rogal, L. et al. Characterization of semi-solid processing of aluminium alloy 7075 with Sc and Zr additions. Mater. Sci. Eng. A 580, 362–373 (2013).
Wen, K. et al. Microstructure evolution of a high zinc containing Al-Zn-Mg-Cu alloy during homogenization. Rare Metal Mater. Eng. 46, 0928–0934 (2017).
Ma, C., Chen, L. Y., Cao, C. Z. & Li, X. C. Nanoparticle-induced unusual melting and solidification behaviours of metals. Nat. Commun. 8, 14178 (2017).
Wu, J. G., Zhou, S. Y. & Li, X. C. Ultrasonic attenuation based inspection method for scale-up production of A206-Al2O3 metal matrix nanocomposites. J. Manuf. Sci. Eng. 137, 011013 (2015).
Huang, S. J., Peng, W. Y., Visic, B. & Zak, A. Al alloy metal matrix composites reinforced by WS2 inorganic nanomaterials. Mater. Sci. Eng. A 709, 290–300 (2018).
Shin, S. E. & Bae, D. H. Fatigue behavior of Al2024 alloy-matrix nanocomposites reinforced with multi-walled carbon nanotubes. Compos. Part B- Eng. 134, 61–68 (2018).
Kannan, C. & Ramanujam, R. Comparative study on the mechanical and microstructural characterization of AA 7075 nano and hybrid nanocomposites produced by stir and squeeze casting. J. Adv. Res. 8, 309–319 (2017).
Joshi, T. C., Prakash, U. & Dabhade, V. V. Effect of nano-scale and micro-scale yttria reinforcement on powder forged AA7075 composites. J. Mater. Eng. Perform. 25, 1889–1902 (2016).
Flores-Campos, R. et al. Microstructural and mechanical characterization in 7075 aluminum alloy reinforced by silver nanoparticles dispersion. J. Alloy Compd. 497, 394–398 (2010).
Nie, J. F., Wu, Y. Y., Li, P. T., Li, H. & Liu, X. F. Morphological evolution of TiC from octahedron to cube induced by elemental nickel. CrystEngComm 14, 2213–2221 (2012).
Ramakoteswara Rao, V., Ramanaiah, N. & Sarcar, M. M. M. Dry sliding wear behavior of TiC-AA7075 metal matrix composites. Int. J. Appl. Sci. Eng. 14, 27–37 (2016).
Wu, R. R., Li, Q. S., Guo, L., Ma, Y. X. & Wang, R. F. Microstructure and mechanical properties of TiC/Al(7075) composites fabricated by in situ reaction. Acta Mater. Compos. Sin. 34, 1334–1339 (2017).
Zhang, B. Q., Fang, H. S., Li, J. G. & Ma, H. T. An investigation on microstructures and refining performance of newly developed Al-Ti-C grain refining master alloy. J. Mater. Sci. Lett. 19, 1485–1489 (2000).
Nie, J. F., Ma, X. G., Ding, H. M. & Liu, X. F. Microstructure and grain refining performance of a new Al-Ti-C-B master alloy. J. Alloy Compd. 486, 185–190 (2009).
Sokoluk, M., Cao, C. Z., Pan, S. H. & Li, X. C. Nanoparticle-enabled phase control for arc welding of unweldable aluminum alloy 7075. Nat. Commun. 10, 98 (2019).
Zhang, F., Su, X., Chen, Z. & Nie, Z. Effect of welding parameters on microstructure and mechanical properties of friction stir welded joints of a super high strength Al-Zn-Mg-Cu aluminum alloy. Mater. Des. 67, 483–491 (2015).
De Cicco, M. P., Turng, L. S., Li, X. C. & Perepezko, J. H. Nucleation catalysis in aluminum alloy A356 using nanoscale inoculants. Metall. Mater. Trans. A 42, 2323–2330 (2011).
Zhong, X. L., Wong, W. L. E. & Gupta, M. Enhancing strength and ductility of magnesium by integrating it with aluminum nanoparticles. Acta Mater. 55, 6338–6344 (2007).
Prasada Rao, A. K., Das, K., Murty, B. S. & Chakraborty, M. Al-Ti-C-Sr master alloy: a melt inoculant for simultaneous grain refinement and modification of hypoeutectic Al-Si alloys. J. Alloy Compd. 480, 49–51 (2009).
Zhang, M. X., Kelly, P. M., Easton, M. A. & Taylor, J. A. Crystallographic study of grain refinement in aluminum alloys using the edge-to-edge matching model. Acta Mater. 53, 1427–1438 (2005).
Greer, A. L., Bunn, A. M., Tronche, A., Evans, P. V. & Bristow, D. J. Modelling of inoculation of metallic melts: application to grain refinement of aluminium by Al-Ti-B. Acta Mater. 49, 2823–2835 (2000).
Hansen, N. Hall-Petch relation and boundary strengthening. Scripta Mater. 51, 801–806 (2004).
Ghiaasiaan, R., Amirkhiz, B. S. & Shankar, S. Quantitative metallography of precipitating and secondary phases after strengthening treatment of net shaped casting of Al-Zn-Mg-Cu (7000) alloys. Mater. Sci. Eng. A 698, 206–217 (2017).
Liu, W. Q., Cao, C. Z., Xu, J. Q., Wang, X. J. & Li, X. C. Molten salt assisted solidification nanoprocessing of Al-TiC nanocomposites. Mater. Lett. 185, 392–395 (2016).
Cao, C. Z. et al. Scalable manufacturing of immiscible Al-Bi alloy by self-assembled nanoparticles. Mater. Des. 146, 163–171 (2018).
Yao, G. C. et al. High-performance copper reinforced with dispersed nanoparticles. J. Mater. Sci. 54, 4423–4432 (2019).
Pryds, N. H. & Huang, X. The effect of cooling rate on the microstructures formed during solidification of ferritic steel. Metall. Mater. Trans. A 31, 3155–3166 (2000).
The authors gratefully acknowledge the support of the National Science Foundation, the National Natural Science Foundation of China (51401085), and the Natural Science Foundation of Shandong Province (ZR2019MEM019).
School of Materials Science and Engineering, University of Jinan, Jinan, 250022, People's Republic of China
Min Zuo
Department of Mechanical and Aerospace Engineering, University of California Los Angeles, California, 90094, United States
Min Zuo, Maximilian Sokoluk, Chezheng Cao, Jie Yuan, Shiqi Zheng & Xiaochun Li
Department of Materials Science and Engineering, University of California Los Angeles, California, 90094, United States
Maximilian Sokoluk, Chezheng Cao, Jie Yuan, Shiqi Zheng & Xiaochun Li
X.C.L. and M.Z. conceived the idea and designed the experiments. M.Z. fabricated the AA7075 + 1 vol.% TiC alloys and conducted the heat treatment experiments. M.Z. and M.S. fabricated the Al-TiC master nanocomposites. M.S. and C.C.Z. performed the FESEM imaging. Y.J. conducted the hardness tests. S.Q.Z. performed the XRD analyses. M.Z. and X.C.L. wrote the manuscript. X.C.L. supervised the whole work.
Correspondence to Min Zuo or Xiaochun Li.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Zuo, M., Sokoluk, M., Cao, C. et al. Microstructure Control and Performance Evolution of Aluminum Alloy 7075 by Nano-Treating. Sci Rep 9, 10671 (2019). https://doi.org/10.1038/s41598-019-47182-9
This article is cited by:
Gongcheng Yao, Shuaihang Pan, Chezheng Cao, Maximilian Sokoluk & Xiaochun Li. Nanoparticle-enabled phase modification (nano-treating) of CuZrSi pseudo-binary alloy. Materialia (2020).
Anvesh Dhulipalla, Budireddy Uday Kumar, Varupula Akhil, Jian Zhang, Zhe Lu, Hye-Yeong Park, Yeon-Gil Jung & Jing Zhang. Synthesis and machining characteristics of novel TiC ceramic and MoS2 soft particulate reinforced aluminium alloy 7075 matrix composites. Manufacturing Letters (2020).
Daniel Oropeza, Douglas C. Hofmann, Kyle Williams, Samad Firdosy, Punnathat Bordeenithikasem, Maximillian Sokoluk, Maximilian Liese, Jingke Liu. Welding and additive manufacturing with nanoparticle-enhanced aluminum 7075 wire. Journal of Alloys and Compounds (2020).
Commutativity of integration and Taylor expansion of the integrand in an integral
I am baffled by a seemingly straightforward problem. Suppose we are given the following integral:
\begin{equation} f(a)\,=\,\int_{0}^{\infty} \frac{x^4}{x^4+a^4}\, e^{-x}\,\text{d}x, \end{equation} and we want to determine the dependence of $f(a)$ on $a$ when $a\ll 1$. Apparently this integral can be evaluated in Mathematica. Taylor expanding the result, which is a Meijer G-function, it appears that $f(a)$ is analytic in $a$.
In the specific case of this integral, it's possible to use a trick so that one can directly Taylor expand the integrand (Taylor expanding the integrand of $f(a)-f(0)$ after $x\to x'=a x$). But I'm not interested in this particular integral and am mentioning this as a simple example.
Now here is what I find paradoxical: Let's try to do this in a more pedestrian way by breaking up the integration range and Taylor expanding the exponential when x is small and the rest of the integrand when x is large. Interchanging the integration and summation is justified by Fubini's theorem (if I'm not mistaken, $\int \sum |c_n(x)| <\infty$ or $\sum\int |c_n(x)|<\infty$).
Now, breaking up the integral can be done in two ways. Either,
\begin{equation} f(a)=\int_{0}^{1} \frac{x^4}{x^4+a^4} e^{-x}\,\text{d}x + \int_{1}^{\infty} \frac{x^4}{x^4+a^4} e^{-x}\,\text{d}x\,, \end{equation} or
\begin{equation} f(a)=\int_{0}^{2a} \frac{x^4}{x^4+a^4} e^{-x}\,\text{d}x + \int_{2a}^{\infty} \frac{x^4}{x^4+a^4} e^{-x}\,\text{d}x\,. \end{equation} $\frac{x^4}{x^4+a^4}$ can be Taylor expanded and the integration ranges are within the convergence radius in both cases. The Taylor expansion in both cases results in a series that's uniformly convergent and therefore one should be able to interchange integration and summation.
The former case, where the integration range is broken up at $1$, gives an analytic result in $a$. Curiously, the latter (breaking up the integral at $2a$) gives non-analytic terms (see below) and I cannot figure out how to reconcile this with the exact result. The lower integration ranges in both cases give analytic expressions in $a$.
\begin{equation} \int_{2a}^{\infty} \frac{x^4}{x^4+a^4} e^{-x}\,\text{d}x\,=\, \sum_{n=0}^{\infty} \int_{2a}^{\infty} \frac{(-1)^n a^{4n}}{x^{4n}}e^{-x}\,\text{d}x \,=\, \sum_{n=0}^{\infty}(-1)^{n}a^{4n}\Gamma(1-4n,2a). \end{equation} Using the series expansion of the upper incomplete $\Gamma$-function, there will be terms of the form $\frac{-(-1)^n}{(4n-1)!} a^{4n} \ln(a)$.
I would like to know whether the Taylor expansion is not justified (if so, why precisely), or, although hard to imagine, is it that somehow these non-analytic terms sum up to an analytic result. Thanks.
This post imported from StackExchange Mathematics at 2014-06-16 11:25 (UCT), posted by SE-user S.G.
real-analysis
asked Jan 21, 2014 in Mathematics by S.G. (30 points) [ no revision ]
I think your computation of the Taylor expansion of the latter case is wrong in the sense that it is not a Taylor expansion. Your integral boundaries depend on $a$ which has to be taken into account.
This post imported from StackExchange Mathematics at 2014-06-16 11:25 (UCT), posted by SE-user Dominik
commented Jan 21, 2014 by Dominik (0 points) [ no revision ]
Thanks Dominik. But that's exactly what I would like to know. Does that mean I can have a Taylor expansion for a function (here $f(a)$), and at the same time a series expansion that involve non-analytic terms of the above form?
commented Jan 21, 2014 by S.G. (30 points) [ no revision ]
What series expansion did you use for $\Gamma(1-4n,2a)$? I don't see where the terms with $\ln(a)$ come from.
This post imported from StackExchange Mathematics at 2014-06-16 11:25 (UCT), posted by SE-user Pulsar
commented Jan 21, 2014 by Pulsar (0 points) [ no revision ]
Thanks Pulsar. You can expand $\Gamma(m,a)$ in Mathematica. One way to see the log terms is the following: $\Gamma(m,a)=\int_{a}^{\infty} \text{d}s\, s^{m-1}e^{-s}$. For $a\ll 1$, this can be written as $\int_{a}^{1}\text{d}s \,s^{m-1}e^{-s}+\Gamma(m,1)$. Now for $m\leq 0$, the remaining integral can be dealt with by Taylor expanding the exponential. For any $m\leq 0$ there will be an $s^{-m}$ term in the Taylor expansion of the exponential that, multiplied by the $s^{m-1}$ factor, leaves a $\frac{1}{s}$ term. Then, doing the $s$ integral gives the log term. See the expansion of $\Gamma(0,a)$ on Wikipedia.
Hmm, my approach doesn't work; the denominators become zero. I'm deleting my answer again.
Incidentally, this question is more suited for math SE. I'll ask for a migration.
Thanks, Pulsar.
I figured it out. There is no contradiction. Both decompositions give exactly the same result, which coincides with what one finds by doing the integral in Mathematica and series expanding the quoted Meijer G-function.
The $a^{4n}\ln(a)$ terms do appear in the case where the integration range is broken up at $1$. They show up in the lower-range integral ($\int_{0}^{1}\cdots$). I should have been more careful. Sorry for the confusion.
The direct Taylor expansion that I alluded to was incorrect and does fail.
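For anyone who wants to double-check numerically, here is a small Python/mpmath sketch (not part of the original post; the value a = 0.1, the working precision and the 12-term truncation are arbitrary choices) comparing the tail integral with the term-by-term incomplete-gamma expansion quoted in the question:

from mpmath import mp, mpf, quad, gammainc, exp

mp.dps = 30                      # working precision in decimal digits
a = mpf('0.1')                   # example value with a << 1

# Direct numerical quadrature of the tail integral over [2a, infinity).
lhs = quad(lambda x: x**4 / (x**4 + a**4) * exp(-x), [2*a, mp.inf])

# Term-by-term expansion: sum_n (-1)^n a^(4n) Gamma(1-4n, 2a).
# Successive terms shrink roughly like 16^(-n), so a short sum suffices.
rhs = sum((-1)**n * a**(4*n) * gammainc(1 - 4*n, 2*a) for n in range(12))

print(lhs, rhs, lhs - rhs)       # difference ~ size of the first dropped term

The two numbers agree to roughly the size of the first neglected term, consistent with the approximately geometric $16^{-n}$ decay of the series.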
answered Jan 22, 2014 by S.G. (30 points) [ no revision ]
Mirror symmetry and aging: The role of stimulus figurality and attention to colour
Jasna Martinovic1,2,
Jonas Huber2,3,
Antoniya Boyanova2,
Elena Gheorghiu4,
Josephine Reuther2,5 &
Rafael B. Lemarchand1,2
Attention, Perception, & Psychophysics volume 85, pages 99–112 (2023)
Symmetry perception studies have generally used two stimulus types: figural and dot patterns. Here, we designed a novel figural stimulus—a wedge pattern—made of centrally aligned pseudorandomly positioned wedges. To study the effect of pattern figurality and colour on symmetry perception, we compared symmetry detection in multicoloured wedge patterns with nonfigural dot patterns in younger and older adults. Symmetry signal was either segregated or nonsegregated by colour, and the symmetry detection task was performed under two conditions: with or without colour-based attention. In the first experiment, we compared performance for colour-symmetric patterns that varied in the number of wedges (24 vs. 36) and number of colours (2 vs. 3) and found that symmetry detection was facilitated by attention to colour when symmetry and noise signals were segregated by colour. In the second experiment, we compared performance for wedge and dot patterns on a sample of younger and older participants. Effects of attention to colour in segregated stimuli were magnified for wedge compared with dot patterns, with older and younger adults showing different effects of attention to colour on performance. Older adults significantly underperformed on uncued wedge patterns compared with dot patterns, but their performance improved greatly through colour cueing, reaching performance levels similar to young participants. Thus, while confirming the age-related decline in symmetry detection, we found that this deficit could be alleviated in figural multicoloured patterns by attending to the colour that carries the symmetry signal.
Mirror symmetry is ubiquitous in the natural world. It occurs when two halves of a pattern mirror each other across a symmetry axis. Mirror symmetry detection is an effortless process when compared with detection of translational or rotational symmetry (Julesz, 1971). Studies have shown that a 50-ms exposure time is enough to discriminate mirror symmetry from noise, highlighting the speed and efficiency with which the human visual system can process mirror symmetry (C. C. Chen & Tyler, 2010). However, the visual system is also sensitive to the colours rather than just positions of features, symmetric or otherwise. Symmetry sometimes correlates with colour and brightness modulations within objects such as fruit, flowers, human and animal faces, and bodies. While the role of patterns' element position in mirror symmetry perception has been extensively investigated (for a review, see Bertamini et al., 2018), the extent to which colour and luminance polarity inconsistency of the symmetric elements affects symmetry detection is still debated.
A number of studies have investigated the role of colour in symmetry perception (Gheorghiu et al., 2016; Morales & Pashler, 1999; Wright et al., 2018; Wu & Chen, 2014, 2017). Initially, Morales and Pashler (1999) used two- and four-colour nonisoluminant patterns in which the elements (squares) were arranged either symmetrically or asymmetrically (i.e., mismatched in colour) and found that symmetry detection mechanisms are not colour selective. An increase in the number of colours (from two to four) resulted in longer symmetry detection times and lower accuracy, which led the authors to propose that symmetry in multi-colour patterns could only be detected by a sequential attention-switching mechanism from one colour to the next. Connecting these ideas with the feature-integration model of attention (Treisman & Gelade, 1980), Huang and Pashler (2002) further suggested that symmetry may be assessed one colour at a time using internal representations which specify the spatial distribution of a particular feature as either present or absent (e.g., red and nonred). For images that contain more than two colours, colour–symmetry would thus have to be serially evaluated.
On the other hand, Wu and Chen (2014) claimed that symmetry detection was colour selective based on the elevation of symmetry detection thresholds by a noise mask when noise and symmetric elements were of the same chromaticity. Gheorghiu et al. (2016) clarified whether symmetry channels were colour selective or colour sensitive by studying symmetry detection in random dot patterns in which colour and symmetry signals were either correlated or uncorrelated. Measuring symmetry detection under two perceptual conditions—" with" and "without attention" to colour—Gheorghiu et al. showed that while symmetry detection mechanisms are sensitive to colour correlations across the symmetry axis and benefit from attention to colour, they are not colour selective (i.e., there are no colour-tuned symmetry channels). The effects reported in Wu and Chen's (2014) study can be explained by the fact that participants knew the colour of the symmetric pattern, and thus they could selectively attend to the colour carrying the symmetry signal helping them to segregate symmetry from noise and thus facilitating symmetry detection.
In nature, symmetry is a salient attribute of figures (e.g., flowers, animal bodies, faces) rather than backgrounds, leading to higher conspicuity of symmetric animals. To impede predator discrimination prey may exhibit background matching or disruptive colouration, with the latter being particularly effective at decreasing detectability when placed away from the animal's midline (i.e., symmetry axis; Wainwright et al., 2020). On this basis, generalizability of results from random-dot symmetries has been questioned by some authors (e.g., Wilson & Wilkinson, 2002). How similar is the role of colour in symmetry perception for dot patterns compared with figural patterns? Random dot patterns isolate sensitivity to positional information but lack any orientation information. One could argue that this is an important attribute of symmetric figures in everyday life. Using stimuli made of Gabor patches, Machilsen et al. (2009) found that orientation noise decreases the salience of symmetrical contour shapes embedded in backgrounds. Meanwhile, Sharman and Gheorghiu (2019) showed that orientation of elements does not affect symmetry detection in Gabor patterns if these are not embedded in a background, suggesting that symmetry detection mechanisms are solely reliant on positional information. However, neither of these studies examined the role of colour. Therefore, it remains unknown whether colour–symmetry correlations would become more conspicuous in figural stimuli containing both positional and orientation cues.
To address these outstanding issues, we used patterns made of Gaussian blobs similar to those used by Gheorghiu et al. (2016) and contrasted them with a novel wedge pattern made of centrally aligned but pseudorandomly positioned elements. While dot patterns allow for tight control over positional information in the absence of orientation, figural patterns contain both position and orientation information consistent with a figural object (see Fig. 1). As in Gheorghiu et al. (2016), we used a two-interval forced-choice (2IFC) task and measured accuracy for detecting symmetry in two-colour patterns containing 50% position symmetry and three-colour patterns containing 33% position symmetry under four stimulus conditions: (1) segregated patterns, in which symmetric and random (or noise) elements had different colours (e.g., all symmetric elements were red, and all random elements were green); (2) nonsegregated patterns, in which symmetric and random elements were of all colours in equal proportion (e.g., in two-colour patterns half of all symmetric and random elements were red, and the other half green); (3) antisymmetric patterns in which position-symmetric elements were mismatched in colour across the symmetry axis and random elements were assigned different colours in equal proportion across the symmetry axis; (4) colour-grouped antisymmetric patterns in which each half of the stimulus was of a different colour (Fig. 2).
Properties of colour-symmetric wedge patterns. An example of how a 24-wedge pattern is built by combining twelve 100% positionally symmetric wedge elements and 12 noise elements. Each wedge pattern contains a symmetry signal (i.e., 100% position symmetry, as reflected by the weight of evidence W score of perceptual goodness equal to 0.5 meaning that there are 6 symmetric pairs out of a total of 12 elements) and noise (0% position symmetry). This results in a symmetric pattern with 50% position symmetry for the two-colour condition (W = 0.25) and 33% position symmetry (W = 0.17) for the three-colour condition. There are three colour arrangement conditions: segregated (left), nonsegregated (middle), and antisymmetric (right), all containing 50% position symmetry. In the segregated condition, the symmetry signal is of a single colour (either red or green) and noise of another colour (either green or red). In the nonsegregated condition, the symmetry signal is distributed evenly across the two colours (both red and green) as with the noise. In the antisymmetric condition, the symmetric elements are made of both colours but with symmetric pairs having opposite colours across the symmetry axis. Note that the number of wedges in each colour is equal across the symmetry axis. (Colour figure online)
Example stimuli for the different colour–symmetry and noise combinations. a Experiment 1: Wedge stimuli consisted of 24 (top) or 36 (bottom) wedges and were of made of two (left) or three (right) colours. There were three colour–symmetry conditions: segregated, nonsegregated, and antisymmetric (see text and Fig. 1 for further details). b Experiment 2 contrasted performance for dot (top) and wedge (bottom) patterns using the segregated, nonsegregated, and antisymmetric conditions. The foil for these conditions was random distributed dots/wedges of all colours in equal proportions. In addition, we included a colour-grouped antisymmetric condition and a colour-grouped random pattern in which one side of the pattern was of one colour and the other side was of a different colour. The colour-grouped noise patterns served as foil for the colour-grouped antisymmetric condition. Note: the colour of the symmetry signal in the segregated conditions is red. (Colour figure online)
Based on previous findings from Gheorghiu et al. (2016), we expected attention to colour to facilitate symmetry detection for segregated stimuli only (i.e., stimuli in which symmetry is carried by a single colour) but have no effect for other stimulus conditions in which symmetry is distributed across all colours. We expected attentional effects to be magnified for wedge patterns. While colour would attract spatial attention to dot and wedge position alike, wedge patterns would carry additional orientation-symmetry information to be processed. If higher perceptual load increases attentional effects (Lavie, 1995), this should not only produce larger benefits for colour-segregated patterns but also larger costs in performance for antisymmetric stimuli, through restricting symmetry analysis to a wholly uninformative part of the stimulus. In fact, when asked to detect symmetry in antisymmetric patterns, participants seem to be able to do so only under conditions in which information on spatial location is easily accessible (e.g., when there is low element density and high contrast homogeneity; Mancini et al., 2005). In nongrouped antisymmetric stimuli there may be fewer residual processing resources for detecting the positional symmetry carried by wedges depicted by the unattended colour. If this is the case, the complete lack of symmetry in elements depicted by the attended colour should lead to reduced or even at-chance performance.
To our knowledge, only one study by Herbert et al. (2002) examined symmetry detection in dot patterns in older adults and found a large reduction in sensitivity to symmetry among participants aged 60 and above (d' ~1–2). The information degradation hypothesis suggests that degraded perceptual signal inputs lead to perceptual processing errors, which in turn contribute to cognitive deficits in otherwise healthy older adults (for a review see Monge & Madden, 2016). Contour integration—a task that requires interelement grouping by orientation—also exhibits age-related costs in performance (Roudaia et al., 2008, 2013). On the other hand, global shape discrimination thresholds obtained with Glass patterns (which lack oriented lines) are similar in younger and older participants, with only a small reduction in sensitivity once noise dots are added (d' change of ~0.3; Norman & Higginbotham, 2020). Could feature-based attention alleviate at least some of the age-related deficits in symmetry perception? In the case of colour symmetry in segregated two-colour displays (Figs. 1 and 2), if attention was applied with optimal efficiency, directing attention to the colour of the symmetric elements would lead to maximal strength signal (i.e., fully filtering out all the distractor elements).
To evaluate our predictions, we conducted two experiments. In Experiment 1, we used our novel wedge patterns to examine how symmetry detection is affected by the number of wedges, number or colours, and colour distribution between symmetric and noise wedges. If wedge patterns containing both position and orientation symmetry engage the same mechanisms as dot patterns which contain only position symmetry, one would expect to obtain similar dependencies: (1) invariance to the number of elements as long as their density remains relatively low (e.g., Rainville & Kingdom, 2002, report a symmetry integration region of ~18 elements on average) and (2) improvements through feature-based attentional cueing when symmetry signal and noise elements are segregated by colour (Gheorghiu et al., 2016). In Experiment 2, we examined aging effects on symmetry detection by comparing performance for symmetric wedge patterns and symmetric dot patterns between younger and older participants. We expected to observe an age-related deficit in symmetry perception (Herbert et al., 2002), which should be alleviated through attentional deployment to colour in displays in which symmetry and noise elements were segregated by colour.
In Experiment 1, a total of 21 participants were tested, but three of them were excluded due to poor overall performance (below 55%). This left 18 participants in the sample (two males, age range: 19–29 years, M = 22, SD = 2). In Experiment 2, there were 26 participants (eight males): 14 younger adults (age range: 20–27 years; M = 22, SD = 2) and 12 older adults (age range: 60–69 years; M = 65, SD = 3). Psychophysics utilizes precise measurement techniques and focuses on effects that are generally large and stable across participants (e.g., Baker et al., 2018); thus, our sample sizes, which were no different from those in previous similar studies (e.g., Gheorghiu et al., 2016), were deemed adequate in this context (Lakens, 2022).
Participants were recruited amongst undergraduate and post-graduate students as well as members of the University of Aberdeen's School of Psychology participant panel. They received either course credit or a monetary reimbursement to compensate them for their time and effort. Participants had normal or corrected-to-normal visual acuity and normal colour vision as assessed by the City University Colour Test (Fletcher, 1975). In Experiment 2, visual acuity was verified using a Snellen chart. Participants gave written informed consent. The research protocol was approved by the University of Aberdeen School of Psychology ethics committee and participants were treated in accordance with the Declaration of Helsinki (1964).
Measurements of screen phosphors by SpectroCAL (Cambridge Research Systems, UK) were used in combination with CIE 1931 colour matching functions to ensure accurate colour reproduction. CRS toolbox for MATLAB was used to control stimulus presentation and collect responses, while CRS Colour toolbox (Westland et al., 2012) was used to generate the nonisoluminant colour stimuli.
Participants sat in a testing chamber at a viewing distance of 80 cm from the screen and responded via a button box (in Experiment 1, Cedrus RB-530, Cedrus Corporation, San Pedro, CA; in Experiment 2, CT-6 box, CRS, UK). In Experiment 1, stimuli were presented on Display++ screen (CRS, UK). In Experiment 2, stimuli were presented on a ViewSonic P227f monitor under the control of a Dell PC equipped with a dedicated visual-stimulus generator (ViSaGe; CRS, UK). The chromatic and luminance output of the monitor were calibrated prior to testing using a ColorCal2 (CRS, UK).
A mid-grey background (CIE 1931, x = 0.2848, y = 0.2932, luminance 23 cd/m²) was used. We used the unique hues—red (0.3521, 0.2966, 49.887 cd/m²), green (0.2618, 0.3612, 49.887 cd/m²), and yellow (0.348, 0.364, 49.887 cd/m²) from the normative data set by Wuerger (2013). Stimuli consisted of wedge/dot patterns containing either two (red and green) or three (red, green, yellow) colours with respectively 50% and 33% wedges/dots arranged symmetrically in the symmetric condition, while the remaining wedges/dots were randomly positioned and drawn equally from the remaining colours (Fig. 1). In the two-colour patterns, 50% of the elements were green and 50% were red. In three-colour patterns, 33% of elements were depicted in each colour.
There were five stimulus conditions: (1) "segregated" condition, in which the symmetric dots or wedges were of one colour, and the random (or "noise") dots/wedges were of the remaining colour(s); (2) "random-segregated" condition, which was the same as the segregated condition, except that the colour of the symmetric elements was randomly assigned to a different colour on each trial instead of being the same across the entire experiment; (3) "nonsegregated" condition, in which the symmetric elements were of all colours in equal proportion, as were the noise elements; (4) "antisymmetric" stimuli, in which position-symmetric dots were mismatched in colour across the symmetry axis; (5) "colour-grouped antisymmetric patterns" was an antisymmetric pattern in which all elements on one half of the pattern were of one colour, while the other half had a different colour. Such colour-grouped patterns could only be generated for two-colour stimuli. In the random dots and wedge patterns (0% symmetry signal), the noise dots/wedges were made of all colours in equal proportions. We also used a colour-grouped random pattern in which half of the random pattern was of one colour (either red or green) and the other half was of a different colour (either green or red). This was used for comparison with colour-grouped antisymmetric patterns.
The MATLAB scripts for generating wedge patterns can be found in our study's online repository (https://osf.io/mf9ug/). The holographic model of regularity (van der Helm & Leeuwenberg, 1996) states that the weight of evidence for regularity in a pattern can be expressed as:
$$ W=\mathrm{E}/\mathrm{N} $$
where W = perceptual goodness, E = evidence for regularity, equal to the number of symmetric pairs, and N = total amount of information. In our case, E would be the number of wedge pairs that mirror each other across the vertical symmetry axis and N would be the total number of wedges. Perfect symmetry would have W = 0.5. For our two-colour symmetric patterns W = 0.25, for three-colour symmetric patterns W = 0.17, and for random patterns W = 0.
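As a minimal worked example (not part of the original Methods; the function name is arbitrary), these W values follow directly from the element counts, shown here for a 24-element pattern in Python:

# Weight of evidence W = E / N (holographic model of regularity).
def perceptual_goodness(n_symmetric_pairs, n_elements):
    """W = number of mirror-symmetric pairs / total number of elements."""
    return n_symmetric_pairs / n_elements

print(perceptual_goodness(12, 24))  # perfect symmetry: W = 0.5
print(perceptual_goodness(6, 24))   # two-colour symmetric pattern: W = 0.25
print(perceptual_goodness(4, 24))   # three-colour symmetric pattern: W ~ 0.17
print(perceptual_goodness(0, 24))   # random pattern: W = 0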
In Experiment 1, we contrasted performance for two- and three-coloured patterns consisting of 24 or 36 wedges. The wedges covered 50% of the area of a circle subtending 13.79° in diameter. To avoid visual discomfort resulting from too many wedges proximal to each other in the centre of the circle, an area 3.10° in diameter at the centre of the image was left blank (Figs. 1 and 2). We used three types of colour symmetry: segregated, nonsegregated, and antisymmetric.
While Experiment 1 focused on characterizing how performance driven by the novel wedge stimulus relates to number of wedges or colours in a sample of younger participants, Experiment 2 aimed to contrast it to symmetry perception from the more classical random dot patterns in younger and older adults. As older adults were expected to have poorer performance than younger adults, we needed to ensure that robustly above-chance performance could be obtained in both groups. Thus, we conducted pilots in which we manipulated the number and density of wedges until we were satisfied that our experiment would yield reliably above-chance performance in older adults. Another aim of the pilots was to ensure relatively similar performance for segregated dot and wedge patterns. This would make the findings from different colour–symmetry conditions more easily interpretable through providing a point of correspondence between the two stimulus types at the upper end of performance. The experiment used wedge patterns consisting of 16 elements, occupying 20% of the surface area of a circle subtending 14.74° in diameter (Fig. 2) and dot patterns similar to those used by Gheorghiu et al. (2016), consisting of 96 Gaussian blobs (0.41° diameter with a Gaussian size standard deviation factor of 5) spread over an area approx. 11° × 11° visual angle.
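For concreteness, the positional and colour logic of these colour-symmetric patterns can be outlined as follows. This is an illustrative Python/NumPy sketch only, not the authors' MATLAB code; the element count, coordinate range, colour labels, and the omission of wedge geometry, density and spacing constraints are all simplifying assumptions.

import numpy as np

rng = np.random.default_rng(0)

def colour_symmetric_pattern(n_elements=96, colours=("red", "green"),
                             condition="segregated"):
    """Positions and colours for a vertically mirror-symmetric pattern.

    Half of the elements form mirror pairs across the vertical axis
    (x -> -x); the remaining elements are randomly placed noise.
    """
    n_pairs = n_elements // 4
    left = rng.uniform([-1.0, -1.0], [0.0, 1.0], size=(n_pairs, 2))
    symmetric = np.vstack([left, left * [-1.0, 1.0]])   # mirrored copies
    noise = rng.uniform(-1.0, 1.0, size=(n_elements - 2 * n_pairs, 2))

    if condition == "segregated":
        # Symmetry signal carried by one colour, noise by the other.
        sym_cols = [colours[0]] * (2 * n_pairs)
        noise_cols = [colours[1]] * len(noise)
    else:
        # "Nonsegregated": signal and noise split evenly across colours,
        # with each mirror pair matched in colour across the axis.
        pair_cols = [colours[i % len(colours)] for i in range(n_pairs)]
        sym_cols = pair_cols + pair_cols
        noise_cols = [colours[i % len(colours)] for i in range(len(noise))]

    return np.vstack([symmetric, noise]), sym_cols + noise_cols

positions, element_colours = colour_symmetric_pattern(condition="segregated")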
We used a two-interval forced-choice (2IFC) procedure. At the start of each trial, a fixation cross appeared for 700 ms and was followed by the sequential presentation of two images, one containing a symmetric pattern (i.e., either segregated, random segregated, nonsegregated, antisymmetric, or colour-grouped antisymmetric) and the other a random pattern (Fig. 3). The symmetric and random patterns were presented in random order and were separated by an inter-stimulus interval of 700 ms. The vertical line of the fixation cross was elongated (0.55° × 5.52°) to reinforce that mirror symmetry detection was to be performed across the vertical axis. Since the wedge-pattern was wheel-shaped, and wedge generation was evaluated with regard to vertical mirror symmetry, participants could potentially pick up on genuine, yet unintended symmetry if they were to evaluate the stimulus in relation to a different axis. Participants' task was to indicate whether the symmetric pattern was in the first or second interval by using a key press. The left button corresponded to the first interval and the right button to the second. Participants were allowed to take as long as required to respond. In Experiment 1, stimulus images were presented for 500 ms (as in Gheorghiu et al., 2016), while in Experiment 2, presentation time was 1,000 ms. The longer stimulus duration in Experiment 2 was chosen following a pilot experiment and was intended to ensure that older adults could perform the symmetry detection task above chance level.
Schematic of the 2IFC procedure. In each trial, participants viewed two intervals—one containing a symmetric pattern and one showing a random/noise pattern (i.e., foil). The order of the patterns was randomized from trial to trial. In this example, the first interval contains the symmetric stimulus in which symmetry and noise are segregated by colour (symmetric wedges are all red), while the second interval contains a noise pattern (0% position symmetry). The fixation cross is elongated along the vertical axis to reinforce that this is the symmetry axis along which the circular patterns are to be judged. (Colour figure online)
This experiment was repeated under two perceptual conditions: first, without attention to colour symmetry, and second, with attention to colour (i.e., participants were a priori told the colour of the symmetric pattern). Similar to Gheorghiu et al. (2016), in these attention-to-colour conditions the stimuli were not physically altered in any way, but participants were verbally informed of the symmetry colour to attend (i.e., the colour carrying the symmetry signal in the segregated condition). Half of subjects were cued to green, and the other half to red.
At the start of each block, participants first performed 16 practice trials to familiarize themselves with the stimuli and task. Participants were asked to repeat the practice if they failed to exceed 60% accuracy. In Experiment 1, stimuli were blocked by number of wedges and number of colours, with 120 trials per block (40 per colour–symmetry condition; segregated, nonsegregated, and antisymmetric). Block order was randomly assigned for each participant. In Experiment 2, stimuli were blocked by stimulus type (wedges or dots), with 250 trials per block (50 per colour–symmetry condition; segregated, random segregated, nonsegregated, nongrouped antisymmetric, grouped antisymmetric). Block order was counterbalanced amongst participants, as was the attended colour. Breaks were offered between blocks. Experiment 1 took two hours to complete. Experiment 2 took two and a half hours to complete. Due to the lengthiness of Experiment 2, participants were offered the possibility of completing the study in two separate sessions.
All analyses were performed in R (R Core Team, 2016), using packages gtools (Warnes et al., 2015), reshape2 (Wickham, 2007), dplyr (Wickham et al., 2017), ggplot2 (Wickham, 2009), lme4 (Bates et al., 2015), effectsize (Ben-Shachar et al., 2020), and emmeans (Lenth et al., 2019).
The use of analysis of variance (ANOVA) is questionable when data outcomes are categorical (e.g., correct or incorrect response). Accuracy rates computed from single-trial data may appear suitable for an ANOVA but suffer from the problem that confidence intervals (CIs) may extend beyond interpretable values of 0 and 100% (e.g., a mean of 90% with a CI between 65% and 115%). While ordinary logit models have many advantages over ANOVAs on percentage data, mixed logit models have the further advantage of being able to account for random subject effects (Jaeger, 2008). We used generalized linear mixed-effect models (GLMMs) on the binomial single-trial accuracy data (correct/incorrect), as implemented in the R statistic package lme4. Effect sizes are reported as standardized odds ratios (ORs). An OR of ~1 would imply no difference in the likelihood of the two outcomes (here, correct or incorrect) between conditions, while ORs of 1.68, 3.47, and 6.71 can be taken as equivalent to Cohen's d = 0.2 (small), 0.5 (medium), and 0.8 (large), respectively (H. Chen et al., 2010).
When fitting GLMMs, we applied the maximal random-effect structure that was possible while maintaining goodness of fit. We then evaluated the contributions of fixed effects and their interactions by removing the highest-order effects, one by one, and performing chi-squared tests to assess whether their removal affected the amount of variance explained. We report all the estimates of the final model, in which none of the remaining effects can be removed without reducing the variance explained. Post hoc tests on any interactions in this final model were performed using omnibus paired t tests, corrected for multiple comparisons (p < .05) with the "mvt" method from the emmeans package, which relies on the multivariate t distribution with the same covariance structure as the estimates to determine the adjustment.
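The model-simplification and post hoc steps could look roughly like the following sketch, which again assumes the hypothetical model object m and column names from the previous example; the dropped term and the contrast specification are illustrative, not the exact terms tested in this study.

```
# Illustrative sketch only, continuing from the hypothetical model 'm' above.
library(emmeans)

# Drop the highest-order term and test whether fit worsens
# (likelihood-ratio chi-squared test on the nested models).
m_reduced <- update(m, . ~ . - colour_symmetry:attention)
anova(m_reduced, m)   # a significant chi-squared keeps the term in the model

# Post hoc contrasts on an interaction retained in the final model,
# corrected with the multivariate-t ("mvt") adjustment.
emm <- emmeans(m, ~ attention | colour_symmetry)
pairs(emm, adjust = "mvt")
```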
Experiment 1: Symmetry detection in wedge patterns
We divided the dataset into two partly overlapping sets prior to submitting it to generalized linear mixed-effect model analysis. The first set allows us to examine the interplay between the number of wedges (24 or 36), number of colours (two or three), type of colour symmetry (segregated or nonsegregated), and attention (uncued or cued). The second set includes only two-coloured patterns, allowing us to examine the significance of the type of colour symmetry in more detail by including antisymmetric patterns in addition to segregated and nonsegregated ones.
Figure 4 depicts accuracy as a function of stimulus type and attention condition (cued/uncued) for the 24- and 36-wedge and the two- and three-colour conditions. Individual participant data points are overlaid onto the box plot. The full details of the best fitting model are presented in the Supplementary Materials.
Results for Experiment 1: Box plot showing accuracy in the symmetry detection task. Dots indicate individual data points. The dashed grey line indicates the chance level (50% accuracy). (Colour figure online)
For our first analysis, the best fitting model included the three-way interaction between number of colours, number of wedges and colour–symmetry type, χ2(1) = 5.481, p = .019, as well as two two-way interactions of attention, the first one with colour symmetry, χ2(1) = 22.198, p < .001, and the second one with number of colours, χ2(1) = 4.0161, p = .045. The four-way interaction, all the other three-way interactions and the two-way interaction between attention and number of wedges did not contribute to the fit and were thus removed (all ps > .095). For full details of the statistical tests, see the Supplementary Materials.
In Fig. 5a, the left plot visualizes the three-way interaction between number of colours, number of wedges, and type of colour–symmetry pattern; the middle plot shows the two-way interaction between attention and type of colour–symmetry pattern; and the right plot visualizes the interaction between attention and number of colours. In the segregated condition, performance for the two-colour 24-wedge patterns was significantly higher compared with both three-colour 24-wedge (z = 3.181, p = .029) and two-colour 36-wedge patterns (z = 4.373, p < .001). However, for nonsegregated patterns performance did not depend on number of colours or wedges (z < 2.876, p > .072). Thus, two-colour 24-wedge patterns were not associated with better performance more generally, but only in the segregated condition. For the two-way interactions, significantly higher performance was obtained with attention only in the segregated condition (z = 4.188, p < .001). Finally, number of colours was another factor that interacted with attention: while attention produced a significant increase in performance for two-colour patterns (z = 3.333, p = .004), this did not occur for three-colour patterns (z = −1.598, p = .360).
Plots of all the interactions from Experiment 1. a Interactions from the first analysis, which included patterns that differed in the number of colours. Estimated marginal means derived from the best fitting GLMM are presented on the y-axis, while the levels of factors involved in the interactions are presented on the x-axis. The other factors are collapsed. b Interaction plots for the second analysis involving all three symmetry conditions (nonsegregated, segregated and antisymmetric) for 24 or 36 wedges, with and without attention. Estimated response rates derived from the best fitting GLMM are shown on the y-axis, with only the factors involved in the interaction shown on the x-axis. Dashed lines indicate chance level, shaded blue areas indicate 95% confidence intervals of the estimate, and red arrows demarcate statistically significant differences (note that conditions for which the red arrows overlap are not statistically different from each other). (Colour figure online)
For our second analysis, we only examined two-colour patterns, allowing us to also include responses to antisymmetric patterns. Interactions between the number of wedges and colour–symmetry type, χ2(2) = 13.219, p = .001, and colour–symmetry type and attention, χ2(2) = 94.871, p < .001, contributed significantly to the model. All the other interactions were not significant (all ps > .511; for more detail, see the Supplementary Materials).
We evaluated the two interactions using omnibus paired t tests corrected for multiple comparisons, with interaction plots depicted in Fig. 5b (attention and colour–symmetry type on the left; number of wedges and colour–symmetry type on the right). In terms of the effect of attention to different types of colour symmetry, two things stand out: (1) while attention to colour creates a large benefit for segregated patterns (z = 8.109, p < .001), it does not have a pronounced effect on non-segregated (z = 2.781, p = .0593) or antisymmetric (z = 0.366, p = .99) patterns; (2) segregation produces a benefit even without cueing, with superior performance compared with uncued nonsegregated (z = 7.080, p < .001) and antisymmetric (z = 5.289, p < .001) patterns. Number of wedges interacted with different colour–symmetry types, so that performance was better for segregated 24-wedge patterns compared with 36-wedge patterns (z = 4.331, p < .001) but no significant differences were observed for nonsegregated (z = 0.527, p = .995) and antisymmetric patterns (z = 1.622, p = .583).
Interim discussion
The results are highly consistent across the two analyses: attention improves performance for segregated patterns alone. Thus, we replicate the observations on colour symmetry reported by Gheorghiu et al. (2016) and extend them to figural patterns. The attentional improvement for segregated patterns occurs in addition to the small baseline advantage that segregated patterns hold over nonsegregated patterns. A similar benefit of attention when signal and noise elements are segregated by colour has also been found in other tasks such as global motion coherence (Li & Kingdom, 2001; Martinovic et al., 2009). The attentional benefit for segregated symmetric patterns is more pronounced in two-colour 24-wedge stimuli, indicating that a small number of colours and elements (low density) improves performance. A decrease in performance with the number of colours in the stimuli was also reported by Gheorghiu et al. (2016). This is to be expected considering that the amount of symmetry signal decreases from 50% for two colours to 33% for three colours. It is important to note that overall performance in the experiment was relatively low (M = 62.9%, SD = 13.1%). Another factor potentially contributing to such low performance could have been the high density of wedge elements within the stimuli. Wedges covering 50% of the circle area left on average a 5° gap on the circumference between elements for 36-wedge patterns and a 7.5° gap for 24-wedge patterns. Higher density is associated with poorer symmetry detection as it leads to adoption of a smaller symmetry integration region and hence increases susceptibility to positional noise (Rainville & Kingdom, 2002).
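The quoted gap sizes follow from dividing the uncovered half of the circumference evenly among the gaps between wedges, assuming the 50% area coverage corresponds to 50% of the angular extent: average gap = (0.5 × 360°)/N, giving 180°/24 = 7.5° and 180°/36 = 5°.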
This experiment established that symmetry detection in wedge patterns exhibits similar dependencies on stimulus properties (number of elements, colours, attention, and colour symmetry) to those of dot patterns. In the following experiment, we compare symmetry detection in wedge and dot patterns in younger and older adults. To increase overall performance, wedge patterns are reduced to 16 elements covering 20% of the circle (see Methods for more detail).
Experiment 2: Symmetry detection for dot and wedge patterns in younger and older adults
Figure 6a depicts average accuracy for older and younger participants for wedge and dot patterns in the symmetry detection task, with individual participant data (black dots) overlaid onto the graph. The full details of the best fitting model are presented in the Supplementary Materials.
Results for Experiment 2. a Average accuracy in the symmetry detection task. Dots indicate individual participant data. The dashed grey line emphasizes 50% accuracy equivalent to chance performance. b Three-way interaction plots from the best fitting model, depicting data collapsed so as to visualize only those factors involved in the interactions. The left graph depicts the interaction between stimulus type, colour–symmetry and attention, collapsing across age. The right graph depicts the interaction between stimulus type, attention, and age, collapsing across colour–symmetry combinations. Model predictions are back transformed to reflect accuracy measures. Error bars depict 95% confidence intervals. (Colour figure online)
The best fitting model included the following three-way interactions: Colour Symmetry × Stimulus Type (wedge, dots) × Attention, χ2(4) = 16.295, p = .003, and Stimulus Type × Attention × Age Group, χ2(1) = 20.303, p < .001. Figure 6b depicts the post hoc analyses, which focused on decomposing these two interactions.
First, we deconstructed the interaction between stimulus type, colour–symmetry type, and attention (see left panel in Fig. 6b). This interaction reveals that symmetry detection is not only more difficult in wedge stimuli but also associated with a somewhat different pattern of attentional effects for different colour–symmetry conditions. In fact, similar performance for wedges and dots in the segregated condition (both uncued and cued) despite a much poorer performance for the majority of other wedge-pattern conditions implies that colour–symmetry correlations may be more efficiently processed in wedge compared with dot patterns. More pronounced costs of cueing for nongrouped antisymmetric wedge patterns are in line with this account.
Overall, the post hoc analysis revealed some broad similarities in how cueing affected dots and wedges: segregated patterns were facilitated by attentional cueing (dots: z = 3.419, p = .012; wedges z = 6.265, p < .001), while nongrouped antisymmetric patterns were inhibited/hindered by feature-based attention (dots: z = 3.748, p = .003, wedges: z = 5.697, p < .001). In addition, cueing also increased performance for random segregated patterns, but only for wedge patterns (z = 4.522, p < .001). Effects of cueing were absent for all other colour–symmetry types (zs < 1.36, ps > .0947). Wedge and dot stimuli led to similar performance in the uncued segregated condition (z = 1.71, p = .769), which was unsurprising as our pilots aimed to match performance for this condition. There were no significant differences between uncued colour-grouped antisymmetric dot and wedge patterns (z = 2.886, p = .07). However, in all other uncued conditions, performance was poorer for wedges (random segregated: z = 4.288, p < .001; nonsegregated: z = 8.640, p < .001; nongrouped antisymmetric: z = 6.376, p < .001). When comparing cued wedge and dot patterns, performance remained poorer for nonsegregated (z = 9.450, p < .001) and nongrouped antisymmetric (z = 8.238, p < .001) wedges, became comparable for random segregated (z = 0.317, p = .990) and remained comparable for segregated (z = 1.119, p = .990) patterns.
Second, we deconstructed the interaction between stimulus type, attention, and age group. Older and younger participants did not differ in the uncued dots (z = 0.917, p = .940) or cued wedges (z = 1.298, p = .7541) conditions, but older adults performed more poorly with uncued wedges (z = 2.884, p = .0325) and cued dots (z = 3.256, p = .0100). This shows that for older adults, colour-based attentional cueing in dot patterns does not improve symmetry detection in the way it does for younger observers, but cueing can improve their performance in wedge patterns relative to their uncued performance.
This study characterizes colour–symmetry perception in younger and older adults using two types of stimuli: classical dot patterns and novel figural wedge patterns. Wedge patterns are constructed from centrally aligned but pseudorandomly positioned wedges, thus allowing for randomization of wedge locations and orientations. As in the previous work of Gheorghiu et al. (2016), we examine the relation between the position symmetry signal and colour symmetry, as well as the role of attention to colour. We replicate the findings that attention to colour increases accuracy when position and colour symmetry signals are correlated and decreases accuracy when colour is anti-correlated. Segregation of signal and noise by colour in wedge stimuli also leads to somewhat higher levels of performance even without cueing, in line with previous findings on automatic grouping-by-colour for weak motion signals (Martinovic et al., 2009). Finally, we also replicate a previous report of age-related costs to symmetry perception (Herbert et al., 2002). In addition, we find that older and younger adults show different effects of feature-based attention on performance. Older adults perform more poorly when colour and symmetry are correlated but uncued in wedge patterns, but their performance with this stimulus improves greatly through colour cueing, reaching levels similar to those of younger participants. Meanwhile, younger adults benefit from attentional cueing of colour–symmetry correlations in dot patterns. However, a similar benefit from attentional cueing of the symmetric dots' colour fails to materialize in older adults. While confirming the age-related decline in symmetry perception, we thus also show that these costs can be overcome in figural colour–symmetric patterns by attending to the colour that carries the symmetry signal. The ability to group and attend to signals based on their hue may thus be a key route to improving the signal-to-noise ratio and overcoming age-related costs caused by noisier processing (Monge & Madden, 2016).
While we replicate previously observed effects of attentional cueing to colour–symmetry correlations (Gheorghiu et al., 2016), we also find that such attentional effects are magnified for wedge patterns. Such benefits and costs of attention to colour of wedge patterns are in line with object-based accounts of attentional selection (Desimone & Duncan, 1995). However, there is another, more parsimonious explanation. In addition to positional information, wedge patterns also contain orientation information. This would introduce two sources of signal and noise—position and orientation (and/or segment length)—while dot patterns only contain position signals. Future work could also investigate how the length of the oriented wedge segments affect symmetry detection by making use of a coloured ringed stimulus to reduce orientation information and eliminate figurality (see Fig. 7). Such ring patterns could also be created by segmenting radial-frequency symmetric patterns (i.e., sinusoidally modulated circular patterns introduced by Wilson & Wilkinson, 2002). Those segments could be arranged either collinearly or orthogonally to the path of the imaginary symmetric contour. Computing spatial correspondences for short radial frequency segments (collinear/orthogonal) or ring-bar patterns should not be more difficult for the visual system than for dot patterns with the same positional information. As shape discrimination in older adults is similar to that of younger adults in the absence of noise (Norman & Higginbotham, 2020), we expect the same would be the case for symmetry discrimination when positional symmetry signal is 100%. By gradually adding noise to the signal elements, one could evaluate if symmetry perception in older adults is more susceptible to noise, similarly to global shape processing. If this is the case, then it would imply that increased susceptibility to noise in mid-level spatial integration mechanisms is an important driver of age-related costs in perceptual organization.
Wedge pattern (left), bar pattern with the segments arranged orthogonally to the imaginary circular path (centre), and ring-bar pattern (right) stimuli. Bar patterns can be created either from wedge patterns by removing figurality (global orientation information consistent with a figure/shape) through removal of the inner section of the circle (i.e., shortening the segments) or by symmetrically segmenting radial-frequency patterns such as those used by Wilson and Wilkinson (2002). Ring-bar patterns contain local orientation information embedded within a circular outline, making them more figural. Both bar and ring-bar patterns can be made of segments that are either collinear, orthogonal, or random to the imaginary circular contour path. They could be useful in future research investigating the interaction between position, orientation, length, and colour in symmetry perception. (Colour figure online)
Bilaterally symmetric random dot patterns display an invariance to the number of elements as long as element density remains relatively low (e.g., Wenderoth, 1996). Rainville and Kingdom (2002) demonstrate that the spatial integration region for detecting symmetry is scale invariant, with 13–27 elements needed to perform the task successfully by different observers. The spatial extent of the wedge patterns has an additional constraint—they have to occupy a predefined area within a circle. The spatial extent of dot patterns is more arbitrary and can be more freely chosen by the experimenter. Rainville and Kingdom (2002) concluded that positional jitter can be tolerated until it exceeds the average spacing between elements. With 16, 24, and 36 wedge elements occupying 1° at circumference, this would mean that tolerance to noise would operate in average windows of 22.5°, 15°, and 10°. However, any increases in wedge size lead to a reduction of these windows. Of course, the same constraint would apply to the newly proposed bar and ring-bar patterns (see Fig. 7).
In this light, our first experiment set out to evaluate the influence of both the number of wedges and the number of colours on performance. While we do not find a main effect of wedge number, overall performance for 24 and 36 wedge patterns is reliably above chance but remains relatively low. There are also interactions of wedge number with attention and colour symmetry. Attention to correlated colour–symmetry patterns, in which signal and noise are segregated by colour, leads to biggest improvements in performance for two-coloured 24-wedge patterns. This could mean that attention is less able to overcome the costs introduced by increased jitter in the 36-wedge patterns or lower overall symmetry signal strength in three-colour patterns. Future studies should investigate the ability of attention-to-colour to improve grouping—with bilateral symmetry as a special case of this more general integrative process. This could be achieved by parametrically manipulating the amount of positional jitter in several steps, from 0% (all noise elements outside the tolerance region) to 100% (each noise element is within a signal element's tolerance region).
Older adults may be particularly vulnerable to noise due to the degradation of sensory information brought about by healthy ageing (Monge & Madden, 2016). It has been argued that older adults also have specific difficulties when various competing regions need to be assigned figure or ground status during perceptual organization (Anderson et al., 2016; Lass et al., 2017). This is assumed to stem from deficits in inhibitory processing that are particularly pronounced when competition is high and in scenes that are more ambiguous or difficult to resolve. Our data are consistent with both the information degradation account and the reduced inhibition account, showing that in the absence of attentional cueing, younger and older adults perform similarly on nonsegregated dot patterns (78% and 76%, respectively), but younger adults perform better than older adults on the more challenging nonsegregated wedge patterns (62% vs. 55%). The slightly worse performance of older compared with younger adults found in our study is consistent with reports from a single previous study on age-related changes in symmetry perception (Herbert et al., 2002). Further to that, we find differential effects of attention to colour in older and younger adults. Older adults show a benefit from attention for wedge stimuli but not dot patterns. As mentioned earlier, figural wedges contain both position and orientation information, lowering uncued performance and hence creating plenty of room for potential attentional improvements. Our findings from Experiment 2 (Fig. 6) are consistent with this explanation: while younger and older adults perform similarly for uncued dot patterns, younger adults benefit from attention to colour in dot stimuli, while older adults fail to improve with attentional cueing. For wedge patterns, older adults perform more poorly than younger adults but overcome this deficit through colour cueing.
Why would the benefits of attentional cueing dissociate between the two stimulus types in younger and older adults? Performance for wedge patterns is poorer in older adults, which in turn makes the colour cue more beneficial in guiding performance, similarly to Madden's (1992) findings for spatial cueing. As colour–symmetry correlations modulate the efficiency of attentional cues differentially for dot and wedge stimuli, with magnified attentional effects for wedges, the superposition of these effects with ageing effects might lead to different outcomes when performance itself differs. Cueing can clearly lead to considerable improvements when starting from a relatively low wedge-symmetry detection level of ~60% in older adults. Here, benefits due to cueing can far surpass the more limited room for costs in nongrouped antisymmetric patterns. This would not be the case for younger adults, who already perform well above the chance level, and thus have less scope for attentional improvements due to colour–symmetry correlations. As performance is the outcome rather than a predictor, generalized linear mixed-effect models such as those fit in this study cannot capture these types of effects.
The lack of improvement in performance due to attentional cueing to features (colour) for older adults in dot patterns is more difficult to explain. It may be due to different feature-based attention strategies between younger and older adults, although Madden (1992) found the two age groups exhibit similar behaviour in terms of their use of focal and distributed spatial attention in a visual search task with spatial cues. In our colour-cued conditions, participants need to balance focusing their attention to the positions of elements depicted in the attended colour with distributing attention to positions of all the elements, as the cue is clearly not equally informative or useful in all trials. In spatial cueing, the cue can be valid if it points to the target location (e.g., left-pointing arrow, if the target is left), invalid if it points to a non-target location (e.g., right-pointing arrow), and neutral if it is uninformative about target location (e.g., double-sided arrow; see Posner et al., 1980). It could be argued that in our cued blocks, the colour cue is valid for the segregated condition and 50% of trials from random-segregated condition (i.e., 30% of total trials), as in those cases it correctly directs attention to the symmetry signal. In colour-grouped antisymmetric conditions (20% trials), the colour cue would be neutral as it would be clear to the participants that in order to detect symmetry they need to distribute their attention to elements on both sides of the vertical axis (i.e., across both colours). On nonsegregated trials, nongrouped antisymmetric trials and the remaining half of random-segregated trials (50% of the total) the colour cue would actually be invalid, directing participants' attention away from some if not all of the symmetry signal. For example, when attending to one colour in the non-grouped antisymmetric condition, the antisymmetric pattern should appear equally symmetric as the fully random/noise pattern, and this is confirmed by at-chance performance for such antisymmetric wedge stimuli. Future studies should determine whether older and younger adults differ in their evaluation of featural cue validity (here, colour) and consequent strategic shifts between focal, feature-based, cue-driven attention and attention that is distributed equally across all the elements of a display. Older adults' performance on perceptual and attentional tasks is known to be slower and more effortful (for an overview, see Monge & Madden, 2016). It is likely that such distinct performance parameters may also drive different attentional strategies, which would be yet another source of differences between younger and older adults, in addition to a general degradation of perceptual information or a specific reduction of inhibitory mechanisms proposed by existing models of healthy ageing (Betts et al., 2005; Monge & Madden, 2016; for attentional strategies and ageing see Vallesi et al., 2021).
Orientation information is highly relevant to shape perception in both younger and older participants (Roudaia et al., 2014), the latter of whom exhibit poorer performance when asked to discriminate different noncircular patterns (d' reduction of ~0.7). Pilz et al. (2020) found a substantial age-related decline for oblique but not for cardinal orientations. This impairment in oblique orientation perception might have contributed to poorer symmetry-detection performance for wedge patterns in the older group in the absence of colour-based attention. Attention to colour could decrease the noisiness in symmetry detection much as categorical representations (e.g., a cardinal orientation) provide more noise-resistant templates (see Lu & Dosher, 1998). This would lead to improved perceptual performance in older adults. Thus, the main practically relevant outcome of our study is that providing a priori knowledge about the features of objects (e.g., colour) may be a good heuristic for improving age-aware design: by assisting older adults in selecting the dimension of interest when processing information in complex visual environments, their performance can be improved up to the level exhibited by younger observers.
The study was not preregistered. Data, R analysis scripts, and MATLAB scripts for generating wedge patterns are available on Open Science Framework (https://osf.io/mf9ug/).
For the purpose of open access, J.M. has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
Anderson, J. A., Healey, M. K., Hasher, L., & Peterson, M. A. (2016). Age-related deficits in inhibition in figure-ground assignment. Journal of Vision, 16(7), 6. https://doi.org/10.1167/16.7.6
Baker, D. H., Lygo, F. A., Meese, T. S., & Georgeson, M. A. (2018). Binocular summation revisited: Beyond √2. Psychological Bulletin, 144(11), 1186–1199. https://doi.org/10.1037/bul0000163
Bates, D., Maechler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01
Ben-Shachar, M., Lüdecke, D., & Makowski, D. (2020). effectsize: Estimation of effect size indices and standardized parameters. Journal of Open Source Software, 5(56), 2815. https://doi.org/10.21105/joss.02815
Bertamini, M., Silvanto, J., Norcia, A. M., Makin, A. D. J., & Wagemans, J. (2018). The neural basis of visual symmetry and its role in mid- and high-level visual processing. Annals of the New York Academy of Sciences. https://doi.org/10.1111/nyas.13667
Betts, L. R., Taylor, C. P., Sekuler, A. B., & Bennett, P. J. (2005). Aging reduces center-surround antagonism in visual motion processing. Neuron, 45(3), 361–366. https://doi.org/10.1016/j.neuron.2004.12.041
Chen, C. C., & Tyler, C. W. (2010). Symmetry: Modeling the effects of masking noise, axial cueing and salience. PLOS ONE, 5(3), Article e9840. https://doi.org/10.1371/journal.pone.0009840
Chen, H., Cohen, P., & Chen, S. (2010). How big is a big odds ratio? Interpreting the magnitudes of odds ratios in epidemiological studies. Communications in Statistics—Simulation and Computation, 39(4), 860–864. https://doi.org/10.1080/03610911003650383
Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193–222.
Fletcher, R. (1975). The City University colour vision test. Keeler.
Gheorghiu, E., Kingdom, F. A. A., Remkes, A., Li, H. C. O., & Rainville, S. (2016). The role of color and attention-to-color in mirror-symmetry perception. Scientific Reports, 6, Article 29287. https://doi.org/10.1038/srep29287
Herbert, A. M., Overbury, O., Singh, J., & Faubert, J. (2002). Aging and bilateral symmetry detection (Proceedings paper). Journals of Gerontology Series B—Psychological Sciences and Social Sciences, 57(3), P241–P245. https://doi.org/10.1093/geronb/57.3.P241
Huang, L., & Pashler, H. (2002). Symmetry detection and visual attention: A "binary-map" hypothesis. Vision Research, 42(11), 1421–1430. https://doi.org/10.1016/S0042-6989(02)00059-7
Jaeger, T. F. (2008). Categorical data analysis: Away from ANOVAs (transformation or not) and towards logit mixed models. Journal of Memory and Language, 59(4), 434–446. https://doi.org/10.1016/j.jml.2007.11.007
Julesz, B. (1971). Foundations of cyclopean perception. University of Chicago Press.
Lakens, D. (2022). Sample size justification. Collabra: Psychology, 8(1), Article 33267. https://doi.org/10.1525/collabra.33267
Lass, J. W., Bennett, P. J., Peterson, M. A., & Sekuler, A. B. (2017). Effects of aging on figure-ground perception: Convexity context effects and competition resolution. Journal of Vision, 17(2), 15–15. https://doi.org/10.1167/17.2.15
Lavie, N. (1995). Perceptual load as a necessary condition for selective attention. Journal of Experimental Psychology: Human Perception and Performance, 21(3), 451–468.
Lenth, R. V., Singmann, H., Love, J., Buerkner, P., & Herve, M. (2019). emmeans: Estimated Marginal means, aka least squares means (Version 1.4.3.01) [Computer software]. https://CRAN.R-project.org/package=emmeans
Li, H.-C. O., & Kingdom, F. A. A. (2001). Segregation by colour/luminance does not necessarily facilitate motion discrimination in noise. Perception & Psychophysics, 63, 660–675.
Lu, Z.-L., & Dosher, B. A. (1998). External noise distinguishes attention mechanisms. Vision Research, 38(9), 1183–1198. https://doi.org/10.1016/S0042-6989(97)00273-3
Machilsen, B., Pauwels, M., & Wagemans, J. (2009). The role of vertical mirror symmetry in visual shape detection. Journal of Vision, 9(12), 11–11. https://doi.org/10.1167/9.12.11
Madden, D. J. (1992). Selective attention and visual search: Revision of an allocation model and application to age differences. Journal of Experimental Psychology: Human Perception and Performance, 18(3), 821–836. https://doi.org/10.1037//0096-1523.18.3.821
Mancini, S., Sally, S. L., & Gurnsey, R. (2005). Detection of symmetry and anti-symmetry. Vision Research, 45(16), 2145–2160. https://doi.org/10.1016/j.visres.2005.02.004
Martinovic, J., Meyer, G., Muller, M. M., & Wuerger, S. M. (2009). S-cone signals invisible to the motion system can improve motion extraction via grouping by color. Visual Neuroscience, 26(2), 237–248. https://doi.org/10.1017/s095252380909004x
Monge, Z. A., & Madden, D. J. (2016). Linking cognitive and visual perceptual decline in healthy aging: The information degradation hypothesis. Neuroscience and Biobehavioral Reviews, 69, 166–173. https://doi.org/10.1016/j.neubiorev.2016.07.031
Morales, D., & Pashler, H. (1999). No role for colour in symmetry perception. Nature, 399(6732), 115–116. https://doi.org/10.1038/20103
Norman, J. F., & Higginbotham, A. J. (2020). Aging and the perception of global structure. PLOS ONE, 15(5), Article e0233786. https://doi.org/10.1371/journal.pone.0233786
Pilz, K. S., Äijälä, J. M., & Manassi, M. (2020). Selective age-related changes in orientation perception. Journal of Vision, 20(13), 13–13. https://doi.org/10.1167/jov.20.13.13
Posner, M. I., Snyder, C. R. R., & Davidson, B. J. (1980). Attention and the detection of signals. Journal of Experimental Psychology: General, 109, 160–174.
R Core Team. (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/
Rainville, S. J. M., & Kingdom, F. A. A. (2002). Scale invariance is driven by stimulus density. Vision Research, 42(3), 351–367. https://doi.org/10.1016/S0042-6989(01)00290-5
Roudaia, E., Bennett, P. J., & Sekuler, A. B. (2008). The effect of aging on contour integration. Vision Research, 48(28), 2767–2774. https://doi.org/10.1016/j.visres.2008.07.026
Roudaia, E., Bennett, P. J., & Sekuler, A. B. (2013). Contour integration and aging: The effects of element spacing, orientation alignment and stimulus duration. Frontiers in Psychology, 4, 356. https://doi.org/10.3389/fpsyg.2013.00356
Roudaia, E., Sekuler, A. B., & Bennett, P. J. (2014). Aging and the integration of orientation and position in shape perception. Journal of Vision, 14(5), 12–12. https://doi.org/10.1167/14.5.12
Sharman, R. J., & Gheorghiu, E. (2019). Orientation of pattern elements does not influence mirror-symmetry perception. Journal of Vision, 19(10), 151c–151c. https://doi.org/10.1167/19.10.151c
Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136.
Vallesi, A., Tronelli, V., Lomi, F., & Pezzetta, R. (2021). Age differences in sustained attention tasks: A meta-analysis. Psychonomic Bulletin & Review, 28(6), 1755–1775. https://doi.org/10.3758/s13423-021-01908-x
van der Helm, P. A., & Leeuwenberg, E. L. J. (1996). Goodness of visual regularities: A nontransformational approach. Psychological Review, 103(3), 429–456. https://doi.org/10.1037/0033-295x.103.3.429
Wainwright, J. B., Scott-Samuel, N. E., & Cuthill, I. C. (2020). Overcoming the detectability costs of symmetrical coloration. Proceedings of the Royal Society B: Biological Sciences, 287(1918), Article 20192664. https://doi.org/10.1098/rspb.2019.2664
Warnes, G. R., Bolker, B., & Lumley, T. (2015). gtools: Various R programming tools (R Package Version 3.5.0) [Computer software]. https://CRAN.R-project.org/package=gtools
Wenderoth, P. (1996). The effects of dot pattern parameters and constraints on the relative salience of vertical bilateral symmetry. Vision Research, 36(15), 2311–2320. https://doi.org/10.1016/0042-6989(95)00252-9
Westland, S., Ripamonti, C., & Cheung, V. (2012). Computational colour science using MATLAB (2nd ed.). John Wiley & Sons.
Wickham, H. (2007). Reshaping data with the reshape package. Journal of Statistical Software, 21(12), 1–20. http://www.jstatsoft.org/v21/i12/
Wickham, H. (2009). ggplot2: Elegant graphics for data analysis. Springer.
Wickham, H., Francois, R., Henry, L., & Mueller, K. (2017). dplyr: A grammar of data manipulation (R Package Version 0.7.2). https://CRAN.R-project.org/package=dplyr
Wilson, H. R., & Wilkinson, F. (2002). Symmetry perception: A novel approach for biological shapes. Vision Research, 42(5), 589–597. https://doi.org/10.1016/S0042-6989(01)00299-1
Wright, D., Mitchell, C., Dering, B. R., & Gheorghiu, E. (2018). Luminance-polarity distribution across the symmetry axis affects the electrophysiological response to symmetry. NeuroImage, 173, 484–497. https://doi.org/10.1016/j.neuroimage.2018.02.008
Wu, C. C., & Chen, C. C. (2014). The symmetry detection mechanisms are color selective. Scientific Reports, 4(1), 1–6. https://doi.org/10.1038/srep03893
Wu, C. C., & Chen, C. C. (2017). The integration of color-selective mechanisms in symmetry detection. Scientific Reports, 7(13), Article 42972. https://doi.org/10.1038/srep42972
Wuerger, S. (2013). Colour constancy across the life span: evidence for compensatory mechanisms. PLoS One, 8. https://doi.org/10.1371/journal.pone.0063921
The project was supported by UKRI Biotechnology and Biological Sciences Research Council (BBSRC) project grant BB/R009287/1 to J.M. and a BBSRC Eastbio Research Experience Placement award from the Doctoral Training Programme BB/M010996/1 to R.L. and J.M. During the early stages of the project, E.G. was supported by a Wellcome Trust grant (WT106969/Z/15/Z).
Department of Psychology, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, 7 George Square, Edinburgh, Scotland, EH8 9JZ, UK
Jasna Martinovic & Rafael B. Lemarchand
School of Psychology, University of Aberdeen, Aberdeen, UK
Jasna Martinovic, Jonas Huber, Antoniya Boyanova, Josephine Reuther & Rafael B. Lemarchand
University College London, London, UK
Jonas Huber
Department of Psychology, Faculty of Natural Sciences, University of Stirling, Stirling, UK
Elena Gheorghiu
Department of Experimental Psychology, University of Göttingen, Göttingen, Germany
Josephine Reuther
J.M. and R.L. designed the experiments; A.B., J.H., and R.L. collected the data; J.M., J.H., R.L., and J.R. analyzed the data; J.M., E.G., A.B., J.H., J.R., and R.L. wrote the manuscript.
Correspondence to Jasna Martinovic.
The authors have no relevant financial or nonfinancial interests to disclose.
Significance statement
We designed a novel figural stimulus—a wedge pattern—made of centrally aligned pseudorandomly positioned wedges. To study the effect of pattern figurality and colour on symmetry perception, we compared symmetry detection in multicoloured wedge patterns with nonfigural dot patterns in younger and older adults. Colour–symmetry correlations were either cued or uncued. Such colour-based attention modulated performance more strongly for wedge patterns. Furthermore, older and younger adults showed different effects of attention on performance. In figural patterns, which posed a particular challenge for older adults, age-related performance costs were alleviated by attending to the colour that carried the symmetry signal.
Martinovic, J., Huber, J., Boyanova, A. et al. Mirror symmetry and aging: The role of stimulus figurality and attention to colour. Atten Percept Psychophys 85, 99–112 (2023). https://doi.org/10.3758/s13414-022-02565-5
Issue Date: January 2023
Perceptual organization
Are suspicious activity reporting requirements for cryptocurrency exchanges effective?
Daehan Kim, Mehmet Huseyin Bilgin & Doojin Ryu (ORCID: orcid.org/0000-0002-0059-4887)
Financial Innovation, volume 7, Article number: 78 (2021)
This study analyzes the impact of a newly emerging type of anti-money laundering regulation that obligates cryptocurrency exchanges to report suspicious transactions to financial authorities. We build a theoretical model for the reporting decision structure of a private bank or cryptocurrency exchange and show that an inferior ability to detect money laundering (ML) increases the ratio of reported transactions to unreported transactions. If a representative money launderer makes an optimal portfolio choice, then this ratio increases further. Our findings suggest that cryptocurrency exchanges will exhibit more excessive reporting behavior under this regulation than private banks. We attribute this result to cryptocurrency exchanges' inferior ML detection abilities and their proximity to the underground economy.
During the current cryptocurrency boom, numerous cryptocurrency exchanges have emerged, and they now comprise a considerable fraction of the financial industry. These new exchanges must be considered, in order to accurately analyze the recent banking and financial sectors. As regulatory authorities worldwide extend the application of financial regulations from traditional financial institutions to cryptocurrency exchanges, there is an urgent need to study the regulation of cryptocurrency transactions.
Theoretically, cryptocurrency market regulations have two conflicting effects. On the one hand, regulations can function as restrictions for market participants, negatively impacting the market. On the other hand, they may boost the market by strengthening its credibility and stability. Under this framework, empirical studies assess the impact of regulations on the cryptocurrency market. Borri and Shakhnov (2020) identify that cryptocurrency investors react negatively to regulations or announcements about forthcoming regulations. Chokor and Alfieri (2021) and Shanaev et al. (2020) also draw similar conclusions, using the reaction of price movements as a proxy for the impact of regulations on the cryptocurrency market. In contrast, Feinstein and Werbach (2021) investigate the reaction of trading volume, criticizing the use of price movements as a proxy, and find no sufficient evidence to assert the significant impact of the regulations. In addition, Borri and Shakhnov (2020) and Feinstein and Werbach (2021) draw inconsistent results on the international spillover of regulations. However, compared to studies on other financial assets, those on cryptocurrency regulations are in an early phase. To integrate the conflicting ideas suggested by existing studies, more in-depth theoretical research considering both the investor and exchange intermediary sides is required.
One of the most important purposes of cryptocurrency regulations is to prevent money laundering (ML). Owing to the decentralized nature of cryptocurrencies, criminals can use cryptocurrency exchanges to launder dirty money; proper anti-money laundering (AML) actions in the cryptocurrency market can, therefore, improve overall AML performance in the economy.
This study specifically focuses on the duty to report suspicious activities imposed upon cryptocurrency exchanges. Governments do not directly detect ML activities. For several reasons, such as privacy rights, governments do not have the right to directly monitor all transactions made through private banks or cryptocurrency exchanges. Even if a government did have the right to do so, it would not be able to thoroughly check every individual transaction. Thus, although the exact processes may vary by country, financial authorities usually require banks and exchanges to monitor and report transactions for which ML activities are suspected. The authorities then analyze the reports of suspicious transactions thoroughly and identify whether they are true ML transactions. Some studies, such as those of Brenig et al. (2015) and Dupuis and Gleason (2020) particularly investigate cryptocurrency-backed ML activities. Furthermore, Bhaskar and Chuen's (2015) study implies that exchanges may go out of business because they are not capable of complying with strict AML regulations. Unfortunately, no prior studies have focused on the duty of cryptocurrency exchanges to report suspicious activities. The lack of research explicitly studying whether a cryptocurrency exchange can faithfully comply with this reporting duty may stem from the fact that such exchanges are still in the early stages of adoption. Compared to traditional private financial institutions, cryptocurrency exchanges are new, small, and illiquid. This study pays attention to these distinct characteristics.
We analyze the impact of a newly emerging type of AML regulation requiring cryptocurrency exchanges to report transactions for which ML is suspected. Based on background information on ML practices, the structure of the AML regulations, and the characteristics of cryptocurrency exchanges, we build two models to derive some findings on exchanges' behavior. The first model illustrates a cryptocurrency exchange's decision structure to report a transaction as an ML-suspected case. The second model describes the proportion of total illegal gains that a money launderer chooses to launder. Whereas the first model focuses on the decision of a representative cryptocurrency exchange, the second model focuses on the decision of a representative money launderer. The second model extends the discussion of the first model by endogenizing the money launderer, which is treated as an exogenous actor in the first model. We claim that when cryptocurrency exchanges are obligated to report suspicious transactions, they will not faithfully comply with this regulation, but rather will report an excessive number of transactions, which is uninformative to the regulatory authority.
Our models suggest two main findings on the potential consequences of applying AML regulations to cryptocurrency exchanges. First, the relatively short history, small exchange size, and illiquidity of the cryptocurrency market increase the threat that a cryptocurrency exchange will be punished by authorities for reporting an excessive number of ML-suspected cases. Second, some cryptocurrency exchanges that largely depend on revenues from ML transactions may intentionally lower the ML detection probability by increasing the number of reports of suspected ML.
This study makes some additional contributions to the literature. We develop a model describing the reporting decision structure of a financial institution entrusted with monitoring ML. Furthermore, this study relaxes the assumption in the existing literature that all illegal pecuniary gains must be laundered for use. With this assumption relaxed, our study takes a novel approach, analyzing ML using portfolio choice theory.
Our study is also expected to provide policy implications for global financial regulatory authorities. Financial authorities around the world have already begun to design AML regulations for cryptocurrency exchanges. The Financial Action Task Force (FATF), an intergovernmental organization to combat ML, suggests guidelines on how financial authorities worldwide should respond to cryptocurrency technology (FATF 2012, 2019). To mitigate the risk associated with this new technology, it recommends that global financial authorities encourage cryptocurrency exchanges to be licensed or registered and subject to ML monitoring compliance. In accordance with the FATF's recommendation, global authorities are expected to arrange measures to adopt reporting obligations for cryptocurrency exchanges.
The remainder of this paper is organized as follows. The "Research background" section provides background information; the "Cryptocurrency exchange's decision" section explains the model of a cryptocurrency exchange's decision; the "Money launderer's portfolio choice problem" section incorporates the portfolio choice model of a money launderer; the "Policy implications" section suggests policy implications based on the findings; and the "Conclusion" section concludes the paper.
One of the aims of this study is to analyze the impact of a newly emerging type of AML regulation that obligates cryptocurrency exchanges to report suspicious transactions to financial authorities. Whereas there are many websites or documents on the AML regulation of cryptocurrency and suspicious activity reports, the academic literature on this topic is limited.
Nevertheless, to sustain a money-making process, a criminal with illegal pecuniary gains (e.g., profits from drug sales) does not leave the money as is, but instead prefers to reinvest it. For the gains to be reinvested into either legal or illegal sectors, the money needs to be laundered (Masciandaro 1999). Dirty money that remains dirty cannot be utilized outside the sector from which it originated. In this sense, ML is a practice of changing potential purchasing power into actual purchasing power (Masciandaro 1998). Once the government notices that certain money is dirty, it will not allow the money to be used for any purpose. Money laundering is the act of concealing the source of dirty money, increasing the information asymmetry between the supervising authority and the owner of the money (Brenig et al. 2015).
An ML process consists of three stages: placement, layering, and integration (Brenig et al. 2015; Albrecht et al. 2008). Suppose that a criminal has obtained money by selling illegal drugs. The criminal takes this money to the financial sector by depositing the money into a bank. This initial stage is called placement. This money then moves repeatedly from place to place in multiple layers to prevent the government from tracing its source. Therefore, this process is referred to as layering. Finally, the money settles in a clean zone and can be used for a new business. This final stage is called integration.
In the past, the placement and layering stages only involved traditional types of financial institutions, such as private banks and stock exchanges, but ML processes now often involve cryptocurrency transactions. Cryptocurrency is a currency that allows digital payments, but cryptocurrencies differ significantly from traditional fiat-money-based digital payment systems. When a dollar is transferred, a financial intermediary, such as a credit card company, must verify the validity of the transaction. Cryptocurrency payments do not depend on such third parties; instead, the peer-to-peer network of blockchain technology verifies the transaction. This process resolves the famous "double spending problem" (Dwyer 2015; Nakamoto 2008). A cryptocurrency remittance is verifiable for the receiver, but is not easily observable by traditional financial institutions under government supervision. With services provided by some companies, such as Chainalysis, authorities can trace transfers of money to some extent (Dupuis and Gleason 2020), but this traceability is certainly limited compared to that of traditional online payments. Cryptocurrency, therefore, offers a huge opportunity for illegal market participants. Foley et al. (2019) estimate that a quarter of bitcoin users are involved in illegal activities. Although they mention that the popularity of cryptocurrency reduces the proportion used for illegal activities, it is natural to expect that the proximity of cryptocurrency to illegal activities is higher than that in the case of fiat money. Thus, the portion of ML transactions within a cryptocurrency exchange may be higher than that within a private bank. If criminals have sufficient information, they will not use a cryptocurrency exchange that cooperates with the government. Thus, a cryptocurrency exchange that is highly dependent on fee revenues from ML transactions may not actively participate in AML actions led by authorities, but may instead choose to be helpful to money launderers.
In an indirect ML monitoring structure, in which the government delegates ML obligations to private banks, the principal-agent problem of ML monitoring proposed by Masciandaro (1999) is inevitable. The principal, which is the government authority, wants to maximize the detected number of ML attempts. Conversely, the agent, which is a private bank in the study, tries to maximize its profits, considering the possibility of government sanctions. Takáts (2011) reports that the discrepancy between the principal and the agent causes excessive reporting. Sometimes, the government identifies ML transactions that are not reported by banks. In these cases, the government imposes sanctions on the bank for failing to properly report suspicious transactions. A bank that dislikes being fined by the authority for its failures tends to report transactions that are less suspicious along with sufficiently suspicious transactions, making the reports uninformative. Likening the private bank to the boy who cried wolf, Takáts (2011) describes this overreporting tendency as "crying wolf." Banks may excessively report not only to avoid the threat of penalties, but also because of the high cost of careful monitoring (Masciandaro and Filotto 2001).
To combat ML, authorities worldwide have set up legal devices that require not only private banks but also cryptocurrency exchanges to monitor and report suspicious transactions. For instance, the US Financial Crimes Enforcement Network (FinCEN) has worked on extending its longstanding AML regulation to cryptocurrency exchanges. It requires that cryptocurrency exchanges comply with AML regulations, including registration, record-keeping, and reporting obligations (Böhme et al. 2015; FinCEN 2019). Similarly, a recently amended South Korean law was enacted in March 2021. The newly enforced rule mandates that cryptocurrency exchanges be registered with the Korea Financial Intelligence Unit under the Financial Services Commission and report transactions that raise suspicions of ML attempts.
After ML processes, dirty bitcoins can be reinvested as clean bitcoins and dirty dollars can be reinvested as clean dollars. However, in some cases, a criminal may want to convert bitcoins to dollars, or vice versa. In these cases, the financial authority can catch ML practices backed by cryptocurrencies if regulations are applied to cryptocurrency exchanges in the same way that they are imposed on traditional financial institutions. To analyze the impacts of these regulations, we need to understand the business structures of cryptocurrency exchanges.
In fact, it is difficult to identify a single form of cryptocurrency exchange business. Each cryptocurrency exchange has a different affiliation, profit system, supported fiats, cryptocurrencies, and so on. Nonetheless, all exchanges have one common feature: every exchange receives transaction fees as a basic source of revenue. When a transaction is made, both seller and buyer pay the fees. Revenue is directly related to total trading volume. This relationship also holds for private banks because a private bank making money through the lending deposit spread ultimately benefits from a greater number of transactions as well. However, it may be true that the cryptocurrency exchange business depends more directly on trading volumes.
The most important difference between a cryptocurrency exchange and a general private bank in the context of this study is that they have different ML monitoring abilities. As most cryptocurrency exchanges have emerged recently, they are likely to lack data and experience in analyzing those data. Dupuis and Gleason (2020) mention that decentralized exchanges that allow users to control their own private keys, which are expected to be good ML channels, are still in their early stages. Thus, even if they are obligated by law to monitor transactions, they are not expected to carry out monitoring practices successfully. The fact that they are focused on stabilizing their profit systems and surviving in the volatile cryptocurrency market further worsens the problem.
Cryptocurrency exchange's decision
The concept of "crying wolf," that is, the excessive reporting tendency of private banks introduced by Takáts (2011), can also appear when cryptocurrency exchanges are subject to analogous requirements. Under a regulatory system in which a high type II error rate is explicitly punished and a high type I error rate is not, a cryptocurrency exchange will decide to overreport transactions.
Furthermore, the degree of this excessive reporting may be higher for cryptocurrency exchanges compared to the behavior of private banks. The first reason for this is the exchange's lack of ability. Unlike the private banking system, cryptocurrency is a relatively novel concept, and nearly all cryptocurrency exchanges are newly established with relatively low trading volumes compared to traditional financial exchanges. Thus, a cryptocurrency exchange business faces an inevitable problem, in that it lacks experience in detecting ML transactions. In other words, it is not accustomed to carrying out ML analyses using its own detection model. Owing to the drawbacks of a rule-based system, machine learning techniques are currently widely used for detecting anomalies, including ML (Chen et al. 2018). However, statistical analyses using models, particularly machine learning models, require rich data. Even if a cryptocurrency exchange has a good detection model, it may not make good use of it, given that newly launched and illiquid exchanges generally have accumulated too little data. According to previous studies, cryptocurrency markets are often illiquid (Loi 2018; Smales 2019; Yermack 2015). Cryptocurrencies and exchanges addressed in the prior academic literature are usually major cryptocurrencies and major exchanges; thus, the illiquidity problem of cryptocurrency markets in the real world would be more severe than what is reported in the literature. Coinmarketcap (https://coinmarketcap.com) provides information on the liquidity of various cryptocurrency exchanges using its average liquidity score ranging from 0 to 1000. Binance is one of the most liquid and popular exchanges, with a score of 720, as of July 31, 2021. This is an overwhelmingly high score compared to many illiquid exchanges. For example, OTCBTC has a liquidity score of 1. As of July 31, there are almost no sell orders and no buy orders in the BTC/USD market of OTCBTC. These severely illiquid exchanges are unlikely to have sufficient data. In addition, few of them are expected to have sufficient personnel to dedicate to ML detection. Creating a new cryptocurrency exchange is not complicated, and small groups of people or individuals can easily develop new exchanges. These small businesses may not be able to afford personnel for ML detection. In sum, cryptocurrency exchanges lack a variety of necessary resources to meet reporting requirements, leading to overall inferiority in ML detection. The following model explains why an ML monitoring institution with an inferior detection ability overreports to a high degree.
We define the indicator function \(I_{i}\), which represents the true characteristic of transaction \(i\), as follows:
$$I_{i} = \begin{cases} 0, & \text{if } i \text{ is a normal transaction} \\ 1, & \text{if } i \text{ is an ML transaction.} \end{cases}$$
The signal \(P_{i} = \widehat{{I_{i} }}\) is an estimator of \(I_{i}\), indicating the strength of the signal that transaction \(i\) is an ML transaction, measured by the detecting ability of a bank or cryptocurrency exchange. A high value of \(P_{i}\) implies that transaction \(i\) is highly suspicious. Whereas \(I_{i}\) has a fixed value for a given transaction \(i\), \(P_{i}\) is a random variable. The accumulated data and detection technique determine the effectiveness of \(P_{i}\) as an estimator for \(I_{i}\). Here, effectiveness can be evaluated in terms of measures, such as bias, relative efficiency, and the mean squared error. As the detection ability improves, \(Bias\left( {P_{i} } \right) = E\left[ {P_{i} } \right] - I_{i}\) and \(Var\left( {P_{i} } \right)\) will generally decrease.
When the signal from transaction \(i\) is observed, an institution entrusted with monitoring decides whether to report the transaction based on the following standard:
$$\begin{aligned} & \text{Report } i \quad \text{if} \quad P_{i} \ge P^{min} \\ & \text{Do not report } i \quad \text{if} \quad P_{i} < P^{min}. \end{aligned}$$
The threshold \(P^{min}\) is the minimum signal strength at which \(i\) is reported to the financial authority. The exchange can set \(P^{min}\) to any value between zero and one. This reporting rule is rational because, otherwise, the monitoring institution could end up not reporting a transaction that is likely to be an ML action while reporting a less likely one. The excessive reporting tendency is defined by a low value of \(P^{min}\). This study aims to show that \(P^{min}\) is lower for cryptocurrency exchanges than for private banks.
Assume that the financial authority that practices AML regulation can still identify an ML transaction, even if that transaction is not reported by the monitoring institution. In his model, Takáts (2011) assumes that an unreported case, as well as a reported case, is subject to a positive investigation effort (Footnote 3). When the authority imposes fines for any unreported ML cases that it identifies, a monitoring institution cares about type II errors. The type II error probability in this model is defined as the probability that a transaction is not reported, given that it is actually an ML attempt. This conditional probability is given by
$$\Pr(\text{not reported} \mid ML) = \Pr(P_{i} \le P^{min} \mid I_{i} = 1).$$
Let the strength of the signal \(P_{i}\) be a random variable, such that
$$P_{i} \sim \mathrm{Beta}(\alpha, \beta), \quad \text{where } \alpha, \beta \in (0, \infty).$$
A greater value of \(\alpha\) relative to \(\beta\) moves the expected value \(E[P_{i}] = \alpha/(\alpha + \beta)\) toward one, shifting the overall weight of the beta distribution curve to the right. \(E[P_{i}]\) approaches \(I_{i}\) as the detection ability increases. When \(I_{i} = 1\), the overall weight is therefore shifted further to the right when the ability is higher.
Using the fact that \(\alpha/(\alpha + \beta) \to 1\) monotonically as \(\alpha \to \infty\), we can create distributions under \(I_{i} = 1\) by fixing \(\beta\) and varying \(\alpha\) to illustrate different detection ability levels (Footnote 4). When \(\beta\) is fixed, \(\alpha\) can be used as a proxy for detection ability, as it is larger when the ability is higher if \(I_{i} = 1\). Two different detection ability levels are indicated by this model, as shown in Fig. 1. Both distributions in the figure indicate the probability density function (PDF) of \(P_{i}\) when transaction \(i\) is an ML case. In particular, Panel A of Fig. 1 shows the distribution curve of \(P_{i}\) when the goodness of the estimator is high, that is, when the detection ability is superior. In contrast, Panel B shows the distribution curve of \(P_{i}\) when the goodness of the estimator is relatively low, that is, when the ability is relatively inferior.
Fig. 1. a (Left panel) shows the beta distribution for \(\alpha = 30\), \(\beta = 2\). b (Right panel) shows the beta distribution for \(\alpha = 5\), \(\beta = 2\). The left and right graphs are the PDFs of the random variable \(P_{i}\) when \(I_{i} = 1\) for monitoring institutions with superior and inferior ML detection abilities, respectively
Recall that the probability of committing a type II error is given by \(Pr\left( {P_{i} \le P^{min} {|}I_{i} = 1} \right)\). For both superior and inferior exchanges, it is straightforward to see that the probability of a type II error is greater for the inferior exchange for a given level of \(P^{min}\) (e.g., \(P^{min} = 0.9\)). To maintain its type II error probability similar to that of a superior exchange, an inferior exchange lowers its level of \(P^{min}\), the minimum strength for reporting.
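To make this comparison concrete, the following Python sketch (our illustration, not part of the original analysis) computes the type II error probability \(\Pr(P_{i} \le P^{min} \mid I_{i} = 1)\) under the two beta distributions of Fig. 1 and the threshold an inferior exchange would have to adopt to match the superior exchange's error rate. The threshold value 0.9 follows the example above; the use of SciPy and all other choices are ours.

# Illustrative sketch: type II error rates under the two detection-ability levels of Fig. 1.
from scipy.stats import beta

superior = beta(30, 2)   # Panel A: superior detection ability (alpha = 30, beta = 2)
inferior = beta(5, 2)    # Panel B: inferior detection ability (alpha = 5, beta = 2)

p_min = 0.9              # example reporting threshold from the text

# Type II error: Pr(P_i <= P_min | I_i = 1) is the CDF of the ML signal at the threshold.
type2_superior = superior.cdf(p_min)
type2_inferior = inferior.cdf(p_min)
print(f"Type II error at P_min = 0.9: superior {type2_superior:.2f}, inferior {type2_inferior:.2f}")

# Threshold the inferior exchange must set to match the superior exchange's type II
# error rate (invert the CDF); the gap between 0.9 and this value is the overreporting margin.
matched_p_min = inferior.ppf(type2_superior)
print(f"P_min needed by the inferior exchange to match that error rate: {matched_p_min:.2f}")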
As already stated, cryptocurrency exchanges have lower detection abilities than private banks have, considering their limitations due to their short histories, small sizes, and illiquidity. Thus, we can consider ordinary private banks as superior exchanges and cryptocurrency exchanges as inferior exchanges. Under a similar reporting system, the excessive reporting behavior observed in private banks is likely to be even greater among cryptocurrency exchanges. In addition, by matching the two panels in Fig. 1 to an old, large, liquid exchange and a new, small, illiquid exchange, we can infer that the magnitude of overreporting is greater for newer, smaller, and less liquid exchanges.
A cryptocurrency exchange will try to reduce the probability of type II errors as much as possible, but not without limit. Although type I errors carry no direct sanctions, exchanges face reporting costs. If the reporting cost were zero, private banks under the longstanding regulation would report every transaction. Each financial authority already has a preset form and guideline and, crucially, it often requires monitoring institutions to describe the transaction. When an exchange reports a transaction as suspicious, it needs to state why the transaction is regarded as ML, so that the exchange is not punished for intentional reporting insincerity. This reporting cost arises not only when the reported transaction is a true ML case, but also when it is actually a normal one. Due to the trade-off between increasing the number of reports and reducing reporting costs, \(P^{min}\) will not fall to zero, but to a certain optimal point where the total loss is minimized. The total loss is the sum of the expected reporting failure sanctions and the expected total reporting costs.
The expected loss from reporting-failure sanctions is a function of \(P^{min}\), defined by
$$\begin{aligned} \mathcal{L}_{fail}(P^{min}) &= \gamma n \Pr(ML \wedge \text{not reported}) = \gamma n \Pr(ML)\Pr(\text{not reported} \mid ML) \\ &= \gamma n \Pr(I_{i} = 1)\Pr(P_{i} \le P^{min} \mid I_{i} = 1) = \gamma n \delta \int_{0}^{P^{min}} f_{1}(P_{i})\, dP_{i}, \end{aligned}$$
where \(n\) is the number of transactions on the exchange, \(\delta\) denotes the probability that transaction \(i\) is an ML transaction, and \(\gamma\) is an authority constant representing the government sanction imposed for an unreported ML case. In reality, the authority constant may be related to the identification of reported cases, but we set it as a constant for simplicity. \(f_{1}(P_{i})\) is the probability density function of \(P_{i}\) when \(i\) is an ML transaction. Differentiating \(\mathcal{L}_{fail}\) with respect to \(P^{min}\) yields
$$\mathcal{L}_{fail}'(P^{min}) = \gamma n \delta f_{1}(P^{min}).$$
The expected total reporting cost is also a function of \(P^{min}\), defined by
$$\mathcal{L}_{cost}(P^{min}) = n\left[ \delta \int_{P^{min}}^{1} C(P_{i}) f_{1}(P_{i})\, dP_{i} + (1 - \delta) \int_{P^{min}}^{1} C(P_{i}) f_{0}(P_{i})\, dP_{i} \right],$$
where \(C(P_{i})\) is the reporting cost function, which depends on \(P_{i}\). If \(P_{i}\) is small, it is difficult for an exchange to justify its report; thus, the smaller the \(P_{i}\), the higher the cost \(C(P_{i})\). \(f_{0}(P_{i})\) is the probability density function of \(P_{i}\) when \(i\) is a normal transaction. Differentiating \(\mathcal{L}_{cost}\) with respect to \(P^{min}\) yields
$$\mathcal{L}_{cost}'(P^{min}) = -n\left[ \delta C(P^{min}) f_{1}(P^{min}) + (1 - \delta) C(P^{min}) f_{0}(P^{min}) \right].$$
The total loss is the sum of two kinds of loss:
$$\mathcal{L}(P^{min}) = \mathcal{L}_{fail}(P^{min}) + \mathcal{L}_{cost}(P^{min}).$$
By the first-order condition, the total loss is minimized when
$$\mathcal{L}_{fail}'(P^{min}) + \mathcal{L}_{cost}'(P^{min}) = 0.$$
It follows that the optimal threshold \(P^{min*}\) satisfies the following condition:
$$\frac{\gamma}{C(P^{min*})} - 1 = \frac{(1 - \delta)\, f_{0}(P^{min*})}{\delta\, f_{1}(P^{min*})}.$$
This condition determines how far a cryptocurrency exchange will adjust \(P^{min}\) downward.
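As a numerical illustration of this first-order condition, the sketch below minimizes the total loss \(\mathcal{L}(P^{min})\) for one hypothetical parameterization. The densities \(f_{1}\) and \(f_{0}\), the cost function \(C\), and the values of \(\delta\), \(\gamma\), and \(n\) are all our own assumptions, chosen only to show how an optimal threshold \(P^{min*}\) emerges; they are not taken from the model's calibration.

# Hypothetical parameterization of the loss-minimization problem (illustration only).
from scipy.stats import beta
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

f1 = beta(5, 2).pdf      # assumed density of P_i for ML transactions (inferior exchange)
f0 = beta(2, 5).pdf      # assumed density of P_i for normal transactions
delta = 0.05             # assumed share of ML transactions, Pr(I_i = 1)
gamma = 50.0             # assumed sanction per unreported ML case
n = 1.0                  # number of transactions; it scales both terms, so it does not move the argmin

def C(p):
    # Assumed reporting cost: weakly suspicious cases are harder to justify, hence costlier.
    return 1.0 + 4.0 * (1.0 - p)

def total_loss(p_min):
    fail = gamma * n * delta * quad(f1, 0.0, p_min)[0]
    cost = n * (delta * quad(lambda p: C(p) * f1(p), p_min, 1.0)[0]
                + (1.0 - delta) * quad(lambda p: C(p) * f0(p), p_min, 1.0)[0])
    return fail + cost

res = minimize_scalar(total_loss, bounds=(0.0, 1.0), method="bounded")
print(f"Optimal reporting threshold P_min* = {res.x:.3f}")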
Money launderer's portfolio choice problem
In the model depicting a cryptocurrency exchange's decision in the previous section, we do not discuss the behavior of money launderers. The number of ML transactions is given and \(\delta = {\text{Pr}}\left( {ML} \right)\) is treated as exogenous. However, a cryptocurrency exchange considers not only the financial authority's behavior, but also the money launderers' behavior. Thus, we build a second model using portfolio choice theory to analyze the decision of a representative money launderer and its effect on the cryptocurrency exchange's decision.
Cryptocurrencies are known to be favored by illegal market participants. A criminal who wants to launder illegal gains and convert them to fiat money is likely to use a cryptocurrency exchange. We can infer that the fraction of transactions that are ML is much greater on cryptocurrency exchanges than at ordinary private banks. Assuming that money launderers are aware of which exchanges are safer or riskier for performing ML activities than others and, thus, can choose the safest cryptocurrency exchange as an ML channel, an exchange that is highly reliant on revenue from ML transactions may try to conceal money launderers' activities from being detected. We call these types of businesses ML-friendly cryptocurrency exchanges. An ML-friendly exchange can be aware that excessive reporting is uninformative to the government. Then, the exchange may prefer to overreport suspicious transactions because it still fears penalties for reporting failures. This can be better understood through an example. Suppose ten transactions are made through an exchange and two of them, denoted as A and B, are actual ML cases. A and B are estimated by the exchange to be the most and second-most suspicious transactions, respectively. The exchange is contemplating whether to change the currently set \(P^{min}\), which lets A and B be reported. Raising \(P^{min}\) so that only A is reported runs the risk of a reporting-failure sanction caused by B. On the other hand, lowering \(P^{min}\) to include other, less suspicious cases dilutes the report without creating additional reporting-failure sanction risk. The government has limited resources, so it may waste them on insignificant reports if \(P^{min}\) is lowered. By setting the threshold \(P^{min}\) low, the exchange can deter the apprehension of money launderers.
Masciandaro (1998, 1999) assumes that dirty money must be laundered before it can be reinvested. The reason for this is that those who are willing to reinvest illegal gains try to maintain secrecy by using an ML process. However, the assumption that reinvestment must always be preceded by ML seems inadequate. It is true that reinvesting dirty money into another sector requires ML, but one can still invest the money in the sector in which it originated. For example, profits from drug sales can be reinvested to expand the drug business. This process does not necessarily require ML, and ML may expose the money to the risk of identification by the authority. Ferwerda (2009) concedes that not all gains need to be laundered in practice; however, for simplicity, he assumes that unlaundered gains can be incorporated in the ML detection probability by lowering the probability value. This study distinguishes money that does not need to be laundered from money that needs to be laundered.
In our study, the money launderer is the same person as the criminal: whereas McCarthy et al. (2015) include a professional money launderer in their model, we assume for simplicity that the launderer is the criminal himself. As we allow for the possibility of reinvestment in the original sector without ML, we assume that a money launderer with dirty money compares the profitability of investing in the original sector with that of investing in other sectors that require ML. We refer to investment in another sector as investment in a clean zone. Although laundered money can become dirty again, we use the term clean zone to denote outside sectors in general. In this model, there are only two distinct zones: a dirty underground zone and a clean zone (Fig. 2).
Fig. 2. Model describing the two choices available to a money launderer
Let \(m\) denote an initial amount of illegal money held by a representative would-be money launderer. The money launderer with the fixed illegal fund \(m\) divides the fund into two parts for diversification. Defining \(\theta\) (\(0 < \theta < 1\)) as a proportion of the initial fund that the launderer chooses to send to the clean zone, \(\theta m\) goes to the clean zone via a cryptocurrency exchange. The transaction fee \(\tau\) determined by the exchange is lost in the ML process. In practice, dirty money has to go through several institutions to set up layers, but, for simplicity, we assume that the ML is implemented through a single cryptocurrency exchange. When \(\theta m\) is laundered, the successfully washed money can be invested outside of the original sector with a return of \(r_{C}\). Considering the loss of \(\tau\) and the return of \(r_{C}\), we express \(r_{ML}\), the total return from the ML process, as follows:
$$\theta m (1 + r_{ML}) = \theta m (1 - \tau)(1 + r_{C}).$$
However, ML is not certain to succeed, but rather involves some risk. Hinterseer (2002) suggests that each financial investment can be framed in an \(\mathbb{R}^{3}\) space of (return, risk, secrecy). The risk is the embedded financial risk describing the deviations caused by the upward and downward movements of an asset. In addition to the traditional dimensions of financial return and risk, this space includes a secrecy dimension. Associated with legal risk, this dimension signifies concealment from the public and the supervising authority. A financial decision, such as ML, needs enough secrecy to avoid detection. In fact, the risk and secrecy dimensions used by Hinterseer (2002) do not necessarily need to be thoroughly separated, but they both imply probabilities. In this sense, the model in this study treats detection risk as if it were a financial risk.
In this model, \(D\) denotes the probability that an ML attempt is detected by the authority and the launderer forfeits the money. Even in this case, the money launderer pays the transaction fee \(\tau\) to the cryptocurrency exchange (Footnote 5). Incorporating this probability into the previous equality yields
$$\theta m (1 + r_{ML}) = \begin{cases} \theta m (-\tau)(1 + r_{C}), & \text{with probability } D \\ \theta m (1 - \tau)(1 + r_{C}), & \text{with probability } 1 - D. \end{cases}$$
We note that because the money confiscated by the authority was expected to earn a return of \(r_{C}\), it is more accurate to convert the confiscated amount \(\theta m\) into future value using \(r_{C}\) rather than \(r_{D}\). The expectation and variance of \(r_{ML}\) are computed as
$$\mu_{ML} = E[r_{ML}] = (1 - D - \tau)(1 + r_{C}) - 1, \qquad \sigma_{ML}^{2} = Var(r_{ML}) = D(1 - D).$$
The remaining fraction of the fund, \((1 - \theta)m\), stays in the original sector. Whereas the cleaned money earns a return of \(r_{ML}\), \((1 - \theta)m\) is assumed to be invested without any risk. This dirty money grows to \((1 - \theta)m(1 + r_{D})\), where \(r_{D}\) is the return in the original sector.
By considering \(\theta m\) and \(\left( {1 - \theta } \right)m\) as funds invested in risky and riskless assets, respectively, the money launderer's investment decision can be interpreted as a financial portfolio. This model uses the mean–variance framework introduced by Markowitz (1952). In the portfolio, the money launderer decides the share \(\theta\) to transfer to the clean zone through the ML process. The portfolio return is constructed as
$$w = \theta r_{ML} + \left( {1 - \theta } \right)r_{D} .$$
It follows that
$$\mu = E\left[ w \right] = E\left[ {\theta r_{ML} + \left( {1 - \theta } \right)r_{D} } \right] = \theta \mu_{ML} + \left( {1 - \theta } \right)r_{D} ,$$
$$\sigma^{2} = Var\left( w \right) = Var\left( {\theta r_{ML} + \left( {1 - \theta } \right)r_{D} } \right) = \theta^{2} Var\left( {r_{ML} } \right) = \theta^{2} \sigma_{ML}^{2} .$$
Then, we can obtain a capital allocation line (CAL) as follows:
$$\mu = \frac{\mu_{ML} - r_{D}}{\sigma_{ML}}\sigma + r_{D}.$$
The optimal pair \(\left( {\sigma^{*} , \mu^{*} } \right)\) is the solution to the following utility maximization problem:
$$\max_{\sigma, \mu} U(\sigma, \mu), \quad \text{s.t.} \quad \mu = \frac{\mu_{ML} - r_{D}}{\sigma_{ML}}\sigma + r_{D}.$$
By equating the \(MRS_{\sigma , \mu }\) to the slope of the CAL, we can determine the optimal pair.
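Determining the optimal pair explicitly requires a specific utility function. A common choice, adopted here purely for illustration, is the mean–variance utility \(U(\sigma, \mu) = \mu - \frac{A}{2}\sigma^{2}\) with risk-aversion coefficient \(A > 0\); equating its MRS to the CAL slope gives \(\theta^{*} = (\mu_{ML} - r_{D})/(A \sigma_{ML}^{2})\). The Python sketch below evaluates this under hypothetical parameter values; both the utility specification and the numbers are our own assumptions, not the paper's.

# Hypothetical parameter values (illustration only; not estimates from the paper).
r_C = 0.50    # return in the clean zone
r_D = 0.05    # return in the original (dirty) sector
tau = 0.02    # exchange transaction fee
D = 0.10      # detection probability
A = 4.0       # assumed risk-aversion coefficient in U = mu - (A / 2) * sigma**2

mu_ML = (1 - D - tau) * (1 + r_C) - 1        # expected return from the ML route
var_ML = D * (1 - D)                         # variance of r_ML as defined in the model
sigma_ML = var_ML ** 0.5

cal_slope = (mu_ML - r_D) / sigma_ML         # slope of the capital allocation line

# Equating the MRS to the CAL slope under mean-variance utility yields the optimal ML share.
theta_star = (mu_ML - r_D) / (A * var_ML)
theta_star = min(max(theta_star, 0.0), 1.0)  # keep the share within [0, 1]

print(f"mu_ML = {mu_ML:.3f}, sigma_ML = {sigma_ML:.3f}, CAL slope = {cal_slope:.3f}")
print(f"Optimal share laundered through the exchange: theta* = {theta_star:.3f}")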
We are not interested in \((\sigma^{*}, \mu^{*})\) per se, but rather in how a cryptocurrency exchange's manipulation of \(D\) affects \(\theta\). In fact, a cryptocurrency exchange can control not only \(D\), but also the transaction fee \(\tau\). Thus, we can also check the relative effects of \(\tau\) and \(D\). The transaction fee that must be paid to the cryptocurrency exchange affects only \(\mu_{ML}\) and not \(\sigma_{ML}\). The inequality \(\partial \mu_{ML}/\partial \tau = -(1 + r_{C}) < 0\) holds and, thus, lowering \(\tau\) positively affects \(\mu_{ML}\). The partial derivative of the slope of the CAL with respect to \(\tau\) is negative:
$$\frac{\partial}{\partial \tau}\left( \frac{\mu_{ML} - r_{D}}{\sigma_{ML}} \right) = \frac{1}{\sigma_{ML}}\frac{\partial \mu_{ML}}{\partial \tau} < 0.$$
Thus, a reduction in \(\tau\) causes the slope to increase. Then, the substitution effect increases \(\theta m\), the fraction of funds that undergo ML. The sign of the income effect depends on the degree of absolute risk aversion. However, because money launderers are aggressive agents who bear the risk of punishment, we assume that a representative money launderer's degree of absolute risk aversion is decreasing or at least constant. Based on this assumption of non-increasing absolute risk aversion, we can conclude that both substitution and income effects are positive for \(\theta m\).
Now, let \(\tau\) be fixed, so that we can focus on the impact of changes in \(D\). A cryptocurrency exchange can reduce \(D\) through excessive reporting. We check how \(D\) affects \(\frac{{\mu_{ML} - r_{D} }}{{\sigma_{ML} }}\), the slope of the CAL. The partial derivative of the slope of the CAL with respect to \(D\) is calculated as
$$\frac{\partial}{\partial D}\left( \frac{\mu_{ML} - r_{D}}{\sigma_{ML}} \right) = \frac{\left[ (D + \tau - 1)(1 + r_{C}) + 1 \right]\dfrac{1 - 2D}{2\sqrt{D(1 - D)}}}{D(1 - D)}, \quad \text{where } D \in (0, 1).$$
This derivative is negative if and only if
$$4(1 + r_{C})D^{2} + \left[ (2\tau - 5)(1 + r_{C}) + 2 \right]D + (1 - \tau)(1 + r_{C}) - 1 > 0.$$
This quadratic inequality seems complicated, but it is only complicated for \(0.5 < D\). \(\mu_{ML}\) is a monotonically decreasing function of \(D\), whereas \(\sigma_{ML}\) is not a monotonic function of \(D\). \(\sigma_{ML}\) is maximized when \(D = 0.5\). When \(0 < D < 0.5\), a decrease in \(D\) leads to an increase in \(\mu_{ML}\) and a decrease in \(\sigma_{ML}\). In this interval, it is easy to see, without any complicated computation, that reducing D increases the slope of the CAL. The implications of lowering \(\tau\) are the same as those of lowering \(D\). Assuming that absolute risk aversion is non-increasing, reducing \(D\) increases the demand for ML, \(\theta m\), when \(D\) is less than 0.5. In fact, it is difficult for \(D\) to exceed 0.5 when an ML monitoring is practiced by cryptocurrency exchanges with inferior techniques; thus, it is likely that intentionally reducing \(D\) through excessive reporting can be a tool for a cryptocurrency exchange business. Moreover, reducing \(\tau\) does not always increase revenue, as revenue is calculated as \(\theta m\tau\). To a certain extent, decreasing \(\tau\) increases the demand for ML, \(\theta m\), which leads to higher revenues. However, when \(\tau\) is too small, a further decrease in \(\tau\) leads to lower revenue. The effect of manipulating \(D\) is free of this problem occurring when controlling \(\tau\).
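The comparative statics can also be illustrated numerically. Reusing the hypothetical mean–variance setup from the previous sketch, the code below recomputes the optimal share \(\theta^{*}\) for several detection probabilities \(D < 0.5\); in this example, lowering \(D\) raises the share sent for laundering, in line with the argument above. All parameter values remain our own assumptions.

# Comparative statics of the optimal ML share with respect to the detection probability D
# (same hypothetical parameters as in the previous sketch).
import numpy as np

r_C, r_D, tau, A = 0.50, 0.05, 0.02, 4.0

def theta_star(D):
    mu_ML = (1 - D - tau) * (1 + r_C) - 1
    var_ML = D * (1 - D)
    return float(np.clip((mu_ML - r_D) / (A * var_ML), 0.0, 1.0))

for D in (0.05, 0.15, 0.25, 0.35, 0.45):
    print(f"D = {D:.2f}  ->  theta* = {theta_star(D):.3f}")
# In this example theta* is non-increasing in D: a lower detection probability
# (achieved, e.g., by diluting reports) increases the amount routed through the exchange for ML.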
Ferwerda (2009), who incorporates ML into the market offense function proposed by Becker (1968), suggests that the ML detection probability negatively affects the amount of criminal activity, which is related to \(m\) in our study. Our model shows that the detection probability negatively affects the ML demand \(\theta m\), for a fixed amount of illegal gain \(m\). A cryptocurrency exchange sets the threshold \(P^{min}\) low enough not only to avoid sanctions, but also to reduce the detection probability to cater to money launderers. Then, \(P^{min}\) is lower in this model than in the first model. There is an additional effect. When the demand for ML increases owing to the lower detection probability, \(\delta\) increases. Then, even when \(\Pr(\text{not reported} \mid ML)\) is given, the absolute number of reporting failure cases increases. Consequently, the exchange reduces \(P^{min}\) even further, and this can be identified through Eq. (11). When we incorporate the behavior of money launderers into the analysis, we find that cryptocurrency exchanges in which ML activities comprise a large proportion of total transactions may reduce \(P^{min}\) below that indicated by the result in the first model. An excessively low \(P^{min}\) dilutes suspicious reports, making the naive application of existing private bank regulations to cryptocurrency exchanges ineffective.
Cryptocurrency is booming. Although opinions on its value or potential power may differ, its impact on the economy must be considered. In particular, governments worldwide are most concerned about its wide use in the underground economy and about investor protection (Böhme et al. 2015). On the grounds that illegal pecuniary gains from the underground economy are followed by ML, governments are working to apply the AML regulation that originally targeted the traditional private financial sector to the cryptocurrency market. In fact, regulations compelling cryptocurrency exchanges to report suspicious transactions are already emerging.
The consequences of regulatory reforms or adoptions do not depend only on the current behavior of those being regulated. Because those affected by the regulations react to them, the regulator faces a new set of actions from those being regulated. For this reason, a regulator should not take the current set of actions for granted. Successful implementation of regulations requires the authorities to not only consider the problems faced in the present state, but also predict long-run consequences (Kane 1988). This lesson also applies to the design of regulations for cryptocurrency exchanges.
Takáts (2011) proposes that private banks taking on an ML monitoring role tend to report an excessive number of transactions to avoid punishment for reporting failures. To alleviate this behavior and make the set of reports to the government more informative, he suggests a few corrective policy measures, such as reducing the punishment for reporting failures and introducing reporting fees. In other words, measures that punish type II errors less and indirectly reduce type I errors can improve the effectiveness of regulations. Similar measures may be valid for cryptocurrency exchanges. However, given our finding that overreporting is expected to be greater for cryptocurrency exchanges, further policy measures are needed.
In addition, despite some conflicting views, there is a general consensus that the direction of regulatory impact on the cryptocurrency market is generally negative or, at least, non-positive (Borri and Shakhnov 2020; Chokor and Alfieri 2021; Feinstein and Werbach 2021; Shanaev et al. 2020). This implies that the cryptocurrency regulations should be elaborate and not excessively tight, so as to reduce the burdens caused by regulations.
The overreporting behavior of cryptocurrency exchanges is primarily attributed to their inferior detection abilities, and the fact that cryptocurrencies are heavily involved in illegal activities is likely to intensify the problem. Thus, the government needs to work to improve cryptocurrency exchange businesses' detection abilities and ensure transparency in the cryptocurrency world. To improve these abilities, we suggest providing financial and technological support to cryptocurrency exchanges. Anti-money laundering risk assessments can be performed for each exchange prior to the support. For example, financial risk, including ML risk, can be analyzed through clustering algorithms, as suggested by Kou et al. (2014). Government support will be more effective when it concentrates on exchanges with high ML risk. An alternative is to set a capital requirement level for these exchanges, so that highly incompetent exchanges are prevented from entering the market.
We also suggest a differential application of regulations. Not all cryptocurrency exchanges registered on financial authorities' lists can immediately bear full monitoring and reporting obligations. Our model explaining the impacts of detection ability levels shows that newer, smaller, and less liquid exchanges tend to set lower reporting thresholds (i.e., \(P^{min}\)). Hence, reporting deadlines, fines, and reporting fees need to be applied differentially based on an exchange's age, size, and trading volume.
A governmental authority cannot directly manage entire societies and economies; thus, authorities often partially entrust their roles to private institutions to maximize the efficiency of regulations. However, this type of delegation system is bound to create agency problems caused by interest discrepancies. This discrepancy is intensified if a regulatory delegation imposes a compliance cost on the delegate that is partially responsible for the regulation practice. From this perspective, this study proposes the possibility that cryptocurrency exchanges will tend to report excessively if they are obligated to monitor ML transactions and report suspicious cases in the same way as private banks.
Beyond suggesting the mere possibility of overreporting, we claim that the magnitude of overreporting will be greater for cryptocurrency exchange businesses. Cryptocurrency exchanges generally have short histories, small sizes, and low trading volumes; thus, they lack ML detection abilities. This study develops a model to understand the structure of ML monitoring institutions' reporting decisions. Through this model, we show that cryptocurrency exchanges with limited ML detection abilities choose to overreport suspicious cases more intensely to reduce type II errors, which can be explicitly punished. Moreover, we assume that some cryptocurrency exchanges rely heavily on revenues from ML and are friendly to money launderers. Based on this assumption, we use portfolio selection theory to show that reducing the detection probability through excessive reporting can be a tool for an exchange to increase ML transactions. In consideration of this additional finding, we conclude that the number of cases reported by cryptocurrency exchanges will be even greater. We suggest some policy measures and expect that further studies can design these measures in a more refined manner. Finally, we expect our argument that newer, smaller, and less liquid exchanges would report suspicious transactions more than older, larger, and more liquid exchanges to be empirically tested once the regulation is settled and the data become accessible to academia.
The liquidity score of each cryptocurrency exchange is available in "exchanges" section of Coinmarketcap (https://coinmarketcap.com/rankings/exchanges/).
Act on reporting and using specific financial transaction information, §§ 3–6-8. [Republic of Korea, Enforcement Date Mar. 25, 2021].
Footnote 2: The BTC/USD market indicates the bitcoin market quoted in US dollars.
Footnote 3: Some may argue that governments usually do not investigate transactions that are not reported. However, we can use the following interpretation to justify positive investigation efforts. For two transactions \(i\) and \(j\) by the same money launderer, if \(i\) is reported and the authority catches the launderer, then \(j\) may also be uncovered by a further investigation. Here, \(j\) is not reported but is identified.
Footnote 4: As \(\alpha/(\alpha + \beta) = 1 - \beta/(\alpha + \beta)\) holds, fixing \(\alpha\) and varying \(\beta\) also works.
Footnote 5: It is intricate to construct a benchmark model because the punishment implementation can vary by country or even by individual case. Instead of considering a representative form of punishment, we assume that the money launderer must forfeit the entire amount of money involved in the ML attempt when identified, irrespective of the transaction fee. This assumption is convincing, in that it ensures illegal gains are forfeited, even if they have been consumed.
AML: Anti-money laundering
CAL: Capital allocation line
FATF: Financial Action Task Force
FinCEN: US Financial Crimes Enforcement Network
PDF: Probability density function
Albrecht WS, Albrecht CC, Albrecht CO, Zimbelman MF (2008) Fraud examination, 3rd edn. South-Western College Pub
Becker GS (1968) Crime and punishment: an economic approach. J Polit Econ 76(2):169–217. https://doi.org/10.1086/259394
Bhaskar ND, Chuen DL (2015) Bitcoin exchanges. Handb Digital Curr. https://doi.org/10.1016/b978-0-12-802117-0.00028-x
Böhme R, Christin N, Edelman B, Moore T (2015) Bitcoin: economics, technology, and governance. J Econ Perspect 29(2):213–238. https://doi.org/10.1257/jep.29.2.213
Borri N, Shakhnov K (2020) Regulation spillovers across cryptocurrency markets. Financ Res Lett 36:101333. https://doi.org/10.1016/j.frl.2019.101333
Brenig C, Accorsi R, Müller G (2015) Economic analysis of cryptocurrency backed money laundering. In: ECIS 2015 completed research papers 20. https://doi.org/10.18151/7217279
Chen Z, Khoa LD, Teoh EN, Nazir A, Karuppiah EK, Lam KS (2018) Machine learning techniques for anti-money laundering (AML) solutions in suspicious transaction detection: a review. Knowl Inf Syst 57(2):245–285. https://doi.org/10.1007/s10115-017-1144-z
Chokor A, Alfieri E (2021) Long and short-term impacts of regulation in the cryptocurrency market. Q Rev Econ Finance 81:157–173. https://doi.org/10.1016/j.qref.2021.05.005
Dupuis D, Gleason K (2020) Money laundering with cryptocurrency: open doors and the regulatory dialectic. J Financ Crime 28(1):60–74. https://doi.org/10.1108/jfc-06-2020-0113
Dwyer GP (2015) The economics of bitcoin and similar private digital currencies. J Financ Stab 17:81–91. https://doi.org/10.1016/j.jfs.2014.11.006
Feinstein BD, Werbach K (2021) The impact of cryptocurrency regulation on trading markets. J Financ Regul 7(1):48–99. https://doi.org/10.1093/jfr/fjab003
Ferwerda J (2009) The economics of crime and money laundering: Does anti-money laundering policy reduce crime? Rev Law Econ. https://doi.org/10.2202/1555-5879.1421
Foley S, Karlsen JR, Putniņš TJ (2019) Sex, drugs, and bitcoin: How much illegal activity is financed through cryptocurrencies? Rev Financ Stud 32(5):1798–1853. https://doi.org/10.1093/rfs/hhz015
Financial Action Task Force (2012) International standards on combating money laundering and the financing of terrorism and proliferation: the FATF recommendations, FATF/OECD, Paris, France (updated as of October 2020). www.fatf-gafi.org/recommendations.html
Financial Action Task Force (2019) Guidance for a risk-based approach to virtual assets and virtual asset service providers
Financial Crimes Enforcement Network, Public Affairs (2019) New FinCEN guidance affirms its longstanding regulatory framework for virtual currencies and a new FinCEN advisory warns of threats posed by virtual currency misuse [press release]. Retrieved from https://www.fincen.gov/news/news-releases/new-fincen-guidance-affirms-its-longstanding-regulatory-framework-virtual
Hinterseer K (2002) Criminal finance: the political economy of money laundering in a comparative legal context. Kluwer Law International, Hague
Kane EJ (1988) Interaction of financial and regulatory innovation. Am Econ Rev 78(2):328–334
Kou G, Peng Y, Wang G (2014) Evaluation of clustering algorithms for financial risk analysis using MCDM methods. Inf Sci 275:1–12. https://doi.org/10.1016/j.ins.2014.02.137
Loi H (2018) The liquidity of bitcoin. Int J Econ Financ 10(1):13–22. https://doi.org/10.5539/ijef.v10n1p13
Markowitz H (1952) Portfolio selection. J Finance 7(1):77. https://doi.org/10.2307/2975974
Masciandaro D (1998) Money laundering regulation: the micro economics. J Money Laundering Control 2(1):49–58. https://doi.org/10.1108/eb027170
Masciandaro D (1999) Money laundering: the economics of regulation. Eur J Law Econ 7:225–240. https://doi.org/10.1023/A:1008776629651
Masciandaro D, Filotto U (2001) Money laundering regulation and bank compliance costs: What do your customers know? Economics and the Italian experience. J Money Laundering Control 5(2):133–145. https://doi.org/10.1108/eb027299
McCarthy KJ, Santen PV, Fiedler I (2015) Modeling the money launderer: microtheoretical arguments on anti-money laundering policy. Int Rev Law Econ 43:148–155. https://doi.org/10.1016/j.irle.2014.04.006
Nakamoto S (2008) Bitcoin: a peer-to-peer electronic cash system. Retrieved from https://bitcoin.org/bitcoin.pdf
Shanaev S, Sharma S, Ghimire B, Shuraeva A (2020) Taming the blockchain beast? Regulatory implications for the cryptocurrency market. Res Int Bus Financ 51:101080. https://doi.org/10.1016/j.ribaf.2019.101080
Smales L (2019) Bitcoin as a safe haven: Is it even worth considering? Financ Res Lett 30:385–393. https://doi.org/10.1016/j.frl.2018.11.002
Takáts E (2011) A theory of "crying wolf": the economics of money laundering enforcement. J Law Econ Organ 27(1):32–78. https://doi.org/10.1093/jleo/ewp018
Yermack D (2015) Is bitcoin a real currency? An economic appraisal. Handb Digital Curr. https://doi.org/10.1016/b978-0-12-802117-0.00002-3
We appreciate helpful comments and discussions from Takanori Adachi (Tokyo Metropolitan Univ.), Kyoung Jin Choi (Univ. of Calgary), and Gang Kou (Southwestern Univ. of Finance and Economics).
There is no specified project funding.
College of Economics, Sungkyunkwan University, Seoul, 03063, Republic of Korea
Daehan Kim & Doojin Ryu
Faculty of Political Sciences, Istanbul Medeniyet University, Istanbul, Turkey
Mehmet Huseyin Bilgin
Daehan Kim
Doojin Ryu
DK: proposal and original idea. DR, MB: conceptualization; DK: modeling; DK, DR: methodology; DR: validation; DR: resources; DK, MB: literature review; DR, MB: economic and business implications; DK: writing—original draft preparation; DR: writing—review and editing; MB: discussion; DR: project administration. All authors read and approved the final manuscript.
Daehan Kim is currently a researcher at the College of Economics, Sungkyunkwan University (SKKU), Seoul, Republic of Korea.
Mehmet Huseyin Bilgin is a full professor of economics and the Chair of the Division of the International Economic Integration at Istanbul Medeniyet University. His current research interests include macroeconomics, international economics, and international finance. Bilgin has published many articles in reputable international journals.
Doojin Ryu, the corresponding author, is a full professor of finance at SKKU. Ryu has published 130 papers in SSCI journals, and globally ranked 4th (2018), 4th (2019), 9th (2020), 14th (2021) in the field of business and finance (Journal Citation Reports—Clarivate Analytics).
Correspondence to Doojin Ryu.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Kim, D., Bilgin, M.H. & Ryu, D. Are suspicious activity reporting requirements for cryptocurrency exchanges effective?. Financ Innov 7, 78 (2021). https://doi.org/10.1186/s40854-021-00294-6
Keywords: Portfolio choice
JEL classification: E26 (informal economy · underground economy); G11 (portfolio choice · investment decisions); K42 (illegal behavior and the enforcement of law)
COBOL - Wikipedia
begin quote from:
https://en.wikipedia.org/wiki/COBOL
Paradigm: Procedural, imperative, object-oriented
Designed by: Howard Bromberg, Howard Discount, Vernon Reeves, Jean E. Sammet, William Selden, Gertrude Tierney
Developer: CODASYL, ANSI, ISO
First appeared: 1959; 59 years ago
Stable release: ISO/IEC 1989:2014 / 2014
Typing discipline: Weak, static
Filename extensions: .cbl, .cob, .cpy
Major implementations: GnuCOBOL, IBM COBOL, Micro Focus Visual COBOL
Dialects: ACUCOBOL-GT, COBOL-IT, COBOL/2, DEC COBOL-10, DEC VAX COBOL, DOSVS COBOL, Fujitsu COBOL, Hitachi COBOL2002, HP3000 COBOL/II, IBM COBOL SAA, IBM COBOL/400, IBM COBOL/II, IBM Enterprise COBOL, IBM ILE COBOL, IBM OS/VS COBOL, ICL COBOL (VME), isCOBOL, Micro Focus COBOL, Microsoft COBOL, Realia COBOL, Ryan McFarland RM/COBOL, Ryan McFarland RM/COBOL-85, Tandem (NonStop) COBOL85, Tandem (NonStop) SCOBOL, UNIVAC COBOL, Unisys MCP COBOL74, Unisys MCP COBOL85, Unix COBOL X/Open, Visual COBOL, Wang VS COBOL
Influenced by: AIMACO, C++,[a] COMTRAN, Eiffel,[a] FACT, FLOW-MATIC, Smalltalk[a]
Influenced: CobolScript,[4] PL/I,[5] PL/B
COBOL at Wikibooks
COBOL (/ˈkoʊbɒl, -bɔːl/; an acronym for "common business-oriented language") is a compiled English-like computer programming language designed for business use. It is imperative, procedural and, since 2002, object-oriented. COBOL is primarily used in business, finance, and administrative systems for companies and governments. COBOL is still widely used in legacy applications deployed on mainframe computers, such as large-scale batch and transaction processing jobs. But due to its declining popularity and the retirement of experienced COBOL programmers, programs are being migrated to new platforms, rewritten in modern languages or replaced with software packages.[6] Most programming in COBOL is now purely to maintain existing applications.[7]
COBOL was designed in 1959 by CODASYL and was partly based on previous programming language design work by Grace Hopper, commonly referred to as "the (grand)mother of COBOL".[8][9][10] It was created as part of a US Department of Defense effort to create a portable programming language for data processing. Although intended as a stopgap, it was promptly forced on computer manufacturers by the Department of Defense, resulting in its widespread adoption.[11] It was standardized in 1968 and has since been revised four times. Expansions include support for structured and object-oriented programming. The current standard is ISO/IEC 1989:2014.[12]
COBOL statements have an English-like syntax, which was designed to be self-documenting and highly readable. However, it is verbose and uses over 300 reserved words. In contrast with modern, succinct syntax like y = x;, COBOL has a more English-like syntax (in this case, MOVE x TO y). COBOL code is split into four divisions (identification, environment, data and procedure) containing a rigid hierarchy of sections, paragraphs and sentences. Lacking a large standard library, the standard specifies 43 statements, 87 functions and just one class.
Academic computer scientists were generally uninterested in business applications when COBOL was created and were not involved in its design; it was (effectively) designed from the ground up as a computer language for business, with an emphasis on inputs and outputs, whose only data types were numbers and strings of text.[13] COBOL has been criticized throughout its life, however, for its verbosity, design process and poor support for structured programming. These weaknesses result in monolithic and, though intended to be English-like, largely incomprehensible programs with high redundancy.
Contents
1 History and specification
1.1 Background
1.2 COBOL 60
1.3 COBOL-61 to COBOL-65
1.4 COBOL-68
1.7 COBOL 2002 and object-oriented COBOL
1.8 COBOL 2014
1.9 Legacy
2 Features
2.1 Syntax
2.1.1 Metalanguage
2.2 Code format
2.3 Identification division
2.3.1 Object-oriented programming
2.4 Environment division
2.4.1 Files
2.5 Data division
2.5.1 Aggregated data
2.5.2 Other data levels
2.5.3 Data types
2.5.3.1 PICTURE clause
2.5.3.2 USAGE clause
2.5.4 Report writer
2.6 Procedure division
2.6.1 Procedures
2.6.2 Statements
2.6.2.1 Control flow
2.6.2.2 I/O
2.6.2.3 Data manipulation
2.6.3 Scope termination
2.6.4 Self-modifying code
2.7 Hello, world
3 Criticism and defense
3.1 Lack of structure
3.2 Compatibility issues
3.3 Verbose syntax
3.4 Isolation from the computer science community
3.5 Concerns about the design process
3.6 Influences on other languages
4 See also
5 Notes
6 References
6.1 Citations
6.2 Sources
7 External links
History and specification
In the late 1950s, computer users and manufacturers were becoming concerned about the rising cost of programming. A 1959 survey had found that in any data processing installation, the programming cost US$800,000 on average and that translating programs to run on new hardware would cost $600,000. At a time when new programming languages were proliferating at an ever-increasing rate, the same survey suggested that if a common business-oriented language were used, conversion would be far cheaper and faster.[14]
Grace Hopper, the inventor of FLOW-MATIC, a predecessor to COBOL
In April 1959, Mary K. Hawes called a meeting of representatives from academia, computer users, and manufacturers at the University of Pennsylvania to organize a formal meeting on common business languages.[15] Representatives included Grace Hopper, inventor of the English-like data processing language FLOW-MATIC, Jean Sammet and Saul Gorn.[16][17]
The group asked the Department of Defense (DoD) to sponsor an effort to create a common business language. The delegation impressed Charles A. Phillips, director of the Data System Research Staff at the DoD, who thought that they "thoroughly understood" the DoD's problems. The DoD operated 225 computers, had a further 175 on order and had spent over $200 million on implementing programs to run on them. Portable programs would save time, reduce costs and ease modernization.[18]
Phillips agreed to sponsor the meeting and tasked the delegation with drafting the agenda.[19]
COBOL 60
On May 28 and 29 of 1959 (exactly one year after the Zürich ALGOL 58 meeting), a meeting was held at the Pentagon to discuss the creation of a common programming language for business. It was attended by 41 people and was chaired by Phillips.[20] The Department of Defense was concerned about whether it could run the same data processing programs on different computers. FORTRAN, the only mainstream language at the time, lacked the features needed to write such programs.[21]
Representatives enthusiastically described a language that could work in a wide variety of environments, from banking and insurance to utilities and inventory control. They agreed unanimously that more people should be able to program and that the new language should not be restricted by the limitations of contemporary technology. A majority agreed that the language should make maximal use of English, be capable of change, be machine-independent and be easy to use, even at the expense of power.[22]
The meeting resulted in the creation of a steering committee and short-, intermediate- and long-range committees. The short-range committee was given until September (three months) to produce specifications for an interim language, which would then be improved upon by the other committees.[23][24] Their official mission, however, was to identify the strengths and weaknesses of existing programming languages and did not explicitly direct them to create a new language.[21] The deadline was met with disbelief by the short-range committee.[25] One member, Betty Holberton, described the three-month deadline as "gross optimism" and doubted that the language really would be a stopgap.[26]
The steering committee met on June 4 and agreed to name the entire activity as the Committee on Data Systems Languages, or CODASYL, and to form an executive committee.[27]
The short-range committee was made up of members representing six computer manufacturers and three government agencies. The six computer manufacturers were Burroughs Corporation, IBM, Minneapolis-Honeywell (Honeywell Labs), RCA, Sperry Rand, and Sylvania Electric Products. The three government agencies were the US Air Force, the Navy's David Taylor Model Basin, and the National Bureau of Standards (now the National Institute of Standards and Technology).[28] The committee was chaired by Joseph Wegstein of the US National Bureau of Standards. Work began by investigating data description, statements, existing applications and user experiences.[29]
The committee mainly examined the FLOW-MATIC, AIMACO and COMTRAN programming languages.[21][30] The FLOW-MATIC language was particularly influential because it had been implemented and because AIMACO was a derivative of it with only minor changes.[31][32] FLOW-MATIC's inventor, Grace Hopper, also served as a technical adviser to the committee.[25] FLOW-MATIC's major contributions to COBOL were long variable names, English words for commands and the separation of data descriptions and instructions.[33]
IBM's COMTRAN language, invented by Bob Bemer, was regarded as a competitor to FLOW-MATIC[34][35] by a short-range committee made up of colleagues of Grace Hopper.[36] Some of its features were not incorporated into COBOL so that it would not look like IBM had dominated the design process,[23] and Jean Sammet said in 1981 that there had been a "strong anti-IBM bias" from some committee members (herself included).[37] In one case, after Roy Goldfinger, author of the COMTRAN manual and intermediate-range committee member, attended a subcommittee meeting to support his language and encourage the use of algebraic expressions, Grace Hopper sent a memo to the short-range committee reiterating Sperry Rand's efforts to create a language based on English.[38] In 1980, Grace Hopper commented that "COBOL 60 is 95% FLOW-MATIC" and that COMTRAN had had an "extremely small" influence. Furthermore, she said that she would claim that work was influenced by both FLOW-MATIC and COMTRAN only to "keep other people happy [so they] wouldn't try to knock us out".[39] Features from COMTRAN incorporated into COBOL included formulas,[40] the PICTURE clause,[41] an improved IF statement, which obviated the need for GO TOs, and a more robust file management system.[34]
The usefulness of the committee's work was subject of great debate. While some members thought the language had too many compromises and was the result of design by committee, others felt it was better than the three languages examined. Some felt the language was too complex; others, too simple.[42] Controversial features included those some considered useless or too advanced for data processing users. Such features included boolean expressions, formulas and table subscripts (indices).[43][44] Another point of controversy was whether to make keywords context-sensitive and the effect that would have on readability.[43] Although context-sensitive keywords were rejected, the approach was later used in PL/I and partially in COBOL from 2002.[45] Little consideration was given to interactivity, interaction with operating systems (few existed at that time) and functions (thought of as purely mathematical and of no use in data processing).[46][47]
The specifications were presented to the Executive Committee on September 4. They fell short of expectations: Joseph Wegstein noted that "it contains rough spots and requires some additions", and Bob Bemer later described them as a "hodgepodge". The subcommittee was given until December to improve it.[25]
At a mid-September meeting, the committee discussed the new language's name. Suggestions included "BUSY" (Business System), "INFOSYL" (Information System Language) and "COCOSYL" (Common Computer Systems Language).[48] The name "COBOL" was suggested by Bob Bemer.[49][50]
In October, the intermediate-range committee received copies of the FACT language specification created by Roy Nutt. Its features impressed the committee so much that they passed a resolution to base COBOL on it.[51] This was a blow to the short-range committee, who had made good progress on the specification. Despite being technically superior, FACT had not been created with portability in mind or through manufacturer and user consensus. It also lacked a demonstrable implementation,[25] allowing supporters of a FLOW-MATIC-based COBOL to overturn the resolution. RCA representative Howard Bromberg also blocked FACT, so that RCA's work on a COBOL implementation would not go to waste.[52]
'And what name do you want inscribed?'
I said, 'I'll write it for you.' I wrote the name down: COBOL.
'What kind of name is that?'
'Well it's a Polish name. We shortened it and got rid of a lot of unnecessary notation.'
Howard Bromberg on how he bought the COBOL tombstone[53]
It soon became apparent that the committee was too large for any further progress to be made quickly. A frustrated Howard Bromberg bought a $15 tombstone with "COBOL" engraved on it and sent it to Charles Phillips to demonstrate his displeasure.[b][53][55] A sub-committee was formed to analyze existing languages and was made up of six individuals:[21][56]
William Selden and Gertrude Tierney of IBM,
Howard Bromberg and Howard Discount of RCA,
Vernon Reeves and Jean E. Sammet of Sylvania Electric Products.
The sub-committee did most of the work creating the specification, leaving the short-range committee to review and modify their work before producing the finished specification.[21]
The cover of the COBOL 60 report
The specifications were approved by the Executive Committee on January 3, 1960, and sent to the government printing office, which printed these as COBOL 60. The language's stated objectives were to allow efficient, portable programs to be easily written, to allow users to move to new systems with minimal effort and cost, and to be suitable for inexperienced programmers.[57] The CODASYL Executive Committee later created the COBOL Maintenance Committee to answer questions from users and vendors and to improve and expand the specifications.[58]
During 1960, the list of manufacturers planning to build COBOL compilers grew. By September, five more manufacturers had joined CODASYL (Bendix, Control Data Corporation, General Electric (GE), National Cash Register and Philco), and all represented manufacturers had announced COBOL compilers. GE and IBM planned to integrate COBOL into their own languages, GECOM and COMTRAN, respectively. In contrast, International Computers and Tabulators planned to replace their language, CODEL, with COBOL.[59]
Meanwhile, RCA and Sperry Rand worked on creating COBOL compilers. The first COBOL program ran on 17 August on an RCA 501.[60] On December 6 and 7, the same COBOL program (albeit with minor changes) ran on an RCA computer and a Remington-Rand Univac computer, demonstrating that compatibility could be achieved.[61]
The relative influences of which languages were used continues to this day in the recommended advisory printed in all COBOL reference manuals:
COBOL is an industry language and is not the property of any company or group of companies, or of any organization or group of organizations.
No warranty, expressed or implied, is made by any contributor or by the CODASYL COBOL Committee as to the accuracy and functioning of the programming system and language. Moreover, no responsibility is assumed by any contributor, or by the committee, in connection therewith. The authors and copyright holders of the copyrighted material used herein are as follows:
FLOW-MATIC (trademark of Unisys Corporation), Programming for the UNIVAC (R) I and II, Data Automation Systems, copyrighted 1958, 1959, by Unisys Corporation; IBM Commercial Translator Form No. F28-8013, copyrighted 1959 by IBM; FACT, DSI 27A5260-2760, copyrighted 1960 by Minneapolis-Honeywell.
They have specifically authorized the use of this material, in whole or in part, in the COBOL specifications. Such authorization extends to the reproduction and use of COBOL specifications in programming manuals or similar publications.[62]
COBOL-61 to COBOL-65
It is rather unlikely that Cobol will be around by the end of the decade.
Anonymous, June 1960[63]
Many logical flaws were found in COBOL 60, leading GE's Charles Katz to warn that it could not be interpreted unambiguously. A reluctant short-term committee enacted a total cleanup and, by March 1963, it was reported that COBOL's syntax was as definable as ALGOL's, although semantic ambiguities remained.[59]
Early COBOL compilers were primitive and slow. A 1962 US Navy evaluation found compilation speeds of 3–11 statements per minute. By mid-1964, they had increased to 11–1000 statements per minute. It was observed that increasing memory would drastically increase speed and that compilation costs varied wildly: costs per statement were between $0.23 and $18.91.[64]
In late 1962, IBM announced that COBOL would be their primary development language and that development of COMTRAN would cease.[64]
The COBOL specification was revised three times in the five years after its publication. COBOL-60 was replaced in 1961 by COBOL-61. This was then replaced by the COBOL-61 Extended specifications in 1963, which introduced the sort and report writer facilities.[65] The added facilities corrected flaws identified by Honeywell in late 1959 in a letter to the short-range committee.[60] COBOL Edition 1965 brought further clarifications to the specifications and introduced facilities for handling mass storage files and tables.[66]
COBOL-68
Efforts began to standardize COBOL to overcome incompatibilities between versions. In late 1962, both ISO and the United States of America Standards Institute (now ANSI) formed groups to create standards. ANSI produced USA Standard COBOL X3.23 in August 1968, which became the cornerstone for later versions.[67] This version was known as American National Standard (ANS) COBOL and was adopted by ISO in 1972.[68]
By 1970, COBOL had become the most widely used programming language in the world.[69]
Independently of the ANSI committee, the CODASYL Programming Language Committee was working on improving the language. They described new versions in 1968, 1969, 1970 and 1973, including changes such as new inter-program communication, debugging and file merging facilities as well as improved string-handling and library inclusion features.[70] Although CODASYL was independent of the ANSI committee, the CODASYL Journal of Development was used by ANSI to identify features that were popular enough to warrant implementing.[71] The Programming Language Committee also liaised with ECMA and the Japanese COBOL Standard committee.[70]
The Programming Language Committee was not well-known, however. The vice-president, William Rinehuls, complained that two-thirds of the COBOL community did not know of the committee's existence. It was also poorly funded, lacking the money to make public documents, such as minutes of meetings and change proposals, freely available.[72]
In 1974, ANSI published a revised version of (ANS) COBOL, containing new features such as file organizations, the DELETE statement[73] and the segmentation module.[74] Deleted features included the NOTE statement, the EXAMINE statement (which was replaced by INSPECT) and the implementer-defined random access module (which was superseded by the new sequential and relative I/O modules). These made up 44 changes, which rendered existing statements incompatible with the new standard.[75] The report writer was slated to be removed from COBOL, but was reinstated before the standard was published.[76][77] ISO later adopted the updated standard in 1978.[68]
In June 1978, work began on revising COBOL-74. The proposed standard (commonly called COBOL-80) differed significantly from the previous one, causing concerns about incompatibility and conversion costs. In January 1981, Joseph T. Brophy, Senior Vice-President of Travelers Insurance, threatened to sue the standard committee because it was not upwards compatible with COBOL-74. Mr. Brophy described previous conversions of their 40-million-line code base as "non-productive" and a "complete waste of our programmer resources".[78] Later that year, the Data Processing Management Association (DPMA) said it was "strongly opposed" to the new standard, citing "prohibitive" conversion costs and enhancements that were "forced on the user".[79][80]
During the first public review period, the committee received 2,200 responses, of which 1,700 were negative form letters.[81] Other responses were detailed analyses of the effect COBOL-80 would have on their systems; conversion costs were predicted to be at least 50 cents per line of code. Fewer than a dozen of the responses were in favor of the proposed standard.[82]
In 1983, the DPMA withdrew its opposition to the standard, citing the responsiveness of the committee to public concerns. In the same year, a National Bureau of Standards study concluded that the proposed standard would present few problems.[80][83] A year later, a COBOL-80 compiler was released to DEC VAX users, who noted that conversion of COBOL-74 programs posed few problems. The new EVALUATE statement and inline PERFORM were particularly well received and improved productivity, thanks to simplified control flow and debugging.[84]
The second public review drew another 1,000 (mainly negative) responses, while the last drew just 25, by which time many concerns had been addressed.[80]
In late 1985, ANSI published the revised standard. Sixty features were changed or deprecated and many were added, such as:[85][86]
Scope terminators (END-IF, END-PERFORM, END-READ, etc.)
Nested subprograms
CONTINUE, a no-operation statement
EVALUATE, a switch statement
INITIALIZE, a statement that can set groups of data to their default values
Inline PERFORM loop bodies – previously, loop bodies had to be specified in a separate procedure
Reference modification, which allows access to substrings
I/O status codes.
The standard was adopted by ISO the same year.[68] Two amendments followed in 1989 and 1993, the first introducing intrinsic functions and the other providing corrections. ISO adopted the amendments in 1991 and 1994 respectively,[68] before subsequently taking primary ownership and development of the standard.
COBOL 2002 and object-oriented COBOL
In 1997, Gartner Group estimated that there were a total of 200 billion lines of COBOL in existence, which ran 80% of all business programs.[87]
In the early 1990s, work began on adding object-orientation in the next full revision of COBOL. Object-oriented features were taken from C++ and Smalltalk.[1][2] The initial estimate was to have this revision completed by 1997, and an ISO Committee Draft (CD) was available by 1997. Some vendors (including Micro Focus, Fujitsu, and IBM) introduced object-oriented syntax based on drafts of the full revision. The final ISO standard was approved and published in late 2002.[88]
Fujitsu/GTSoftware,[89] Micro Focus and RainCode introduced object-oriented COBOL compilers targeting the .NET Framework.
There were many other new features, many of which had been in the CODASYL COBOL Journal of Development since 1978 and had missed the opportunity to be included in COBOL-85.[90] These other features included:[91][92]
Free-form code
Locale-based processing
Support for extended character sets such as Unicode
Floating-point and binary data types (until then, binary items were truncated based on their declaration's base-10 specification)
Portable arithmetic results
Bit and boolean data types
Pointers and syntax for getting and freeing storage
The SCREEN SECTION for text-based user interfaces
The VALIDATE facility
Improved interoperability with other programming languages and framework environments such as .NET and Java.
Three corrigenda were published for the standard: two in 2006 and one in 2009.[93]
COBOL 2014
Between 2003 and 2009, three technical reports were produced describing object finalization, XML processing and collection classes for COBOL.[93]
COBOL 2002 suffered from poor support: no compilers completely supported the standard. Micro Focus found that it was due to a lack of user demand for the new features and due to the abolition of the NIST test suite, which had been used to test compiler conformance. The standardization process was also found to be slow and under-resourced.[94]
COBOL 2014 includes the following changes:[95]
Portable arithmetic results have been replaced by IEEE 754 data types
Major features have been made optional, such as the VALIDATE facility, the report writer and the screen-handling facility.
Method overloading
Dynamic capacity tables (a feature dropped from the draft of COBOL 2002)[96]
Legacy
COBOL programs are used globally in governments and businesses and are running on diverse operating systems such as z/OS, z/VSE, VME, Unix, OpenVMS and Windows. In 1997, the Gartner Group reported that 80% of the world's business ran on COBOL with over 200 billion lines of code and 5 billion lines more being written annually.[97]
Near the end of the 20th century, the year 2000 problem (Y2K) was the focus of significant COBOL programming effort, sometimes by the same programmers who had designed the systems decades before. The particular level of effort required to correct COBOL code has been attributed to the large amount of business-oriented COBOL, as business applications use dates heavily, and to fixed-length data fields. After the clean-up effort put into these programs for Y2K, a 2003 survey found that many remained in use.[98] The authors said that the survey data suggest "a gradual decline in the importance of Cobol in application development over the [following] 10 years unless ... integration with other languages and technologies can be adopted".[99]
In 2006 and 2012, Computerworld surveys found that over 60% of organizations used COBOL (more than C++ and Visual Basic .NET) and that for half of those, COBOL was used for the majority of their internal software.[7][100] 36% of managers said they planned to migrate from COBOL, and 25% said they would like to if it was cheaper. Instead, some businesses have migrated their systems from expensive mainframes to cheaper, more modern systems, while maintaining their COBOL programs.[7]
Syntax
COBOL has an English-like syntax, which is used to describe nearly everything in a program. For example, a condition can be expressed as x IS GREATER THAN y or more concisely as x GREATER y or x > y. More complex conditions can be "abbreviated" by removing repeated conditions and variables. For example, a > b AND a > c OR a = d can be shortened to a > b AND c OR = d. As a consequence of this English-like syntax, COBOL has over 300 keywords.[101][c] Some of the keywords are simple alternative or pluralized spellings of the same word, which provides for more English-like statements and clauses; e.g., the IN and OF keywords can be used interchangeably, as can IS and ARE, and VALUE and VALUES.
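To make the abbreviation rule concrete, the fragment below shows the long and short forms side by side; it is a minimal sketch using hypothetical data items x, y, a, b, c and d, not an excerpt from any particular program.

*> The three conditions below are equivalent.
IF x IS GREATER THAN y DISPLAY "x is bigger" END-IF
IF x GREATER y DISPLAY "x is bigger" END-IF
IF x > y DISPLAY "x is bigger" END-IF

*> The abbreviated combined condition expands to: a > b AND a > c OR a = d
IF a > b AND c OR = d
    DISPLAY "condition met"
END-IF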
Each COBOL program is made up of four basic lexical items: words, literals, picture character-strings (see § PICTURE clause) and separators. Words include reserved words and user-defined identifiers. They are up to 31 characters long and may include letters, digits, hyphens and underscores. Literals include numerals (e.g. 12) and strings (e.g. 'Hello!').[103] Separators include the space character and commas and semi-colons followed by a space.[104]
A COBOL program is split into four divisions: the identification division, the environment division, the data division and the procedure division. The identification division specifies the name and type of the source element and is where classes and interfaces are specified. The environment division specifies any program features that depend on the system running it, such as files and character sets. The data division is used to declare variables and parameters. The procedure division contains the program's statements. Each division is sub-divided into sections, which are made up of paragraphs.
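As an illustration only (the program and data names here are invented), a minimal program touching all four divisions might look like this:

IDENTIFICATION DIVISION.
PROGRAM-ID. division-demo.            *> name and type of the source element
ENVIRONMENT DIVISION.
CONFIGURATION SECTION.                *> system-dependent features would go here
DATA DIVISION.
WORKING-STORAGE SECTION.
01 greeting PIC X(20) VALUE "Hello from COBOL".
PROCEDURE DIVISION.
main-paragraph.
    DISPLAY greeting
    STOP RUN.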
Metalanguage
COBOL's syntax is usually described with a unique metalanguage using braces, brackets, bars and underlining. The metalanguage was developed for the original COBOL specifications. Although Backus–Naur form did exist at the time, the committee had not heard of it.[105]
Elements of COBOL's metalanguage
Element        Appearance   Meaning
All capitals   EXAMPLE      Reserved word
Underlining    EXAMPLE      The reserved word is compulsory
Braces         { }          Only one option may be selected
Brackets       [ ]          Zero or one options may be selected
Ellipsis       ...          The preceding element may be repeated
Bars           {| |}        One or more options may be selected; any option may be selected only once
               [| |]        Zero or more options may be selected; any option may be selected only once
As an example, consider the following description of an ADD statement:
ADD { identifier-1 | literal-1 } ... TO { identifier-2 [ ROUNDED ] } ...
    [| ON SIZE ERROR imperative-statement-1
       NOT ON SIZE ERROR imperative-statement-2 |]
    [ END-ADD ]

(In the original metalanguage the words ADD, TO, ROUNDED, SIZE, ERROR, NOT and END-ADD are underlined to mark them as compulsory reserved words; underlining is not reproduced here.)
This description permits the following variants:
ADD 1 TO x
ADD 1, a, b TO x ROUNDED, y, z ROUNDED
ADD a, b TO c
ON SIZE ERROR
DISPLAY "Error"
END-ADD
ADD a TO b
NOT SIZE ERROR
DISPLAY "No error"
Code format
COBOL can be written in two formats: fixed (the default) or free. In fixed-format, code must be aligned to fit in certain areas (a hold-over from using punched cards). Until COBOL 2002, these were:
Area                   Column(s)   Usage
Sequence number area   1–6         Originally used for card/line numbers, this area is ignored by the compiler
Indicator area         7           The following characters are allowed here:
                                   * – Comment line
                                   / – Comment line that will be printed on a new page of a source listing
                                   - – Continuation line, where words or literals from the previous line are continued
                                   D – Line enabled in debugging mode, which is otherwise ignored
Area A                 8–11        This contains: DIVISION, SECTION and procedure headers; 01 and 77 level numbers and file/report descriptors
Area B                 12–72       Any other code not allowed in Area A
Program name area      73–         Historically up to column 80 for punched cards, it is used to identify the program or sequence the card belongs to
In COBOL 2002, Areas A and B were merged to form the program-text area, which now ends at an implementor-defined column.[106]
COBOL 2002 also introduced free-format code. Free-format code can be placed in any column of the file, as in newer programming languages. Comments are specified using *>, which can be placed anywhere and can also be used in fixed-format source code. Continuation lines are not present, and the >>PAGE directive replaces the / indicator.[106]
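A small free-format sketch follows, assuming a COBOL 2002-level compiler; it simply shows code starting in arbitrary columns, inline *> comments and the >>PAGE directive, and the program name is invented:

*> Free-format source: no sequence number or indicator areas are required.
IDENTIFICATION DIVISION.
PROGRAM-ID. free-format-demo.
PROCEDURE DIVISION.
      DISPLAY "Code may begin in any column"    *> inline comment
>>PAGE
   DISPLAY "A new page starts here in the source listing"
   STOP RUN.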
Identification division
The identification division identifies the following code entity and contains the definition of a class or interface.
Object-oriented programming
Classes and interfaces have been in COBOL since 2002. Classes have factory objects, containing class methods and variables, and instance objects, containing instance methods and variables.[107] Inheritance and interfaces provide polymorphism. Support for generic programming is provided through parameterized classes, which can be instantiated to use any class or interface. Objects are stored as references which may be restricted to a certain type. There are two ways of calling a method: the INVOKE statement, which acts similarly to CALL, or through inline method invocation, which is analogous to using functions.[108]
*> These are equivalent.
INVOKE my-class "foo" RETURNING var
MOVE my-class::"foo" TO var *> Inline method invocation
COBOL does not provide a way to hide methods. Class data can be hidden, however, by declaring it without a PROPERTY clause, which leaves the user with no way to access it.[109] Method overloading was added in COBOL 2014.[110]
Environment division
The environment division contains the configuration section and the input-output section. The configuration section is used to specify variable features such as currency signs, locales and character sets. The input-output section contains file-related information.
Files
COBOL supports three file formats, or organizations: sequential, indexed and relative. In sequential files, records are contiguous and must be traversed sequentially, similarly to a linked list. Indexed files have one or more indexes which allow records to be randomly accessed and which can be sorted on them. Each record must have a unique key, but other, alternate, record keys need not be unique. Implementations of indexed files vary between vendors, although common implementations, such as C‑ISAM and VSAM, are based on IBM's ISAM. Relative files, like indexed files, have a unique record key, but they do not have alternate keys. A relative record's key is its ordinal position; for example, the 10th record has a key of 10. This means that creating a record with a key of 5 may require the creation of (empty) preceding records. Relative files also allow for both sequential and random access.[111]
A common non-standard extension is the line sequential organization, used to process text files. Records in a file are terminated by a newline and may be of varying length.[112]
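The organization is chosen in the environment division's FILE-CONTROL paragraph. The sketch below is illustrative only: the file and key names are invented, and LINE SEQUENTIAL is the common non-standard extension mentioned above.

INPUT-OUTPUT SECTION.
FILE-CONTROL.
    SELECT customer-file ASSIGN TO "customers.dat"
        ORGANIZATION IS INDEXED
        ACCESS MODE IS DYNAMIC
        RECORD KEY IS cust-key
        ALTERNATE RECORD KEY IS cust-name WITH DUPLICATES.
    SELECT log-file ASSIGN TO "run.log"
        ORGANIZATION IS LINE SEQUENTIAL.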
Data division
The data division is split into six sections which declare different items: the file section, for file records; the working-storage section, for static variables; the local-storage section, for automatic variables; the linkage section, for parameters and the return value; the report section, for report descriptions; and the screen section, for text-based user interfaces.
Aggregated data
Data items in COBOL are declared hierarchically through the use of level-numbers which indicate if a data item is part of another. An item with a higher level-number is subordinate to an item with a lower one. Top-level data items, with a level-number of 1, are called records. Items that have subordinate aggregate data are called group items; those that do not are called elementary items. Level-numbers used to describe standard data items are between 1 and 49.[113][114]
01 some-record. *> Aggregate group record item
05 num PIC 9(10). *> Elementary item
05 the-date. *> Aggregate (sub)group record item
10 the-year PIC 9(4). *> Elementary item
10 the-month PIC 99. *> Elementary item
10 the-day PIC 99. *> Elementary item
In the above example, elementary item num and group item the-date are subordinate to the record some-record, while elementary items the-year, the-month, and the-day are part of the group item the-date.
Subordinate items can be disambiguated with the IN (or OF) keyword. For example, consider the example code above along with the following example:
01 sale-date.
05 the-year PIC 9(4).
05 the-month PIC 99.
05 the-day PIC 99.
The names the-year, the-month, and the-day are ambiguous by themselves, since more than one data item is defined with those names. To specify a particular data item, for instance one of the items contained within the sale-date group, the programmer would use the-year IN sale-date (or the equivalent the-year OF sale-date). (This syntax is similar to the "dot notation" supported by most contemporary languages.)
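Using the two record layouts above, qualified references might look like this (a brief sketch, not part of the original examples):

MOVE 2014 TO the-year IN sale-date            *> unambiguous: the field inside sale-date
MOVE the-year OF the-date TO the-year OF sale-date
IF the-month IN sale-date = 12
    DISPLAY "December sale"
END-IF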
Other data levels
A level-number of 66 is used to declare a re-grouping of previously defined items, irrespective of how those items are structured. This data level, also referred to by the associated RENAMES clause, is rarely used[115] and, circa 1988, was usually found in old programs. Its ability to ignore the hierarchical and logical structure of data meant its use was not recommended, and many installations forbade its use.[116]
01 customer-record.
05 cust-key PIC X(10).
05 cust-name.
10 cust-first-name PIC X(30).
10 cust-last-name PIC X(30).
05 cust-dob PIC 9(8).
05 cust-balance PIC 9(7)V99.
66 cust-personal-details RENAMES cust-name THRU cust-dob.
66 cust-all-details RENAMES cust-name THRU cust-balance.
A 77 level-number indicates the item is stand-alone, and in such situations is equivalent to the level-number 01. For example, the following code declares two 77-level data items, property-name and sales-region, which are non-group data items that are independent of (not subordinate to) any other data items:
77 property-name PIC X(80).
77 sales-region PIC 9(5).
An 88 level-number declares a condition name (a so-called 88-level) which is true when its parent data item contains one of the values specified in its VALUE clause.[117] For example, the following code defines two 88-level condition-name items that are true or false depending on the current character data value of the wage-type data item. When the data item contains a value of 'H', the condition-name wage-is-hourly is true, whereas when it contains a value of 'S' or 'Y', the condition-name wage-is-yearly is true. If the data item contains some other value, both of the condition-names are false.
01 wage-type PIC X.
88 wage-is-hourly VALUE "H".
88 wage-is-yearly VALUE "S", "Y".
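Condition-names are tested like boolean expressions, and SET can assign the parent item through them; a brief sketch using the declarations above:

SET wage-is-hourly TO TRUE        *> stores "H" in wage-type
IF wage-is-yearly
    DISPLAY "Salaried employee"
ELSE
    DISPLAY "Hourly employee"     *> this branch is taken
END-IF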
Data types
Standard COBOL provides the following data types:[118]
Data type      Sample declaration       Notes
Alphabetic     PIC A(30)                May only contain letters or spaces
Alphanumeric   PIC X(30)                May contain any characters
Boolean        PIC 1 USAGE BIT          Data stored in the form of 0s and 1s, as a binary number
Index          USAGE INDEX              Used to reference table elements
National       PIC N(30)                Similar to alphanumeric, but using an extended character set, e.g. UTF-8
Numeric        PIC 9(5)V9(5)            May contain only numbers
Object         USAGE OBJECT REFERENCE   May reference either an object or NULL
Pointer        USAGE POINTER
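A few sample declarations combining these types follow; the names are invented, and the object and pointer items assume a COBOL 2002-level compiler.

01 customer-name  PIC X(30).                *> alphanumeric
01 order-count    PIC 9(5) VALUE ZERO.      *> numeric
01 unit-price     PIC 9(3)V99 VALUE 1.50.   *> numeric with an implied decimal point
01 order-ref      USAGE OBJECT REFERENCE.   *> object reference (may be NULL)
01 buffer-ptr     USAGE POINTER.            *> pointer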
Type safety is variable in COBOL. Numeric data is converted between different representations and sizes silently and alphanumeric data can be placed in any data item that can be stored as a string, including numeric and group data.[119] In contrast, object references and pointers may only be assigned from items of the same type and their values may be restricted to a certain type.[120]
PICTURE clause
A PICTURE (or PIC) clause is a string of characters, each of which represents a portion of the data item and what it may contain. Some picture characters specify the type of the item and how many characters or digits it occupies in memory. For example, a 9 indicates a decimal digit, and an S indicates that the item is signed. Other picture characters (called insertion and editing characters) specify how an item should be formatted. For example, a series of + characters define character positions as well as how a leading sign character is to be positioned within the final character data; the rightmost non-numeric character will contain the item's sign, while other character positions corresponding to a + to the left of this position will contain a space. Repeated characters can be specified more concisely by specifying a number in parentheses after a picture character; for example, 9(7) is equivalent to 9999999. Picture specifications containing only digit (9) and sign (S) characters define purely numeric data items, while picture specifications containing alphabetic (A) or alphanumeric (X) characters define alphanumeric data items. The presence of other formatting characters defines edited numeric or edited alphanumeric data items.[121]
PICTURE clause       Value in      Value out
PIC 9(5)             100           00100
PIC 9(5)             "Hello"       "Hello" (this is legal, but results in undefined behavior)[119]
PIC +++++            -10           "  -10" (note leading spaces)
PIC 99/99/9(4)       31042003      "31/04/2003"
PIC *(4)9.99         100.50        "**100.50"
PIC *(4)9.99         0             "****0.00"
PIC X(3)BX(3)BX(3)   "ABCDEFGHI"   "ABC DEF GHI"
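The following self-contained sketch (invented names and picture strings) shows editing being applied when data is moved into an edited item:

IDENTIFICATION DIVISION.
PROGRAM-ID. picture-demo.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 gross-pay        PIC 9(5)V99.   *> purely numeric, V marks an implied decimal point
01 gross-pay-edited PIC $$,$$9.99. *> edited numeric: floating currency sign and comma insertion
PROCEDURE DIVISION.
    MOVE 1234.5 TO gross-pay
    MOVE gross-pay TO gross-pay-edited   *> editing happens on the MOVE
    DISPLAY gross-pay-edited             *> typically prints $1,234.50
    STOP RUN.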
USAGE clause
The USAGE clause declares the format data is stored in. Depending on the data type, it can either complement or be used instead of a PICTURE clause. While it can be used to declare pointers and object references, it is mostly geared towards specifying numeric types. These numeric formats are:[122]
Binary, where a minimum size is either specified by the PICTURE clause or by a USAGE clause such as BINARY-LONG.
USAGE COMPUTATIONAL, where data may be stored in whatever format the implementation provides; often equivalent to USAGE BINARY
USAGE DISPLAY, the default format, where data is stored as a string
Floating-point, in either an implementation-dependent format or according to IEEE 754.
USAGE NATIONAL, where data is stored as a string using an extended character set
USAGE PACKED-DECIMAL, where data is stored in the smallest possible decimal format (typically packed binary-coded decimal)
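A few illustrative declarations for these usages; the item names are invented, and BINARY-LONG and FLOAT-LONG assume a COBOL 2002-level compiler.

01 record-count  PIC S9(9) USAGE BINARY.            *> binary, size taken from the PICTURE
01 loop-index    USAGE BINARY-LONG.                 *> binary, size fixed by the usage itself
01 subtotal      PIC S9(4) USAGE COMPUTATIONAL.     *> implementation-defined, often binary
01 customer-id   PIC 9(8).                          *> USAGE DISPLAY is the default
01 balance       PIC S9(7)V99 USAGE PACKED-DECIMAL. *> packed binary-coded decimal
01 ratio         USAGE FLOAT-LONG.                  *> floating point
01 title-text    PIC N(20) USAGE NATIONAL.          *> extended character set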
Report writer
The report writer is a declarative facility for creating reports. The programmer need only specify the report layout and the data required to produce it, freeing them from having to write code to handle things like page breaks, data formatting, and headings and footings.[123]
Reports are associated with report files, which may only be written to through report writer statements.
FD report-out REPORT sales-report.
Each report is defined in the report section of the data division. A report is split into report groups which define the report's headings, footings and details. Reports work around hierarchical control breaks. Control breaks occur when a key variable changes its value; for example, when creating a report detailing customers' orders, a control break could occur when the program reaches a different customer's orders. Here is an example report description for a report which gives a salesperson's sales and which warns of any invalid records:
RD sales-report
PAGE LIMITS 60 LINES
FIRST DETAIL 3
CONTROLS seller-name.
01 TYPE PAGE HEADING.
03 COL 1 VALUE "Sales Report".
03 COL 74 VALUE "Page".
03 COL 79 PIC Z9 SOURCE PAGE-COUNTER.
01 sales-on-day TYPE DETAIL, LINE + 1.
03 COL 3 VALUE "Sales on".
03 COL 12 PIC 99/99/9999 SOURCE sales-date.
03 COL 21 VALUE "were".
03 COL 26 PIC $$$$9.99 SOURCE sales-amount.
01 invalid-sales TYPE DETAIL, LINE + 1.
03 COL 3 VALUE "INVALID RECORD:".
03 COL 19 PIC X(34) SOURCE sales-record.
01 TYPE CONTROL HEADING seller-name, LINE + 2.
03 COL 1 VALUE "Seller:".
03 COL 9 PIC X(30) SOURCE seller-name.
The above report description describes the following layout:
Sales Report Page 1
Seller: Howard Bromberg
Sales on 10/12/2008 were $1000.00
Sales on 12/12/2008 were $0.00
Sales on 13/12/2008 were $31.47
INVALID RECORD: Howard Bromberg XXXXYY
Seller: Howard Discount
Sales Report Page 12
Sales on 08/05/2014 were $543.98
INVALID RECORD: William Selden 12O52014FOOFOO
Four statements control the report writer: INITIATE, which prepares the report writer for printing; GENERATE, which prints a report group; SUPPRESS, which suppresses the printing of a report group; and TERMINATE, which terminates report processing. For the above sales report example, the procedure division might look like this:
OPEN INPUT sales, OUTPUT report-out
INITIATE sales-report
PERFORM UNTIL 1 <> 1
READ sales
AT END
EXIT PERFORM
END-READ
VALIDATE sales-record
IF valid-record
GENERATE sales-on-day
GENERATE invalid-sales
END-IF
END-PERFORM
TERMINATE sales-report
CLOSE sales, report-out
Procedure division
Procedures
The sections and paragraphs in the procedure division (collectively called procedures) can be used as labels and as simple subroutines. Unlike in other divisions, paragraphs do not need to be in sections.[124] Execution goes down through the procedures of a program until it is terminated.[125] To use procedures as subroutines, the PERFORM verb is used. This transfers control to the specified range of procedures and returns only upon reaching the end.
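A minimal sketch of procedures used as subroutines (paragraph names are invented):

PROCEDURE DIVISION.
main-logic.
    PERFORM print-banner
    PERFORM print-totals
    STOP RUN.

print-banner.
    DISPLAY "Sales summary".

print-totals.
    DISPLAY "Total: 1000.00".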
A mine is "armed" when the screen is invalid.
Unusual control flow can trigger mines, which cause control in performed procedures to return at unexpected times to unexpected locations. Procedures can be reached in three ways: they can be called with PERFORM, jumped to from a GO TO or through execution "falling through" the bottom of an above paragraph. Combinations of these invoke undefined behavior, creating mines. Specifically, mines occur when execution of a range of procedures would cause control flow to go past the last statement of a range of procedures already being performed.[126][127]
For example, in the code sketched after the following list, a mine is tripped at the end of update-screen when the screen is invalid. When the screen is invalid, control jumps to the fix-screen section, which, when done, performs update-screen. This recursion triggers undefined behavior as there are now two overlapping ranges of procedures being performed. The mine is then triggered upon reaching the end of update-screen and means control could return to one of two locations:
The first PERFORM statement
The PERFORM statement in fix-screen, where it would then "fall-through" into update-screen and return to the first PERFORM statement upon reaching the end.
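The following sketch recreates that situation; it is illustrative only, the names are invented, and screen-invalid is assumed to be an 88-level condition declared elsewhere. For simplicity fix-screen is written here as a paragraph rather than a section.

main-paragraph.
    PERFORM update-screen          *> first PERFORM of update-screen
    STOP RUN.

update-screen.
    DISPLAY "drawing the screen"
    IF screen-invalid
        GO TO fix-screen
    END-IF.
    *> Reaching the end of update-screen while two PERFORM ranges overlap is the "mine".

fix-screen.
    DISPLAY "repairing the screen"
    PERFORM update-screen.         *> second, overlapping PERFORM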
Statements
COBOL 2014 has 47 statements (also called verbs),[128] which can be grouped into the following broad categories: control flow, I/O, data manipulation and the report writer. The report writer statements are covered in the report writer section.
Control flow
COBOL's conditional statements are IF and EVALUATE. EVALUATE is a switch-like statement with the added capability of evaluating multiple values and conditions. This can be used to implement decision tables. For example, the following might be used to control a CNC lathe:
EVALUATE TRUE ALSO desired-speed ALSO current-speed
WHEN lid-closed ALSO min-speed THRU max-speed ALSO LESS THAN desired-speed
PERFORM speed-up-machine
WHEN lid-closed ALSO min-speed THRU max-speed ALSO GREATER THAN desired-speed
PERFORM slow-down-machine
WHEN lid-open ALSO ANY ALSO NOT ZERO
PERFORM emergency-stop
WHEN OTHER
END-EVALUATE
The PERFORM statement is used to define loops which are executed until a condition is true (not while true, which is more common in other languages). It is also used to call procedures or ranges of procedures (see the procedures section for more details). CALL and INVOKE call subprograms and methods, respectively. The name of the subprogram/method is contained in a string which may be a literal or a data item.[129] Parameters can be passed by reference, by content (where a copy is passed by reference) or by value (but only if a prototype is available).[130] CANCEL unloads subprograms from memory. GO TO causes the program to jump to a specified procedure.
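Two hedged fragments illustrating this; the data items i, subprogram-name and customer-record are assumptions, declared elsewhere.

*> Inline loop: runs until the condition is true, testing before each iteration by default.
PERFORM VARYING i FROM 1 BY 1 UNTIL i > 10
    DISPLAY "line " i
END-PERFORM

*> The subprogram name is data, so it can be decided at run time.
MOVE "SUBPROG1" TO subprogram-name
CALL subprogram-name USING BY REFERENCE customer-record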
The GOBACK statement is a return statement and the STOP statement stops the program. The EXIT statement has six different formats: it can be used as a return statement, a break statement, a continue statement, an end marker or to leave a procedure.[131]
Exceptions are raised by a RAISE statement and caught with a handler, or declarative, defined in the DECLARATIVES portion of the procedure division. Declaratives are sections beginning with a USE statement which specify the errors to handle. Exceptions can be names or objects. RESUME is used in a declarative to jump to the statement after the one that raised the exception or to a procedure outside the DECLARATIVES. Unlike other languages, uncaught exceptions may not terminate the program and the program can proceed unaffected.
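A sketch of the older file-error form of a declarative (names invented); the USE statement here names the file whose I/O errors the section handles, rather than an exception object, and the RAISE/RESUME machinery described above is not shown.

PROCEDURE DIVISION.
DECLARATIVES.
input-errors SECTION.
    USE AFTER STANDARD ERROR PROCEDURE ON customer-file.
handle-input-error.
    DISPLAY "I/O error while reading customer-file".
END DECLARATIVES.

main-processing SECTION.
do-work.
    OPEN INPUT customer-file
    READ customer-file AT END CONTINUE END-READ
    CLOSE customer-file
    STOP RUN.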
I/O
File I/O is handled by the self-describing OPEN, CLOSE, READ, and WRITE statements along with a further three: REWRITE, which updates a record; START, which selects subsequent records to access by finding a record with a certain key; and UNLOCK, which releases a lock on the last record accessed.
User interaction is done using ACCEPT and DISPLAY.
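A brief sketch combining these verbs; the file, record and reply names are invented, and the corresponding SELECT/FD entries are assumed to be declared in the other divisions.

OPEN INPUT customer-file
READ customer-file
    AT END DISPLAY "customer-file is empty"
    NOT AT END DISPLAY "First customer: " cust-name
END-READ
CLOSE customer-file

DISPLAY "Continue? (Y/N)"
ACCEPT user-reply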
Data manipulation
The following verbs manipulate data:
INITIALIZE, which sets data items to their default values.
MOVE, which assigns values to data items.
SET, which has 15 formats: it can modify indices, assign object references and alter table capacities, among other functions.[132]
ADD, SUBTRACT, MULTIPLY, DIVIDE, and COMPUTE, which handle arithmetic (with COMPUTE assigning the result of a formula to a variable).
ALLOCATE and FREE, which handle dynamic memory.
VALIDATE, which validates and distributes data as specified in an item's description in the data division.
STRING and UNSTRING, which concatenate and split strings, respectively.
INSPECT, which tallies or replaces instances of specified substrings within a string.
SEARCH, which searches a table for the first entry satisfying a condition.
Files and tables are sorted using SORT and the MERGE verb merges and sorts files. The RELEASE verb provides records to sort and RETURN retrieves sorted records in order.
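A short sketch exercising a few of the verbs listed above; all data items are invented and assumed to be declared in the data division.

INITIALIZE order-record                          *> reset fields to their default values
MOVE "Ada" TO first-name
MOVE "Lovelace" TO last-name
STRING last-name  DELIMITED BY SPACE
       ", "       DELIMITED BY SIZE
       first-name DELIMITED BY SPACE
    INTO display-name
END-STRING                                       *> display-name now holds "Lovelace, Ada"
INSPECT display-name TALLYING comma-count FOR ALL ","
COMPUTE order-total = unit-price * order-quantity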
Scope termination
Some statements, such as IF and READ, may themselves contain statements. Such statements may be terminated in two ways: by a period (implicit termination), which terminates all unterminated statements contained, or by a scope terminator, which terminates the nearest matching open statement.
*> Terminator period ("implicit termination")
IF invalid-record
    IF no-more-records
        NEXT SENTENCE
    ELSE
        READ record-file
            AT END SET no-more-records TO TRUE.

*> Scope terminators ("explicit termination")
IF invalid-record
    IF no-more-records
        CONTINUE
    ELSE
        READ record-file
            AT END SET no-more-records TO TRUE
        END-READ
    END-IF.
Nested statements terminated with a period are a common source of bugs.[133][134] For example, examine the following code:
IF x
DISPLAY y.
DISPLAY z.
Here, the intent is to display y and z if condition x is true. However, z will be displayed whatever the value of x because the IF statement is terminated by an erroneous period after DISPLAY y.
Another bug results from the dangling else problem, when it is ambiguous which of two IF statements an ELSE associates with.
IF x
    IF y
        DISPLAY a
    ELSE
        DISPLAY b.
In the above fragment, the ELSE associates with the IF y statement instead of the IF x statement, causing a bug. Prior to the introduction of explicit scope terminators, preventing it would require ELSE NEXT SENTENCE to be placed after the inner IF.[134]
Self-modifying code
The original (1959) COBOL specification supported the infamous ALTER X TO PROCEED TO Y statement, for which many compilers generated self-modifying code. X and Y are procedure labels, and the single GO TO statement in procedure X executed after such an ALTER statement means GO TO Y instead. Many compilers still support it,[135] but it was deemed obsolete in the COBOL 1985 standard and deleted in 2002.[136]
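For illustration only, a hedged sketch of the (now deleted) construct; the paragraph names are invented, and shutdown-switch must contain nothing but the single GO TO for the ALTER to be legal.

shutdown-switch.
    GO TO normal-shutdown.          *> the target is rewritten at run time

change-course.
    ALTER shutdown-switch TO PROCEED TO emergency-shutdown.
    GO TO shutdown-switch.          *> now jumps to emergency-shutdown

normal-shutdown.
    DISPLAY "normal shutdown".

emergency-shutdown.
    DISPLAY "emergency shutdown".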
Hello, world
A "Hello, world" program in COBOL:
IDENTIFICATION DIVISION.
PROGRAM-ID. hello-world.
PROCEDURE DIVISION.
DISPLAY "Hello, world!"
When the – now famous – "Hello, World!" program example in The C Programming Language was first published in 1978, a similar mainframe COBOL program sample would have been submitted through JCL, very likely using a punch card reader and 80-column punched cards. The listing below, with an empty DATA DIVISION, was tested using GNU/Linux and the System/370 Hercules emulator running MVS 3.8J. The JCL, written in July 2015, is derived from the Hercules tutorials and samples hosted by Jay Moseley.[137] In keeping with COBOL programming of that era, HELLO, WORLD is displayed in all capital letters.
//COBUCLG JOB (001),'COBOL BASE TEST', 00010000
// CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1) 00020000
//BASETEST EXEC COBUCLG 00030000
//COB.SYSIN DD * 00040000
00000* VALIDATION OF BASE COBOL INSTALL 00050000
01000 IDENTIFICATION DIVISION. 00060000
01100 PROGRAM-ID. 'HELLO'. 00070000
02000 ENVIRONMENT DIVISION. 00080000
02100 CONFIGURATION SECTION. 00090000
02110 SOURCE-COMPUTER. GNULINUX. 00100000
02120 OBJECT-COMPUTER. HERCULES. 00110000
02200 SPECIAL-NAMES. 00120000
02210 CONSOLE IS CONSL. 00130000
03000 DATA DIVISION. 00140000
04000 PROCEDURE DIVISION. 00150000
04100 00-MAIN. 00160000
04110 DISPLAY 'HELLO, WORLD' UPON CONSL. 00170000
04900 STOP RUN. 00180000
//LKED.SYSLIB DD DSNAME=SYS1.COBLIB,DISP=SHR 00190000
// DD DSNAME=SYS1.LINKLIB,DISP=SHR 00200000
//GO.SYSPRINT DD SYSOUT=A 00210000
// 00220000
After submitting the JCL, the MVS console displayed:
19.52.48 JOB 3 $HASP100 COBUCLG ON READER1 COBOL BASE TEST
19.52.48 JOB 3 IEF677I WARNING MESSAGE(S) FOR JOB COBUCLG ISSUED
19.52.48 JOB 3 $HASP373 COBUCLG STARTED - INIT 1 - CLASS A - SYS BSP1
19.52.48 JOB 3 IEC130I SYSPUNCH DD STATEMENT MISSING
19.52.48 JOB 3 IEC130I SYSLIB DD STATEMENT MISSING
19.52.48 JOB 3 IEFACTRT - Stepname Procstep Program Retcode
19.52.48 JOB 3 COBUCLG BASETEST COB IKFCBL00 RC= 0000
19.52.48 JOB 3 COBUCLG BASETEST LKED IEWL RC= 0000
19.52.48 JOB 3 +HELLO, WORLD
19.52.48 JOB 3 COBUCLG BASETEST GO PGM=*.DD RC= 0000
19.52.48 JOB 3 $HASP395 COBUCLG ENDED
Line 10 of the console listing above (the +HELLO, WORLD line) is highlighted for effect; the highlighting is not part of the actual console output.
The associated compiler listing generated over four pages of technical detail and job run information, for the single line of output from the 14 lines of COBOL.
Criticism and defense
Lack of structure
In the 1970s, adoption of the structured programming paradigm was becoming increasingly widespread. Edsger Dijkstra, a preeminent computer scientist, wrote a letter to the editor of Communications of the ACM, published in 1975 and entitled "How do we tell truths that might hurt?", in which he was critical of COBOL and several other contemporary languages, remarking that "the use of COBOL cripples the mind".[138] In a published dissent to Dijkstra's remarks, the computer scientist Howard E. Tompkins claimed that unstructured COBOL tended to be "written by programmers that have never had the benefit of structured COBOL taught well", arguing that the issue was primarily one of training.[139]
One cause of spaghetti code was the GO TO statement. Attempts to remove GO TOs from COBOL code, however, resulted in convoluted programs and reduced code quality.[140] GO TOs were largely replaced by the PERFORM statement and procedures, which promoted modular programming[140] and gave easy access to powerful looping facilities. However, PERFORM could only be used with procedures so loop bodies were not located where they were used, making programs harder to understand.[141]
COBOL programs were infamous for being monolithic and lacking modularization.[142] COBOL code could only be modularized through procedures, which were found to be inadequate for large systems. It was impossible to restrict access to data, meaning a procedure could access and modify any data item. Furthermore, there was no way to pass parameters to a procedure, an omission Jean Sammet regarded as the committee's biggest mistake.[143] Another complication stemmed from the ability to PERFORM THRU a specified sequence of procedures. This meant that control could jump to and return from any procedure, creating convoluted control flow and permitting a programmer to break the single-entry single-exit rule.[144]
This situation improved as COBOL adopted more features. COBOL-74 added subprograms, giving programmers the ability to control the data each part of the program could access. COBOL-85 then added nested subprograms, allowing programmers to hide subprograms.[145] Further control over data and code came in 2002 when object-oriented programming, user-defined functions and user-defined data types were included.
Nevertheless, much important legacy COBOL software uses unstructured code, which has become unmaintainable. It can be too risky and costly to modify even a simple section of code, since it may be used from unknown places in unknown ways.[146]
Compatibility issues
COBOL was intended to be a highly portable, "common" language. However, by 2001, around 300 dialects had been created.[147] One source of dialects was the standard itself: the 1974 standard was composed of one mandatory nucleus and eleven functional modules, each containing two or three levels of support. This permitted 104,976 official variants.[148]
COBOL-85 was not fully compatible with earlier versions, and its development was controversial. Joseph T. Brophy, the CIO of Travelers Insurance, spearheaded an effort to inform COBOL users of the heavy reprogramming costs of implementing the new standard.[149] As a result, the ANSI COBOL Committee received more than 2,200 letters from the public, mostly negative, requiring the committee to make changes. On the other hand, conversion to COBOL-85 was thought to increase productivity in future years, thus justifying the conversion costs.[150]
Verbose syntax
COBOL: /koh′bol/, n.
A weak, verbose, and flabby language used by code grinders to do boring mindless things on dinosaur mainframes. [...] Its very name is seldom uttered without ritual expressions of disgust or horror.
The Jargon File 4.4.8.[151]
COBOL syntax has often been criticized for its verbosity. Proponents say that this was intended to make the code self-documenting, easing program maintenance.[152] COBOL was also intended to be easy for programmers to learn and use,[153] while still being readable to non-technical staff such as managers.[154][155][156][157] The desire for readability led to the use of English-like syntax and structural elements, such as nouns, verbs, clauses, sentences, sections, and divisions. Yet by 1984, maintainers of COBOL programs were struggling to deal with "incomprehensible" code[156] and the main changes in COBOL-85 were there to help ease maintenance.[81]
Jean Sammet, a short-range committee member, noted that "little attempt was made to cater to the professional programmer, in fact people whose main interest is programming tend to be very unhappy with COBOL" which she attributed to COBOL's verbose syntax.[158]
Isolation from the computer science community
The COBOL community has always been isolated from the computer science community. No academic computer scientists participated in the design of COBOL: all of those on the committee came from commerce or government. Computer scientists at the time were more interested in fields like numerical analysis, physics and system programming than the commercial file-processing problems which COBOL development tackled.[159] Jean Sammet attributed COBOL's unpopularity to an initial "snob reaction" due to its inelegance, the lack of influential computer scientists participating in the design process and a disdain for business data processing.[160] The COBOL specification used a unique "notation", or metalanguage, to define its syntax rather than the new Backus–Naur form because few committee members had heard of it. This resulted in "severe" criticism.[161][162][59]
Later, COBOL suffered from a shortage of material covering it; it took until 1963 for introductory books to appear (with Richard D. Irwin publishing a college textbook on COBOL in 1966).[163] By 1985, there were twice as many books on Fortran and four times as many on BASIC as on COBOL in the Library of Congress.[105] University professors taught more modern, state-of-the-art languages and techniques instead of COBOL which was said to have a "trade school" nature.[164] Donald Nelson, chair of the CODASYL COBOL committee, said in 1984 that "academics ... hate COBOL" and that computer science graduates "had 'hate COBOL' drilled into them".[165] A 2013 poll by Micro Focus found that 20% of university academics thought COBOL was outdated or dead and that 55% believed their students thought COBOL was outdated or dead. The same poll also found that only 25% of academics had COBOL programming on their curriculum even though 60% thought they should teach it.[166] In contrast, in 2003, COBOL featured in 80% of information systems curricula in the United States, the same proportion as C++ and Java.[167]
There was also significant condescension towards COBOL in the business community from users of other languages, for example FORTRAN or assembler, implying that COBOL could be used only for non-challenging problems.
Concerns about the design process
Doubts have been raised about the competence of the standards committee. Short-range committee member Howard Bromberg said that there was "little control" over the development process and that it was "plagued by discontinuity of personnel and ... a lack of talent."[69] Jean Sammet and Jerome Garfunkel also noted that changes introduced in one revision of the standard would be reverted in the next, due as much to changes in who was in the standard committee as to objective evidence.[168]
COBOL standards have repeatedly suffered from delays: COBOL-85 arrived five years later than hoped,[169] COBOL 2002 was five years late,[1] and COBOL 2014 was six years late.[88][170] To combat delays, the standard committee allowed the creation of optional addenda which would add features more quickly than by waiting for the next standard revision. However, some committee members raised concerns about incompatibilities between implementations and frequent modifications of the standard.[171]
Influences on other languages
COBOL's data structures influenced subsequent programming languages. Its record and file structure influenced PL/I and Pascal, and the REDEFINES clause was a predecessor to Pascal's variant records. Explicit file structure definitions preceded the development of database management systems and aggregated data was a significant advance over Fortran's arrays.[105] PICTURE data declarations were incorporated into PL/I, with minor changes.
COBOL's COPY facility, although considered "primitive",[172] influenced the development of include directives.[105]
The focus on portability and standardization meant programs written in COBOL could be portable and facilitated the spread of the language to a wide variety of hardware platforms and operating systems.[173] Additionally, the well-defined division structure restricts the definition of external references to the Environment Division, which simplifies platform changes in particular.[174]
See also
COBOL compilers
Programming language genealogies
Alphabetical list of programming languages
Comparison of programming languages
CODASYL
Notes
a. Specifically influenced COBOL 2002's object-oriented features.[1][2][3]
b. The tombstone is currently at the Computer History Museum.[54]
c. Vendor-specific extensions cause many implementations to have far more: one implementation recognizes over 1,100 keywords.[102]
References
Saade, Henry; Wallace, Ann (October 1995). "COBOL '97: A Status Report". Dr. Dobb's Journal. Retrieved 21 April 2014.
Arranga, Edmund C.; Coyle, Frank P. (February 1998). Object-Oriented COBOL. Cambridge University Press. p. 15. ISBN 978-0132611404. Object-Oriented COBOL's style reflects the influence of Smalltalk and C++.
Arranga, Edmund C.; Coyle, Frank P. (March 1997). "Cobol: Perception and Reality". Computer. IEEE. 30 (3): 127. doi:10.1109/2.573683. ISSN 0018-9162. (Subscription required.)
Imajo, Tetsuji; et al. (September 2000). COBOL Script: a business-oriented scripting language. Enterprise Distributed Object Computing Conference. Makuhari, Japan: IEEE. doi:10.1109/EDOC.2000.882363. ISBN 0769508650. (Subscription required.)
Radin, George (1978). Wexelblat, Richard L., ed. The early history and characteristics of PL/I. History of Programming Languages. Academic Press (published 1981). p. 572. doi:10.1145/800025.1198410. ISBN 0127450408. (Subscription required.)
Mitchell, Robert L. (14 March 2012). "Brain drain: Where Cobol systems go from here". Computerworld. Retrieved 9 February 2015.
Mitchell, Robert L. (4 October 2006). "Cobol: Not Dead Yet". Computerworld. Retrieved 27 April 2014.
Porter Adams, Vicki (5 October 1981). "Captain Grace M. Hopper: the Mother of COBOL". InfoWorld. 3 (20): 33. ISSN 0199-6649.
Betts, Mitch (6 Jan 1992). "Grace Hopper, mother of Cobol, dies". Computerworld. 26 (1): 14. ISSN 0010-4841.
Lohr, Steve (2008). Go To: The Story of the Math Majors, Bridge Players, Engineers, Chess Wizards, Maverick Scientists, and Iconoclasts--The Programmers Who Created the Software Revolution. Basic Books. p. 52. ISBN 978-0786730766.
Ensmenger, Nathan L. (2009). The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise. MIT Press. p. 100. ISBN 978-0262050937. LCCN 2009052638.
"ISO/IEC 1989:2014". ISO. 26 May 2014. Retrieved 7 June 2014.
http://cs.brown.edu/~adf/programming_languages.html
Beyer 2009, p. 282.
Gürer, Denise (2002-06-01). "Pioneering Women in Computer Science". SIGCSE Bull. 34 (2): 175–180. doi:10.1145/543812.543853. ISSN 0097-8418.
Beyer 2009, pp. 281–282.
Sammet 1978a, p. 200.
"Early Meetings of the Conference on Data Systems Languages". IEEE Annals of the History of Computing. 7 (4): 316. 1985. doi:10.1109/MAHC.1985.10047. (Subscription required.)
Sammet 2004, p. 104.
Conner 1984, p. ID/9.
Bemer 1971, p. 132.
CODASYL 1969, § I.2.1.1.
CODASYL 1969, § I.1.2.
Sammet, Jean (1978). "The Early History of COBOL". ACM SIGPLAN Notices. Association for Computing Machinery, Inc. 13 (8): 121–161. doi:10.1145/960118.808378. Retrieved 14 January 2010. (Subscription required.)
Beyer 2009, p. 292.
Bemer 1971, p. 131.
"Oral History of Captain Grace Hopper" (PDF). Computer History Museum. December 1980. p. 37. Retrieved 28 June 2014.
Marcotty 1978, p. 268.
Sammet 1978a, pp. 205–206.
Sammet 1978a, Figure 8.
ISO/IEC JTC 1/SC 22/WG 4 2001, p. 846.
Sullivan, Patricia (25 June 2004). "Computer Pioneer Bob Bemer, 84". The Washington Post. p. B06. Retrieved 28 June 2014.
Bemer, Bob. "Thoughts on the Past and Future". Archived from the original on 16 May 2014. Retrieved 28 June 2014.
"The Story of the COBOL Tombstone" (PDF). The Computer Museum Report. The Computer Museum. 13: 8–9. Summer 1985. Archived (PDF) from the original on 3 April 2014. Retrieved 29 June 2014.
"COBOL Tombstone". Computer History Museum. Retrieved 29 June 2014.
Brown 1976, p. 47.
Bemer 1971, p. 133.
Williams, Kathleen Broome (10 November 2012). Grace Hopper: Admiral of the Cyber Sea. US Naval Institute Press. ISBN 978-1612512655. OCLC 818867202.
Compaq Computer Corporation: Compaq COBOL Reference Manual, Order Number: AA–Q2G0F–TK October 2000, Page xviii; Fujitsu Corporation: Net Cobol Language Reference, Version 15, January 2009; IBM Corporation: Enterprise COBOL for z/OS Language Reference, Version 4 Release 1, SC23-8528-00, December 2007
Garfunkel, Jerome (11 November 1984). "In defense of Cobol". Computerworld. 18 (24): ID/19.
Bemer 1971, p. 134.
Follet, Robert H.; Sammet, Jean E. (2003). Ralston, Anthony; Reilly, Edwin D.; Hemmendinger, David, eds. Programming language standards. Encyclopedia of Computer Science (4th ed.). Wiley. p. 1467. ISBN 0470864125. (Subscription required.)
Brown 1976, p. 49.
Taylor, Alan (2 August 1972). "Few Realise Wasted Resources of Local DP Schools". Computerworld. 6 (31): 11.
Triance, J. M. (1974). Programming in COBOL: A Course of Twelve Television Lectures. Manchester University Press. p. 87. ISBN 0719005922.
Klein 2010, p. 16.
Baird, George N.; Oliver, Paul (May 1977). "1974 Standard (X3.23–1974)". Programming Language Standards—Who Needs Them? (PDF) (Technical report). Department of the Navy. pp. 19–21. Archived (PDF) from the original on 7 January 2014. Retrieved 7 January 2014.
Culleton, John R., Jr. (23 July 1975). "'Spotty' Availability A Problem..." Computerworld. 9 (30): 17. ISSN 0010-4841.
Simmons, Williams B. (18 June 1975). "Does Cobol's Report Writer Really Miss the Mark?". Computerworld. 9 (25): 20. ISSN 0010-4841.
Shoor, Rita (26 January 1981). "User Threatens Suit Over Ansi Cobol-80". Computerworld. 15 (4): 1, 8. ISSN 0010-4841.
Shoor, Rita (26 October 1981). "DPMA Takes Stand Against Cobol Draft". Computerworld. 15 (43): 1–2. ISSN 0010-4841.
Gallant, John (16 September 1985). "Revised Cobol standard may be ready in late '85". Computerworld. 19 (37): 1, 8. ISSN 0010-4841.
"Expert addresses Cobol 85 standard". Computerworld. 19 (37): 41, 48. 16 September 1985. ISSN 0010-4841.
Paul, Lois (15 March 1982). "Responses to Cobol-80 Overwhelmingly Negative". Computerworld. 16 (11): 1, 5. ISSN 0010-4841.
Paul, Lois (25 April 1983). "Study Sees Few Problems Switching to Cobol-8X". Computerworld. 17 (17): 1, 6.
Gillin, Paul (19 November 1984). "DEC users get head start implementing Cobol-80". Computerworld. 18 (47): 1, 6. ISSN 0010-4841.
Garfunkel 1987, p. 150.
Roy, M. K.; Dastidar, D. Ghost (1 June 1989). "Features of COBOL-85". COBOL Programming: Problems and Solutions (2nd ed.). McGraw-Hill Education. pp. 438–451. ISBN 978-0074603185.
Robinson, Brian (9 July 2009). "Cobol remains old standby at agencies despite showing its age". FCW. Public Sector Media Group. Retrieved 26 April 2014.
"COBOL Standards". Micro Focus. Archived from the original on 31 March 2004. Retrieved 2 September 2014.
"NetCOBOL for .Net". netcobol.com. GTSoftware. 2013. Archived from the original on 8 July 2014. Retrieved 29 January 2014.
"A list of Codasyl Cobol features". Computerworld. 10 September 1984. p. ID/28. ISSN 0010-4841. Retrieved 8 June 2014.
ISO/IEC JTC 1/SC 22/WG 4 2001, Annex F.
"JTC1/SC22/WG4 - COBOL". ISO. 30 June 2010. Archived from the original on 14 February 2014. Retrieved 27 April 2014.
Billman, John; Klink, Huib (27 February 2008). "Thoughts on the Future of COBOL Standardization" (PDF). Archived from the original (PDF) on 11 July 2009. Retrieved 14 August 2014.
ISO/IEC JTC 1/SC 22/WG 4 2014, Annex E.
Schricker, Don (2 December 1998). "J4: COBOL Standardization". Micro Focus. Archived from the original on 24 February 1999. Retrieved 12 July 2014.
Kizior, Ronald J.; Carr, Donald; Halpern, Paul. "Does COBOL Have a Future?" (PDF). The Proceedings of the Information Systems Education Conference 2000. 17 (126). Archived from the original (PDF) on 17 August 2016. Retrieved 30 September 2012.
Carr & Kizior 2003, p. 16.
"Cobol brain drain: Survey results". Computerworld. 14 March 2012. Retrieved 27 April 2014.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 8.9.
"Reserved Words Table". Micro Focus Visual COBOL 2.2 COBOL Language Reference. Micro Focus. Retrieved 3 March 2014.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 8.3.1.2.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 8.3.2.
Shneiderman 1985, p. 349.
ISO/IEC JTC 1/SC 22/WG 4 2001, § F.2.
ISO/IEC JTC 1/SC 22/WG 4 2014, § D.18.2.
ISO/IEC JTC 1/SC 22/WG 4 2014, § D.18.
ISO/IEC JTC 1/SC 22/WG 4 2014, § D.2.1.
"File Organizations". File Handling. Micro Focus. 1998. Retrieved 27 June 2014.
Cutler 2014, Appendix A.
Hubbell, Thane (1999). Sams Teach Yourself COBOL in 24 hours. SAMS Publishing. p. 40. ISBN 978-0672314537. LCCN 98087215.
McCracken & Golden 1988, § 19.9.
Cutler 2014, § 5.8.5.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 14.9.24.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 14.9.35.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 13.18.40.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 13.18.60.3.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 14.4.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 14.6.3.
Field, John; Ramalingam, G. (September 1999). Identifying Procedural Structure in Cobol Programs (PDF). PASTE '99. doi:10.1145/381788.316163. ISBN 1581131372.
Veerman, Niels; Verhoeven, Ernst-Jan (November 2006). "Cobol minefield detection" (PDF). Software—Practice and Experience. Wiley. 36 (14). doi:10.1002/spe.v36:14. Archived from the original (PDF) on 6 March 2007.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 14.9.
ISO/IEC JTC 1/SC 22/WG 4 2014, §§ 14.9.4, 14.9.22.
ISO/IEC JTC 1/SC 22/WG 4 2014, § D.6.5.2.2.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 14.9.13.1.
ISO/IEC JTC 1/SC 22/WG 4 2014, § 14.9.35.1.
McCracken & Golden 1988, § 8.4.
Examples of compiler support for ALTER can be seen in the following:
Tiffin, Brian (18 September 2013). "September 2014". GNU Cobol. Retrieved 5 January 2014.
"The ALTER Statement". Micro Focus Visual COBOL 2.2 for Visual Studio 2013 COBOL Language Reference. Micro Focus. Retrieved 5 January 2014.
"ALTER Statement (Nucleus)" (PDF). COBOL85 Reference Manual. Fujitsu. November 1996. p. 555. Archived from the original (PDF) on 6 January 2014. Retrieved 5 January 2014.
"ALTER Statement". Enterprise COBOL for z/OS Language Reference. IBM. June 2013. Retrieved 5 January 2014.
ISO/IEC JTC 1/SC 22/WG 4 2001, § F.1.
Moseley, Jay (17 January 2015). "COBOL Compiler from MVT". Retrieved 19 July 2015.
Dijkstra, Edsger W. (18 June 1975). "How do we tell truths that might hurt?". University of Texas at Austin. EWD498. Retrieved August 29, 2007.
Tompkins, H. E. (1983). "In defense of teaching structured COBOL as computer science". ACM SIGPLAN Notices. 18 (4): 86. doi:10.1145/948176.948186. (Subscription required.)
Riehle 1992, p. 125.
Shneiderman 1985, pp. 349–350.
Coughlan, Michael (16 March 2014). Beginning COBOL for Programmers. Apress. p. 4. ISBN 1430262532. Retrieved 13 August 2014.
Sammet 1978b, p. 258.
Riehle 1992, p. 126.
"COBOL and Legacy Code as a Systemic Risk | naked capitalism". 2016-07-19. Retrieved 2016-07-23.
Lämmel, Ralf; Verhoef, Chris (November–December 2001). "Cracking the 500-language problem" (PDF). IEEE Software. 18 (6): 79. doi:10.1109/52.965809. Archived from the original (PDF) on 19 August 2014.
Howkins, T. J.; Harandi, M. T. (April 1979). "Towards more portable COBOL". The Computer Journal. BCS. 22 (4): 290. doi:10.1093/comjnl/22.4.290.
Garfunkel 1987, p. 11.
Raymond, Eric S. (1 October 2004). "COBOL". The Jargon File, version 4.4.8. Archived from the original on 30 August 2014. Retrieved 13 December 2014.
Jump up^ CODASYL 1969, § II.1.1.
Jump up^ Shneiderman 1985, p. 350.
Jump up^ Sammet 1961, p. 381.
^ Jump up to:a b Conner 1984, p. ID/10.
Jump up^ Conner 1984, p. ID/14.
Jump up^ https://books.bibliopolis.com/main/find/2200821/COBOL-Logic-and-Programming-third-edition-1974-McCameron-Fritz-oldcomputerbooks-com.html
Jump up^ "An interview: Cobol defender". Computerworld. 10 September 1984. pp. ID/29–ID/32. ISSN 0010-4841. Retrieved 8 June 2014.
Jump up^ "Academia needs more support to tackle the IT skills gap" (Press release). Micro Focus. 7 March 2013. Retrieved 4 August 2014.
Jump up^ Sammet, Jean; Garfunkel, Jerome (October 1985). "Summary of Changes in COBOL, 1960–1985". Annals of the History of Computing. IEEE. 7 (4): 342. doi:10.1109/MAHC.1985.10033. (Subscription required (help)).
Jump up^ Cook, Margaret M. (June 1978). Ghosh, Sakti P.; Liu, Leonard Y., eds. Data Base Facility for COBOL 80 (PDF). 1978 National Computer Conference. Anaheim, California: AFIPS Press. pp. 1107–1112. doi:10.1109/AFIPS.1978.63. LCCN 55-44701. Retrieved 2 September 2014. The earliest date that a new COBOL standard could be developed and approved is the year 1980 [...].
Jump up^ "Resolutions from WG4 meeting 24 - June 26-28, 2003 Las Vegas, Nevada, USA". 11 July 2003. p. 1. Archived from the original (doc) on 8 March 2016. Retrieved 29 June2014. a June 2008 revision of the COBOL standard
Jump up^ Babcock, Charles (14 July 1986). "Cobol standard add-ons flayed". Computerworld. 20 (28): 1, 12.
Jump up^ Marcotty, Michael (1978). Wexelblat, Richard L., ed. Full text of all questions submitted. History of Programming Languages. Academic Press (published 1981). p. 274. doi:10.1145/800025.1198371. ISBN 0127450408. (Subscription required (help)).
Jump up^ This can be seen in:
"Visual COBOL". IBM PartnerWorld. IBM. 21 August 2013. Archived from the original on 12 July 2014. Retrieved 5 February 2014. Micro Focus Visual COBOL delivers the next generation of COBOL development and deployment for Linux x86-64, Linux for System z, AIX, HP/UX, Solaris, and Windows.
"COBOL Compilers family". ibm.com. IBM. Archivedfrom the original on 23 February 2014. Retrieved 5 February 2014.
Tiffin, Brian (4 January 2014). "What platforms are supported by GNU Cobol?". Archived from the original on 14 December 2013. Retrieved 5 February2014.
Jump up^ Coughlan, Michael (2002). "Introduction to COBOL". Retrieved 3 February 2014.
Bemer, Bob (1971). "A View of the History of COBOL" (PDF). Honeywell Computer Journal. Honeywell. 5 (3). Retrieved 28 June 2014.
Beyer, Kurt (2009). Grace Hopper and the Invention of the Information Age. MIT Press. ISBN 978-0262013109. LCCN 2008044229.
Brown, William R. (1 December 1976). "COBOL". In Belzer, Jack; Holzman, Albert G.; Kent, Allen. Encyclopedia of Computer Science and Technology: Volume 5. CRC Press. ISBN 978-0824722555.
Carr, Donald E.; Kizior, Ronald J. (31 December 2003). "Continued Relevance of COBOL in Business and Academia: Current Situation and Comparison to the Year 2000 Study" (PDF). Information Systems Education Journal. AITP. 1 (52). ISSN 1545-679X. Retrieved 4 August2014.
CODASYL (July 1969). "CODASYL COBOL Journal of Development 1968". National Bureau of Standards. ISSN 0591-0218. LCCN 73601243.
Conner, Richard L. (14 May 1984). "Cobol, your age is showing". Computerworld. International Data Group. 18 (20): ID/7–ID/18. ISSN 0010-4841.
Cutler, Gary (9 April 2014). "GNU COBOL Programmer's Guide" (PDF) (3rd ed.). Retrieved 25 February 2014.
Garfunkel, Jerome (1987). The COBOL 85 Example Book. Wiley. ISBN 0471804614.
ISO/IEC JTC 1/SC 22/WG 4 (4 December 2001). "ISO/IEC IS 1989:2001 – Programming language COBOL". ISO. Archived from the original (ZIP of PDF) on 24 January 2002. Retrieved 2 September 2014.
ISO/IEC JTC 1/SC 22/WG 4 (31 October 2014). INCITS/ISO/IEC 1989:2014 [2014] – Programming language COBOL. INCITS.
Klein, William M. (4 October 2010). "The History of COBOL" (PDF). Archived from the original(PDF) on 7 January 2014. Retrieved 7 January 2014.
Marcotty, Michael (1978). Wexelblat, Richard L., ed. Transcript of question and answer session. History of Programming Languages. Academic Press (published 1981). p. 263. doi:10.1145/800025.1198370. ISBN 0127450408. (Subscription required (help)).
McCracken, Daniel D.; Golden, Donald G. (1988). A Simplified Guide to Structured COBOL Programming (2nd ed.). Wiley. ISBN 0471610542. LCCN 87034608.
Riehle, Richard L. (August 1992). "PERFORM considered harmful". Communications of the ACM. ACM. 35 (8): 125–128. doi:10.1145/135226.376106. (Subscription required (help)).
Sammet, Jean E. (May 1961). A method of combining ALGOL and COBOL. Papers presented at the May 9–11, 1961, western joint IRE–AIEE–ACM computer conference. ACM. pp. 379–387. doi:10.1145/1460690.1460734. (Subscription required (help)).
Sammet, Jean E. (1978a). Wexelblat, Richard L., ed. The early history of COBOL. History of Programming Languages. Academic Press (published 1981). doi:10.1145/800025.1198367. ISBN 0127450408. (Subscription required (help)).
Sammet, Jean E. (1978b). Wexelblat, Richard L., ed. Transcript of presentation. History of Programming Languages. Academic Press (published 1981). doi:10.1145/800025.1198368. ISBN 0127450408. (Subscription required (help)).
Sammet, Jean E. (23 July 2004). "COBOL". In Reilly, Edwin D. Concise Encyclopedia of Computer Science. Wiley. ISBN 978-0470090954. OCLC 249810423.
Shneiderman, B. (October 1985). "The Relationship Between COBOL and Computer Science". Annals of the History of Computing. IEEE. 7 (4): 348–352. doi:10.1109/MAHC.1985.10041.
Heterogeneous dynamics, robustness/fragility trade-offs, and the eradication of the macroparasitic disease, lymphatic filariasis
Edwin Michael and Brajendra K. Singh
BMC Medicine volume 14, Article number: 14 (2016)
The current WHO-led initiative to eradicate the macroparasitic disease, lymphatic filariasis (LF), based on single-dose annual mass drug administration (MDA) represents one of the largest health programs devised to reduce the burden of tropical diseases. However, despite the advances made in instituting large-scale MDA programs in affected countries, a challenge to meeting the goal of global eradication is the heterogeneous transmission of LF across endemic regions, and the impact that such complexity may have on the effort required to interrupt transmission in all socioecological settings.
Here, we apply a Bayesian computer simulation procedure to fit transmission models of LF to field data assembled from 18 sites across the major LF endemic regions of Africa, Asia and Papua New Guinea, reflecting different ecological and vector characteristics, to investigate the impacts and implications of transmission heterogeneity and complexity on filarial infection dynamics, system robustness and control.
We find firstly that LF elimination thresholds varied significantly between the 18 study communities owing to site variations in transmission and initial ecological parameters. We highlight how this variation in thresholds leads to the need for applying variable durations of interventions across endemic communities for achieving LF elimination; however, a major new result is the finding that filarial population responses to interventions ultimately reflect outcomes of interplays between dynamics and the biological architectures and processes that generate robustness/fragility trade-offs in parasite transmission. Intervention simulations carried out in this study further show how understanding these factors is also key to the design of options that would effectively eliminate LF from all settings. In this regard, we find that including vector control in MDA programs may not only offer a countermeasure that will reliably increase system fragility globally across all settings and hence provide a control option robust to differential locality-specific transmission dynamics, but, by simultaneously reducing transmission regime variability, also permit more reliable macroscopic predictions of intervention effects.
Our results imply that a new approach, combining adaptive modelling of parasite transmission with the use of biological robustness as a design principle, is required if we are to both enhance understanding of complex parasitic infections and delineate options to facilitate their elimination effectively.
While the current WHO-led global initiative advocating the application of annual single-dose mass drug administration (MDA) for 4–6 years to eradicate the vector-borne macroparasitic disease, lymphatic filariasis (LF), from all 73 endemic countries represents one of the largest global health programs devised to reduce the burden of tropical diseases [1, 2], a critical challenge to parasite eradication is the heterogeneous transmission of the disease across endemic regions [3–6]. We have previously shown that such environmental and geographic variability in parasite transmission between communities may reflect the impacts of significant site-specific variations in initial ecological conditions and transmission parameters [7–9]; i.e. that observed infection patterns do not merely reflect noise clouding an inherently non-spatial transmission equilibrium [10], but represent significant sensitivity to spatial and temporal variations in the key socioecological drivers of transmission across a region [8, 11]. LF transmission is further complicated by the geographic variation observed in the diversity of the primary mosquito genera implicated in parasite transmission, wherein in some agro-ecological areas Culex is dominant and in others, Anopheles or Aedes spp. [12–16], suggesting that site variations in vector biodiversity may also constitute a key part of the variable LF infection patterns observed across endemic regions [17].
These findings imply that spatial and temporal variability in key environmental drivers could fundamentally alter pattern-process relationships in LF transmission, and consequently lead to the likely occurrence of significant site-specific variability in parasite population response to interventions [7, 8, 11]. From a strategic perspective, these complexities imply that a single fixed time-limited global intervention strategy (as exemplified by the current WHO MDA initiative) that ignores local heterogeneities in parasite transmission and extinction dynamics is unlikely to achieve the successful elimination of this parasitic disease from all endemic regions [18, 19]. Instead, overall benefits are likely to be uneven, with re-emergence of infection and disease inevitable in those communities where transmission is not broken by the conclusion of a fixed-length intervention applied commonly everywhere [20, 21]. This observation suggests that the essentially top-down command and control management approach deployed by the WHO, which is further characterized by the selection and use of single elimination thresholds or breakpoints [7, 8, 11, 18, 22, 23], may require to be changed and made more adaptive to local transmission settings if the goal of global LF elimination is to be achieved. Alternatively, it indicates that a better understanding of how heterogeneous transmission interacts with intervention perturbations will be crucial if countermeasures robust to differential locality-specific control dynamics are to be discovered and used for achieving LF elimination reliably everywhere.
While impacts of heterogeneities in ecological and environmental factors on the transmission dynamics of vector-borne parasitic diseases, including malaria, filariasis, schistosomiasis and onchocerciasis, are a topic of growing study [5, 6, 8, 11, 22, 24], their interactions with public health interventions by contrast are only now beginning to be appreciated [11, 25–28]. Our previous work on LF transmission heterogeneity, for example, has highlighted the complex outcomes that such interactions may have for efforts aiming to achieve the elimination of parasitic disease [7–9, 11, 17]. An important finding in this regard is that while heterogeneous parasite transmission dynamics across a region may reflect strong system adaptations to site-specific environmental factors, this sensitivity to one set of localized conditions may also make a locally robustly adapted parasite system particularly fragile to perturbations that significantly alter the variables that constrain and govern the local transmission dynamics [11]. This implies that critical trade-offs may occur between environmentally-structured transmission robustness and adaptability or even evolvability in these parasitic systems [7, 8, 11, 17, 29], suggesting that a better understanding of these "robust yet fragile" system traits, and the factors that underlie these properties, will be fundamental to the development of the countermeasures needed for more effectively disrupting LF transmission in all endemic settings [7, 8, 11, 17]. Furthermore, how heterogeneous transmission dynamics interact with current drug treatment regimens to impact timelines for achieving parasite elimination in different ecological settings also has acute policy significance for the current LF elimination program, namely for determining if the current WHO MDA strategy is likely to achieve the stated goal of accomplishing the elimination of this disease both regionally and globally by 2020 [7, 8, 11, 17].
In this study, our overarching goal is to examine how site-specific heterogeneity in LF transmission might affect the probability of eliminating this parasitic disease both regionally and globally using existing disease control strategies. The basis of our work is the use of a Bayesian data-model assimilation (DA) framework that facilitates both the simultaneous fitting and parameterization of vector-specific LF transmission models to parallel cross-sectional human infection and vector abundance data assembled from community field surveys [8, 9, 11, 30, 31], and the effective use of the resulting best-fitting model ensembles for undertaking numerical investigations of the effects of between-site heterogeneity on LF transmission and extinction dynamics, and of the impact that this variability may have on infection outcomes in response to the mass drug and vector intervention strategies currently advocated for interrupting parasite transmission in different LF endemic settings. In addition, following recent advances in investigating the parameter structure of complex dynamical models, we also examine the parameter space and behaviour of the locally fitted models to develop new theoretical understanding regarding how such characteristics may be linked to LF transmission robustness and adaptation to the local environment, the impact that such associations may have on parasite responses to perturbations, and the ability of models to make reliable macroscopic predictions [32–34]. To be socially relevant to current control efforts, we focus on the implications that transmission heterogeneity has for two key management questions: the durations of control required for breaking LF transmission across the range of transmission intensity-vector species combinations likely to be observed in LF endemic regions; and the possible role that adding supplemental vector control measures can play in overcoming the between-site response variations that may arise from applying MDA alone.
We begin by describing our study areas and the data, followed by descriptions of the LF model and the Bayesian melding DA framework used to calibrate and fit the model to parallel community-level human infection and vector data. We then describe the modelling results focussing on how heterogeneity in transmission, parameter structure and biological robustness to extinction may interact with intervention outcomes, taking particular account of effects of variable vector species, pre-control transmission intensities, intervention coverage patterns, and the impact of supplemental vector control. We end by discussing the significance of these findings for assessing and designing the policy and management options that can best affect global LF elimination in the face of the heterogeneous dynamics and robustness trade-offs that are likely to govern local parasite transmission in typical endemic settings.
The data used in this analysis were assembled from published pre-control cross-sectional surveys of microfilariae (mf) prevalence and mosquito abundance carried out in 18 geographically-distinct communities across the major extant LF endemic regions of Africa, Asia and Papua New Guinea. These datasets were selected on the basis that they provide human age-mf prevalence data, including break-ups of totals of individuals sampled and numbers of mf-positives out of these samples, information on the dominant prevalent vector species, and measurements of the corresponding annual mosquito biting rates (ABR) denoting the vector transmission intensity prevailing in each site. Details of the data—sample sizes and % mf-positives, along with sampling blood volumes used to assess infection prevalence, dominant vector species and ABRs—for each of the 18 survey sites are given in Table 1. Information on the drug regimen used for simulating the effects of interventions in each of these sites by MDA without/with vector control (VC) are also given, reflecting the current guidelines and use of drug combinations advocated for these sites.
Table 1 Description of baseline survey data. The study sites are given with the baseline sample size and microfilariae (mf) prevalence (%), blood volumes collected during the survey to test for mf positivity, annual biting rate (ABR) of vector mosquitoes, dominant vector species and drug regimen used for simulating the chemotherapeutic interventions by mass drug administration (MDA) without/with vector control (VC)
The mathematical model of LF transmission dynamics
We employed the recently developed mosquito genus-specific transmission model of LF to carry out the modeling work in this study [7, 8, 11, 35, 36]. Briefly, the state variables of this hybrid coupled partial differential and differential equation model vary over age (a) and/or time (t), representing changes in the adult worm burden per human host (W(a, t)), the mf level in the human host modified to reflect infection detection in a 1 ml blood sample (M(a, t)), the average number of infective L3 larval stages per mosquito (L), and a measure of immunity (I(a, t)) developed by human hosts against L3 larvae. The state equations comprising this model are:
$$ \begin{array}{l}\frac{\partial W}{\partial t}+\frac{\partial W}{\partial a}=\lambda \frac{V}{H}\psi_1\psi_2 h(a)L^{*}g_1(I)g_2(W)-\mu W\\ \frac{\partial M}{\partial t}+\frac{\partial M}{\partial a}=\alpha \phi\left(W,k\right)W-\gamma M\\ \frac{\partial I}{\partial t}+\frac{\partial I}{\partial a}=W-\delta I\\ \frac{dL}{dt}=\lambda \kappa g\int \pi(a)\left(1-f(M)\right)da-\sigma L-\lambda \psi_1 L\\ L^{*}=\frac{\lambda \kappa g\int \pi(a)\left(1-f(M)\right)da}{\sigma +\lambda \psi_1}\end{array} $$
The above equations involve partial derivatives of three state variables (W, worm load; M, microfilaria intensity; I, immunity to acquiring new infection due to the pre-existing worm load), whereas the infective L3-stage larval density developing in the mosquito population as a result of the ingestion of mf from infected humans is modeled by an ordinary differential equation, reflecting the significantly faster timescale of larval infection dynamics in the vector compared to the infection dynamics in the human host. This allows the simplifying assumption that the density of infective-stage larvae in the vector population rapidly reaches a dynamic equilibrium (denoted by L*) [7, 8, 11, 37, 38]. The term f(M) describes the functional form relating mf-to-L3-stage larval uptake and development in the vector population, which is well known to differ significantly between the two major genera of mosquito vectors implicated in LF transmission [39–42], and is defined as [7]:
$$ f(M)=\left[\frac{2}{{\left[1+\frac{M}{k}\left(1- \exp \left[-\frac{r}{\kappa}\right]\right)\right]}^k}-\frac{1}{{\left[1+\frac{M}{k}\left(1- \exp \left[-\frac{2r}{\kappa}\right]\right)\right]}^k}\right] $$
for mosquitoes of anopheline genus, and:
$$ f(M)={\left(1+\frac{M}{k}\left(1- \exp \left[-\frac{r}{\kappa}\right]\right)\right)}^{-k} $$
for mosquitoes of culicine genus.
In the above, k (= k0 + kLin M) is the shape parameter of the negative binomial distribution, indicating that mean L3 output is dependent on the distribution of mf, typically found to be overdispersed among hosts in a community [37, 43], whereas r and κ are, respectively, the rate of initial increase and the maximum level of L3 larvae that develop in each vector population. The details of the derivation of these two larval uptake and development functions are given elsewhere [7]. The terms g1(I) and g2(W) represent the expressions by which acquired immunity to larval establishment and host immunosuppression, as functions of adult worms, respectively, are included in the model [8, 11]. This basic coupled immigration-death model structure as well as recent extensions have been discussed [7, 8, 11, 37, 38]; see Additional file 1: Table S1 for the description of all the model parameters and functions.
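As a concrete illustration of the two genus-specific uptake forms just defined, the following minimal Python sketch implements f(M) for anopheline and culicine vectors directly from the expressions above; the numerical values used for k0, kLin, r and κ are arbitrary placeholders chosen for illustration only, not the fitted site-specific estimates reported in this study.

```python
import numpy as np

def f_anopheline(M, k, r, kappa):
    """mf uptake/L3 development function for anopheline vectors (first form above)."""
    t1 = (1.0 + (M / k) * (1.0 - np.exp(-r / kappa))) ** (-k)
    t2 = (1.0 + (M / k) * (1.0 - np.exp(-2.0 * r / kappa))) ** (-k)
    return 2.0 * t1 - t2

def f_culicine(M, k, r, kappa):
    """mf uptake/L3 development function for culicine vectors (second form above)."""
    return (1.0 + (M / k) * (1.0 - np.exp(-r / kappa))) ** (-k)

# Illustrative placeholder values only (not fitted estimates from this study)
M = np.linspace(0.0, 50.0, 6)          # mf intensity per 1 ml blood sample
k = 0.01 + 0.04 * M                    # k = k0 + kLin*M, with k0 = 0.01, kLin = 0.04
print(f_anopheline(M, k, r=0.2, kappa=4.0))
print(f_culicine(M, k, r=0.2, kappa=4.0))
```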
The Bayesian melding framework
Our strategy was essentially two-pronged: first, to integrate field observations on LF infection with simulation model outputs to undertake model calibrations and to quantify localized parasite transmission, i.e. by constraining values of transmission parameters within the bounds of data-based estimation; and second, following this, to use the locally parameterized models to address the variables and questions of interest in this study, namely 1) estimation of site-specific mf age-prevalences and worm breakpoints, and 2) use of these quantities to carry out the intervention simulations described further below. We used the data-model assimilation methodology founded on the Bayesian melding (BM) algorithm to address this coupled model fitting and analysis problem [8, 11]. The BM approach is a procedure whereby all the available prior information about model inputs and outputs is "melded" together via Bayesian synthesis in order to obtain the posterior distribution of any quantity of interest that is a function of these inputs and/or outputs [31, 44]. For example, one of the priors on model output is the set of observed data; i.e. in our case the survey data on LF age-prevalence collected from each endemic community. The other output prior is the model-generated values of the state variables, such as W or M. We further specify a conditional probability distribution for the observed data given the model outputs, and this yields a likelihood for each model output. Thus, the BM procedure is fundamentally a method for reconciling several sources of prior information (related to model parameters and outcomes, and data), in order to constrain the acceptable solution space of the input parameters [30, 45, 46]. In the form of the method we implemented here, we initially assigned vague or uniform prior distributions for each of the model input parameters (except for the mosquito biting rate, which was fixed to the values of the monthly biting rate (MBR; see Table 1) prevailing in each site), to reflect our initial incomplete knowledge regarding their local values, while for assessing the adequacy of model outputs to data, a binomial likelihood function was constructed to capture the distribution of the observed mf age-prevalence data [8, 11, 38]. In practice, we run the dynamic model l times, each time drawing random input values \( \theta_i \) for i = 1, …, l, with the model producing as output the quantity of interest \( \phi_i \), for example predictions of mf age-prevalence, for each input \( \theta_i \). We then use the observed data, denoted by y, to compute a weight \( w_i \) for each input \( \theta_i \): \( w_i = L(\phi_i) \). Specifically, here, \( L(\phi_i) \) is the likelihood of the model outputs given the observed data, \( L(\phi_i) = \mathrm{Prob}(y|\phi_i) \). We finally use the sampling importance resampling (SIR) algorithm to resample, with replacement, from the above parameter sets, with the probability of acceptance of each resample \( \theta_j \), j = 1, 2, …, l, proportional to its weight \( w_j \). A typical number of resamples for the results presented in this paper was around 500, and these SIR parameter sets are then used to generate distributions of variables of interest from the model (e.g. age-prevalence curves, worm breakpoints), including measures of their uncertainties [8, 11]. Note that as this procedure is Monte Carlo-based, the method thus yields an ensemble of good-fitting local models differing only in their parameter values, as summarized by their posterior distributions.
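The sampling importance resampling step described above can be sketched as follows in Python; this is a schematic illustration only, in which run_lf_model is a hypothetical stand-in for integrating the full transmission model to equilibrium, and the age-group sample sizes, mf counts and uniform prior bounds are invented for the example rather than taken from any of the study sites.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical site data: per-age-group sample sizes and numbers mf-positive
n_sampled  = np.array([60, 80, 75, 50])
n_positive = np.array([ 3, 12, 15,  9])

def run_lf_model(theta):
    """Hypothetical stand-in for integrating the full model to endemic equilibrium:
    returns predicted mf prevalence (proportion) for each age group."""
    return np.clip(theta[0] + theta[1] * np.array([0.2, 0.6, 0.9, 0.7]), 1e-6, 1 - 1e-6)

# 1. Draw l parameter vectors from vague (uniform) priors
l = 20_000
priors = rng.uniform(low=[0.0, 0.0], high=[0.05, 0.3], size=(l, 2))

# 2. Weight each vector by the binomial likelihood of the observed counts
log_w = np.array([stats.binom.logpmf(n_positive, n_sampled, run_lf_model(th)).sum()
                  for th in priors])
w = np.exp(log_w - log_w.max())

# 3. Sampling importance resampling: keep ~500 vectors with probability proportional to weight
sir_idx = rng.choice(l, size=500, replace=True, p=w / w.sum())
posterior_ensemble = priors[sir_idx]
print(posterior_ensemble.mean(axis=0))
```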
Numerical stability analysis for quantifying mf breakpoint and vector biting thresholds
A previously developed numerical stability analysis procedure, based on varying the initial value of L* for each of the SIR-selected model parameter sets or vectors, was used to calculate the distribution of mf prevalence breakpoints and threshold biting rates (TBR) expected in each study community [8, 11]. Briefly, in this procedure, we begin by progressively decreasing V/H from its original value to a threshold value below which the model always converges to zero mf prevalence, regardless of the value of the endemic infective larval density L*. The product of λ and this newly found V/H value is termed the threshold biting rate (TBR). Once the threshold biting rate is discovered, the model at TBR will settle to either a zero (trivial attractor) or a non-zero mf prevalence depending on the starting value of L*. Therefore, in the next step, while keeping all the model parameters unchanged, including the new V/H, and by starting with a very low value of L* and progressively increasing it in very small step-sizes, we estimate the minimum L* below which the model predicts zero mf prevalence and above which the system progresses to a positive endemic infection state. The corresponding mf prevalence at this new L* value is termed the worm breakpoint in this study [7].
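A schematic version of this two-stage search is sketched below; note that equilibrium_mf_prevalence here is a deliberately simple bistable toy surrogate (not the fitted LF model), used only so that the search logic is runnable, and that in the actual analysis the mf prevalence generated at the minimal L* found in the second stage is what is recorded as the worm breakpoint for each SIR parameter vector.

```python
import numpy as np

def equilibrium_mf_prevalence(VH, L3_init, VH_crit=0.02, L3_break_ref=0.05):
    """Toy bistable surrogate for the fitted model (illustration only): the mf %
    prevalence the system settles to, given vector/host ratio VH and a starting
    infective larval density L3_init."""
    if VH < VH_crit:                          # below the transmission threshold: always dies out
        return 0.0
    L3_break = L3_break_ref * VH_crit / VH    # unstable breakpoint shrinks as VH increases
    if L3_init <= L3_break:
        return 0.0                            # trivial (zero-infection) attractor
    return 100.0 * (1.0 - np.exp(-50.0 * (VH - VH_crit) / VH_crit))

def find_tbr_and_breakpoint(VH0, biting_rate, L3_endemic, step_frac=0.99, L3_step=1e-4):
    # Stage 1: shrink V/H until even the endemic L3 density can no longer sustain
    # transmission; keep the last V/H that still supports a positive state.
    VH = VH0
    while equilibrium_mf_prevalence(VH * step_frac, L3_endemic) > 0.0:
        VH *= step_frac
    tbr = biting_rate * VH                    # threshold biting rate

    # Stage 2: at that V/H, increase L3_init in small steps to find the minimum density
    # above which the system escapes the zero-infection state; the mf prevalence
    # corresponding to this minimal L3 is what is recorded as the worm breakpoint.
    L3 = L3_step
    while equilibrium_mf_prevalence(VH, L3) == 0.0:
        L3 += L3_step
    return tbr, L3

print(find_tbr_and_breakpoint(VH0=0.1, biting_rate=10.0, L3_endemic=1.0))
```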
Modeling intervention by mass drug administration
Intervention by MDA was modeled based on the assumption that anti-filarial treatment with a combination drug regimen acts, firstly, by killing certain fractions of the populations of adult worms and mf instantly following drug administration. These effects are incorporated into the basic model by calculating the drug-induced removal of worms and mf:
$$ \left.\begin{array}{l}W\left(a,t+dt\right)=\left(1-\omega C\right)W\left(a,t\right)\\ M\left(a,t+dt\right)=\left(1-\varepsilon C\right)M\left(a,t\right)\end{array}\right\}\kern1.25em \mathrm{at}\ \mathrm{time}\ t={T}_{MDA_i} $$
where dt is a short time period after the time point \( T_{MDA_i} \) at which the ith MDA was administered. The parameters ω and ε are the drug killing efficacy rates for the two life stages of the parasite, while the parameter C represents the MDA coverage. Apart from the instantaneous killing of mf, the drug is also thought, secondarily, to continue to kill the mf newly produced by any surviving adult worms for a period of time, P. We model this effect as follows:
$$ \frac{\partial M\left(a,t\right)}{\partial t}+\frac{\partial M\left(a,t\right)}{\partial a}=\left(1-\varepsilon C\right)\alpha \phi \left(W\left(a,t\right),k\right)W\left(a,t\right)-\gamma M\left(a,t\right),\kern1em \mathrm{for}\ {T}_{MDA_i}<t\le {T}_{MDA_i}+P $$
Simulating LF MDA interventions
We simulated the effects of MDA interventions by running the model with fixed values of the three drug-related parameters (ω, ε and P) for MDA coverage levels ranging from 40 % to 100 %. The values of worm and mf kill rates for the two drug regimens studied here, namely diethylcarbamazine/albendazole (DEC + ALB) and ivermectin/albendazole (IVM + ALB) (Table 1), were taken from [36]. The first MDA round is implemented in the model by applying the above equations to the model vectors obtained from the baseline fits describing the pre-control worm (W) and mf (M) loads in each site, and subsequent interventions are simulated as discrete repeated pulse events acting on parasite loads resulting from each sequentially applied MDA. We investigated the impact of MDA implemented annually on the cycles or rounds of annual treatment required to reduce mf % prevalence from baseline to below the individual mf breakpoint values estimated for each SIR model vector in each site.
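A minimal sketch of this pulsed-treatment loop is given below; the between-pulse dynamics (simulate_one_year), the mapping from mean mf intensity to prevalence, and the efficacy, coverage and breakpoint values are all invented toy stand-ins used only to make the round-counting logic runnable, and should not be read as the regimen-specific values or the age-structured dynamics used in this study.

```python
from dataclasses import dataclass

@dataclass
class State:
    W: float   # mean adult worm burden (toy scalar; age structure omitted)
    M: float   # mean mf intensity per 1 ml blood (toy scalar)

def simulate_one_year(state, regrow=1.35, carrying_W=8.0):
    """Toy between-pulse dynamics (not the fitted model): parasite loads partially
    rebound towards their pre-control level over the year."""
    state.W = min(state.W * regrow, carrying_W)
    state.M = 0.9 * state.M + 0.6 * state.W      # mf tracks the surviving adult worms
    return state

def mf_prevalence(state, k=0.3):
    """Toy mapping from mean mf intensity to % prevalence via a negative binomial form."""
    return 100.0 * (1.0 - (1.0 + state.M / k) ** (-k))

def rounds_to_elimination(state, mf_breakpoint, coverage, omega, epsilon, max_rounds=60):
    for year in range(1, max_rounds + 1):
        # Instantaneous effect of the i-th MDA pulse: W -> (1 - omega*C)W, M -> (1 - eps*C)M
        state.W *= (1.0 - omega * coverage)
        state.M *= (1.0 - epsilon * coverage)
        state = simulate_one_year(state)         # dynamics until the next annual pulse
        if mf_prevalence(state) < mf_breakpoint:
            return year
    return max_rounds                            # not interrupted within the horizon examined

# Placeholder efficacies, coverage and breakpoint (not the regimen-specific values of the paper)
print(rounds_to_elimination(State(W=8.0, M=12.0), mf_breakpoint=0.5,
                            coverage=0.8, omega=0.55, epsilon=0.95))
```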
Modeling vector control
We model supplemental vector control (VC) (i.e. the impact of long-lasting insecticidal nets (LLINs), of indoor residual spraying (IRS), or of the two applied in some combination) by assuming that population-level coverage of LLIN/IRS would reduce the vector biting rate to the same degree regardless of the mosquito genus present in a study site. Although the efficacies of VC methods can decay over time, for example due to wear and tear of the insecticidal bed nets used in households [25, 47, 48], we do not consider this possibility here and assume for simplification that the advocated replacements of nets as well as IRS re-sprays will take place during the simulation periods examined in this paper. A full exploration of the impacts of such decay effects will be presented elsewhere. The impact of VC in this work thus follows the modelling approach we used previously [36, 38], whereby we replace \( \frac{V}{H} \) in the worm equation by the term \( \left(1-{C}_V\right)\frac{V}{H} \), where \( C_V \) is the VC coverage in terms of the fraction of households using LLIN/IRS in an LF endemic setting.
Model sensitivity to local conditions and feasibility of macroscopic predictions
In this exercise, we considered whether the microscopic sensitivity of LF models to local conditions may nonetheless allow general predictions of the impact of interventions at the macroscopic scale. We address this here by pooling firstly the parameter vectors from the BM fits to baseline mf age-prevalence data from each study site to create two superensembles of parameter sets: one set of parameter vectors representing the transmission dynamics across the anopheline settings in our dataset (i.e. combining the SIR vectors obtained from the five PNG and five African anopheline study sites (Table 1)); and the other for the culicine settings (containing the SIR parameter vectors from the three African and five Southeast Asian culicine sites). For each superensemble, we then ran the respective vector-specific model for the full set of ABR values (ranging from 1,500 to 230,000 bites/person/year) observed across the 18 sites, and used the resulting mf infection curves to calculate the corresponding superensemble model ABR- and TBR-associated mf % breakpoints. Only mf breakpoint values denoting a 95 % elimination probability were estimated (see below), and used as target thresholds in the intervention simulations carried out using these models.
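The pooling step can be sketched as below; the power-law breakpoint_for function and the simulated site-specific parameter sets are hypothetical stand-ins for the stability analysis and the SIR ensembles, and the 95 % elimination-probability target is taken as the 5th percentile of the pooled breakpoints, following the exceedance calculation described in the Results below.

```python
import numpy as np

rng = np.random.default_rng(3)

def build_superensemble(site_parameter_sets):
    """Pool the SIR-selected parameter vectors from all sites sharing a vector genus."""
    return np.vstack(site_parameter_sets)

def breakpoint_for(theta, abr):
    """Toy stand-in for the stability analysis: mf % breakpoint for parameter vector
    theta at annual biting rate abr (here simply a power-law of ABR)."""
    a, b = theta
    return a * abr ** b

# Hypothetical site-specific SIR ensembles (500 vectors each) for five sites
site_sets = [np.column_stack([rng.uniform(10, 30, 500), rng.uniform(-0.6, -0.4, 500)])
             for _ in range(5)]
superensemble = build_superensemble(site_sets)

# 95 % elimination-probability breakpoint targets across a grid of ABR values
abr_grid = [1500, 10000, 50000, 230000]
targets = {abr: float(np.quantile([breakpoint_for(th, abr) for th in superensemble], 0.05))
           for abr in abr_grid}
print(targets)
```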
Model fits to baseline age-prevalence data
The fits generated by the culicine and anopheline LF models (red curves) to the respective baseline mf prevalences in different age-groups (blue squares representing the means, with lines denoting the corresponding 95 % binomial confidence intervals) from each of the 18 study sites used in this study are shown in Fig. 1. All mf prevalence values were standardized to reflect sampling of 1 ml blood volumes using a transformation factor of 1.95 and 1.15, respectively, for values originally estimated using 20 or 100 μl blood volumes [49]. Observed values and the transformed age-profiles of mf infection showed significant differences between the study sites (Table 1; binomial generalized additive model (GAM) testing for significance of the interaction between study site and mf age-prevalence patterns [50]: χ² = 2734, df = 165, p < 0.001), consistent with our previous findings that site-specific socioecologic conditions govern LF transmission patterns in the field [7, 8, 11]. The results also show that the BM-based data-model assimilation procedure is capable of reproducing the age-stratified mf prevalences consistent with the observed data in each of the study communities (overall Monte Carlo p values >0.9 in each case (Additional file 1: Table S2)), although as expected the fits to mf age-prevalences are comparatively better for the study villages with the lowest variability in this infection measure (Fig. 1).
Observed and fitted microfilarial age-prevalences of lymphatic filariasis (LF) for each study site. The SIR BM model fits (red lines) to observed baseline mf prevalences in different age-groups (blue circles with binomial error-bars) from the 18 study sites investigated in this work are shown; the filled circles display the data for the culicine communities, while the open circles denote data for the anopheline communities. The age-groups are represented by the mid-point of the groups studied in each community. The study sites and details of survey data are described in Table 1. All mf prevalence values were standardized to reflect sampling of 1 ml blood volumes using a transformation factor of 1.95 and 1.15, respectively, for values originally estimated using 20 or 100 μl blood volumes [49]
Parameter values
Table 2 shows the results of a univariate Kolmogorov–Smirnov (KS) two-sample test applied to the values of the prior and posterior distributions of each model parameter estimated using the Bayesian ensemble-based data-model assimilation procedure. The results show that while most of the LF model parameters exhibited variable change from their initially assigned values, only a few parameters, pertaining to the exposure- (ψ1, ψ2, HLin), immunity- (c, IC, SC) and community structure-related (captured indirectly by the infection aggregation parameters, e.g. kLin) determinants of parasite transmission, were consistently constrained by the site-specific data. Overall, there were also more parameters that differed from their prior values when compared across all study villages in the culicine compared to the anopheline setting (Table 2). Intriguingly, while parameters related to immunosuppression (IC, SC) were thus constrained in the villages exposed to Anopheles vectors, for culicine villages, by contrast, the immunity parameter most consistently constrained by site-specific data was the one associated with the strength of acquired immunity (c).
Table 2 Posterior changes in model parameters. Parameters whose posteriors significantly differed from their priors across all the anopheline (An) and culicine (Cx) villages are identified by the Kolmogorov–Smirnov two-sample test. The null hypothesis (H) is that priors and posteriors have the same underlying distribution. The keys are: 1, reject the null H at the 5 % significance level; and 0, do not reject the null H. Note that the parameters kLin, ψ1, ψ2, HLin, IC and SC differed from their priors across all ten or nine anopheline study villages. In the remaining culicine study villages, the parameters that differed from their priors across all eight (or seven) villages were κ, r, ψ1, ψ2, c and HLin
We used classification tree analysis next to determine which parameters differed significantly between the study communities, and therefore might underlie the between-study heterogeneity observed in the mf age-prevalence data. The fitted trees stratified by vector species are depicted in Fig. 2, and indicate that the between-site variation in LF infection age-patterns observed across the present study communities depended only on a few "stiff" combinations of parameters, again primarily those reflecting the differential exposure, degree of community infection aggregation and worm fecundity variables in both vector systems. This finding highlights that the majority of the LF model parameters may be deemed to be "sloppy" or insensitive to locally varying environmental conditions, and support recent work in systems biology suggesting that such neutral regions in multiparameter space may be a ubiquitous feature of complex systems biology models [33, 51–53].
Classification tree analysis to identify model parameters that differed significantly between the present study sites. (a) Anopheles mosquitoes and (b) Culex mosquitoes. The fitted trees, stratified by mosquito species, indicate that local between-site variation in the LF infection age-patterns observed between the present study sites depended only on a few "stiff" combinations of parameters. These parameters are the HLin, a threshold value used to adjust the rate at which individuals of age a are bitten, worm establishment rate (ψ2), degree of community infection aggregation (k) and worm fecundity rate (α) in both culicine (Cx) and anopheline (An) systems, and additionally the term, r, related to mf uptake by mosquitoes in the anopheline system. The classification trees were fitted using the rpart package in R
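For readers wishing to reproduce this step, a minimal analogue of the rpart fit mentioned in the caption above is sketched here in Python using scikit-learn's DecisionTreeClassifier; the data frame of parameter vectors and the site labels are hypothetical stand-ins for the pooled SIR ensembles, although the feature names mirror the parameters identified in the text.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical stand-in for the pooled SIR parameter vectors, labelled by study site
n = 1500
params = pd.DataFrame({
    "HLin":  rng.uniform(5, 15, n),     # age-dependent exposure threshold
    "psi2":  rng.uniform(0.0, 0.1, n),  # worm establishment rate
    "k":     rng.uniform(0.0, 1.0, n),  # infection aggregation
    "alpha": rng.uniform(0.0, 1.0, n),  # worm fecundity rate
    "r":     rng.uniform(0.0, 1.0, n),  # mf uptake by mosquitoes
})
site = rng.choice(["Site_A", "Site_B", "Site_C"], n)   # hypothetical site labels

# Fit a classification tree predicting site membership from the parameter values;
# parameters appearing near the root discriminate most strongly between sites.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50, random_state=0)
tree.fit(params, site)
print(export_text(tree, feature_names=list(params.columns)))
```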
Threshold values and probability of LF extinction
We used the SIR-selected ensemble of parameter sets to calculate the distributions of infection breakpoints (in terms of mf %) and the vector-to-human transmission thresholds (the TBR) expected in each of our study sites. Mf breakpoints were furthermore estimated at both the prevailing annual biting rate (ABR) in a community as well as at the TBR value. An illustrative example, showing results from the numerical stability analysis carried out using the set of SIR parameter vectors obtained from model fits to the Peneng dataset for estimating mf % breakpoints at their TBR values, is shown in Additional file 1: Figure S1. The likely existence of a distribution of system breakpoint thresholds rather than a single breakpoint in a site, implied by the results shown in Additional file 1: Figure S1, also means that the probability of LF elimination or extinction will vary across the range of values of each threshold [54, 55]. Here, we use the cumulative distribution function (CDF) of the estimated threshold values, in conjunction with exceedance calculations [56], to quantify three mf % breakpoint threshold values denoting elimination probabilities of 50 %, 75 % and 95 % in each site, in order to investigate the management trade-offs involved in their choice as intervention targets in LF elimination programs (see Additional file 1: Figure S2 for plots of the CDFs and mf % cutoffs representing these elimination probabilities in each study site).
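A minimal sketch of this exceedance calculation is given below; the lognormal sample of breakpoints is illustrative only, and the key assumption made explicit here is that a breakpoint target with elimination probability EP is the (1 − EP) quantile of a site's breakpoint distribution, i.e. the mf % value that the stated fraction of the fitted parameter vectors' breakpoints exceed.

```python
import numpy as np

def ep_thresholds(breakpoints, eps=(0.50, 0.75, 0.95)):
    """For each elimination probability (EP), return the mf % value that the given
    fraction of the ensemble's breakpoints exceed; driving prevalence below this
    value then breaks transmission in that fraction of the fitted model vectors."""
    bp = np.sort(np.asarray(breakpoints))
    # Exceedance: P(breakpoint >= x) = EP  <=>  x is the (1 - EP) quantile of the distribution
    return {ep: float(np.quantile(bp, 1.0 - ep)) for ep in eps}

# Illustrative (hypothetical) ensemble of mf % breakpoints for one site
rng = np.random.default_rng(1)
breakpoints = rng.lognormal(mean=np.log(0.6), sigma=0.5, size=500)
print(ep_thresholds(breakpoints))   # roughly {0.5: ~0.6, 0.75: ~0.43, 0.95: ~0.26}
```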
Table 3 provides the actual numerical mf % breakpoint values signifying these probabilities at both the ABR and TBR vector transmission thresholds, and demonstrates that wide variation in their values may occur between the present study sites. Additional file 1: Table S3 presents the results of the respective binomial generalized linear model, or one-way ANOVA and Wilcoxon signed-rank tests applied to these data; these statistically support the impression from Table 1 that there existed both a significant vector species-related difference in the estimated values of these thresholds, with generally higher values found in the anopheline settings, as well as significant site-specific variation in the values of these thresholds within both the anopheline and culicine LF transmission endemic settings. The results further show that mf breakpoint values in a site are also highly dependent on the associated probability of extinction they represent, with values decreasing markedly with increasing probabilities of extinction. Figure 3, however, indicates that while the mf breakpoint values estimated at either TBR or baseline ABR are variable between the study sites, these values nonetheless may exhibit functional relationships with the baseline study ABR, with the estimated mf thresholds declining on average in a power-law fashion with increasing site-specific intensities of the host infection system input (ABR) variables in both the anopheline and culicine cases.
Table 3 Model-estimated worm breakpoint values for achieving the successful interruption of LF transmission in each of the study sites investigated. Breakpoints are listed in terms of % mf prevalence at three probabilities of elimination for two situations: 1) at the prevailing vector biting rates (i.e. at the observed ABRs); and 2) at the threshold biting rate (TBR) at or below which LF transmission process cannot sustain itself regardless of the level of the infection in human hosts (see text). The first set of the threshold values (at study-specific ABR) is used in modeling the impact of mass drug administration (MDA) alone, while the second set (mf breakpoint values estimated at TBR) is applied for modeling the impact when MDA is supplemented by vector control (VC)
Mf breakpoints as a function of baseline community annual biting rate (ABR) and microfilaria (mf) prevalence. The mf breakpoints estimated in each site are shown as average values with 95 % CIs, calculated as the 2.5th and 97.5th percentiles of the breakpoint distribution in each site, and are plotted against the observed ABRs in each site; filled and open circles, respectively, represent values for the culicine and anopheline settings. The data in (a, b) and (c, d), respectively, represent the mf breakpoints estimated at the observed site-specific ABRs and at the corresponding estimated threshold biting rates (TBRs). Both types of mf breakpoints were negatively correlated with ABR, with the fitted dashed lines indicating that overall these data follow a power-law function, f(x) = ax^b, with x representing the biting rate values on the x-axis and f(x) the mf breakpoints on the y-axis. The term a is a constant while b is the power-law exponent, with fitted values of (a, b) as follows: (a) (20.54, −0.5112); (b) (1.335, −0.2184); (c) (54.25, −0.3498); and (d) (4.251, −0.104). All four associated p values were <0.01. The sets of mf breakpoints plotted in each graph were calculated using the best-fitting parameter vectors obtained from model fits to the baseline mf age-profile of each study site. In the plots, individual sites are indicated by their first two letters, except for "Mao" in the culicine settings, in order to distinguish it from "Ma" used for "Mambrui". Inset plots are provided to clarify the variations in the breakpoint values estimated for sites with approximately the same baseline ABR values, which were obscured in the respective main plots.
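The power-law relationship reported in the caption above can be fitted with a few lines of Python using scipy's curve_fit; in this sketch the ABR values and mf breakpoints are synthetic stand-ins generated from the (a, b) values quoted for panel (a), purely to show that the fitting step recovers them.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * np.power(x, b)

# Synthetic stand-in data: site ABRs (bites/person/year) and mean mf % breakpoints,
# generated from the (a, b) values quoted for panel (a) of the figure above
rng = np.random.default_rng(2)
abr = np.array([1500, 5000, 12000, 30000, 80000, 230000], dtype=float)
mf_breakpoint = power_law(abr, 20.54, -0.5112) * rng.lognormal(0.0, 0.1, abr.size)

(a_hat, b_hat), _ = curve_fit(power_law, abr, mf_breakpoint, p0=(10.0, -0.5))
print(a_hat, b_hat)    # should lie close to the generating values (20.54, -0.5112)
```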
Impact of local transmission dynamics and breakpoints on elimination of LF
We used the locally calibrated LF models together with their corresponding site-specific mf % breakpoints to simulate the impact that locally variable LF transmission dynamics may have on the expected timelines (in the form of number of rounds of annual MDAs required) for achieving parasite extinction in each site due to the application of the two major control strategies currently proposed for eliminating LF, namely MDA alone and MDA supplemented with vector control. The analysis was carried out by subjecting each of the 500 SIR-resampled parameter sets estimated from a site to the drug regimen (i.e. either DEC + ALB or IVM + ALB) recommended for use in that setting, and assessing the number of annual cycles of MDA which would be required for all the ensemble model vectors to cross below their respective mf % breakpoint thresholds signifying 50 %, 75 % and 95 % probabilities of LF elimination (EP). Mf % breakpoint thresholds at ABR were used as targets when modelling the impact of MDA alone (Table 3), whereas breakpoint prevalence values at TBR were used when modelling the impact of including VC, as reducing the vector population will push the system towards the TBR breakpoint and hence raise mf breakpoints to their maximal values (see Additional file 1: Figure S1 and S3).
Figure 4 shows the annual MDA cycles (the boxes indicating the mean and variance in the rounds) required to cross below the site-specific 95 % EP mf % thresholds quantified for a selection of our anopheline and culicine study sites (with results for the rest of the sites given in Additional file 1: Figure S4 and S5). Results are illustrated for a range of drug coverages (from 40 % to 100 %) and with and without inclusion of VC. These indicate firstly that while in general the number of years of annual MDA rounds required to achieve parasite elimination will decline with increasing drug coverage, the actual MDAs required at any given drug coverage will vary significantly between sites (Fig. 4, Additional file 1: Figure S4 and S5, Additional file 1: Table S4). Inclusion of VC, however, will not only strikingly reduce the numbers of annual MDAs needed (in some cases from decades of treatment to more feasible MDA durations (less than 10 years in general even for a drug coverage as high as 80 %)), but it will also, interestingly, reduce the variance in treatment rounds required compared to when using MDA alone (Fig. 4, Additional file 1: Figure S4 and S5).
Variability in the impact of annual mass drug administration (MDA) and combined MDA plus vector control (VC) on intervention rounds in years required to eliminate LF in different endemic communities (results shown for selected study sites). The required annual MDA rounds without and with VC as a function of drug coverage (from 40 % to 100 %) are shown as box plots, with the solid horizontal line depicting the means. Supplemental use of vector control (VC) was modelled at 80 % coverage. The results are shown for mf breakpoint threshold values representing a 95 % elimination probability (see Table 3). The results for the remaining study sites are shown in Additional file 1: Figure S4 and S5. These results are from the model simulations carried out for both LF intervention scenarios using the site-specific parameter vectors that best-fitted baseline age-prevalence infection in each site (compare with Fig. 1)
Figure 5 plots and compares the duration in years of annual MDA alone (at 80 % coverage) versus annual MDA plus vector control (both administered at 80 % coverage) required to eliminate LF in relation to both the mf breakpoint value (at the 95 % EP) and the baseline mf prevalence prevailing in the current anopheline and culicine study sites. The results indicate that the duration of interventions needed to break LF transmission in a site is a complex outcome of both the elimination threshold value and baseline infection prevalence, which may intriguingly also depend on the associated transmitting vector species. Thus, while at low-moderate locality baseline mf prevalence levels striking between-site variation may occur in the durations of the two LF interventions investigated here needed for achieving parasite elimination, as baseline mf prevalence increases in a site the durations of these interventions will increase significantly. However, this outcome appears less well demonstrated for the culicine compared to the anopheline sites investigated in this study (Fig. 5). While this may reflect an artefact of the smaller culicine study set used in this study, it is notable that culicines in general appear to be less efficient than anophelines in transmitting LF infection [39, 57], with lower levels of endemic mf prevalence produced at comparable community ABR values in culicine than in anopheline settings (Table 1; [57]). This constraining of endemic infection prevalence could in turn restrict the range of breakpoint values in culicine settings, leading to a narrower range in the durations of interventions estimated for our culicine study sites compared to those obtained for anopheline sites. On the other hand, the higher endemic infection prevalences produced in the anopheline sites as ABR increases, combined with the declining mf breakpoints at higher ABR values (Fig. 3), would increase the intensity and durations of interventions required to eliminate LF from such settings.
Mean rounds of annual MDAs in years predicted for achieving LF elimination as a joint function of the community-level baseline mf prevalence and breakpoint thresholds at 95 % EP. (a) MDA alone and (b) MDA + VC. Blue symbols, culicine sites; tan symbols, anopheline sites. EP, elimination probability; MDA, mass drug administration; VC, vector control
Figure 6 tabulates these outcomes for all study sites, and highlights the two major impacts on LF interventions arising from variations in intervention coverage and choice of EP threshold targets: 1) that durations of LF interventions for achieving transmission elimination in either vector setting and for each type of intervention will decrease with increasing intervention coverage; and 2) that they will increase significantly with the use of breakpoints signifying higher elimination probabilities. The latter finding illustrates the management trade-offs connected with the choice of EPs; i.e. that choosing a higher level of confidence for ensuring the meeting of transmission interruption or elimination (e.g. choosing a breakpoint value signifying a 95 % probability of elimination) will invariably lead to the need for implementing longer durations (and hence higher cost) of control regardless of MDA coverage and whether VC is included or not, compared to choosing a threshold with lower EP (say, 50 %). However, an important finding is that including VC will, by reducing the duration of interventions needed, drastically lower this cost of switching from using a lower EP to a robustly higher EP in all the current study settings (Fig. 6).
Mean rounds of annual MDAs in years for achieving LF elimination in each study site. The left and right heat maps are, respectively, for the anopheline and culicine settings. Two intervention scenarios (namely, MDA alone and MDA + VC, with VC coverage at 80 %) were modeled using three mf breakpoint threshold values at 50 %, 75 % and 95 % elimination probabilities (see Table 3). The results are shown for three MDA coverages at 60 %, 80 % and 100 % for the MDA alone in the first three columns and for the MDA + VC strategy in the remaining three columns of both the left- and right-panel plots. The drug regimens and their respective efficacies (i.e. adult worm and mf killing rates and efficacious period) used in modeling these intervention scenarios are given in Table 1. The mean number of years of interventions were derived using model runs for each of the 18 study sites based on their site-specific best-fit parameter vectors. EP, elimination probability; MDA, mass drug administration; VC, vector control
Macroscopic predictions
The results of intervention predictions for each superensemble model are given in Fig. 7. These highlight, firstly, that a macroscopic vector-specific LF ensemble model comprising of best-fit parameter vectors from all relevant sites is able to capture and hence adequately predict the number of years of MDA required to achieve local LF elimination as a function of ABR. However, the results indicate that there is a major trade-off with this global ability as it comes with a cost in the variability of making the macroscopic predictions that varies dramatically between the two interventions. Thus, while the predictions are highly variable in the case of the MDA alone intervention (Fig. 7a and c), this variability is drastically reduced in the MDA plus vector control case (Fig. 7b and d). The superensemble predictions are interestingly also comparatively less variable, particularly for the combined intervention strategy in the case of the anopheline system compared to the culicine case (Fig. 7). Figure 8 compares the contributions of the site-specific parameter vectors within the global superensemble model to the parameter vectors that best describe the mf age-prevalence curves observed given local ABR values in each of our study sites from either the anopheline (Fig. 8a and b) or culicine (Fig. 8c and d) settings. The dashed lines in each plot represent the 95 % upper and lower confidence band of the mf age-prevalence curve in each site, while the solid lines denote predictions of the site-specific parameter vectors making up the anopheline and culicine LF superensemble models—colored according to locality (Fig. 8)—in each of these sites. The relative contributions of the site-specific parameter vectors comprising a superensemble to the ensemble model fit to each dataset from a site can be discerned and calculated from the proportion of mf age curves predicted using the site-specific parameter vectors that fall within the mf curve band within each site. This can be seen both from the overlapping of curves predicted from the site-specific vectors of the superensemble model to a site's observed age-prevalence curve (Fig. 8a and c), as well as the summary bar charts (Fig. 8b and d) below the age-pattern plots that show the calculated percentages of site-specific vectors from the superensemble that contributed to observed age-infection data in each site. The H values given above each bar group depict values of the Shannon index obtained by assessing the diversity of site-specific parameter vectors contributing to the superensemble predictions for a site. These formally indicate that site-specific parameters may play a greater role in superensemble model fits and hence ability to predict local infection dynamics in the case of anopheline compared to culicine filariasis (i.e. that anopheline transmission dynamics is comparatively less constrained by local ABR initial conditions). This comparative lesser local parameter constraining could consequently also underlie the lower variance observed in the superensemble predictions for this system (Fig. 7). However, despite the above results, for both vector systems, it is clear that using annual MDA alone will not allow meeting the goal of LF elimination using just the 6 years of annual treatment recommended by the WHO; in fact in sites with higher values of ABR, it will take up to >20 years (and dramatically beyond the year 2020 end date) to achieve this goal (Fig. 7a and c). 
Adding vector control to MDA, however, will not only drastically reduce the number of annual MDA rounds required, but, for sites with up to moderate ABR values, will also make it possible to achieve LF elimination with just six rounds of treatment (Fig. 7b and d).
Site-specific versus macroscopic superensemble predictions of the impact of LF interventions. The results from combining site-specific best-fit model parameters to develop and use vector-specific superensemble models for simulating the impact of LF intervention at 80 % MDA and VC coverages for the MDA alone and MDA + VC strategies are shown in (a, c) and (b, d), respectively. The solid curves represent the superensemble medians of annual MDA rounds required to reduce community-level mf prevalences below their respective infection breakpoint thresholds for achieving a 95 % probability of elimination, and are stratified as a function of community ABR (annual biting rate) values. Note that the x-axis is on a logarithmic scale. The dark and light grey regions, respectively, represent the 50 % (between the 25th and 75th percentiles) and 95 % (between the 2.5th and 97.5th percentiles) credible intervals (CIs) of the number of years of interventions predicted by the ensemble model to cross the respective 95 % elimination thresholds in each site. Circles (open, anopheline sites; filled, culicine sites) denote the median number of years of each intervention (at 80 % coverages) predicted by the respective best-fitting site-specific models to break LF transmission. The lower dashed line drawn at 6 years (i.e. the time period representing six annual MDA rounds) is to contrast the model-predicted MDA rounds required to achieve LF elimination with the WHO recommendation of applying six annual MDAs to achieve elimination of LF from all endemic settings in the world. The upper solid line drawn at 20 annual MDA cycles represents the target deadline for meeting the call for eliminating LF worldwide by 2020. The results for each site represent simulations of the impact of interventions mimicking a start year of 2000 (i.e. the year of WHO announcement of GPELF) and maintenance of MDA and VC coverages at 80 % throughout
Contribution of site-specific parameter vectors to predictions of the superensemble model. The simulation of mf age-prevalence curves at endemic equilibrium by the vector-specific LF regional superensemble model (see text), given the baseline ABR of each study site, is portrayed for each of five PNG anopheline (a, b) and five Southeast Asian culicine (c, d) study settings. The curves represent the sets of mf age-prevalence curves, individually color-coded, generated by the S (= 5) site-specific parameter vector sets comprising the respective regional model in each site. In each site, we count the number \( n_i \) of best-fit parameter vectors (belonging to the ith site-specific set of the superensemble) that are able to reproduce the observed mf age-prevalence in that site (i.e. fall within the 2.5th and 97.5th percentiles, shown by the dashed curves, of the site-specific mf age-prevalence data), in order to quantify the proportional contributions (i.e. \( n_i/N \), where \( N = \sum_i n_i \)) of the S individual members of the global model to each site-specific prediction. The Shannon index, \( H=-{\displaystyle {\sum}_{i=1}^S\left(\frac{n_i}{N} \ln \frac{n_i}{N}\right)} \), was used to measure the diversity in the superensemble parameter vectors resulting from the relative contributions of these S members to each regional prediction, with a higher diversity index denoting a greater contribution of site-specific parameter vectors arising from different study settings to the regional prediction of infection in a site. The bars in the grouped-bar plots in (b, d) depict the percentage contribution (i.e. \( \frac{n_i}{N}\times 100 \)) of each of the S site-specific parameter members to the regional ensemble model predictions of age-infection in each of the anopheline (b) and culicine (d) settings, with the values of the corresponding Shannon index (H) displayed overhead
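The proportional contributions and the Shannon index described in this caption reduce to a few lines of arithmetic. The sketch below (in Python, with hypothetical counts; it is not the authors' MATLAB or R code) illustrates the calculation of \( n_i/N \) and \( H \) for a single site.

```python
# Minimal sketch of the contribution and diversity calculations described above.
# `counts` holds hypothetical values of n_i: the number of curves from each of the
# S site-specific parameter sets that fall within one site's 95 % mf curve band.
import numpy as np

def contributions_and_shannon(counts):
    """Return percentage contributions (n_i / N * 100) and the Shannon index H."""
    n = np.asarray(counts, dtype=float)
    p = n / n.sum()                       # proportional contributions n_i / N
    nz = p > 0                            # treat 0 * ln(0) as 0
    H = -np.sum(p[nz] * np.log(p[nz]))
    return 100.0 * p, H

percent, H = contributions_and_shannon([40, 25, 20, 10, 5])   # hypothetical counts
print(percent, round(H, 3))
```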
Impact of ABR on transmission and extinction dynamics
Figure 9 shows the results of a recursive partitioning analysis [58] of changes in the individual site-specific mf % breakpoints from baseline through a sequence of states as ABR is progressively reduced by VC. The results point to a major outcome of using VC that may underlie the reduction in the variability of the MDA plus vector control predictions depicted in Fig. 7, namely that this reduction could primarily be due to a dissolution of the between-site heterogeneity in these breakpoints brought about by the VC-induced declines in the prevailing abundance of vectors. Indeed, the results show that, for both LF-vector combinations, at high (50 % and 70 %) levels of ABR reduction, the initially separable between-site breakpoint values converge until effectively only a single regime of unpartitionable breakpoints remains among the still infection-positive sites. This finding supports our previous conclusion [11] that ABR may represent the major factor bounding the local transmission and extinction dynamics of LF, and that including VC could effectively compress such widely differing, ABR-driven, locality-specific LF transmission regimes (here measured by site-specific mf breakpoint values) into a single regime if it can be applied at levels that lead to consistently large declines in the prevailing vector populations.
The impact of reducing ABR by VC on LF transmission regimes. The recursive partitioning of LF elimination regimes was obtained by carrying out a classification analysis, using the klaR package in R, on mf breakpoint values obtained at ABR values reduced from baseline by VC. The left-side panels (a to d) portray the results for the anopheline (An) superensemble, whereas the right-side panels (e to h) show results for the culicine (Cx) global model. The mf breakpoints depicted in each panel were calculated at the observed baseline ABR values (a(Obs) and e(Obs)) and at ABR values reduced per site as follows: 30 % reduction (b, f); 50 % (c, g); and 70 % (d, h). As the baseline ABR values in each site are reduced from 0 % (no reduction) to 30 %, the different regimes of breakpoints, signifying initially separable or partitionable site-specific values as indicated by the vertical lines, begin to shrink in range. Further reductions (of 50 % and 70 %) in the baseline ABRs lead to a collapse of these different regimes into a single regime at the 70 % reduction stage
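As a rough illustration of the regime-partitioning idea in this figure, the sketch below uses scikit-learn's decision tree regressor as a stand-in for the klaR-based analysis actually used in the study; the ABR and breakpoint values are hypothetical, and the number of tree leaves is read as the number of separable breakpoint regimes.

```python
# Illustrative stand-in (not the study's klaR analysis): recursively partition
# hypothetical site-specific mf breakpoint values by baseline ABR and count how
# many separable regimes (tree leaves) remain.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
abr = np.repeat([2000.0, 5000.0, 11000.0, 25000.0, 60000.0], 200)   # hypothetical ABRs
regime_mean = np.repeat([3.0, 2.1, 1.4, 0.9, 0.5], 200)             # hypothetical % mf breakpoints
breakpoints = regime_mean + rng.normal(0.0, 0.1, abr.size)

tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=50)
tree.fit(abr.reshape(-1, 1), breakpoints)
print("separable breakpoint regimes:", tree.get_n_leaves())
```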
The chief contributions of this modelling study of the dynamics of LF elimination, based on detailed parasitological and entomological field data, are twofold. First, we have advanced knowledge regarding the nature and the organizational features that underlie heterogeneous LF transmission across endemic localities, and the effects these have on infection- and vector-related elimination thresholds. The key result here most immediately relevant to global LF elimination is the finding that, as a result of parameter adjustment to local transmission environments, significant differences in parasite population dynamics, and in the resultant transmission and infection breakpoints, occurred between the 18 endemic villages investigated. Further, given our Monte Carlo ensemble-based data-modelling framework, which was designed to capture local uncertainty and variability in transmission parameters from site-specific data [8, 11, 31, 44, 59], we show that rather than being single estimates, both the infection-related and the vector abundance thresholds exist as a "cloud" or distribution of values within and between village sites, with each value related to a probability that parasite elimination will be achieved when it is crossed [56]. This has significant strategic implications, as it makes clear that a threshold value must be chosen from such distributions to serve as an endpoint or breakpoint target in management programs. As can be seen from Table 3, these threshold values can range from as high as 3 % mf prevalence (for worm or infection breakpoints) to as low as 0.0002 %, so such a choice ultimately revolves around how the risk of program failure is (implicitly or explicitly) perceived and accepted by the relevant policy makers; i.e. whether management or the decision maker is risk averse (and hence opts for a high confidence, e.g. a 95 % probability, of achieving elimination) or risk tolerant (and so accepts values signifying lower confidence of achieving elimination). It is instructive to note, in this regard, that the WHO currently promotes the use of a 1 % mf prevalence threshold to serve as the elimination target for MDA programs globally [60]; our results on mf prevalence breakpoint values (Table 3) indicate that such a target is likely to afford only a moderate level of confidence (at best an 80 % probability of elimination) that LF transmission will be interrupted when this value is used globally or invariantly as a metric to signify program success.
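One plausible way of reading probability-graded thresholds off a breakpoint "cloud" of the kind described above is to take percentiles of the distribution, so that crossing, say, the 5th percentile implies elimination for 95 % of the sampled parameter vectors. The sketch below illustrates this reading with simulated values; the lognormal distribution and the percentile construction are assumptions for illustration, not the study's actual procedure.

```python
# Hedged illustration: candidate mf thresholds at different elimination
# probabilities (EP) read off a hypothetical distribution of model breakpoints.
import numpy as np

rng = np.random.default_rng(1)
breakpoints = rng.lognormal(mean=-1.5, sigma=1.0, size=5000)   # hypothetical % mf breakpoints

for ep in (50, 75, 95):
    threshold = np.percentile(breakpoints, 100 - ep)           # crossing it => EP % of vectors eliminate
    print(f"{ep} % EP threshold: {threshold:.4f} % mf")
```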
The present work has provided intriguing new insights concerning the factors that may underlie LF transmission adaptation and response to both local environmental conditions and intervention-induced perturbations. An important finding is that local transmission adaptation appears to be governed by only a few biological parameters, with the majority of parameters remaining poorly constrained by local data. This feature, previously thought of primarily as an outcome of poor or absent parameter identifiability [33, 61], has recently been shown instead to be an intrinsic feature of complex multiparameter biological systems [34, 53, 62]; i.e. it is often not possible to identify or estimate values for many parameters of these systems even when detailed data are available [63]. This phenomenon, termed "parameter sloppiness", is attributed to the existence of a highly anisotropic structure in the parameter space, wherein the behaviour of these systems is insensitive to perturbations in the majority of the defining parameters while varying in response to changes in only a few "stiff" combinations of model parameters [34, 62]. Our results indicate that this system characteristic may also apply to the transmission dynamics of parasitic infections. However, they also highlight that while such "sloppy" parameter behaviour has the potential to make global LF transmission invariant or robust to many local permutations or changes in environmental conditions, including, as we have shown previously, to temporally varying follow-up infection data collected in response to interventions in a setting [11], this sloppiness may have evolved at the local level to withstand variations only across a relatively narrow range, or within thresholds, of environmental shocks (i.e. the LF system may be robust to changes in initial conditions within only a set of local constraint values [64]), with the local system commensurately susceptible or fragile to shocks outside these thresholds (but see below).
This behaviour of the LF system, particularly the robust (i.e. maintenance of transmission despite external and internal perturbations [32]) yet fragile (extreme sensitivity leading to transmission disruption following perturbations) duality of its transmission/extinction dynamics in relation to environmental variability in vector abundance, suggests that LF transmission may be an example of a highly optimized tolerance (HOT) system [65–67], the structure and operation of which have been the basis of new lines of enquiry and thinking regarding mechanisms that may govern the robustness and persistence of complex systems [32, 68–70]. Work on HOT architectures across various biological systems has shown that a key mechanism generating robustness is increasing complexity in the internal structure of a system, wherein many variables and feedback loops have been tuned to favor or accommodate small losses in system function/productivity in response to common events, at the expense of large losses when the system is subject to unexpected perturbations [66–68]. We show in Fig. 3 the likely operation of this mechanism in the case of LF transmission, whereby decreases in worm breakpoint values as a function of mosquito abundance follow power-law functions, rather than the comparatively faster decreases that would be expected if exponential relationships held between these states [71]. This result implies that the cost of maintaining the complex internal structure required to accommodate common disturbances in the LF system is the occurrence of relatively high worm breakpoint values; it also suggests that ABR values in a locality may govern the structural configuration of LF transmission to local conditions, and that inducing changes in ABR values outside the range normally experienced locally would provide an effective mechanism to significantly increase transmission fragility, and hence effect reliable disruption of infection.
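The contrast drawn above between power-law and exponential declines of worm breakpoints with vector abundance can be made concrete with a simple curve-fitting comparison. The sketch below uses pseudo-data generated from an assumed power law (the coefficients are arbitrary) and compares the squared error of power-law and exponential fits; it is an illustration of the distinction, not a reanalysis of the study's outputs.

```python
# Compare power-law and exponential descriptions of breakpoint decline with ABR
# on pseudo-data (hypothetical coefficients).
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * x ** b

def exponential(x, a, c):
    return a * np.exp(-c * x)

abr = np.linspace(500.0, 30000.0, 30)
breakpoint_pct = 50.0 * abr ** -0.6            # pseudo-data following a power law

p_pow, _ = curve_fit(power_law, abr, breakpoint_pct, p0=(50.0, -0.5))
p_exp, _ = curve_fit(exponential, abr, breakpoint_pct, p0=(1.0, 1e-4), maxfev=10000)

sse_pow = np.sum((breakpoint_pct - power_law(abr, *p_pow)) ** 2)
sse_exp = np.sum((breakpoint_pct - exponential(abr, *p_exp)) ** 2)
print(f"power-law SSE: {sse_pow:.3e}, exponential SSE: {sse_exp:.3e}")
```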
The assessments carried out in the second half of this study, which evaluated the impact that site-specific heterogeneity in transmission dynamics may have on the prospects for eliminating LF, have provided important first insights into how such mechanisms operate and may affect current options to interrupt LF transmission. Our chief finding in this regard is that the interplay between LF transmission organization and dynamics at the local level will significantly influence the durations of control required to break parasite transmission in a setting. We show specifically that control durations will vary from site to site as a result of complex interactions between local transmission intensity, efficiency, breakpoints, and robustness to environmental changes or perturbations, but also with respect to the type of intervention being applied as well as the transmitting vector genus. Thus, we found that while durations of interventions will vary significantly between our study sites, these durations will generally be longer and much more variable when using the MDA alone strategy (with years of interventions varying between 6 and 20 at 80 % drug coverage) than with the MDA plus vector control strategy (with years of interventions ranging between 2 and 13 at the same 80 % drug and vector control coverages (Fig. 6)). As we show in Fig. 9, this difference between the two interventions is largely a function of the transmission regime homogenization or convergence brought about by vector control, which, by reducing the robustness of LF transmission to changes in the variables that locally constrain its dynamics and by switching transmission dynamics into a narrower and more fragile regime (in terms of increased infection breakpoint values), can decrease both the length and the variance of the intervention durations required to disrupt parasite transmission. By contrast, the results imply that the higher variability and longer durations of interventions required when applying the annual MDA strategy alone are likely to be a function of the strong density-dependent negative feedback loops that govern LF transmission in endemic areas, such as those fostered by the limitation, acquired immunity and worm mating functions [7, 72], compensating to varying degrees for the worm-killing effects of drug treatments. These findings clearly indicate that gaining a better understanding of the interactions between the system structures that generate robustness and the specific perturbations being applied to a system will be crucial to identifying the informed, locally adaptive strategies required for achieving reliable disruption of parasite transmission in all endemic settings [70]. From this perspective, it is clear that reducing vector abundance in addition to killing worms using MDA, by significantly increasing the fragility of transmission, may be a better option than applying MDA alone for effectively eliminating LF transmission.
Another significant, unexpected, but intriguing finding from the intervention simulations carried out here is that, despite the lower estimates of infection breakpoints in the culicine study sites, the durations of interventions for these sites, irrespective of intervention type, are calculated to be within the range predicted for the anopheline settings at similar low to medium pre-control community vector biting rates; i.e. generally between 5 and 15 years (Fig. 5). Given that the generally lower mf breakpoint values estimated for the culicine study sites (Table 3) would have suggested the need for longer durations of interventions in these sites in comparison with the anopheline case, this finding indicates that factors other than breakpoint values may also play a role in governing the response of the LF system to interventions. Our results show that one factor underlying this paradox may be the robustness-performance trade-offs that govern the two LF systems. Thus, we show firstly that although transmission breakpoints are lower in the case of culicine LF, the performance or production efficiency of this system, in terms of the overall mf prevalence produced for the same ABR, is lower than that of the anopheline system [57]. This results in a smaller distance, or basin of attraction, between endemic infection levels and elimination thresholds in the culicine compared to the anopheline system [7], an outcome that could clearly offset the lengthening of intervention durations that the lower breakpoint values estimated for this system (Table 3) might otherwise have caused. Note that different assemblages of density-dependent mechanisms govern the differential levels of infection and breakpoint values generated in each system [7]: in the culicine case, strong negative density-dependent factors, such as the L3 limitation function and host acquired immunity, lower the endemic mf levels reached but also slow the approach to crossing the lower extinction thresholds (hence enhancing the stability of the endemic state), whereas in the anopheline case, strong positive density-dependent functions, such as L3 facilitation and host immunosuppression, lead to higher endemic mf prevalences but faster approaches to higher extinction thresholds over the same ABR ranges. Our finding of a strong vector specificity in the response of the parasite population to different LF control interventions therefore further supports our overall contention from this study that it is the complex interplay between the dynamics and the internal organizational structure underlying LF transmission (in terms of resource use, productivity and robustness) that will ultimately underlie the dynamics of LF elimination in an endemic setting [32, 68, 70, 73, 74].
The evaluations carried out in this study regarding the feasibility of developing and using superensemble models of LF transmission, based on pooling site-specific parameter vectors, to facilitate predictions of the impact of interventions at the macroscopic scale were predicated on the hypothesis that sloppiness in parameter values would indicate a weak dependence on microscopic details and thus allow effective macroscopic predictions. They were also motivated by growing work on multiparameter models from a range of fields, including physics and biology, which has underscored how such sloppiness in parameter values may be the key factor underlying the ability of mathematical models to predict complex phenomena at larger scales despite considerable microscopic uncertainty [34, 62]. We show here for the first time that such macroscopic superensemble models are indeed able to predict the number of years of LF interventions required to achieve LF elimination in different sites varying in baseline mf prevalence and ABR values. However, a major finding is that the ability of these global models to make reliable predictions is critically dependent both on the type of LF intervention being modelled and on the vector species mediating transmission in a locality (Fig. 7). Thus, while the results indicate how comparatively more reliable (lower variance) predictions of the effects of combined MDA and vector control are possible, owing to the pushing of the LF system into common dynamical regimes as a result of ABR reductions (discussed above), an unexpected finding was that intervention predictions using the constructed superensemble models were also more reliable for anopheline than for culicine LF. We suggest that this is largely due to the greater constraining of culicine dynamics to local settings; i.e. culicine model parameters may be relatively less sloppy than the anopheline parameters (Fig. 8). This implies that the robustness of the culicine system may be restricted to changes of initial conditions within a fixed local boundary of ABR values, whereas anopheline LF could also be robust to changes in these constraining values between sites. This difference in the type of robustness clearly makes it possible to undertake more reliable macroscopic modelling of anopheline LF transmission dynamics and control using the present superensemble modelling approach, and highlights how, apart from affecting the outcomes of interventions, the biological organizational architectures that govern transmission robustness may also govern the practical ability of models to make reliable macroscopic predictions of the effects of specific interventions. Note, however, that a trade-off is that such robustness may also reduce the capacity of anopheline LF for evolutionary and environmental adaptation relative to culicine LF [33, 70]. This is an important point because if times to genetic rescue become favourable in relation to those that would bring about population extinction as a result of LF interventions [36], then we predict that culicine systems would be more likely to evolve drug resistance (as a specific example of a mutational response to MDA) than anopheline LF.
The practical conclusion of this finding is clear: if drug resistance to LF MDA emerges, it will occur first in culicine areas, and thus management options to prevent such an eventuality (for example, combined MDA plus vector control [36, 75]), as well as surveillance for detecting mutational changes reflective of developing resistance, should be targeted in the first instance to these areas.
We have shown in this study for the first time how the multiple aspects that characterize biological robustness to a set of perturbations, and its expression in terms of system resource demands, productivity and structure, will not only lead to a better understanding of heterogeneous LF transmission dynamics and persistence but also help delineate and identify the set of external conditions and perturbations that would reliably increase system fragility and hence lead to a more predictable disruption of LF transmission. This is an important result and indicates how understanding the complex ecology of parasite transmission and persistence, rather than merely basing decisions on empirical field or clinical trial results, is central to the development of effective control or elimination strategies. We show in this regard, for example, how adding vector control to MDA may not only reliably increase system fragility and hence significantly reduce the number of years of interventions required to interrupt LF transmission (in many cases to within the WHO-recommended 6 years of intervention), but also, by reducing transmission regime variability, permit more reliable global predictions of control requirements. These findings imply that a change in thinking is now required concerning how parasite elimination programs are designed if we are to identify and apply better approaches to disrupting transmission. More specifically, they suggest that the use of robustness, including features of HOT mechanisms, as a design principle for investigating the nature of, and response to, assemblages of intervention options could provide a more effective framework and tool for uncovering options that would reliably and sustainably eliminate LF, and indeed other parasitic diseases, from all settings in the face of extant environmental heterogeneity and uncertainty, and possibly even of problems previously unencountered (e.g. the evolution of drug resistance by LF parasites). We suggest that adaptive modelling methods, such as the coupled data-modelling approach developed here, which allow the construction of robustness profiles of parasitic systems in response to environmental variations, may provide a first step in this process [74, 76, 77]. We also echo increasing calls for the assembly and release to modellers of LF intervention data from the many countries collecting these data as part of their LF program monitoring and evaluation activities, so that the predictions made in the present study can be verified and tested rigorously. Given the current pressing policy needs of the global LF elimination program, and indeed of other growing neglected tropical disease control programs, we suggest that this work be urgently initiated so that the goal of eliminating these major diseases of the global poor is more robustly supported.
ABR:
annual biting rate
ALB:
albendazole
BM:
Bayesian melding
CDF:
cumulative density function
data-model assimilation
DEC:
diethylcarbamazine citrate
EP:
elimination probability
GPELF:
global programme to eliminate lymphatic filariasis
HOT:
highly optimized tolerance
IRS:
indoor residual spray
IVM:
ivermectin
LF:
lymphatic filariasis
LLIN:
long-lasting insecticidal net
MBR:
monthly biting rate
MDA:
mass drug administration
SIR:
sample importance resampling
TBR:
threshold biting rate
Ottesen EA, Hooper PJ, Bradley M, Biswas G. The global programme to eliminate lymphatic filariasis: health impact after 8 years. PLoS Negl Trop Dis. 2008;2(10):e317.
Rebollo MP, Bockarie MJ. Toward the elimination of lymphatic filariasis by 2020: treatment update and impact assessment for the endgame. Expert Rev Anti Infect Ther. 2013;11(7):723–31.
Michael E, Bundy DA, Grenfell BT. Re-assessing the global prevalence and distribution of lymphatic filariasis. Parasitology. 1996;112(Pt 4):409–28.
Michael E, Bundy DA. Global mapping of lymphatic filariasis. Parasitol Today. 1997;13(12):472–6.
Slater H, Michael E. Predicting the current and future potential distributions of lymphatic filariasis in Africa using maximum entropy ecological niche modelling. PLoS One. 2012;7(2):e32202.
Slater H, Michael E. Mapping, Bayesian geostatistical analysis and spatial prediction of lymphatic filariasis prevalence in Africa. PLoS One. 2013;8(8):e71574.
Gambhir M, Michael E. Complex ecological dynamics and eradicability of the vector borne macroparasitic disease, lymphatic filariasis. PLoS One. 2008;3(8):e2874.
Gambhir M, Bockarie M, Tisch D, Kazura J, Remais J, Spear R, et al. Geographic and ecologic heterogeneity in elimination thresholds for the major vector-borne helminthic disease, lymphatic filariasis. BMC Biol. 2010;8:22.
Michael E, Gambhir M. Transmission models and management of lymphatic filariasis elimination. Adv Exp Med Biol. 2010;673:157–71.
Cushman S, Huettmann F. Spatial complexity, informatics, and wildlife conservation. Tokyo: Springer; 2010.
Singh BK, Bockarie MJ, Gambhir M, Siba PM, Tisch DJ, Kazura J, et al. Sequential modeling of the effects of mass drug treatments on Anopheline-mediated lymphatic filariasis infection in Papua New Guinea. PLoS One. 2013;8(6):e67004.
Rwegoshora RT, Pedersen EM, Mukoko DW, Meyrowitsch DW, Masese NN, Malecela-Lazaro MN, et al. Bancroftian filariasis: patterns of vector abundance and transmission in two East African communities with different levels of endemicity. Ann Trop Med Parasitol. 2005;99(3):253–65.
Pedersen EM. Vectors of lymphatic filariasis in Eastern and Southern Africa. In: Simonsen PE, Malecela MN, Michael E, Mackenzie CD, editors. Lymphatic filariasis: research and control in Eastern and Southern Africa. Copenhagen: Centre for Health Research and Development (DBL); 2008. p. 78–110.
Simonsen PE, Pedersen EM, Rwegoshora RT, Malecela MN, Derua YA, Magesa SM. Lymphatic filariasis control in Tanzania: effect of repeated mass drug administration with ivermectin and albendazole on infection and transmission. PLoS Negl Trop Dis. 2010;4(6):e696.
Mboera LE, Senkoro KP, Mayala BK, Rumisha SF, Rwegoshora RT, Mlozi MR, et al. Spatio-temporal variation in malaria transmission intensity in five agro-ecosystems in Mvomero district, Tanzania. Geospat Health. 2010;4(2):167–78.
McMahon JE, Magayauka SA, Kolstrup N, Mosha FW, Bushrod FM, Abaru DE, et al. Studies on the transmission and prevalence of Bancroftian filariasis in four coastal villages of Tanzania. Ann Trop Med Parasitol. 1981;75(4):415–31.
Michael E, Gambhir M. Vector transmission heterogeneity and the population dynamics and control of lymphatic filariasis. Adv Exp Med Biol. 2010;673:13–31.
Holling CS, Meffe GK. Command and control and the pathology of natural resource management. Conserv Biol. 1996;10:328–37.
Folke C, Carpenter S, Walker B, Scheffer M, Elmqvist T, Gunderson L, et al. Regime shifts, resilience, and biodiversity in ecosystem management. Annu Rev Ecol Evol Syst. 2004;35:557–81.
Esterre P, Plichart C, Sechan Y, Nguyen NL. The impact of 34 years of massive DEC chemotherapy on Wuchereria bancrofti infection and transmission: the Maupiti cohort. Trop Med Int Health. 2001;6(3):190–5.
Sunish I, Rajendran R, Mani T, Munirathinam A, Tewari S, Hiriyan J, et al. Resurgence in filarial transmission after withdrawal of mass drug administration and the relationship between antigenaemia and microfilaraemia–a longitudinal study. Trop Med Int Health. 2002;7(1):59–69.
Liang S, Seto EY, Remais JV, Zhong B, Yang C, Hubbard A, et al. Environmental effects on parasitic disease transmission exemplified by schistosomiasis in western China. Proc Natl Acad Sci U S A. 2007;104(17):7110–5.
Pedersen EM, Stolk WA, Laney SJ, Michael E. The role of monitoring mosquito infection in the Global Programme to Eliminate Lymphatic Filariasis. Trends Parasitol. 2009;25(7):319–27.
Filipe JA, Boussinesq M, Renz A, Collins RC, Vivas-Martinez S, Grillet ME, et al. Human infection patterns and heterogeneous exposure in river blindness. Proc Natl Acad Sci U S A. 2005;102(42):15265–70.
Griffin JT, Hollingsworth TD, Okell LC, Churcher TS, White M, Hinsley W, et al. Reducing Plasmodium falciparum malaria transmission in Africa: a model-based evaluation of intervention strategies. PLoS Med. 2010;7(8):e1000324.
Bejon P, Williams TN, Liljander A, Noor AM, Wambua J, Ogada E, et al. Stable and unstable malaria hotspots in longitudinal cohort studies in Kenya. PLoS Med. 2010;7(7):e1000304.
Bousema T, Griffin JT, Sauerwein RW, Smith DL, Churcher TS, Takken W, et al. Hitting hotspots: spatial targeting of malaria for control and elimination. PLoS Med. 2012;9(1):e1001165.
Midega JT, Smith DL, Olotu A, Mwangangi JM, Nzovu JG, Wambua J, et al. Wind direction and proximity to larval sites determines malaria risk in Kilifi District in Kenya. Nat Commun. 2012;3:674.
Wagner A. The origins of evolutionary innovations: a theory of transformative change in living systems. New York, NY: Oxford University Press; 2011.
Poole D, Raftery AE. Inference for deterministic simulation models: the Bayesian melding approach. J Am Stat Assoc. 2000;95(452):1244–55.
Spear RC, Hubbard A, Liang S, Seto E. Disease transmission models for public health decision making: toward an approach for designing intervention strategies for Schistosomiasis japonica. Environ Health Perspect. 2002;110(9):907–15.
Kitano H. Biological robustness. Nat Rev Genet. 2004;5(11):826.
Daniels BC, Chen YJ, Sethna JP, Gutenkunst RN, Myers CR. Sloppiness, robustness, and evolvability in systems biology. Curr Opin Biotechnol. 2008;19(4):389–95.
Machta BB, Chachra R, Transtrum MK, Sethna JP. Parameter space compression underlies emergent theories and predictive models. Science. 2013;342(6158):604–7.
Michael E, Malecela-Lazaro MN, Kabali C, Snow LC, Kazura JW. Mathematical models and lymphatic filariasis control: endpoints and optimal interventions. Trends Parasitol. 2006;22(5):226–33.
Michael E, Malecela-Lazaro MN, Simonsen PE, Pedersen EM, Barker G, Kumar A, et al. Mathematical modelling and the control of lymphatic filariasis. Lancet Infect Dis. 2004;4(4):223–34.
Chan MS, Srividya A, Norman RA, Pani SP, Ramaiah KD, Vanamail P, et al. Epifil: a dynamic model of infection and disease in lymphatic filariasis. Am J Trop Med Hyg. 1998;59(4):606–14.
Norman RA, Chan MS, Srividya A, Pani SP, Ramaiah KD, Vanamail P, et al. EPIFIL: the development of an age-structured model for describing the transmission dynamics and control of lymphatic filariasis. Epidemiol Infect. 2000;124(3):529–41.
Southgate BA, Bryan JH. Factors affecting transmission of Wuchereria bancrofti by anopheline mosquitoes. 4. Facilitation, limitation, proportionality and their epidemiological significance. Trans R Soc Trop Med Hyg. 1992;86(5):523–30.
Pichon G. Limitation and facilitation in the vectors and other aspects of the dynamics of filarial transmission: the need for vector control against Anopheles-transmitted filariasis. Ann Trop Med Parasitol. 2002;96(2):143–52.
Snow LC, Michael E. Transmission dynamics of lymphatic filariasis: density-dependence in the uptake of Wuchereria bancrofti microfilariae by vector mosquitoes. Med Vet Entomol. 2002;16(4):409–23.
Snow LC, Bockarie MJ, Michael E. Transmission dynamics of lymphatic filariasis: vector-specific density dependence in the development of Wuchereria bancrofti infective larvae in mosquitoes. Med Vet Entomol. 2006;20(3):261–72.
Michael E, Simonsen P, Malecela M, Jaoko W, Pedersen E, Mukoko D, et al. Transmission intensity and the immunoepidemiology of bancroftian filariasis in East Africa. Parasite Immunol. 2001;23(7):373–88.
Spear RC, Hubbard A. Parameter estimation and site-specific calibration of disease transmission models. Adv Exp Med Biol. 2010;673:99–111.
Raftery AE, Givens GH, Zeh JE. Inference from a deterministic population dynamics model for bowhead whales. J Am Stat Assoc. 1995;90:402–16.
Sevcíková H, Raftery AE, Waddell PA. Assessing uncertainty in urban simulations using Bayesian melding. Transp Res B. 2007;41(6):652.
White MT, Griffin JT, Churcher TS, Ferguson NM, Basanez MG, Ghani AC. Modelling the impact of vector control interventions on Anopheles gambiae population dynamics. Parasit Vectors. 2011;4:153.
Okumu FO, Moore SJ. Combining indoor residual spraying and insecticide-treated nets for malaria control in Africa: a review of possible outcomes and an outline of suggestions for the future. Malar J. 2011;10(1):208.
Michael E, Malecela MN, Zervos M, Kazura JW. Global eradication of lymphatic filariasis: the value of chronic disease control in parasite elimination programmes. PLoS One. 2008;3(8):e2936.
Wood S. Generalized additive models: an introduction with R. Boca Raton, FL: Chapman & Hall/CRC Press; 2006.
Brown K, Sethna J. Statistical mechanical approaches to models with many poorly known parameters. Phys Rev E. 2003;68(2):021904.
Waterfall JJ, Casey FP, Gutenkunst RN, Brown KS, Myers CR, Brouwer PW, et al. Sloppy-model universality class and the Vandermonde matrix. Phys Rev Lett. 2006;97(15):150601.
Gutenkunst RN, Waterfall JJ, Casey FP, Brown KS, Myers CR, Sethna JP. Universally sloppy parameter sensitivities in systems biology models. PLoS Comput Biol. 2007;3(10):e189.
May RM. Stability and complexity in model ecosystems. Princeton, NJ: Princeton University Press; 1973.
Wang Y, Gutierrez A. An assessment of the use of stability analyses in population ecology. J Anim Ecol. 1980;49:435–52.
Reimer LJ, Thomsen EK, Tisch DJ, Henry-Halldin CN, Zimmerman PA, Baea ME, et al. Insecticidal bed nets and filariasis transmission in Papua New Guinea. N Engl J Med. 2013;369(8):745–53.
Michael E, Bundy DA. Herd immunity to filarial infection is a function of vector biting rate. Proc R Soc Lond B Bio. 1998;265(1399):855–60.
Weihs C, Ligges U, Luebke K, Raabe N. klaR: analyzing German business cycles. In: Baier D, Decker R, Schmidt-Thieme L, editors. Data analysis and decision support. Berlin: Springer; 2005. p. 335–43.
Spear RC, Bois FY. Parameter variability and the interpretation of physiologically based pharmacokinetic modeling results. Environ Health Perspect. 1994;102 Suppl 11:61–6.
World Health Organization (WHO). World Health Organization Global Programme to Eliminate Lymphatic Filariasis: monitoring and epidemiological assessment mass drug administration. Geneva: WHO; 2011.
Hengl S, Kreutz C, Timmer J, Maiwald T. Data-based identifiability analysis of non-linear dynamical models. Bioinformatics. 2007;23(19):2612–8.
Transtrum MK, Machta BB, Brown KS, Daniels BC, Myers CR, Sethna JP. Perspective: sloppiness and emergent theories in physics, biology, and beyond. J Chem Phys. 2015;143(1):010901.
Fengos G, Iber D. Prediction stability in a data-based, mechanistic model of σF regulation during sporulation in Bacillus subtilis. Sci Rep. 2013;3:2755.
Gunawardena J. Models in systems biology: the parameter problem and the meanings of robustness. In: Lodhi HM, Muggleton SH, editors. Elements of computational systems biology. Hoboken, NJ: Wiley; 2010. p. 19–47.
Carlson JM, Doyle J. Highly optimized tolerance: a mechanism for power laws in designed systems. Phys Rev E. 1999;60(2):1412–27.
Carlson JM, Doyle J. Highly optimized tolerance: robustness and design in complex systems. Phys Rev Lett. 2000;84(11):2529.
Carlson JM, Doyle J. Complexity and robustness. Proc Natl Acad Sci U S A. 2002;99 Suppl 1:2538–45.
Kitano H. Towards a theory of biological robustness. Mol Syst Biol. 2007;3:137.
Whitacre JM. Biological robustness: paradigms, mechanisms, and systems principles. Front Gene. 2012;3:67.
Jen E. Robust design: a repertoire of biological, ecological, and engineering case studies. New York, NY: Oxford University Press; 2005.
Abaimov SG. Statistical physics of non-thermal phase transitions: from foundations to applications. New York, NY: Springer; 2015.
Gambhir M, Singh BK, Michael E. The Allee effect and elimination of neglected tropical diseases: a mathematical modelling study. Adv Parasitol. 2015;87:1–31.
Jen E. Stable or robust? What's the difference? Complexity. 2003;8(3):12–8.
Kitano H. Biological robustness in complex host-pathogen systems. In: Kitano H, Barry CE, Boshoff HI, editors. Systems biological approaches in infectious diseases. New York, NY: Springer; 2007. p. 239–63.
Bockarie MJ, Pedersen EM, White GB, Michael E. Role of vector control in the global program to eliminate lymphatic filariasis. Annu Rev Entomol. 2009;54:469–87.
Nayak S, Salim S, Luan D, Zai M, Varner JD. A test of highly optimized tolerance reveals fragile cell-cycle mechanisms are molecular targets in clinical cancer trials. PLoS One. 2008;3(4):e2016.
Quinton‐Tulloch MJ, Bruggeman FJ, Snoep JL, Westerhoff HV. Trade‐off of dynamic fragility but not of robustness in metabolic pathways in silico. FEBS J. 2013;280(1):160–73.
Bockarie MJ, Alexander ND, Hyun P, Dimber Z, Bockarie F, Ibam E, et al. Randomised community-based trial of annual single-dose diethylcarbamazine with or without ivermectin against Wuchereria bancrofti infection in human beings and mosquitoes. Lancet. 1998;351(9097):162–8.
Bockarie MJ, Tisch DJ, Kastens W, Alexander ND, Dimber Z, Bockarie F, et al. Mass treatment to eliminate filariasis in Papua New Guinea. N Engl J Med. 2002;347(23):1841–8.
Simonsen PE, Meyrowitsch DW, Jaoko WG, Malecela MN, Mukoko D, Pedersen EM, et al. Bancroftian filariasis infection, disease, and specific antibody response patterns in a high and a low endemicity community in East Africa. Am J Trop Med Hyg. 2002;66(5):550–9.
Wijers DJ, Kiilu G. Bancroftian filariasis in Kenya III. Entomological investigations in Mambrui, a small coastal town, and Jaribuni, a rural area more inland (Coast Province). Ann Trop Med Parasitol. 1977;71(3):347–59.
Wijers DJ, Kinyanjui H. Bancroftian filariasis in Kenya II. Clinical and parasitological investigations in Mambrui, a small coastal town, and Jaribuni, a rural area more inland (Coast Province). Ann Trop Med Parasitol. 1977;71(3):333–45.
Brengues J. La filariose de Bancroft en Afrique de L'ouest. Memoires d'Orstom. 1975;79:1–299.
Brunhes J. La filariose de Bancroft dans la sous-region malgache Comores-Madagascar-Reunion. Memoires d'Orstom. 1975;81:1–212.
Rajagopalan PK, Kazmi SJ, Mani TR. Some aspects of transmission of Wuchereria bancrofti and ecology of the vector Culex pipiens fatigans in Pondicherry. Indian J Med Res. 1977;66(2):200–15.
Rozeboom LE, Bhattacharya NC, Gilotra SK. Observations on the transmission of filariasis in urban Calcutta. Am J Epidemiol. 1968;87(3):616–32.
Gubler DJ, Bhattacharya NC. A quantitative approach to the study of Bancroftian filariasis. Am J Trop Med Hyg. 1974;23(6):1027–36.
Ramaiah K, Pani S, Balakrishnan N, Sadanandane C, Das L, Mariappan T, et al. Prevalence of bancroftian filariasis & its control by single course of diethyl carbamazine in a rural area in Tamil Nadu. Indian J Med Res. 1989;89:184–91.
Wolfe MS, Aslamkhan M. Bancroftian filariasis in two villages in Dinajpur District, East Pakistan. I. Infections in man. Am J Trop Med Hyg. 1972;21(2):22–9.
Aslamkhan M, Wolfe MS. Bancroftian filariasis in two villages in Dinajpur District, East Pakistan. II. Entomological investigations. Am J Trop Med Hyg. 1972;21(2):30–7.
Self LS, Usman S, Sajidiman H, Partono F, Nelson MJ, Pant CP, et al. A multidisciplinary study on bancroftian filariasis in Jakarta. Trans R Soc Trop Med Hyg. 1978;72(6):581–7.
World Health Organization (WHO). Progress report 2000–2009 and strategic plan 2010–2020 of the global programme to eliminate lymphatic filariasis: halfway towards eliminating lymphatic filariasis. Geneva: WHO; 2010.
PacELF. The PacELF Way: towards the elimination of lymphatic filariasis from the Pacific, 1999–2005. Geneva: World Health Organization WPR; 2006.
EM acknowledges the partial support of the National Institutes of Health, grant no: RO1 AI069387-01A1. EM and BKS gratefully acknowledge the support of the Eck Institute for Global Heath, Notre Dame, and the Office of the Vice President for Research (OVPR), Notre Dame, as well as the support of the Bill and Melinda Gates Foundation in partnership with the Task Force for Global Health. Model runs used in this work were carried out using the MATLAB Parallel Computing Toolbox available on the Computer Clusters of the University of Notre Dame's Center for Research Computation. The views, opinions, assumptions or any other information set out in this article are solely those of the authors.
Department of Biological Sciences, University of Notre Dame, Notre Dame, IN, USA
Edwin Michael & Brajendra K. Singh
Edwin Michael
Brajendra K. Singh
Correspondence to Edwin Michael.
EM and BKS conceived and designed the study, ran the models and performed the analyses, and interpreted the results and wrote the paper. Both authors read and approved the final manuscript.
Authors' information
EM is Professor of Biology at the Department of Biological Sciences, and Affiliate Member of the Eck Institute for Global Health, the Notre Dame Global Adaptation Index and the Kellogg Institute for International Studies, University of Notre Dame, USA. BKS is Senior Scientist and Mathematical Ecologist at the Department of Biological Sciences, University of Notre Dame, USA.
Supplementary Material. (DOCX 1731 kb)
Michael, E., Singh, B.K. Heterogeneous dynamics, robustness/fragility trade-offs, and the eradication of the macroparasitic disease, lymphatic filariasis. BMC Med 14, 14 (2016). https://doi.org/10.1186/s12916-016-0557-y
Received: 15 August 2015
Vector-borne neglected tropical diseases
Parasite transmission heterogeneity
Biological complexity and robustness
Parameter sloppiness
Adaptability and evolvability
Parasite elimination programs
Background on state-dependent diversification rate estimation
An introduction to inference using state-dependent speciation and extinction (SSE) branching processes
Sebastian Höhna, Will Freyman, and Emma Goldberg
This is a general introduction to character state-dependent branching process models, particularly as they are implemented in RevBayes. Frequently referred to as state-dependent speciation and extinction (SSE) models, these models are a birth-death process where the diversification rates depend on the state of an evolving character. The original model of this type considered a binary character, i.e., a trait with two discrete state values; it is called BiSSE (Maddison et al. 2007). Several variants have also been developed for other types of traits (FitzJohn 2010; Goldberg et al. 2011; Goldberg and Igić 2012; Magnuson-Ford and Otto 2012; FitzJohn 2012; Beaulieu and O'Meara 2016; Freyman and Höhna 2018).
RevBayes can be used to specify a wide range of SSE models. For specific examples see these other tutorials:
BiSSE and MuSSE models: State-dependent diversification with BiSSE and MuSSE
ClaSSE and HiSSE models: State-dependent diversification with HiSSE and ClaSSE
ChromoSSE: Chromosome Evolution
Background: The BiSSE Model
The binary state speciation and extinction model (BiSSE) (Maddison et al. 2007) was introduced because of two problems identified by Maddison (2006). First, inferences about character state transitions based on simple transition models [like Pagel (1999)] can be thrown off if the character affects rates of speciation or extinction. Second, inferences about whether a character affects lineage diversification based on sister clade comparisons (Mitter et al. 1988) can be thrown off if the transition rates are asymmetric. BiSSE and related models are now mostly used to assess if the states of a character are associated with different rates of speciation or extinction.
RevBayes implements the extension of BiSSE to any number of discrete states (i.e., the MuSSE model in diversitree; FitzJohn 2012). We will first describe the general theory behind the model.
The theory behind state-dependent diversification models
A schematic overview of the BiSSE model. Each lineage has a binary trait associated with it, so it is either in state 0 (blue) or state 1 (red). When a lineage is in state 0, it can either (a) speciate with rate $\lambda_0$, which results in two descendant lineages both in state 0; (b) go extinct with rate $\mu_0$; or (c) transition to state 1 with rate $q_{01}$. The same types of events are possible when a lineage is in state 1, but with rates $\lambda_1$, $\mu_1$, and $q_{10}$, respectively.
General approach
The BiSSE model assumes two discrete states (i.e., a binary character), and that the state of each extant species is known (i.e., the discrete-valued character is observed). The general approach adopted by BiSSE and related models is to derive a set of ordinary differential equations (ODEs) that describe how the probability of observing a descendant clade changes along a branch in the observed phylogeny. Each equation in this set describes how the probability of observing a clade changes through time if it is in a particular state over that time period; collectively, these equations are called $\frac{\mathrm{d}D_{N,i}(t)}{\mathrm{d}t}$, where $i$ is the state of a lineage at time $t$ and $N$ is the clade descended from that lineage.
Computing the likelihood proceeds by establishing an initial value problem. We initialize the procedure by observing the character states of some lineages, generally the tip states. Then starting from those probabilities (e.g., species X has state 0 with probability 1 at the present), we describe how those probabilities change over time (described by the ODEs), working our way back until we have computed the probabilities of observing that collection of lineages at some earlier time (e.g., the root).
As we integrate from the tips to the root, we need to deal with branches coming together at nodes. Assuming that the parent and daughter lineages have the same state, we multiply together the probabilities that the daughters are in state $i$ and the instantaneous speciation rate $\lambda_i$ to get the initial value for the ancestral branch subtending that node.
Proceeding in this way down the tree results in a set of $k$ probabilities at the root; these $k$ probabilities represent the probability of observing the phylogeny conditional on the root being in each of the states (i.e., the $i^\text{th}$ conditional probability is the probability of observing the tree given that the root is in state $i$). The overall likelihood of the tree is a weighted average of the $k$ probabilities at the root, where the weighting scheme represents the assumed probability that the root was in each of the $k$ states.
As with all birth-death process models, special care must be taken to account for the possibility of extinction. Specifically, the above ODEs must accommodate lineages that may arise along each branch in the tree that subsequently go extinct before the present (and so are unobserved). This requires a second set of $k$ ODEs, $\frac{ \mathrm{d}E_{i}(t)}{\mathrm{d}t}$, which define how the probability of eventual extinction from state $i$ changes over time. These ODEs must be solved to compute the differential equations $\frac{ \mathrm{d}D_{N,i}(t)}{\mathrm{d}t}$. We will derive both sets of equations in the following sections.
Derivation for the binary state birth-death process
The derivation here follows the original description in Maddison et al. (2007). Consider a (time-independent) birth-death process with two possible states (a binary character), with diversification rates \(\{\lambda_0, \mu_0\}\) and \(\{\lambda_1, \mu_1\}\).
Clade probabilities, $D_{N, i}$
We define $D_{N,0}(t)$ as the probability of observing lineage $N$ descending from a particular branch at time $t$, given that the lineage at that time is in state 0. To compute the probability of observing the lineage at some earlier time point, $D_{N,0}(t + \Delta t)$, we enumerate all possible events that could occur within the interval $\Delta t$. Assuming that $\Delta t$ is small—so that the probability of more than one event occurring in the interval is negligible—there are four possible scenarios within the time interval (see the figure below):
nothing happens;
a transition occurs, so the state changes $0 \rightarrow 1$;
a speciation event occurs, and the right descendant subsequently goes extinct before the present, or;
a speciation event occurs and the left descendant subsequently goes extinct before the present.
We are describing events within a branch of the tree (not at a node), so for (3) and (4), we require that one of the descendant lineages go extinct before the present because we do not observe a node in the tree between $t$ and $t + \Delta t$.
Possible events along a branch in the BiSSE model, used for deriving $D_{N,0}(t + \Delta t)$. This is Figure 2 in Maddison et al. (2007).
We can thus compute $D_{N,0}(t + \Delta t)$ as:
\[\begin{aligned} D_{N,0}(t + \Delta t) = & \;(1 - \mu_0 \Delta t) \times & \text{in all cases, no extinction of the observed lineage} \\ & \;[ (1 - q_{01} \Delta t)(1 - \lambda_0 \Delta t) D_{N,0}(t) & \text{case (1) nothing happens} \\ & \; + (q_{01} \Delta t) (1 - \lambda_0 \Delta t) D_{N,1}(t) & \text{case (2) state change but no speciation} \\ & \; + (1 - q_{01} \Delta t) (\lambda_0 \Delta t) E_0(t) D_{N,0}(t) & \text{case (3) no state change, speciation, extinction} \\ & \; + (1 - q_{01} \Delta t) (\lambda_0 \Delta t) E_0(t) D_{N,0}(t)] & \text{case (4) no state change, speciation, extinction} \end{aligned}\]
A matching equation can be written down for $D_{N,1}(t+\Delta t)$.
To convert these difference equations into differential equations, we take the limit $\Delta t \rightarrow 0$. With the notation that $i$ can be either state 0 or state 1, and $j$ is the other state, this yields:
\[\frac{\mathrm{d}D_{N,i}(t)}{\mathrm{d}t} = - \left(\lambda_i + \mu_i + q_{ij} \right) D_{N,i}(t) + q_{ij} D_{N,j}(t) + 2 \lambda_i E_i(t) D_{N,i}(t) \tag{1}\label{eq:one}\]
Extinction probabilities, $E_i$
To solve the above equations for $D_{N, i}$, we see that we need the extinction probabilities. Define $E_0(t)$ as the probability that a lineage in state 0 at time $t$ goes extinct before the present. To determine the extinction probability at an earlier point, $E_0(t+\Delta t)$, we can again enumerate all the possible events in the interval $\Delta t$ (see the figure below):
the lineage goes extinct within the interval;
the lineage neither goes extinct nor speciates, resulting in a single lineage that must eventually go extinct before the present;
the lineage neither goes extinct nor speciates, but there is a state change, resulting in a single lineage that must go extinct before the present, or;
the lineage speciates in the interval, resulting in two lineages that must eventually go extinct before the present.
\[\begin{aligned} E_0(t + \Delta t) = &\; \mu_0\Delta t + & \text{case (1) extinction in the interval} \\ & (1 - \mu_0\Delta t) \times & \text{no extinction in the interval and \dots} \\ & \;[(1-q_{01}\Delta t)(1-\lambda_0 \Delta t) E_0(t) & \text{case (2) nothing happens, but subsequent extinction} \\ & \;+ (q_{01}\Delta t) (1-\lambda_0 \Delta t) E_1(t) & \text{case (3) state change and subsequent extinction} \\ & \;+ (1 - q_{01} \Delta t) (\lambda_0 \Delta t) E_0(t)^2] & \text{case (4) speciation and subsequent extinctions} \end{aligned}\]
Again, a matching equation for $E_1(t+\Delta t)$ can be written down.
Possible events along a branch in the BiSSE model, used for deriving $E_0(t + \Delta t)$. This is Figure 3 in Maddison et al. (2007).
To convert these difference equations into differential equations, we again take the limit $\Delta t \rightarrow 0$:
\[\frac{\mathrm{d}E_i(t)}{\mathrm{d}t} = \mu_i - \left(\lambda_i + \mu_i + q_{ij} \right)E_i(t) + q_{ij} E_j(t) + \lambda_i E_i(t)^2 \tag{2}\label{eq:two}\]
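To make equations \eqref{eq:one} and \eqref{eq:two} concrete, the following sketch integrates the coupled BiSSE ODEs numerically along a single branch with SciPy. It is not RevBayes or diversitree code, and the rate values are arbitrary; it simply shows that, given tip initial conditions, the $E_i$ and $D_{N,i}$ values at the older end of a branch follow from a standard ODE solve.

```python
# Minimal numerical sketch of the coupled BiSSE ODEs (equations 1 and 2) along
# one branch; rates are arbitrary illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

lam = np.array([0.2, 0.4])      # lambda_0, lambda_1
mu = np.array([0.05, 0.1])      # mu_0, mu_1
q = np.array([[0.0, 0.02],      # q[i, j]: transition rate from state i to state j
              [0.02, 0.0]])

def bisse_rhs(t, y):
    E, D = y[:2], y[2:]
    dE, dD = np.empty(2), np.empty(2)
    for i in range(2):
        j = 1 - i
        total = lam[i] + mu[i] + q[i, j]
        dE[i] = mu[i] - total * E[i] + q[i, j] * E[j] + lam[i] * E[i] ** 2
        dD[i] = -total * D[i] + q[i, j] * D[j] + 2.0 * lam[i] * E[i] * D[i]
    return np.concatenate([dE, dD])

# Tip observed in state 0 with complete sampling: E = (0, 0), D = (1, 0).
y0 = np.array([0.0, 0.0, 1.0, 0.0])
sol = solve_ivp(bisse_rhs, (0.0, 3.0), y0, rtol=1e-8, atol=1e-10)
print("E_0, E_1, D_0, D_1 at the older end of the branch:", sol.y[:, -1])
```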
Initial values: tips and sampling
The equations above describe how to get the answer at time $t + \Delta t$ assuming we already have the answer at time $t$. How do we start this process? The answer is with our character state observations, which are generally the tip state values. If species $s$ has state $i$, then $D_{s,i}(0) = 1$ (probability is 1 at time 0 [the present] because we observed it for sure) and $E_i(0) = 0$ (probability 0 of being extinct at the present). For all states other than $i$, $D_{s,j}(0) = 0$ and $E_j(0) = 1$.
We can adjust these initial conditions to allow for incomplete sampling. If a proportion $\rho$ of species are included on the tree, we would instead set $D_{s,i}(0) = \rho$ (the probability of having state $i$ and of being on the tree) and $E_i(0) = 1-\rho$ (the probability of being absent from the tree due to sampling rather than extinction). This simple form of incomplete sampling assumes that any species is equally likely to be on the tree (FitzJohn et al. 2009).
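A direct restatement of these initial conditions, including the sampling fraction $\rho$, is sketched below for an arbitrary number of states $k$; the function name and interface are made up for illustration.

```python
# Tip initial values for D and E under random incomplete sampling with fraction rho.
import numpy as np

def tip_initial_values(obs_state, k, rho=1.0):
    D = np.zeros(k)
    D[obs_state] = rho            # observed in this state and sampled onto the tree
    E = np.full(k, 1.0 - rho)     # unsampled lineages are "missing", not extinct
    return D, E

D0, E0 = tip_initial_values(obs_state=0, k=2, rho=0.8)
print(D0, E0)
```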
At nodes
Equations \eqref{eq:one} and \eqref{eq:two} are the BiSSE ODEs, describing probabilities along the branches of a phylogeny. We also need to specify what happens with the clade probabilities (the $D$s) at the nodes of the tree. BiSSE assumes the ancestor (called $A$) and descendants (called $N$ and $M$) have the same state (i.e., there is no cladogenetic character change). The initial value for the ancestral branch going into a node (at time $t_A$) is then the product of the final values for each of the daughter branches coming out of that node, times the instantaneous speciation rate (to account for the observed speciation event):
\[D_{A, i}(t_A) = D_{N, i}(t_A) D_{M, i}(t_A) \lambda_i \tag{3}\label{eq:three}\]
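Equation \eqref{eq:three} is, per state, a single product of the two daughter values and $\lambda_i$; a sketch (with made-up daughter values) is:

```python
# Combining daughter D values at a speciation node (equation 3), assuming no
# cladogenetic character change.
import numpy as np

def combine_at_node(D_left, D_right, lam):
    return np.asarray(D_left) * np.asarray(D_right) * np.asarray(lam)

print(combine_at_node([0.30, 0.10], [0.25, 0.05], lam=[0.2, 0.4]))  # illustrative values
```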
At the root
After we integrate equations \eqref{eq:one} and \eqref{eq:two} from the tips to the root, dealing with nodes along the way via equation \eqref{eq:three}, we arrive at the root with the $D$ values (called $D_{R, i}$), one for each state. These need to be combined somehow to get the overall likelihood of the data:
\[\text{Likelihood(tree, tip states | model)} = \sum_i D_{R, i} \, p_{R, i}\]
What probability weighting, $p_{R, i}$ should be used for the possible root states? Sometimes a fixed approach is used, assuming that the prior root state probabilities are either all equal, or are the same as the observed tip state frequencies, or are the equilibrium state frequencies under the model parameters. These assumptions do not have a real basis, however (unless there is some external data that supports them), and they can cause trouble (Goldberg and Igić 2008). An alternative is to use the BiSSE probabilities themselves to determine the root state weightings, essentially adjusting the weightings to be most consistent with the data and BiSSE parameters (FitzJohn et al. 2009). Perhaps better is to treat the weightings as unknown parameters to be estimated. These estimates are usually quite uncertain, but in a Bayesian framework, one can treat the $p_{R, i}$ as nuisance parameters and integrate over them.
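The root treatment amounts to a weighted sum of the $D_{R,i}$ values. The sketch below shows two of the weighting schemes mentioned above: equal prior weights and data-driven weights proportional to the $D_{R,i}$ themselves (in the spirit of FitzJohn et al. 2009); the function and the numbers are illustrative, and the exact conditioning used by a given software package may differ.

```python
# Weighted-average treatment of the root state probabilities.
import numpy as np

def root_likelihood(D_root, scheme="equal"):
    D_root = np.asarray(D_root, dtype=float)
    if scheme == "fitzjohn":
        weights = D_root / D_root.sum()    # weights most consistent with the data
    else:                                   # "equal": flat weighting over root states
        weights = np.full(D_root.size, 1.0 / D_root.size)
    return np.sum(weights * D_root)

D_R = [2.1e-4, 7.5e-5]                      # illustrative root-state values
print(root_likelihood(D_R, scheme="equal"), root_likelihood(D_R, scheme="fitzjohn"))
```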
BiSSE model parameters and their interpretation
$\Psi$: Phylogenetic tree with divergence times
$T$: Root age
$q_{01}$: Rate of transitions from 0 to 1
$\lambda_0$: Speciation rate for state 0
$\mu_0$: Extinction rate for state 0
Equations for the multi-state birth-death process
The entire derivation above can easily be expanded to accommodate an arbitrary number of states (FitzJohn 2012). The only extra piece is summing over all the possible state transitions. The resulting differential equations within the branches are:
\[\begin{aligned} \frac{\mathrm{d}D_{N,i}(t)}{\mathrm{d}t} &= - \left(\lambda_i + \mu_i + \sum\limits_{j \neq i}^k q_{ij} \right)D_{N,i}(t) + \sum\limits_{j \neq i}^k q_{ij} D_{N,j}(t) + 2\lambda_iE_i(t)D_{N,i}(t) \\ \frac{\mathrm{d}E_i(t)}{\mathrm{d}t} &= \mu_i - \left(\lambda_i + \mu_i + \sum\limits_{j \neq i}^k q_{ij} \right)E_i(t) + \sum\limits_{j \neq i}^k q_{ij} E_j(t) + \lambda_i E_i(t)^2 \end{aligned}\]
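The $k$-state right-hand side translates almost line for line into vectorized code; in the sketch below, `Q` is a $k \times k$ matrix of transition rates with zeros on the diagonal, and all names and rate values are my own illustrative choices.

```python
import numpy as np

def musse_rhs(t, y, lam, mu, Q):
    """Right-hand side of the k-state (MuSSE) ODEs; y = [E_1..E_k, D_1..D_k]."""
    k = len(lam)
    E, D = y[:k], y[k:]
    out_rate = lam + mu + Q.sum(axis=1)           # lambda_i + mu_i + sum_{j!=i} q_ij
    dE = mu - out_rate * E + Q @ E + lam * E ** 2
    dD = -out_rate * D + Q @ D + 2.0 * lam * E * D
    return np.concatenate([dE, dD])

# Evaluate the derivatives once for three states and a tip observed in state 1.
lam = np.array([0.2, 0.3, 0.25])
mu = np.array([0.05, 0.05, 0.10])
Q = np.array([[0.00, 0.01, 0.00],
              [0.02, 0.00, 0.01],
              [0.00, 0.03, 0.00]])
y = np.concatenate([np.zeros(3), np.array([1.0, 0.0, 0.0])])
print(musse_rhs(0.0, y, lam, mu, Q))
```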
Beaulieu J.M., O'Meara B.C. 2016. Detecting hidden diversification shifts in models of trait-dependent speciation and extinction. Systematic Biology. 65:583–601. 10.1093/sysbio/syw022
FitzJohn R.G., Maddison W.P., Otto S.P. 2009. Estimating trait-dependent speciation and extinction rates from incompletely resolved phylogenies. Systematic Biology. 58:595–611. 10.1093/sysbio/syp067
FitzJohn R.G. 2010. Quantitative Traits and Diversification. Systematic Biology. 59:619–633. 10.1093/sysbio/syq053
FitzJohn R.G. 2012. Diversitree: Comparative Phylogenetic Analyses of Diversification in R. Methods in Ecology and Evolution. 3:1084–1092. 10.1111/j.2041-210X.2012.00234.x
Freyman W.A., Höhna S. 2018. Cladogenetic and anagenetic models of chromosome number evolution: a Bayesian model averaging approach. Systematic Biology. 67:195–215.
Goldberg E.E., Lancaster L.T., Ree R.H. 2011. Phylogenetic Inference of Reciprocal Effects between Geographic Range Evolution and Diversification. Systematic Biology. 60:451–465. 10.1093/sysbio/syr046
Goldberg E.E., Igić B. 2008. On Phylogenetic Tests of Irreversible Evolution. Evolution. 62:2727–2741. 10.1111/j.1558-5646.2008.00505.x
Goldberg E.E., Igić B. 2012. Tempo and Mode in Plant Breeding System Evolution. Evolution. 66:3701–3709. 10.1111/j.1558-5646.2012.01730.x
Maddison W.P., Midford P.E., Otto S.P. 2007. Estimating a binary character's effect on speciation and extinction. Systematic Biology. 56:701. 10.1080/10635150701607033
Maddison W.P. 2006. Confounding Asymmetries in Evolutionary Diversification and Character Change. Evolution. 60:1743–1746. 10.1111/j.0014-3820.2006.tb00517.x
Magnuson-Ford K., Otto S.P. 2012. Linking the Investigations of Character Evolution and Species Diversification. The American Naturalist. 180:225–245. 10.1086/666649
Mitter C., Farrell B., Wiegemann B. 1988. The Phylogenetic Study of Adaptive Zones: Has Phytophagy Promoted Insect Diversification? The American Naturalist. 132:107–128. 10.1086/284840
Pagel M. 1999. The Maximum Likelihood Approach to Reconstructing Ancestral Character States of Discrete Characters on Phylogenies. Systematic Biology. 48:612–622. 10.1080/106351599260184
Toxic effects of an organophosphate pesticide, envoy 50 SC on the histopathological, hematological, and brain acetylcholinesterase activities in stinging catfish (Heteropneustes fossilis)
Rabeya Akter1,
Mst Arzu Pervin1,
Halima Jahan1,
Sharmin Ferdewsi Rakhi1,2,
A. H. M. Mohsinul Reza1,3 &
Zakir Hossain1
Freshwater fish in Bangladesh are adversely affected by pesticides washed off from agricultural land. The aim of this study was to evaluate the impacts of a commonly used organophosphate pesticide on the freshwater stinging catfish, Heteropneustes fossilis, in order to anticipate the threats that this organophosphate group may pose to other species in the wild.
To study the potential hazards of Envoy 50 SC to H. fossilis, fry of the fish were subjected to acute toxicity tests. Changes in hematological parameters, organ-specific histomorphology, and brain acetylcholinesterase (AChE) activity were determined by treating the fish with the agriculturally recommended dose (0.015 ppm) and half that dose (0.0075 ppm).
The LC50 of Envoy 50 SC for the fish was determined as 0.151 (0.014–0.198) ppm. The pesticide markedly altered the normal tissue structure of the gill, liver, and kidney. The major alterations included missing gill lamellae, gill clubbing, hyperplasia, nuclear hypertrophy, vacuolation, glomerular expansion, increased diameter of the renal tubules, hemorrhage, necrosis, and pyknosis. In the peripheral erythrocytes, the changes observed included large lymphocytes, dead cells, fusion of cells, binucleated cells, tear-shaped cells, ghost cells, senile cells, and other abnormal cell shapes. Significantly lower (P < 0.05) red blood cell (RBC) counts and brain AChE activities in pesticide-exposed fish suggested explanations for the abrupt behavior, increased oxygen consumption, and mortality observed at higher concentrations of this organophosphate pesticide.
The presence of the pesticide, even at low concentrations, caused deleterious effects on the early life stages of a comparatively hardy and robust fish, suggesting wider-ranging effects on more sensitive wildlife, in particular decreased survival in their native environment. Therefore, measures should be taken to minimize the risk of contamination of the aquatic environment by such toxic chemicals.
Over the last few decades, environmental pollution has been a worldwide concern because of its significant impacts on aquatic flora and fauna (Zahra, 2017; Özkara, Akyıl, & Konuk, 2016; Rakhi, Reza, Hossen, & Hossain, 2013). Toxic organic pollutants, including a large number of agrochemicals such as pesticides, many of which are non-biodegradable and carcinogenic, are used routinely on crop fields. As a result, fish and other aquatic biota exposed to pesticide-contaminated water are at much higher risk of dying (Katagi, 2010; Reza, Rakhi, Hossen, & Hossain, 2017). Seepage of pesticides into rivers and streams can be highly lethal to aquatic life and may alter the ecological network of a particular area (Mensah, Palmer, & Muller, 2014; Sánchez-Bayo, Goka, & Hayasaka, 2016). Moreover, repeated exposure to sublethal doses of some pesticides can cause physiological, behavioral, and ecological modifications that endanger fish populations, including abandonment of nests and broods, decreased immunity to diseases, and decreased predator avoidance (Saaristo et al., 2018; Hamilton et al., 2016). Additionally, pesticides can accumulate in water bodies and affect the food supply of young fish by altering the lower trophic levels (Lew, Lew, Biedunkiewicz, & Szarek, 2013; Hossain, Rahman, & Mollah, 2001). Such changes in the lower trophic levels can also force fish to forage further afield, exposing them to greater risk from predators. Generally, insecticides are more toxic to aquatic life than herbicides and fungicides (Aktar, Sengupta, & Chowdhury, 2009); therefore, their extensive use needs to be reconsidered. In general, the sooner a pesticide degrades in the environment, the less of a threat it poses to aquatic life (Gill & Garg, 2014).
Envoy 50 SC is a widely used, broad-spectrum organophosphate (OP) insecticide applied commercially to control foliar insects in croplands (Rusyniak & Nanagas, 2004). Accumulation of this OP insecticide in aquatic organisms, particularly in fish, through air drift or surface runoff adversely affects them (Varo et al., 2002). This chemical is a well-known inhibitor of acetylcholinesterase, the enzyme that plays a crucial role in neurotransmission by rapidly hydrolyzing the neurotransmitter acetylcholine (ACh) to choline and acetate at cholinergic synapses (Kwong, 2002). Therefore, OP insecticides can alter the neurological responses of non-target organisms even at very low concentrations (Grue, Gibert, & Seeley, 1997; Hamilton et al., 2016).
During contaminant exposure, histopathological observations can give insight into an organism's health and responses to stressors and have therefore been widely used as biomarkers in both laboratory and field studies (Yancheva, Velcheva, Stoyanova, & Georgieva, 2016; Schwaiger et al., 1997; Thophon et al., 2003; Hook, Gallagher, & Batley, 2014). Histopathological biomarkers are very useful for examining the structure of vital organs (gills, kidney, and liver) when respiration, excretion, or detoxification processes are affected by environmental contaminants (Gernhofer, Pawet, Schramm, Müller, & Triebskorn, 2001). Additionally, hematological parameters have been used as health indicators to assess the physiological status of fish and other vertebrates (Chandra & Chandra, 2013; Blahova et al., 2014; Al-Asgah, Abdel-Warith, Younis, & Allam, 2015). Blood biochemistry profiles and hematology are gaining increasing importance because of their value in monitoring health status rapidly and effectively (Hrubec, Cardinale, & Smith, 2000). Hematological characteristics can be used as a sensitive index to screen for pathophysiological changes in fish (Kori-Siakpere, Ake, & Idoge, 2005).
Acetylcholinesterase (AChE) is a key functional enzyme of the nervous system that terminates nerve impulses by hydrolyzing the neurotransmitter acetylcholine. Inhibition of AChE results in the accumulation of acetylcholine at central and peripheral synapses and subsequently modifies physiological and neuroendocrine processes (Sandahl, Baldwin, Jenkins, & Scholz, 2005). Such physiological changes can lead to a succession of behavioral changes that include impaired swimming performance, altered social behavior, reduced foraging, and greater predation risk. Therefore, AChE is also a widely used biomarker that gives insight into both environmental and pathological conditions (Lionetto, Caricato, Calisi, Giordano, & Schettino, 2013; Richetti et al., 2011).
The stinging catfish, Heteropneustes fossilis, is a freshwater fish with high yield potential, often found in ponds, ditches, swamps, marshes, and rice fields of Southeast Asia (Jha & Rayamajhi, 2010). This species has become increasingly popular due to its delicious taste, appealing market price, and medicinal and nutritional value. It has proved to be a good candidate for aquaculture because of its very hardy nature. The presence of accessory respiratory organs also enables this species to survive for a few additional hours out of water (Khan, Islam, & Hossain, 2003). Although H. fossilis breeds in confined waters during the monsoon months, it can also breed in ponds, derelict ponds, and ditches when sufficient rain water accumulates, which makes this fish one of the species most susceptible to exposure to aquatic pollutants.
In the present study, fry of the freshwater stinging catfish, H. fossilis, were selected as a model fish to evaluate Envoy 50 SC-mediated toxicity. Histopathology of the major organs, changes in hematological parameters, and brain acetylcholinesterase activity were investigated to understand the probable threats posed by this organophosphate pesticide to the early life stages of this fish in the wild.
Sites of the experiment
The bioassay was conducted in the Wet Laboratory of the Department of Fisheries Biology and Genetics, Faculty of Fisheries, Bangladesh Agricultural University, Mymensingh, Bangladesh. The histological study and the AChE activity measurements were carried out in the Genetics and Biotechnology Laboratory of the Department of Fisheries Biology and Genetics and in the Department of Surgery and Obstetrics, Bangladesh Agricultural University, Mymensingh, Bangladesh, respectively.
The study for each treatment was conducted in triplicate in glass aquaria situated in the Wet Laboratory of the Department of Fisheries Biology and Genetics, Bangladesh Agricultural University, Mymensingh, Bangladesh. Envoy 50 SC was collected from an authorized dealer in Mymensingh town, Bangladesh. Fry of H. fossilis were collected from the local fish market and acclimated to laboratory conditions prior to the experiment. Fish were kept unfed throughout the experimental period (Pandey, Singh, Singh, Singh, & Das, 2009). Glass aquaria were properly cleaned and filled with 35 L of chlorine-free tap water, and 10 H. fossilis with an average length and weight of 1.4 ± 0.14 cm and 1.05 ± 0.12 g, respectively, were acclimated for 2 days. Pesticide concentrations (0.00375, 0.0075, 0.015, 0.03, 0.06, 0.12, 0.24, and 0.36 ppm) were adjusted, and a control was maintained in which fish were kept in pesticide-free water. An air stone was used to increase water circulation in each aquarium. Temperature and pH were measured daily using a mercury centigrade thermometer and a pH meter (Model: pH ep Tester, Romania), respectively. Over the experimental timeframe, dissolved oxygen (DO) in the aquaria was monitored using a dissolved oxygen meter (Model: HI 9146-DO meter, Romania). During pesticide exposure, dead fish were removed and mortality was recorded daily. The LC50 for the fish at 96 h was determined through the acute toxicity tests.
Histopathological study
To observe the histopathological effects of Envoy 50 SC, H. fossilis were exposed to the agriculturally recommended dose (0.015 ppm) and to a dose below it (0.0075 ppm) in glass aquaria and maintained for 7 days. The agriculturally recommended dose was calculated assuming a 20-cm water depth in a rice field. A control group exposed to pesticide-free water was also maintained. Following exposure, the gills, liver, and kidney were collected and preserved in 10% neutral buffered formalin for further analysis. The paraffin wax-embedded samples were sectioned (5 μm) with a microtome (Leica JUNG RM 2035). The sections were then stained with hematoxylin (H) and eosin (E), passing through a graded series of chemicals at defined concentrations and times. After staining, the sections were mounted with Canada balsam and kept overnight to prepare permanent slides. Photomicrography of the stained samples was performed using a photomicroscope (OLYMPUS CX41, Japan).
Hematological alterations in pesticide-treated fish
To count the red blood cells, fish were exposed to two different concentrations (i.e., 0.015 and 0.0075 ppm) of Envoy 50 SC for 7 days in triplicate, with 4 fish in each group. A group without pesticide treatment served as the control. For the study of morphological alterations of erythrocytes, blood smears were prepared on glass slides from fresh unheparinized blood of fish exposed for 7 days. They were air dried, fixed in methanol, and stained with Wright's Giemsa. Blood corpuscles were then examined under oil-immersion microscopy and photographed. Photographs were taken with a computer-attached (Intel Pentium Q3X) microscope at 400× magnification (OLYMPUS CX41, Japan). Red blood cells were counted according to the modified method of Math et al. (2016).
The number of RBC per cubic millimeter was calculated by using the following formula:
$$ \text{Total RBC}\ \left(\text{cells per }\mathrm{mm}^3\right)=\frac{\text{No. of cells}\times \text{dilution factor}\times \text{depth factor}}{\text{total No. of small squares}} $$
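The formula above is a straightforward calculation; a minimal sketch in Python (with purely illustrative counting values, not the study's raw data) is:

```python
def total_rbc_per_mm3(cells_counted, dilution_factor, depth_factor, small_squares):
    """Total RBC per cubic millimetre from a hemocytometer count,
    following the formula given in the text."""
    return cells_counted * dilution_factor * depth_factor / small_squares

# Illustrative values only: 480 cells counted over 80 small squares,
# 1:200 dilution, depth factor 10.
print(total_rbc_per_mm3(480, 200, 10, 80))
```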
Measurement of the AChE activity
For the AChE activity analysis, H. fossilis were exposed to water containing 0.015 ppm pesticide in glass aquaria for 10 days. Fish exposed to pesticide-free water were kept as the control. Following exposure, the whole brain was dissected out and placed in ice-cold 0.1 M sodium phosphate buffer (pH 8.0). Brain tissue was used because, in teleosts, AChE is most abundant in the brain (Kopecka, Rybakowas, Barsiene, & Pempkowiak, 2004; Ferenczy, Szegletes, Balint, Abraham, & Nemcsok, 1997). Fish brains were then weighed and homogenized using a glass Teflon homogenizer in homogenization buffer (0.1 M sodium phosphate buffer, 0.1% Triton X-100, pH 8.0) to achieve a final concentration of 20 mg tissue/ml phosphate buffer. The brain tissue homogenate was centrifuged at 2000 rpm for 10 min at 4 °C, and the supernatant was collected. An aliquot of the supernatant was then measured for protein according to Lowry, Rosebrough, Farr, and Randall (1951), using bovine serum albumin in homogenization buffer as a standard. A standard curve of absorbance against known bovine serum albumin concentrations was plotted and used to determine the sample protein concentrations.
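As a sketch of how such a standard curve is typically used, one can fit a straight line to the BSA standards and invert it for the unknowns; the absorbance values below are invented for illustration and are not the study's measurements.

```python
import numpy as np

# BSA standards: concentration (mg/ml) versus absorbance (illustrative values).
std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
std_abs = np.array([0.02, 0.13, 0.25, 0.36, 0.49, 0.60])

slope, intercept = np.polyfit(std_conc, std_abs, 1)   # A = slope*c + intercept

def protein_concentration(absorbance):
    """Protein concentration (mg/ml) read off the fitted standard curve."""
    return (absorbance - intercept) / slope

print(protein_concentration(0.31))   # an unknown sample
```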
AChE activity was measured according to the method of Ellman, Courtney, Andres, and Featherstone (1961), as optimized by Habig, Giulio, and Donia (1988) and Sandahl et al. (2005). Tissue homogenate (50 μl) was added to 900 μl of cold sodium phosphate buffer (0.1 M, containing 0.1% Triton X-100, pH 8.0) and 50 μl of 5,5-dithiobis(2-nitrobenzoic acid) (DTNB; 6 mM), then vortexed and allowed to stand at room temperature for 10 min. Aliquots of 200 μl were then placed in triplicate into microtitre plate wells. The reaction was started with the addition of 50 μl of acetylthiocholine chloride (15 mM), specific for fish (Jash, Chatterjee, & Bhattacharya, 1982). Changes in absorbance were measured with a microplate reader (Model: SPECTRA max 340PC384) at 412 nm.
The rate was calculated as follows:
$$ R = 5.74 \times 10^{-4}\,\Delta A/C_0 $$
where R = Rate in moles substrate hydrolyzed per minute per gram of tissue, ΔA = Change in absorbance per min, and C0 = Original concentration of tissue.
AChE activity was calculated (nmol/min/mg protein).
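A minimal sketch of the rate calculation follows; the subsequent normalization to nmol/min/mg protein uses the Lowry protein value and is not shown here, since the exact conversion constants are not given in the text. The input numbers are illustrative only.

```python
def ellman_rate(delta_a_per_min, tissue_conc_g_per_ml):
    """Moles of substrate hydrolysed per minute per gram of tissue,
    R = 5.74e-4 * dA / C0, as in the text (Ellman assay, 412 nm)."""
    return 5.74e-4 * delta_a_per_min / tissue_conc_g_per_ml

# Illustrative: dA = 0.045 per minute, homogenate at 0.02 g tissue per ml.
print(ellman_rate(0.045, 0.02))
```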
Data obtained from the acute toxicity tests were evaluated using probit analysis to find the LC50 values. Student's t test and one-way analysis of variance (ANOVA) were used to analyze the AChE and blood cell data, respectively. A post hoc Waller–Duncan multiple range test was performed at a 5% significance level using the SPSS ver. 17.0 software program.
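For orientation, here is a simplified, unweighted version of the probit approach to the LC50, together with a Student's t test; the full Finney probit method uses iteratively re-weighted fitting and yields the confidence interval reported in the Results. All numbers below are invented for illustration and are not the study's raw data.

```python
import numpy as np
from scipy import stats

# Illustrative 96-h mortality data: concentration (ppm) vs proportion dead.
conc = np.array([0.03, 0.06, 0.12, 0.24, 0.36])
prop_dead = np.array([0.1, 0.2, 0.4, 0.7, 0.9])

# Unweighted probit regression: probit(p) against log10(concentration).
x = np.log10(conc)
y = stats.norm.ppf(prop_dead)
slope, intercept, r, p, se = stats.linregress(x, y)

# LC50 is the concentration where the fitted probit crosses zero (p = 0.5).
lc50 = 10 ** (-intercept / slope)
print(f"estimated LC50 ~ {lc50:.3f} ppm")

# Student's t test comparing control and exposed AChE activities (illustrative).
control = np.array([70.1, 78.3, 75.2, 79.2])
exposed = np.array([40.2, 44.1, 38.9, 47.0])
print(stats.ttest_ind(control, exposed))
```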
Physicochemical parameters
During the experimental period, temperature, DO, and pH were recorded regularly. The average temperature, initial DO, and pH were recorded as 27.0 ± 3.0 °C, 7.5 ± 1.0 ppm, and 9.3 ± 2.1, respectively. The oxygen concentration data (Table 1) showed a declining trend in DO content with increasing pesticide concentration compared with the control group, a trend that persisted until mortality began.
Table 1 Changes in dissolved oxygen and fish mortality during the experimental period
Observation of the behavioral changes
The behavior of the tested H. fossilis fry was observed throughout the experimental period. In some fish the vertebral column was bent in the caudal region, and abnormal swimming was observed (Fig. 1). Several atypical behaviors, such as restlessness, antenna movements, loss of balance, and rapid opercular activity, were observed once the fry started to be affected by the toxicant. At acutely toxic concentrations, frequent surfacing, gulping with increased mucus discharge, and loss of balance were observed.
Behavioral changes of Heteropneustes fossilis after 7 days. a measurement of the size, b control group, and c pesticide, Envoy 50 SC (0.015 ppm)-treated fish. Arrows are showing more bent structures and abnormal swimming
LC50 of Envoy 50 SC for H. fossilis
The LC50 of Envoy 50 SC for H. fossilis was 0.151 (0.014–0.198) ppm at 96 h (Table 2).
Table 2 LC50 of Heteropneustes fossilis
Histopathological observation of fish exposed to pesticides
H. fossilis were exposed to Envoy 50 SC at two concentrations: the agriculturally recommended dose of 0.015 ppm and half of that dose, 0.0075 ppm. Structural changes were observed in the gills, liver, and kidneys and compared with those of the control. No pathology was observed in the gill arch or the primary and secondary gill lamellae of the control group, whereas at 0.015 ppm, blood congestion, hyperplasia, curling of secondary lamellae, hemorrhage, epithelial hyperplasia, clubbing, and necrosis were found in the gill (Fig. 2).
Photomicrographs of gills of Heteropneustes fossilis after 7-day exposure to 0.015 ppm Envoy 50 SC. a Control—normal epithelial cell and secondary lamellae were found; b blood congestion (a), hyperplasia (b), curling of secondary lamellae (c), and hemorrhage (d); c epithelial hyperplasia; and d clubbing (a) and necrosis (b) were observed
Hepatocytes and kidney cells appeared normal in the control group. At 0.015 ppm, mild alterations were found in the liver tissue (cytoplasmic vacuolation, nuclear hypertrophy, hemorrhage, pyknotic areas, vacuolation) (Fig. 3), but more serious alterations of the kidney histology were observed at the same concentration of the pesticide: glomerular expansion, increased renal tubule diameter, necrosis, pyknosis, vacuolation, and hemorrhage. Similar pathologies were also observed at the lower concentration of the pesticide (0.0075 ppm), but to a lesser extent in the liver tissue (Fig. 4).
Photomicrographs of liver of Heteropneustes fossilis after 7-day exposure to 0.015 ppm Envoy 50 SC. a Control—normal regular and systematic arrangement of hepatocytes were found; b cytoplasmic vacuolation (a) and nuclear hypertrophy (b); c hemorrhage (a) and pyknotic area (b); and D vacuolation were observed
Photomicrographs of the kidney of Heteropneustes fossilis after 7-day exposure to 0.015 ppm Envoy 50 SC. a Control—normal regular and systematic arrangement of kidney tubules and hematopoietic cells were found; b glomerular expansion (a) and increasing the diameter of renal tubule (b); C necrosis (a) and pyknosis (b); and d vacuolation (a) and hemorrhage (b) were observed
Nevertheless, at the below-recommended dose, pathologies were also identified in the gills and liver (Figs. 5 and 6), although they were less severe than at the agriculturally recommended dose. In the kidneys, however, the pathologies were almost similar at both doses (Fig. 7).
Photomicrographs of gills of Heteropneustes fossilis after 7-day exposure to 0.0075 ppm Envoy 50 SC. a vacuolation and b missing of secondary gill lamellae (a), hyperplasia (b), and clubbing (c) were observed
Photomicrographs of liver of Heteropneustes fossilis after 7-day exposure to 0.0075 ppm Envoy 50 SC. Severe (a) nuclear hypertrophy and (b) vacuolation (a) and cytoplasmic vacuolation (b) were observed
Photomicrographs of kidney of Heteropneustes fossilis after 7-day exposure to 0.0075 ppm Envoy 50 SC. Severe (a) glomerular expansion (a) and cellular degeneration (b) and (b) increasing the diameter of renal tubule (a) and vacuolation (b) were observed
Hematological alteration of pesticide-treated fish
Uniform blood smears from normal, healthy, unexposed fish revealed that each erythrocyte was an oval cell with a nucleus concentric with the outer edge of the cell. At the 0.015 ppm dose of Envoy 50 SC, large lymphocytes, dead cells, fusion of cells, binucleated cells, tear-shaped cells, ghost cells, senile cells, and abnormally shaped cells were found (Fig. 8). The mean red blood cell counts were significantly lower (P < 0.05) at 0.0075 ppm (4.74 ± 0.80 × 10^6 cells/mm^3) and at 0.015 ppm (3.84 ± 0.35 × 10^6 cells/mm^3) than in the control (6.05 ± 0.12 × 10^6 cells/mm^3).
Photomicrographs of blood smears of Heteropneustes fossilis after 8-day exposure to 0.015 ppm Envoy 50 SC. a Control—normal regular and systematic arrangement of nucleus of erythrocytes were found; b small nucleus (a), dead cell (b), fusion of cells (c), and binucleated cell (d); c tear-shaped cell; d ghost cell; e senile cell; and f abnormal shape of cells were observed
AChE activity of fish brain exposed to Envoy 50 SC
The AChE activity in the brain of H. fossilis was calculated as 75.7 ± 5.9 nmol/min/mg protein in control and 42.6 ± 5.8 nmol/min/mg protein at the dose of 0.015 ppm that showed significant (P < 0.05) inhibition compared with the control group (Fig. 9).
AChE activity (nmol/min/mg protein) measured in brain of Heteropneustes fossilis. Fish exposed to 0.015 ppm Envoy 50 SC were compared with those of the control group. Data were presented as mean ± SD. *P < 0.05
This study was conducted on the freshwater stinging catfish, H. fossilis, to understand the possible effects of commonly used organophosphate pesticides on the early life stages of this comparatively resilient fish species. In the present experiment, despite identical conditions in all aquaria, the oxygen concentration decreased in the pesticide-exposed aquaria compared with the control group, presumably because of the elevated respiration of the stressed fish. These data are partly supported by another study in which the oxygen consumption of fingerlings of some commonly cultured fish species, Labeo rohita, Cirrhina mrigala, Catla catla, Hypophthalmichthys molitrix, and Ctenopharyngodon idella, was determined under different thermal challenges (Tabinda et al., 2003). In that study, the lowest oxygen utilization rates were recorded at 30 °C, which was followed by the rapid death of most of the species, whereas oxygen consumption was much higher at other temperatures; that study was performed in airtight 4-l bottles stocked with 20 fingerlings each. Although the oxygen consumption rate reported for the fry in that study was much higher, the conditions of the present study were quite different, with fish kept in larger, open glass aquaria. Additionally, under stress, H. fossilis, being an air-breathing fish, depends more on aerial respiration, which makes changes in DO less conspicuous. However, more studies on oxygen consumption are required to fully understand the stress responses of air-breathing fish exposed to pesticides.
The lethal effects of pesticides on test animals can be expressed as an LC50 value. In the present study, the LC50 value of Envoy 50 SC was 0.151 ppm for H. fossilis at 96 h. Deka and Mahanta (2012) found that the LC50 value of malathion was 0.98 ppm for H. fossilis at 96-h exposure, whereas Hossain, Haldar, and Mollah (2000) estimated the LC50 value of diazinon as 2.97 ppm for L. rohita at 96-h exposure. Hossain et al. (2001) found LC50 values of 0.3530 and 1.2809 ppm for Diazinon 60 EC and Dimecron 100 SCW, respectively, at 48-h exposure for the zooplankton Diaptomus. Sharbidre, Metkari, and Patode (2011) recorded LC50 values of methyl parathion and chlorpyrifos for the guppy, Poecilia reticulata, of 8.48 ppm and 0.176 ppm, respectively. In addition, the LC50 values of Dimecron 100 SCW at 96 h were 6.75, 22.95, and 375.26 ppm for Anabas testudineus, Channa punctatus, and Barbodes gonionotus, respectively (Hossain, Rahman, & Mollah, 2002). These results indicate that LC50 values are species specific and that different pesticides have different LC50 values.
Anomalous histology was observed under exposure to Envoy 50 SC. During the histological study, mild to severe alterations in the gills were recorded, and the pathologies were more noticeable at the higher dose than at the lower one. Tissue-specific structural alterations in fish from polluted ecosystems have also been reported in other studies (Marchand, Van Dyk, Pieterse, Barnhoorn, & Bornman, 2009). The results of this study are also supported by Zodrow, Stegeman, and Tanguay (2004), who recorded hypertrophy and fusion of secondary gill lamellae in zebrafish. Benli and Ozkul (2010) found telangiectasis at the tips of secondary gill lamellae following 96-h exposure of Nile tilapia to an organophosphate pesticide. Reza et al. (2017) also found notable structural alterations with major pathological signs, including gill clubbing, hemorrhage, and pyknosis, in the gills of Labeo rohita treated with 0.058 ppm of an organophosphate.
The hepatocytes and kidney tissues of H. fossilis showed ultrastructural damage compared with those of the control group, including glomerular expansion, cellular degeneration, increased renal tubule diameter, pyknotic areas, melanomacrophages, fatty degeneration, lipid droplets, vacuoles, and hemorrhage in the hepatocytes. In the kidneys, these pathologies were observed at both pesticide doses, which might be due to the osmoregulatory function of the kidney. Similar results were also observed earlier by Hossain et al. (2002) and Rahman, Hossain, Mollah, and Ahmed (2002) in the livers of organophosphate-exposed fish, whereas hypertrophy and lipidosis were prevalent in the study of Zodrow et al. (2004). Additionally, Oropesa, Cambero, Gómez, Roncero, and Soler (2009) reported lipid drops and necrotic foci in the Cyprinus carpio liver, while Reza et al. (2017) found severe alterations such as vacuole formation, hemorrhage, and fatty degeneration in the liver of L. rohita treated with 0.058 ppm Envoy 50 SC, and moderate hemorrhage, fatty degeneration, and lipid droplets in the same species exposed to 0.108 ppm. These results indicate that different pesticides and fish species show similar pathologies.
Pathologies of the kidney of the pesticide-treated fish in the present study also partially agree with Hossain et al. (2002) and Rahman et al. (2002), who found comparatively more pathologies in B. gonionotus. Fischer-Scherl et al. (1991) reported pathological alterations of the renal corpuscles and renal tubule components in Oncorhynchus mykiss during a 28-day exposure to atrazine (5–40 μg/l). Additionally, necrosis of the renal hemopoietic tissue and endothelial cells was observed in the experimental group at exposures of 80–2800 μg/l. In line with the results of this experiment, atrazine (10 μg/l, 28-day exposure) also had significant toxic effects on the kidney tissues of rare minnow (Gobiocypris rarus). Pathologies recorded in that study were lesions in kidney tissues, expansion of the lumen, necrotic and degenerative tubular epithelia, and shrinkage of the glomeruli (Yang, Zha, Li, Li, & Wang, 2010). Conversely, almost no differences between control and 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD)-exposed zebrafish kidneys were observed in the study of Zodrow et al. (2004). However, Reza et al. (2017) found slight structural changes with hemorrhage, mild vacuoles, and degenerating kidney tubules in L. rohita at 0.058 ppm, whereas relatively milder effects, with some melanin pigments and vacuoles, were observed in B. gonionotus kidney tissues at the same dose. When the fish were treated with 0.108 ppm of Envoy 50 SC, the pathologies observed were pyknosis, moderate hemorrhage, and hyaline for L. rohita, and pyknosis, moderate vacuoles, and necrosis for the B. gonionotus kidney. The greater structural impairment in L. rohita compared with B. gonionotus indicated the greater susceptibility of the former species to pesticide exposure.
Alterations of blood components are important biomonitoring tools in toxicological research because of their potential for rapid assessment of the chronic toxicity of a compound. Generally, any unfavorable change in water quality is reflected in the blood of aquatic organisms, since the blood cells are separated from the water by only a thin epithelial membrane (Kori-Siakpere et al., 2005). In the present study, several changes in peripheral erythrocytes were found due to exposure to two different concentrations of Envoy 50 SC. According to Adhikari, Sarkar, Chatterjee, Mahapatra, and Ayyappan (2004) and Evans and Claiborne (2005), biochemical and hematological indices can be useful diagnostic markers of the functional status and stress responses of fish during pesticide exposure. The results are also supported by other studies in which alterations in the blood parameters and histomorphology of erythrocytes of Cyprinus carpio and Puntius ticto were observed following exposure to some chlorinated pesticides (Satyanarayan, Bejankiwar, Chaudhari, Kotangale, & Satyanarayan, 2004). Likewise, in another study by Maheswaran, Devapaul, Muralidharan, Velmurugan, and Ignacimuthu (2008), altered hematocrit values and erythrocyte morphology were reported in pollutant-exposed Clarias batrachus, which also partially supports the results of the present study. Moreover, RBCs are the vertebrate's principal vehicle for conveying oxygen throughout the body: they take up oxygen at the gills and release it into the tissues while squeezing through the body's capillaries (Wikipedia contributors, Red blood cell, 2019). Therefore, in this study, the lower RBC count due to pesticide exposure might have impaired the fish's ability to deliver oxygen effectively to the tissues, leading to an attempt to increase the rate of oxygen consumption to compensate. With increasing pesticide concentration, the whole system may fail and cause the death of H. fossilis.
AChE activity is a more specific biomarker for organophosphate and carbamate pesticides than for other contaminants; its inhibition indicates both exposure to and effects of these chemicals in fish. In the present study, significant (P < 0.05) inhibition of brain AChE was observed in pesticide-exposed fish. A maximum inhibition of AChE activity (up to 51.49%) was previously reported for L. rohita, which agrees with Sancho, Ferrando, and Andreu (1998), who described that exposure to 0.04 ppm fenitrothion (an organophosphate) produced a 57% decline in AChE activity, while a 51% reduction was recorded for 0.02 ppm. Additionally, in a comparative study of 11 freshwater teleost species, Chuiko (2000) observed in vitro inhibition of brain and serum AChE by DDVP (an organophosphate pesticide). Similar declines in AChE activity following in vitro treatment with organophosphates have also been reported recently (Valbonesi, Brunelli, Mattioli, Rossi, & Fabbri, 2011; Rodrigues et al., 2011; Colovic, Krstic, Uscumlic, & Vasic, 2011). Moreover, Pessoa et al. (2011) showed behavioral changes in O. niloticus caused by enzymatic inhibition during pesticide exposure, whereas reduced ammonium excretion and oxygen consumption were reported by Barbieri, Augusto, and Ferreira (2011). Reza et al. (2017) also showed significant inhibition of AChE activity in L. rohita, at 216.7 ± 11.0, 207.3 ± 5.0, and 146.7 ± 5.5 nmol/min/mg protein after exposure to Envoy 50 SC, Samcup 20 EC, and Dursban 20 EC, respectively. According to their study, exposure of B. gonionotus to Samcup 50 EC and Dursban 20 EC also produced significant inhibition (P < 0.05), recorded as 242.0 ± 6.6 and 221.7 ± 60.3 nmol/min/mg protein, respectively. Furthermore, pesticide-treated L. rohita showed higher enzymatic inhibition (51.49%) than B. gonionotus (19.60%). The restlessness and hyperactivity with abrupt, erratic swimming of H. fossilis fry in the present study might have occurred due to the reduction of AChE activity, which results in the accumulation of acetylcholine at synaptic junctions (Colović, Krstić, Lazarević-Pašti, Bondžić, & Vasić, 2013) and stimulation of the peripheral nervous system, causing modulation of metabolic activities and a greater oxygen requirement (Pandey et al., 2009).
This study clearly indicates that the presence of commonly used organophosphate pesticides in freshwater reservoirs can cause deleterious effects on the early life stages of a comparatively hardy and robust fish, which underscores the threat pesticides may pose to other, more delicate wild species. The resulting physiological alterations may potentially decrease their survival rate in nature. Therefore, measures should be taken to mitigate possible contamination of the aquatic ecosystem by such toxic chemicals, and further research should be carried out to strengthen the current findings. Additionally, more studies on potential residual effects are required to fully understand the hazardous impacts of these chemicals on aquatic ecosystems, together with the use of environmentally safer agricultural pesticides.
All data are available upon request.
OP: Organophosphate
RBC: Red blood cell
Adhikari, S., Sarkar, B., Chatterjee, A., Mahapatra, C. T., & Ayyappan, S. (2004). Effects of cypermethrin and carbofuran on certain haematological parameters and prediction of their recovery in a freshwater teleost, Labeo rohita (Hamilton). Ecotoxicology and Environmental Safety, 58(2), 220–226. https://doi.org/10.1016/j.ecoenv.2003.12.003.
Aktar, M. W., Sengupta, D., & Chowdhury, A. (2009). Impact of pesticides use in agriculture: Their benefits and hazards. Interdisciplinary Toxicology, 2(1), 1–12. https://doi.org/10.2478/v10102-009-0001-7.
Al-Asgah, N. A., Abdel-Warith, A. W., Younis, E.-S. M., & Allam, H. Y. (2015). Haematological and biochemical parameters and tissue accumulations of cadmium in Oreochromis niloticus exposed to various concentrations of cadmium chloride. Saudi Journal of Biological Sciences, 22(5), 543–550. https://doi.org/10.1016/j.sjbs.2015.01.002.
Barbieri, E., Augusto, L., & Ferreira, A. (2011). Effects of the organophosphate pesticide Folidol 600 on the freshwater fish, Nile tilapia (Oreochromis niloticus). Pesticide Biochemistry and Physiology, 99(3), 209–214. https://doi.org/10.1016/j.pestbp.2010.09.002.
Benli, K. C. A., & Ozkul, A. (2010). Acute toxicity and histopathological effects of sublethal fenitrothion on Nile tilapia, Oreochromis niloticus. Pesticide Biochemistry and Physiology, 97(1), 32–35. https://doi.org/10.1016/j.pestbp.2009.12.001.
Blahova, J., Modra, H., Sevcikova, M., Marsalek, P., Zelnickova, L., Skoric, M., & Svobodova, Z. (2014). Evaluation of biochemical, haematological, and histopathological responses and recovery ability of common carp (Cyprinus carpio L.) after acute exposure to atrazine herbicide. BioMed Research International, 2014(4), 980948. https://doi.org/10.1155/2014/980948.
Chandra, S., & Chandra, H. (2013). Role of haematological parameters as an indicator of acute malarial infection in Uttarakhand state of India. Mediterranean Journal of Hematology and Infectious Diseases, 5(1), e2013009. https://doi.org/10.4084/MJHID.2013.009.
Chuiko, G. M. (2000). Comparative study of acetylcholinesterase and butyrylcholinesterase in brain and serum of several freshwater fish: Specific activities and in vitro inhibition by DDVP, an organophosphorus pesticide. Comparative Biochemistry and Physiology Part C: Pharmacology, Toxicology and Endocrinology, 127(3), 233–242. https://doi.org/10.1016/s0742-8413(00)00150-x.
Colović, M. B., Krstić, D. Z., Lazarević-Pašti, T. D., Bondžić, A. M., & Vasić, V. M. (2013). Acetylcholinesterase inhibitors: Pharmacology and toxicology. Current Neuropharmacology, 11(3), 315–335. https://doi.org/10.2174/1570159X11311030006.
Colovic, M. B., Krstic, D. Z., Uscumlic, G. S., & Vasic, V. M. (2011). Single and simultaneous exposure of acetylcholinesterase to Diazinon, chlorpyrifos and their photodegradation products. Pesticide Biochemistry and Physiology, 100(1), 16–22. https://doi.org/10.1016/j.pestbp.2011.01.010.
Deka, S., & Mahanta, R. (2012). A study on the effect of organophosphorus pesticide malathion on hepato-renal and reproductive organs of Heteropneustes fossilis (Bloch). The Science Probe, 1(1), 1–13 https://pdfs.semanticscholar.org/61b6/ecb9f9178f8b34acab55b735b50916f5ee94.pdf.
Ellman, G. L., Courtney, K. D., Andres, J. R. V., & Featherstone, R. M. (1961). A new and rapid colorimetric determination of acetylcholinesterase activity. Biochemical Pharmacology, 7(2), 88–95. https://doi.org/10.1016/0006-2952(61)90145-9.
Evans, D. H., & Claiborne, J. B. (2005). The physiology of fishes. Boca Raton, Fla, USA: CRC Press.
Ferenczy, J., Szegletes, T., Balint, T., Abraham, M., & Nemcsok, J. (1997). Characterization of acetylcholinesterase and its molecular forms in organs of five freshwater teleosts. Fish Physiology and Biochemistry, 16(6), 515–529. https://doi.org/10.1023/A:1007701323808.
Fischer-Scherl, T., Veeser, A., Hoffmann, R. W., Kuhnhauser, C., Negele, R., & Ewringmann, T. (1991). Morphological effects of acute and chronic atrazine exposure in rainbow trout (Oncorhynchus mykiss). Archives of Environmental Contamination and Toxicology, 20(4), 454–461. https://doi.org/10.1007/BF01065833.
Gernhofer, M., Pawet, M., Schramm, M., Müller, E., & Triebskorn, R. (2001). Ultrastructural biomarkers as tools to characterize the health status of fish in contaminated streams. Journal of Aquatic Ecosystem Stress and Recovery, 8(3–4), 241–260. https://doi.org/10.1023/A:1012958804442.
Gill, H.K., & Garg, H. (2014). Pesticides: Environmental impacts and management strategies. Pesticides – Toxic Aspects, Marcelo L. Larramendy and Sonia Soloneski, IntechOpen. https://www.intechopen.com/books/pesticides–toxic–aspects/pesticides–environmental–impacts–and–management–strategies. Accessed 01 July 2019.
Grue, C. E., Gibert, P. L., & Seeley, M. E. (1997). Neurophysiological and behavioral changes in non-target wildlife exposed to organophosphate and carbamate pesticides: Thermoregulation, food consumption, and reproduction. American Zoologist, 37(4), 369–388. https://doi.org/10.1093/icb/37.4.369.
Habig, C., Giulio, D. R., & Donia, A. M. (1988). Comparative properties of channel catfish (Ictalurus punctatus) and blue crab (Callinectes sapidus) acetylcholinesterases. Comparative Biochemistry and Physiology Part C: Comparative Pharmacology, 91(2), 293–300. https://doi.org/10.1016/0742-8413(88)90032-1.
Hamilton, P. B., Cowx, I. G., Oleksiak, M. F., Griffiths, A. M., Grahn, M., Stevens, J. R., … Tyler, C. R. (2016). Population-level consequences for wild fish exposed to sublethal concentrations of chemicals – A critical review. Fish and Fisheries, 17(3), 545–566. https://doi.org/10.1111/faf.12125.
Hook, S. E., Gallagher, E. P., & Batley, G. E. (2014). The role of biomarkers in the assessment of aquatic ecosystem health. Integrated Environmental Assessment and Management, 10(3), 327–341. https://doi.org/10.1002/ieam.1530.
Hossain, Z., Haldar, G. C., & Mollah, M. F. A. (2000). Acute toxicity of chlorpyrifos, cadusafos and Diazinon to three Indian major carps (Labeo rohita, Catla catla and Cirrhinus mrigala) fingerlings. Bangladesh Journal of Fisheries Research, 4(2), 191–198 http://aquaticcommons.org/16464/1/BJFR4.2_191.pdf. Accessed 01 July 2019.
Hossain, Z., Rahman, M.Z., & Mollah, M.F.A. (2001). Effects of two organophosphorus pesticides Diazinon 60 EC and Dimecorn 100 SCW on a zooplankton, Diaptomus. Pakistan Journal of Biological Sciences, 4(11), 1403–1405. http://docsdrive.com/pdfs/ansinet/pjbs/2001/1403-1405.pdf. Accessed 01 July 2019.
Hossain, Z., Rahman, M.Z., & Mollah, M.F.A. (2002). Effect of Dimecron 100 SCW on Anabas testudineus, Channa punctatus and Barbobes gonionotus. Indian Journal of Fisheries, 49(4), 405–417. http://epubs.icar.org.in/ejournal/index.php/IJF/article/view/8213/3229. Accessed 24 October 2019.
Hrubec, T. C., Cardinale, J. L., & Smith, S. A. (2000). Haematology and plasma chemistry reference intervals for cultured tilapia (Oreochromis hybrid). Veterinary Clinical Pathology, 29(1), 7–12. https://doi.org/10.1111/j.1939-165X.2000.tb00389.x.
Jash, N. B., Chatterjee, S., & Bhattacharya, S. (1982). Role of acetylcholine in the recovery of brain acetylcholinesterase in Channa punctatus (Bloch) exposed to Furadan. Comparative Physiology and Ecology, 7, 56–58.
Jha, B.R., & Rayamajhi, A. (2010). Heteropneustes fossilis (errata version published in 2018). The IUCN Red List of Threatened Species 2010: e.T166452A135875733. 10.2305/IUCN.UK.2010–4.RLTS.T166452A6212487.en. Accessed 03 July 2019.
Katagi, T. (2010). Bioconcentration, bioaccumulation, and metabolism of pesticides in aquatic organisms. In D. Whitacre (Ed.), Reviews of Environmental Contamination and Toxicology. Reviews of Environmental Contamination and Toxicology (Continuation of Residue Reviews), vol 204. New York, NY: Springer. https://doi.org/10.1007/978-1-4419-1440-8_1.
Khan, M. N., Islam, A. K. M. S., & Hossain, M. G. (2003). Marginal analysis of culture of stinging catfish (Heteropneustes fossilis, Bloch): Effect of different stocking densities in earthen ponds. Pakistan Journal of Biological Sciences, 6(7), 666–670. https://doi.org/10.3923/pjbs.2003.666.670.
Kopecka, J., Rybakowas, A., Barsiene, J., & Pempkowiak, J. (2004). AChE levels in mussels and fish collected off Lithuania and Poland (southern Baltic). Oceanologica. 46(3), 405–418. http://www.iopan.gda.pl/oceanologia/. Accessed 03 July 2019.
Kori-Siakpere, O., Ake, J. E. G., & Idoge, E. (2005). Haematological characteristics of the African snakehead, Parachacnna obscura. African Journal of Biotechnology, 4(6), 527–530. https://doi.org/10.5897/AJB2005.000-3096.
Kwong, T.C. (2002). Organophosphate pesticides: Biochemistry and clinical toxicology. Therapeutic Drug Monitoring, 24(1), 144–149. https://www.ncbi.nlm.nih.gov/pubmed/11805735. Accessed 01 July 2019.
Lew, S., Lew, M., Biedunkiewicz, A., & Szarek, J. (2013). Impact of pesticide contamination on aquatic microorganism populations in the littoral zone. Archives of Environmental Contamination and Toxicology, 64(3), 399–409. https://doi.org/10.1007/s00244-012-9852-6.
Lionetto, M. G., Caricato, R., Calisi, A., Giordano, M. E., & Schettino, T. (2013). Acetylcholinesterase as a biomarker in environmental and occupational medicine: New insights and future perspectives. BioMed Research International, 2013, 321213. https://doi.org/10.1155/2013/321213.
Lowry, O.H., Rosebrough, N.J., Farr, A.L., & Randall, R.J. (1951). Protein measurement with the Folin phenol reagent. Journal of Biological Chemistry, 193(1), 265–275. http://www.jbc.org/content/193/1/265.long. Accessed 01 July 2019.
Maheswaran, R., Devapaul, A., Muralidharan, S., Velmurugan, B., & Ignacimuthu, S. (2008). Haematological studies of fresh water fish, Clarias batrachus (L.) exposed to mercuric chloride. International Journal of Integrative Biology, 2(1), 49–54. http://ijib.classicrus.com/trns/2574241571887312.pdf. Accessed 24 October 2019.
Marchand, M. J., Van Dyk, J., Pieterse, G. M., Barnhoorn, I. E., & Bornman, M. S. (2009). Histopathological alterations in the liver of the sharptooth catfish Clarias gariepinus from polluted aquatic systems in South Africa. Environmental Toxicology, 24(2), 133–147. https://doi.org/10.1002/tox.20397.
Math, M. V., Kattimani, Y. R., Khadkikar, R. M., Patel, S. M., Shanti, V., & Inamdar, R. S. (2016). Red blood cell count: Brief history and new method. MGM Journal of Medical Sciences, 3(3), 116–119. https://doi.org/10.5005/jp-journals-10036-1104.
Mensah, P.K., Palmer, C.G., & Muller, W.J. (2014). Lethal and sublethal effects of pesticides on aquatic organisms: The case of a freshwater shrimp exposure to Roundup®. Pesticides – Toxic Aspects, Marcelo L. Larramendy and Sonia Soloneski, IntechOpen. https://www.intechopen.com/books/pesticides–toxic–aspects/lethal–and–sublethal–effects–of–pesticides–on–aquatic–organisms–the–case–of–a–freshwater–shrimp–expo. Accessed 01 July 2019.
Oropesa, A. L., Cambero, J. P. G., Gómez, L., Roncero, V., & Soler, F. (2009). Effect of long-term exposure to simazine on histopathology, haematological, and biochemical parameters in Cyprinus carpio. Environmental Toxicology, 24(2), 187–199. https://doi.org/10.1002/tox.20412.
Özkara, A., Akyıl, D., & Konuk, M. (2016). Pesticides, environmental pollution, and health, environmental health risk – Hazardous factors to living species. Marcelo L. Larramendy and Sonia Soloneski, IntechOpen. https://www.intechopen.com/books/environmental–health–risk–hazardous–factors–to–living–species/pesticides–environmental–pollution–and–health. Accessed 01 July 2019.
Pandey, R.K., Singh, R.N., Singh, S., Singh, N.N., & Das, V.K. 2009. Acute toxicity bioassay of dimethoate on freshwater airbreathing catfish, Heteropneustes fossilis (Bloch). Journal of Environmental Biology 30(3), 437–440. http://www.jeb.co.in/journal_issues/200905_may09/paper_23.pdf. Accessed 01 July 2019.
Pessoa, P. C., Luchmannb, K. H., Ribera, A. B., Verasa, M. M., Correac, J. R. M. B., Nogueirab, A. J., & Carvalhoa, P. S. M. (2011). Cholinesterase inhibition and behavioral toxicity of carbofuran on Oreochromis niloticus early life stages. Aquatic Toxicology, 105(3–4), 312–320. https://doi.org/10.1016/j.aquatox.2011.06.020.
Rahman, M.Z., Hossain, Z., Mollah, M.F.A., & Ahmed, G.U. (2002). Effects of Diazinon 60 EC on Anabas testudineus, Channa punctatus and Barbobes gonionotus. Naga, the ICLARM quarterly, 25(2): 8–12. https://www.researchgate.net/publication/227642224. Accessed 01 July 2019.
Rakhi, S. F., Reza, A. H. M. M., Hossen, M. S., & Hossain, Z. (2013). Alterations in histopathological features and brain acetylcholinesterase activity in stinging catfish, Heteropneustes fossilis exposed to polluted river water. International Aquatic Research, 5, 7. https://doi.org/10.1186/2008-6970-5-7.
Reza, A. H. M. M., Rakhi, S. F., Hossen, M. S., & Hossain, Z. (2017). Organ specific histopathology and brain acetylcholinesterase inhibition in rohu, Labeo rohita and silver barb, Barbonymus gonionotus: Effects of three widely used organophosphate pesticides. Turkish Journal of Fisheries and Aquatic Sciences, 17, 821–832. https://doi.org/10.4194/1303-2712-v17_4_18.
Richetti, S. K., Rosemberg, D. B., Ventura-Lima, J., Monserrat, J. M., Bogo, M. R., & Bonan, C. D. (2011). Acetylcholinesterase activity and antioxidant capacity of zebrafish brain is altered by heavy metal exposure. Neurotoxicology, 32(1), 116–122. https://doi.org/10.1016/j.neuro.2010.11.001.
Rodrigues, S. R., Caldeira, C., Castro, B. B., Gonçalves, F., Nunes, B., & Antunes, S. C. (2011). Cholinesterase (ChE) inhibition in pumpkinseed (Lepomis gibbosus) as environmental biomarker: ChE characterization and potential neurotoxic effects of xenobiotics. Pesticide Biochemistry and Physiology, 99(2), 181–188. https://doi.org/10.1016/j.pestbp.2010.12.002.
Rusyniak, D. E., & Nanagas, K. A. (2004). Organophosphate poisoning. Seminars in Neurology, 24(2), 197–204. https://doi.org/10.1055/s-2004-830907.
Saaristo, M., Brodin, T., Balshine, S., Bertram, M. G., Brooks, B. W., Ehlman, S. M., … Arnold, K. E. (2018). Direct and indirect effects of chemical contaminants on the behaviour, ecology and evolution of wildlife. Proceedings of the Royal Society B: Biological Science, 285(1885), 20181297. https://doi.org/10.1098/rspb.2018.1297.
Sánchez-Bayo, F., Goka, K., & Hayasaka, D. (2016). Contamination of the aquatic environment with neonicotinoids and its implication for ecosystems. Frontiers in Environmental Science, 4, 71. https://doi.org/10.3389/fenvs.2016.00071.
Sancho, E., Ferrando, M. D., & Andreu, E. (1998). In vivo inhibition of AChE activity in the European eel Anguilla anguilla exposed to technical grade fenitrothion. Comparative Biochemistry and Physiology Part C: Pharmacology, Toxicology and Endocrinology, 120(3), 389–395. https://doi.org/10.1016/S0742-8413(98)10067-1.
Sandahl, J. F., Baldwin, D. H., Jenkins, J. J., & Scholz, N. L. (2005). Comparative thresholds for acetylcholinesterase inhibition and behavioral impairment in coho salmon exposed to chlorpyrifos. Environmental Toxicology and Chemistry, 24(1), 136–145. https://doi.org/10.1897/04-195R.1.
Satyanarayan, S., Bejankiwar, R.S., Chaudhari, P.R., Kotangale, J.P., & Satyanarayan, A. (2004). Impact of some chlorinated pesticides on the haematology of the fish Cyprinus carpio and Puntius ticto. Journal of Environmental Sciences, 16 (4), 631–634. https://www.ncbi.nlm.nih.gov/pubmed/15495970. Accessed 01 July 2019.
Schwaiger, J., Wanke, R., Adam, S., Pawert, M., Honnen, W., & Triebskorn, R. (1997). The use of histopathological indicators to evaluate contaminant-related stress in fish. Journal of Aquatic Ecosystem Stress and Recovery, 6(1), 75–86. https://doi.org/10.1023/A:1008212000208.
Sharbidre, A. A., Metkari, V., & Patode, P. (2011). Effect of methyl parathion and chlorpyrifos on certain biomarkers in various tissues of guppy fish, Poecilia reticulate. Pesticide Biochemistry and Physiology, 101, 132–141. https://doi.org/10.1016/j.pestbp.2011.09.002.
Tabinda, A. B., Khan, M. A., Hany, O., Ayub, M., Hussain, M., Yasar, A., & Khan, M. A. (2003). Rate of oxygen consumption in fingerlings of major carps at different temperatures. Pakistan Journal of Biological Sciences, 6, 1535–1539. https://doi.org/10.3923/pjbs.2003.1535.1539.
Thophon, S. M., Kruatrachue, E. S., Upathan, P., Pokethitiyook, S., Sahaphong, S., & Jarikhuan, S. (2003). Histopathological alterations of white seabass, Lates calcarifer in acute and subchronic cadmium exposure. Environmental Pollution, 121(3), 307–320. https://doi.org/10.1016/S0269-7491(02)00270-1.
Valbonesi, P., Brunelli, F., Mattioli, M., Rossi, T., & Fabbri, E. (2011). Cholinesterase activities and sensitivity to pesticides in different tissues of silver European eel, Anguilla anguilla. Comparative Biochemistry and Physiology - Part C: Toxicology & Pharmacology, 154(4), 353–359. https://doi.org/10.1016/j.cbpc.2011.07.003.
Varo, I., Serrano, R., Pitarch, E., Amat, F., Lopez, F. J., & Navarro, J. C. (2002). Bioaccumulation of chlorpyrifos through an experimental food chain: Study of protein HSP70 as biomarker of sublethal stress in fish. Archives of Environmental Contamination and Toxicology, 42(2), 229–235. https://doi.org/10.1007/s00244-001-0013-6.
Wikipedia contributors. Red blood cell. 2019. In Wikipedia, The Free Encyclopedia. https://en.wikipedia.org/w/index.php?title=Red_blood_cell&oldid=896416362. Accessed 31 June, 2019.
Yancheva, V., Velcheva, I., Stoyanova, S., & Georgieva, E. (2016). Histological biomarkers in fish as a tool in ecological risk assessment and monitoring programs: A review. Applied Ecology and Environmental Research, 14(1), 47–75. https://doi.org/10.15666/aeer/1401_047075.
Yang, L., Zha, J., Li, W., Li, Z., & Wang, Z. (2010). Atrazine affects kidney and adrenal hormones (AHs) related genes expressions of rare minnow (Gobiocypris rarus). Aquatic Toxicology, 97(3), 204–211. https://doi.org/10.1016/j.aquatox.2009.09.005.
Zahra, K. (2017). Effects of environmental pollution on fish: A short review. Transylvanian review of systematical and. Ecological Research, 19(1), 49–60. https://doi.org/10.1515/trser-2017-0005.
Zodrow, J. M., Stegeman, J. J., & Tanguay, R. L. (2004). Histological analysis of acute toxicity of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) in zebra fish. Aquatic Toxicology, 66(1), 25–38. https://doi.org/10.1016/j.aquatox.2003.07.002.
The authors are thankful to the authorized dealer of Mymensingh for providing fish for experimental purpose. The authors also thank Bangladesh Agricultural University, Mymensingh and Government of NST fellowship of Bangladesh in supporting and funding the present research.
The project was funded by Bangladesh Agricultural University Research System, Mymensingh, Bangladesh under the ORCID no. 0000-0001-7122-5299. There are also no contradictions among the authors concerning any personal or professional relationships, affiliations, or beliefs regarding the research discussed in this manuscript.
Department of Fisheries Biology and Genetics, Faculty of Fisheries, Bangladesh Agricultural University, Mymensingh, 2202, Bangladesh
Rabeya Akter, Mst Arzu Pervin, Halima Jahan, Sharmin Ferdewsi Rakhi, A. H. M. Mohsinul Reza & Zakir Hossain
Upazilla Fisheries Office, Kasba, Brahmanbaria, Bangladesh
Sharmin Ferdewsi Rakhi
School of Biological Science, College of Science and Engineering, Flinders University, Adelaide, Australia
A. H. M. Mohsinul Reza
Rabeya Akter
Mst Arzu Pervin
Halima Jahan
RA has planned the experiment, determined the acetylcholinesterase activities, hematological parameters, and drafted the final article. MAP and HJ have determined histopathologies, collected the fish and helped RA to set the experiment. SFR and AHMMR helped in data collection, analysis, and final drafting of the manuscript. ZH critically supervised and helped in experimental planning with the addition of manuscript drafting. The author(s) read and approved the final manuscript.
Correspondence to Zakir Hossain.
All animal procedures and treatments in this experiment were performed in accordance with the welfare recommendations of the code of practice for the care and use of animals for scientific purposes of Bangladesh Agricultural University, as approved by the Animal Welfare and Experimental Ethics Committee, BAU, Mymensingh-2202 (AWEEC/BAU/2019, 32), and with the national guidelines for the care and use of laboratory animals.
The authors declare that there is no conflict of interest of academic or financial nature with any individual or organization.
Akter, R., Pervin, M.A., Jahan, H. et al. Toxic effects of an organophosphate pesticide, envoy 50 SC on the histopathological, hematological, and brain acetylcholinesterase activities in stinging catfish (Heteropneustes fossilis). JoBAZ 81, 47 (2020). https://doi.org/10.1186/s41936-020-00184-w
Envoy 50 SC
Stinging catfish
Fish toxicology
Tissue damage | CommonCrawl |
Given a function $f(x)$ and the interval $[a,b]$, the definite integral is equal to the (signed) area bounded by the graph of $f(x)$, the x-axis, and the vertical lines $x=a$ and $x=b$.
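As a worked instance of this definition (it is the first exercise in the list below), the fundamental theorem of calculus gives
$$\int_1^4 x^2\,dx = \left[\frac{x^3}{3}\right]_1^4 = \frac{64}{3}-\frac{1}{3} = 21.$$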
Evaluate $\int_1^4x^2\:dx$
Evaluate $\int_{61}^{70}\csc^4\left(4v\right)\cot\left(4t\right)\csc\left(4v\right)dv$
Evaluate $\int_0^2\left(x+2\right)dx$
Evaluate $\int_1^2\left(\frac{e^{\frac{1}{x^3}}}{x^4}\right)dx$
Evaluate $\int_0^1\left(15x\sqrt{x^2+4}\right)dx$
Evaluate $\int_0^1\left(x\cdot\sin\left(n\pi\right)\right)dx$
Evaluate $\int_0^1\left(x\cdot\cos\left(n\pi\right)\right)dx$
Evaluate $\int_0^1\left(x\cos\left(n\pi\right)\right)dx$
Evaluate $\int_0^{\infty}\left(\frac{1}{\sqrt{e^x}}\right)dx$
Evaluate $\int_0^{12}2x\sqrt{2x-1}dx$
| CommonCrawl |
Persistence in non-autonomous quasimonotone parabolic partial functional differential equations with delay
DCDS-B Home
Trajectory and global attractors for generalized processes
August 2019, 24(8): 3971-3994. doi: 10.3934/dcdsb.2018339
Stochastic dynamics of cell lineage in tissue homeostasis
Yuchi Qiu 1, , Weitao Chen 2, and Qing Nie 3,,
Department of Mathematics, University of California, Irvine, Irvine, CA 92697, USA
Department of Mathematics, University of California, Riverside, Riverside, CA 92507, USA
Department of Mathematics, Department of Developmental and Cell Biology, University of California, Irvine, Irvine, CA 92697, USA
* Corresponding author: Qing Nie
Contributed in honor of Peter Kloeden on the occasion of his 70th birthday
Received June 2018 Revised August 2018 Published January 2019
Fund Project: This work is supported by the NIH grants U01AR073159, R01GM107264, and R01NS095355; a grant from the Simons Foundation (594598, QN), and the NSF grant DMS1763272, DMS1562176, and DMS1762063
During epithelium tissue maintenance, lineages of cells differentiate and proliferate in a coordinated way to provide the desirable size and spatial organization of different types of cells. While mathematical models through deterministic description have been used to dissect role of feedback regulations on tissue layer size and stratification, how the stochastic effects influence tissue maintenance remains largely unknown. Here we present a stochastic continuum model for cell lineages to investigate how both layer thickness and layer stratification are affected by noise. We find that the cell-intrinsic noise often causes reduction and oscillation of layer size whereas the cell-extrinsic noise increases the thickness, and sometimes, leads to uncontrollable growth of the tissue layer. The layer stratification usually deteriorates as the noise level increases in the cell lineage systems. Interestingly, the morphogen noise, which mixes both cell-intrinsic noise and cell-extrinsic noise, can lead to larger size of layer with little impact on the layer stratification. By investigating different combinations of the three types of noise, we find the layer thickness variability is reduced when cell-extrinsic noise level is high or morphogen noise level is low. Interestingly, there exists a tradeoff between low thickness variability and strong layer stratification due to competition among the three types of noise, suggesting robust layer homeostasis requires balanced levels of different types of noise in the cell lineage systems.
Keywords: Stem cell, noise, tissue size, morphogen, feedback.
Mathematics Subject Classification: Primary: 92B05, 60H15; Secondary: 65C30.
Citation: Yuchi Qiu, Weitao Chen, Qing Nie. Stochastic dynamics of cell lineage in tissue homeostasis. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 3971-3994. doi: 10.3934/dcdsb.2018339
M. Acar, J. T. Mettetal and A. Van Oudenaarden, Stochastic switching as a survival strategy in fluctuating environments, Nature genetics, 40 (2008), 471.Google Scholar
D. Austin, M. Allen, J. McCollum, R. Dar, J. Wilgus, G. Sayler, N. Samatova, C. Cox and M. Simpson, Gene network shaping of inherent noise spectra, Nature, 439(2006), 608.Google Scholar
S. V. Avery, Microbial cell individuality and the underlying sources of heterogeneity, Nature Reviews Microbiology, 4 (2006), 577.Google Scholar
A. Becskei and L. Serrano, Engineering stability in gene networks by autoregulation, Nature, 405 (2000), 590.Google Scholar
W. J. Blake, G. Balaázsi, M. A. Kohanski, F. J. Isaacs, K. F. Murphy, Y. Kuang, C. R. Cantor, D. R. Walt and J. J. Collins, Phenotypic consequences of promoter-mediated transcriptional noise, Molecular Cell, 24 (2006), 853-865. Google Scholar
T. Borovski, E. M. Felipe De Sousa, L. Vermeulen and J. P. Medema, Cancer stem cell niche: The place to be, Cancer Research, 71 (2011), 634-639. Google Scholar
C.-S. Chou, W.-C. Lo, K. K. Gokoffski, Y.-T. Zhang, F. Y. Wan, A. D. Lander, A. L. Calof and Q. Nie, Spatial dynamics of multistage cell lineages in tissue stratification, Biophysical Journal, 99 (2010), 3145-3154. Google Scholar
F. Doetsch, A niche for adult neural stem cells., Development, 13 (2003), 543-550. Google Scholar
H. Du, Y. Wang, D. Haensel, B. Lee, X. Dai and Q. Nie, Multiscale modeling of layer formation in epidermis, PLoS Computational Biology, 14 (2018), e1006006.Google Scholar
A. D. Economou, A. Ohazama, T. Porntaveetus, P. T. Sharpe, S. Kondo, M. A. Basson, A. Gritli-Linde, M. T. Cobourne and J. B. Green, Periodic stripe formation by a Turing mechanism operating at growth zones in the mammalian palate, Nature Genetics, 44 (2012), 348.Google Scholar
M. B. Elowitz, A. J. Levine, E. D. Siggia and P. S. Swain, Stochastic gene expression in a single cell, Science, 297 (2002), 1183-1186. Google Scholar
L. Gammaitoni, P. Haänggi, P. Jung and F. Marchesoni, Stochastic resonance, Reviews of Modern Physics, 70 (1998), 223.Google Scholar
H. Ge, H. Qian and X. S. Xie, Stochastic phenotype transition of a single cell in an intermediate region of gene state switching, Physical Review Letters, 114 (2015), 078101.Google Scholar
J. Hasty, J. Pradines, M. Dolnik and J. J. Collins, Noise-based switches and amplifiers for gene expression., Proceedings of the National Academy of Sciences, 97 (2000), 2075-2080. Google Scholar
D. Huh and J. Paulsson, Non-genetic heterogeneity from stochastic partitioning at cell division, Nature Genetics, 43 (2011), 95.Google Scholar
A. Jentzen and P. E. Kloeden, Taylor Approximations for Stochastic Partial Differential Equations, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2011. doi: 10.1137/1.9781611972016. Google Scholar
M. Kærn, T. C. Elston, W. J. Blake and J. J. Collins, Stochasticity in gene expression: From theories to phenotypes, Nature Reviews Genetics, 6 (2005), 451.Google Scholar
D. C. Kirouac, G. J. Madlambayan, M. Yu, E. A. Sykes, C. Ito and P. W. Zandstra, Cell-cell interaction networks regulate blood stem and progenitor cell fate, Molecular Systems Biology, 5 (2009), 293.Google Scholar
P. E. Kloeden, The Numerical Solution of Stochastic Differenttial Equations, Springer-Verlag, Berlin, 1992. doi: 10.1007/978-3-662-12616-5. Google Scholar
A. D. Lander, Pattern, growth, and control, Cell, 144 (2011), 955-969. Google Scholar
A. D. Lander, K. K. Gokoffski, F. Y. Wan, Q. Nie and A. L. Calof, Cell lineages and the logic of proliferative control, PLoS Biology, 7 (2009), e1000015.Google Scholar
A. D. Lander, J. Kimble, H. Clevers, E. Fuchs, D. Montarras, M. Buckingham, A. L. Calof, A. Trumpp and T. Oskarsson, What does the concept of the stem cell niche really mean today?, BMC Biology, 10 (2012), 19.Google Scholar
A. Li, S. Figueroa, T.-X. Jiang, P. Wu, R. Widelitz, Q. Nie and C.-M. Chuong, Diverse feather shape evolution enabled by coupling anisotropic signalling modules with self organizing branching programme, Nature Communications, 8 (2017), ncomms14139.Google Scholar
L. Li and T. Xie, Stem cell niche: Structure and function, Annu. Rev. Cell Dev. Biol., 21 (2005), 605-631. Google Scholar
C.-M. Lin, T. X. Jiang, R. E. Baker, P. K. Maini, R. B. Widelitz and C.-M. Chuong, Spots and stripes: pleomorphic patterning of stem cells via p-ERK-dependent cell chemotaxis shown by feather morphogenesis and mathematical simulation, Developmental Biology, 334 (2009), 369-382. Google Scholar
W.-C. Lo, C.-S. Chou, K. K. Gokoffski, F. Y.-M. Wan, A. D. Lander, A. L. Calof and Q. Nie, Feedback regulation in multistage cell lineages, Mathematical Biosciences and Engineering: MBE, 6 (2009), 59-82. doi: 10.3934/mbe.2009.6.59. Google Scholar
F. Luciani, D. Champeval, A. Herbette, L. Denat, B. Aylaj, S. Martinozzi, R. Ballotti, R. Kemler, C. R. Goding and F. De Vuyst, Biological and mathematical modeling of melanocyte development, Development, 138 (2011), 3943-3954. Google Scholar
A. Marciniak-Czochra, T. Stiehl, A. D. Ho, W. Jaäger and W. Wagner, Modeling of asymmetric cell division in hematopoietic stem cells-regulation of self-renewal is essential for efficient repopulation, Stem Cells and Development, 18 (2009), 377-386. Google Scholar
H. H. McAdams and A. Arkin, Stochastic mechanisms in gene expression, Proceedings of the National Academy of Sciences, 94 (1997), 814-819. Google Scholar
S. McCroskery, M. Thomas, L. Maxwell, M. Sharma and R. Kambadur, Myostatin negatively regulates satellite cell activation and self-renewal, The Journal of Cell Biology, 162 (2003), 1135-1147. Google Scholar
M. D. McDonnell and D. Abbott, What is stochastic resonance? Definitions, misconceptions, debates, and its relevance to biology, PLoS Computational Biology, 5 (2009), e1000348, 9pp. doi: 10.1371/journal.pcbi.1000348. Google Scholar
F. L. Moolten and N. L. Bucher, Regeneration of rat liver: Transfer of humoral agent by cross circulation, Science, 158 (1967), 272-274. Google Scholar
K. A. Moore and I. R. Lemischka, Stem cells and their niches, Science, 311 (2006), 1880-1885. Google Scholar
J. Ovadia and Q. Nie, Numerical Methods for Two-Dimensional Stem Cell Tissue Growth, Journal of Scientific Computing, 58 (2014), 149-175. doi: 10.1007/s10915-013-9728-6. Google Scholar
J. Ovadia and Q. Nie, Stem cell niche structure as an inherent cause of undulating epithelial morphologies., Biophysical Journal, 104 (2013), 237-246. Google Scholar
C. Rackauckas, T. Schilling and Q. Nie, Mean-independent noise control of cell fates via intermediate states, iScience, 3 (2018), 11-20. Google Scholar
C. V. Rao, D. M. Wolf and A. P. Arkin, Control, exploitation and tolerance of intracellular noise, Nature, 420 (2002), 231.Google Scholar
J. Raspopovic, L. Marcon, L. Russo and J. Sharpe, Digit patterning is controlled by a Bmp-Sox9-Wnt Turing network modulated by morphogen gradients, Science, 345 (2014), 566-570. Google Scholar
T. Ruiz-Herrero, K. Alessandri, B. V. Gurchenkov, P. Nassoy and L. Mahadevan, Organ size control via hydraulically gated oscillations, Development, 144 (2017), 4422-4427. Google Scholar
M. L. Simpson, C. D. Cox, M. S. Allen, J. M. McCollum, R. D. Dar, D. K. Karig and J. F. Cooke, Noise in biological circuits, Wiley Interdisciplinary Reviews: Nanomedicine and Nanobiotechnology, 1 (2009), 214-225. Google Scholar
C. L. Stokes, D. A. Lauffenburger and S. K. Williams, Migration of individual microvessel endothelial cells: stochastic model and parameter measurement, Journal of Cell Science, 99 (1991), 419-430. Google Scholar
M. Thattai and A. Van Oudenaarden, Stochastic gene expression in fluctuating environments, Genetics, 167 (2004), 523-530. Google Scholar
T. Tumbar, G. Guasch, V. Greco, C. Blanpain, W.E. Lowry, M. Rendl and E. Fuchs, Defining the epithelial stem cell niche in skin, Science, 303 (2004), 359-363. Google Scholar
L. Wang, J. Xin and Q. Nie, A critical quantity for noise attenuation in feedback systems, PLoS Computational Biology, 6 (2010), e1000764, 17pp. doi: 10.1371/journal.pcbi.1000764. Google Scholar
Q. Wang, W. R. Holmes, J. Sosnik, T. Schilling and Q. Nie, Cell sorting and noise-induced cell plasticity coordinate to sharpen boundaries between gene expression domains, PLoS Computational Biology, 13 (2017), e1005307.Google Scholar
H.-H. Wu, S. Ivkovic, R. C. Murray, S. Jaramillo, K. M. Lyons, J. E. Johnson and A. L. Calof, Autoregulation of neurogenesis by GDF11, Neuron, 37 (2003), 197-207. Google Scholar
T.-H. Yen and N. A. Wright, The gastrointestinal tract stem cell niche, Stem Cell Reviews, 2 (2006), 203-212. Google Scholar
J. Zhang, C. Niu, L. Ye, H. Huang, X. He, W.-G. Tong, J. Ross, J. Haug, T. Johnson and J. Q. Feng, Identification of the haematopoietic stem cell niche and control of the niche size., Nature, 425(2003), 836.Google Scholar
L. Zhang, K. Radtke, L. Zheng, A. Q. Cai, T. F. Schilling and Q. Nie, Noise drives sharpening of gene expression boundaries in the zebrafish hindbrain, Molecular Systems Biology, 8 (2012), 613.Google Scholar
Figure 1. A schematic diagram of a main cell lineage in epithelium. Stem cells and TA cells proliferate with probabilities $ p_0 $ and $ p_1 $ and differentiate with probabilities $ 1-p_0 $ and $ 1-p_1 $. TD cells undergo cell death with rate $ d_2 $. All three types of cells can secrete molecule A that inhibits self-renewal probability $ p_0 $. TD and TA cells secrete molecule G that inhibits self-renewal probability $ p_1 $. Molecules A and G are diffusive in the epithelium. The apical surface is moving with the dynamic position $ z_{\max} $ and no-flux boundary condition is imposed. On the other hand, leaky boundary condition is imposed at the basal lamina with its position fixed.
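The caption above, together with the parameters in Table 3, suggests a standard three-stage lineage with negative feedback from the secreted molecules A and G onto the self-renewal probabilities. The sketch below is a deliberately simplified, well-mixed (non-spatial, noise-free) reduction written for illustration only: the $1/(1+\gamma x)$ inhibition form, the neglect of diffusion and the leaky boundary, the omission of the stochastic terms of Eqs. (7)-(8), and the nominal treatment of units are assumptions here rather than the paper's actual Eqs. (2)-(7).

import numpy as np

# Parameters from Table 3; time is measured in cell cycles, so nu0 = nu1 = ln 2.
nu0 = nu1 = np.log(2)          # division rates of stem (C0) and TA (C1) cells
d2 = 0.01 * np.log(2)          # death rate of TD (C2) cells
p0_bar, p1_bar = 0.95, 0.5     # maximal self-renewal probabilities
gamma_A, gamma_G = 1.6, 2.0    # feedback strengths (1/muM)
mu = eta = 1e-3                # production rates of A and G (Table 3, treated nominally)
a_deg = g_deg = 1e-3           # degradation rates of A and G (treated nominally)

def rhs(state):
    C0, C1, C2, A, G = state
    p0 = p0_bar / (1.0 + gamma_A * A)      # A inhibits stem-cell self-renewal
    p1 = p1_bar / (1.0 + gamma_G * G)      # G inhibits TA-cell self-renewal
    dC0 = (2.0 * p0 - 1.0) * nu0 * C0
    dC1 = 2.0 * (1.0 - p0) * nu0 * C0 + (2.0 * p1 - 1.0) * nu1 * C1
    dC2 = 2.0 * (1.0 - p1) * nu1 * C1 - d2 * C2
    dA = mu * (C0 + C1 + C2) - a_deg * A   # A is secreted by all three cell types
    dG = eta * (C1 + C2) - g_deg * G       # G is secreted by TA and TD cells only
    return np.array([dC0, dC1, dC2, dA, dG])

# Forward-Euler integration from a small stem-cell seed.  Because the nominal
# morphogen turnover is slow, the approach to homeostasis is gradual; run longer
# to get closer to the steady state.
state = np.array([0.1, 0.0, 0.0, 0.0, 0.0])
dt = 0.05
for _ in range(40_000):                    # integrate to t = 2000 (nominal units)
    state = state + dt * rhs(state)

C0, C1, C2, A, G = state
print(f"t=2000: C0={C0:.4f}, C1={C1:.4f}, C2={C2:.4f}, A={A:.4f}, G={G:.4f}")

In this reduction, any nonzero steady state requires the feedback to pull $ p_0 $ to exactly 1/2, which is the proliferative-control logic the figure describes; the spatial and stochastic ingredients studied in the paper are layered on top of this backbone.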
Figure 2. A baseline simulation for the system containing all three kinds of noise. The spatial distribution of three types of cells and different morphogens at four different time points: A. t = 0; B. t = 330; C. t = 860; D. t = 1200. E. Layer thickness in one particular stochastic simulation. F. Stratification factor of stem cells ($ sf(C_0) $). G. Stratification factor of TA cells ($ sf(C_1) $). In E-G, the black dashed line is the steady-state value for the corresponding quantities in the deterministic system. The noise levels used are $ \varepsilon_0 = \varepsilon_1 = 0.6 $, $ \sigma_0 = \sigma_1 = 10^{-4} $, and $ \omega_0 = \omega_1 = 0.58 $.
Figure 3. Simulations with only cell-intrinsic noise. Dash lines represent the corresponding quantities at homeostasis. A. Layer thickness in three simulations with $ \varepsilon = 0.2 $, $ 0.6 $ and $ 1 $. B. The mean $ TH $. The error bars show the standard deviation. C. The mean $ CV $. The error bars show the standard deviation of $ CV $. The mean $ SF $ of D. stem cells and E. TA cells. The error bars show the standard deviation. F. Distribution of cells and morphogens in a specific simulation with $ \varepsilon = 0.6 $ at time $ t = 400 $. In (B-E), all statistical quantities are captured based on $ 20 $ simulations, and the standard deviations (error bars) are negligible compared to the means.
Figure 4. Simulations with only cell-extrinsic noise. Dash lines represent the corresponding quantities at homeostasis. A. Layer thickness in three simulations with $ \sigma = 1\times 10^{-3} $, $ 2\times 10^{-3} $ and $ 4\times 10^{-3} $. B. The mean $ TH $. The error bars show the standard deviation. C. The mean $ CV $. The error bars show the standard deviation of $ CV $. The mean $ SF $ of D. stem cells and E. TA cells. The error bars show the standard deviation. F. Distribution of cells and morphogens in a specific simulation with $ \sigma = 3\times 10^{-3} $ at time $ t = 400 $. In (B-E), all statistical quantities are captured based on $ 20 $ simulations, and the standard deviations (error bars) are negligible compared to the means.
Figure 5. Simulations with only morphogens noise. Dash lines represent the corresponding quantities at homeostasis. A. Layer thickness in three simulations with $ \omega = 0.4 $, $ 0.6 $ and $ 1 $. B. The mean $ TH $. The error bars show the standard deviation. C. The mean $ CV $. The error bars show the standard deviation of $ CV $. The mean $ SF $ of D. stem cells and E. TA cells. The error bars show the standard deviation. F. Distribution of cells and morphogens in a specific simulation with $ \omega = 0.6 $ at time $ t = 400 $. In (B-E), all statistical quantities are captured based on $ 20 $ simulations, and the standard deviations (error bars) are negligible compared to the means.
Figure 6. Simulations with both cell-intrinsic noise and cell-extrinsic noise. Simulations with different noise levels are shown in (A-I). In each subfigure, the top panel shows the dynamics of layer thickness and the bottom panel shows the dynamics of layer stratification of stem cells ($ sf(C_0) $). The dashed line represents the corresponding quantity at homeostasis. Three different levels are chosen for each type of noise. For cell-intrinsic noise level $ \varepsilon $: $ 0.2 $ (Low), $ 0.6 $ (Medium), $ 1 $ (High). For cell-extrinsic noise level $ \sigma $: $ 5\times 10^{-4} $ (Low), $ 1\times 10^{-3} $ (Medium), $ 2\times 10^{-3} $ (High).
Figure 7. Simulations with both cell-intrinsic noise and morphogen noise. Simulations with different noise levels are shown in (A-I). In each subfigure, the top panel shows the dynamics of layer thickness and the bottom panel shows the dynamics of layer stratification of stem cells ($ sf(C_0) $). The dashed line represents the corresponding quantity at homeostasis. Three different levels are chosen for each type of noise. For cell-intrinsic noise level $ \varepsilon $: $ 0.2 $ (Low), $ 0.6 $ (Medium), $ 1 $ (High). For morphogen noise level $ \omega $: $ 0.2 $ (Low), $ 0.6 $ (Medium), $ 1 $ (High).
Figure 8. Simulations for maintaining homeostasis ($ SS $ = 0.49 mm) with different combinations of three types of noise. Points with the same color and the same marker represent simulations with the same cell-intrinsic noise level $ \varepsilon $, where $ \varepsilon = 0.2 $, $ 0.4 $, $ 0.6 $, $ 0.8 $ and $ 1 $ respectively. The strips, filled with color gradient, roughly divide the plane into several regions. Data points located in the region next to dark/light color of an individual strip have more/less desirable properties. A. The relation between the cell-extrinsic noise level $ \sigma $ and the morphogen noise level $ \omega $. The blue strip sketches the green points with maximal cell-intrinsic noise level $ \varepsilon = 1 $. It divides this plane into stabilized region (region Ⅰ-Ⅳ) and non-stabilized region (region Ⅴ). The stabilized region is divided into four parts (region Ⅰ-Ⅳ) by a red strip and a green strip. These regions will be introduced next. B. The relation between layer thickness variability ($ CV $) and layer stratification factor of stem cells ($ SF(C_0) $). The red strip with $ CV = 20% $ divides this plane into two regions with low $ CV $ or high $ CV $. Also the green strip with $ SF(C_0) = 0.4 $ divides the plane into two regions with high $ SF $ or low $ SF $. The red and the green strips together divide the stabilized region into four regions (Region Ⅰ: low $ CV $ and high $ SF $; Region Ⅱ: high $ CV $ and high $ SF $; Region Ⅲ: low $ CV $ and low $ SF $; Region Ⅳ: high $ CV $ and low $ SF $). C. The relation between $ \sigma $ and $ CV $. D. The relation between $ \omega $ and $ CV $. E. The relation between $ \sigma $ and $ SF(C_0) $. F. The relation between $ \omega $ and $ SF(C_0) $.
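For reference, the layer-thickness variability $ CV $ used in this figure is presumably the usual coefficient of variation of the thickness trace; a minimal computation under that assumption (the time series here is synthetic, not the paper's data):

import numpy as np

def coefficient_of_variation(thickness):
    """CV of a layer-thickness time series, returned as a percentage."""
    thickness = np.asarray(thickness, dtype=float)
    return 100.0 * thickness.std() / thickness.mean()

# Synthetic example: thickness fluctuating around the homeostatic value 0.49 mm.
rng = np.random.default_rng(0)
trace = 0.49 + 0.02 * rng.standard_normal(2000)
print(f"CV = {coefficient_of_variation(trace):.1f}%")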
Table 1. The statistics of $ TH $, $ CV $ and $ SF(C_0) $ with combined cell-intrinsic ($ \varepsilon $) and cell-extrinsic ($ \sigma $) noise. All quantities are captured based on $ 20 $ simulations.
Columns ($ \sigma $): $ 0 $   $ 5\times10^{-4} $   $ 1\times10^{-3} $   $ 2\times10^{-3} $
$ \varepsilon = 0 $, $ TH $: $ 0.49 $ mm   $ 0.53 $ mm   $ 0.58 $ mm   $ 0.75 $ mm
$ \varepsilon = 0 $, $ CV $: $ 0% $   $ 1% $   $ 3% $   $ 7% $
$ \varepsilon = 0 $, $ SF $: $ 0.91 $   $ 0.90 $   $ 0.88 $   $ 0.40 $
$ \varepsilon = 0.2 $, $ TH $: $ 0.45 $ mm   $ 0.49 $ mm   $ 0.54 $ mm   $ 0.69 $ mm
$ \varepsilon = 0.2 $, $ CV $: $ 30% $   $ 25% $   $ 23% $   $ 17% $
Table 2. The statistics of $ TH $, $ CV $ and $ SF(C_0) $ with combined cell-intrinsic ($ \varepsilon $) and morphogen ($ \omega $) noise. All quantities are captured based on $ 20 $ simulations.
Columns ($ \omega $): $ 0 $   $ 0.2 $   $ 0.6 $   $ 1 $
$ CV $ $ 0% $ $ 3% $ $ 9% $ $ 11% $
$ CV $ $ 7% $ $ 7% $ $ 11% $ $ 13% $
$ CV $ $ 89% $ $ 87% $ $ 97% $ $ 108% $
Table 3. Parameters used in Eq. (2) to Eq. (7).
Parameters Values Units
$ \nu_0 $, $ \nu_1 $ $ 1 $ $ \ln 2* $(cell cycle)$ ^{-1} $
$ d_2 $ $ 0.01 $ $ \ln 2* $(cell cycle)$ ^{-1} $
$ D_A $, $ D_G $ $ 10^{-5} $ mm$ ^2 $s$ ^{-1} $
$ \mu_0 $, $ \mu_1 $, $ \mu_2 $, $ \eta_1 $, $ \eta_2 $ $ 10^{-3} $ s$ ^{-1}\mu M $
$ a_{\deg} $, $ g_{\deg} $ $ 10^{-3} $ s$ ^{-1} $
$ \alpha_A $, $ \alpha_G $ $ 10 $ mm$ ^{-1} $
$ \bar{p}_0 $ $ 0.95 $ -
$ \bar{p}_1 $ $ 0.5 $ -
$ \gamma_A $ $ 1.6 $ $ \mu M^{-1} $
$ \gamma_G $ $ 2 $ $ \mu M^{-1} $
Table 4. Noise levels used in Eq. (7) and (8) in different figures.
$ \varepsilon_0 $, $ \varepsilon_1 $ $ \sigma_0 $, $ \sigma_1 $ $ \omega_0 $, $ \omega_1 $
Figure 2 $ 0.6 $ $ 10^{-4} $ 0.58
Figure 3F $ 0.6 $ $ 0 $ $ 0 $
Figure 4F $ 0 $ $ 3\times10^{-3} $ $ 0 $
Figure 5F $ 0 $ $ 0 $ $ 0.6 $
Figure 6 Low; $ 0.2 $ Low: $ 5\times 10^{-4} $ $ 0 $
Medium: $ 0.6 $ Medium: $ 1\times10^{-3} $
High: $ 1 $ High: $ 2\times10^{-3} $
Figure 7 Low: $ 0.2 $ $ 0 $ Low: $ 0.2 $
Medium: $ 0.6 $ Medium: $ 0.6 $
High: $ 1 $ High: $ 1 $
David Iron, Adeela Syed, Heidi Theisen, Tamas Lukacsovich, Mehrangiz Naghibi, Lawrence J. Marsh, Frederic Y. M. Wan, Qing Nie. The role of feedback in the formation of morphogen territories. Mathematical Biosciences & Engineering, 2008, 5 (2) : 277-298. doi: 10.3934/mbe.2008.5.277
Yangjin Kim, Hans G. Othmer. Hybrid models of cell and tissue dynamics in tumor growth. Mathematical Biosciences & Engineering, 2015, 12 (6) : 1141-1156. doi: 10.3934/mbe.2015.12.1141
Keith E. Howard. A size structured model of cell dwarfism. Discrete & Continuous Dynamical Systems - B, 2001, 1 (4) : 471-484. doi: 10.3934/dcdsb.2001.1.471
Qiaojun Situ, Jinzhi Lei. A mathematical model of stem cell regeneration with epigenetic state transitions. Mathematical Biosciences & Engineering, 2017, 14 (5&6) : 1379-1397. doi: 10.3934/mbe.2017071
Oleg U. Kirnasovsky, Yuri Kogan, Zvia Agur. Resilience in stem cell renewal: development of the Agur--Daniel--Ginosar model. Discrete & Continuous Dynamical Systems - B, 2008, 10 (1) : 129-148. doi: 10.3934/dcdsb.2008.10.129
Tomas Alarcon, Philipp Getto, Anna Marciniak-Czochra, Maria dM Vivanco. A model for stem cell population dynamics with regulated maturation delay. Conference Publications, 2011, 2011 (Special) : 32-43. doi: 10.3934/proc.2011.2011.32
Wing-Cheong Lo, Ching-Shan Chou, Kimberly K. Gokoffski, Frederic Y.-M. Wan, Arthur D. Lander, Anne L. Calof, Qing Nie. Feedback regulation in multistage cell lineages. Mathematical Biosciences & Engineering, 2009, 6 (1) : 59-82. doi: 10.3934/mbe.2009.6.59
Jan Kelkel, Christina Surulescu. On some models for cancer cell migration through tissue networks. Mathematical Biosciences & Engineering, 2011, 8 (2) : 575-589. doi: 10.3934/mbe.2011.8.575
Christian Engwer, Markus Knappitsch, Christina Surulescu. A multiscale model for glioma spread including cell-tissue interactions and proliferation. Mathematical Biosciences & Engineering, 2016, 13 (2) : 443-460. doi: 10.3934/mbe.2015011
Mostafa Adimy, Fabien Crauste. Modeling and asymptotic stability of a growth factor-dependent stem cell dynamics model with distributed delay. Discrete & Continuous Dynamical Systems - B, 2007, 8 (1) : 19-38. doi: 10.3934/dcdsb.2007.8.19
Mostafa Adimy, Abdennasser Chekroun, Tarik-Mohamed Touaoula. Age-structured and delay differential-difference model of hematopoietic stem cell dynamics. Discrete & Continuous Dynamical Systems - B, 2015, 20 (9) : 2765-2791. doi: 10.3934/dcdsb.2015.20.2765
Qi Wang, Lifang Huang, Kunwen Wen, Jianshe Yu. The mean and noise of stochastic gene transcription with cell division. Mathematical Biosciences & Engineering, 2018, 15 (5) : 1255-1270. doi: 10.3934/mbe.2018058
Richard L Buckalew. Cell cycle clustering and quorum sensing in a response / signaling mediated feedback model. Discrete & Continuous Dynamical Systems - B, 2014, 19 (4) : 867-881. doi: 10.3934/dcdsb.2014.19.867
Arthur D. Lander, Qing Nie, Frederic Y. M. Wan. Spatially Distributed Morphogen Production and Morphogen Gradient Formation. Mathematical Biosciences & Engineering, 2005, 2 (2) : 239-262. doi: 10.3934/mbe.2005.2.239
H. T. Banks, R.C. Smith. Feedback control of noise in a 2-D nonlinear structural acoustics model. Discrete & Continuous Dynamical Systems - A, 1995, 1 (1) : 119-149. doi: 10.3934/dcds.1995.1.119
József Z. Farkas, Thomas Hagen. Asymptotic analysis of a size-structured cannibalism model with infinite dimensional environmental feedback. Communications on Pure & Applied Analysis, 2009, 8 (6) : 1825-1839. doi: 10.3934/cpaa.2009.8.1825
Pavol Bokes. Maintaining gene expression levels by positive feedback in burst size in the presence of infinitesimal delay. Discrete & Continuous Dynamical Systems - B, 2017, 22 (11) : 1-14. doi: 10.3934/dcdsb.2019070
Orit Lavi, Doron Ginsberg, Yoram Louzoun. Regulation of modular Cyclin and CDK feedback loops by an E2F transcription oscillator in the mammalian cell cycle. Mathematical Biosciences & Engineering, 2011, 8 (2) : 445-461. doi: 10.3934/mbe.2011.8.445
Ying Hao, Fanwen Meng. A new method on gene selection for tissue classification. Journal of Industrial & Management Optimization, 2007, 3 (4) : 739-748. doi: 10.3934/jimo.2007.3.739
M.A.J Chaplain, G. Lolas. Mathematical modelling of cancer invasion of tissue: dynamic heterogeneity. Networks & Heterogeneous Media, 2006, 1 (3) : 399-439. doi: 10.3934/nhm.2006.1.399
Yuchi Qiu Weitao Chen Qing Nie | CommonCrawl |
OTHER ${{\mathit H}^{0}}$ PRODUCTION PROPERTIES
${{\mathit H}^{0}}{{\mathit H}^{0}}$ Production
The 95$\%$ CL limits are for the cross section (CS) and Higgs self coupling (${{\mathit \kappa}_{{\lambda}}}$ ) scaling factors both relative to the SM predictions.
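Spelling the conventions out (see also note 8 below), the tabulated quantities are
$$\text{CS} = \frac{\sigma(pp \to H^0 H^0)}{\sigma_{\rm SM}(pp \to H^0 H^0)}, \qquad \kappa_\lambda = \frac{\lambda_{HHH}}{\lambda_{HHH}^{\rm SM}},$$
so, for example, the SIRUNYAN 2021K entry of $<7.7$ corresponds to the quoted absolute limit of 0.67 fb on $\sigma(pp \to H^0 H^0 \to \gamma\gamma b\overline{b})$ together with an SM prediction of roughly $0.67/7.7 \approx 0.087$ fb implied by those two numbers.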
CS  ${{\mathit \kappa}_{{\lambda}}}$  CL%  DOCUMENT ID  TECN, COMMENT
$<7.7$  $-3.3$ to 8.5  95  1  SIRUNYAN 2021K  CMS 13 TeV, ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$
$<6.9$  $-5.0$ to 12.0  95  2  AAD 2020C  ATLS 13 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \gamma}}{{\mathit \gamma}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit W}}{{\mathit W}^{*}}$ , ${{\mathit W}}{{\mathit W}^{*}}{{\mathit \gamma}}{{\mathit \gamma}}$ , ${{\mathit W}}{{\mathit W}^{*}}{{\mathit W}}{{\mathit W}^{*}}$
$<40$  95  3  AAD 2020E  ATLS 13 TeV, ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \ell}}{{\mathit \nu}}{{\mathit \ell}}{{\mathit \nu}}$
$<840$  95  4  AAD 2020X  ATLS 13 TeV, VBF, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$
$<12.9$  95  5  AABOUD 2019A  ATLS 13 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$
...  6  AABOUD 2019O  ATLS 13 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit W}}{{\mathit W}^{*}}$
...  7  AABOUD 2019T  ATLS 13 TeV, ${{\mathit W}}{{\mathit W}^{*}}{{\mathit W}}{{\mathit W}^{*}}$
$<24$  $-11$ to 17  95  8  SIRUNYAN 2019  CMS 13 TeV, ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$
...  9  SIRUNYAN 2019AB  CMS 13 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$
$<22.2$  $-11.8$ to 18.8  95  10  SIRUNYAN 2019BE  CMS 13 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \gamma}}{{\mathit \gamma}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit W}}{{\mathit W}^{*}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit Z}}{{\mathit Z}^{*}}$
$<179$  95  11  SIRUNYAN 2019H  CMS 13 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$
...  12  AABOUD 2018BU  ATLS 13 TeV, ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit W}}{{\mathit W}^{*}}$
$<12.7$  95  13  AABOUD 2018CQ  ATLS 13 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$
$<22$  $-8.2$ to 13.2  95  14  AABOUD 2018CW  ATLS 13 TeV, ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$
$<30$  95  15  SIRUNYAN 2018A  CMS 13 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$
...  16  SIRUNYAN 2018F  CMS 13 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \ell}}{{\mathit \nu}}{{\mathit \ell}}{{\mathit \nu}}$
...  17  SIRUNYAN 2017CN  CMS 8 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ , ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$
...  18  AABOUD 2016I  ATLS 13 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$
...  19  KHACHATRYAN 2016BQ  CMS 8 TeV, ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$
...  20  AAD 2015CE  ATLS 8 TeV, ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ , ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ , ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit W}}{{\mathit W}}$
1 SIRUNYAN 2021K search for non-resonant ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ with data of 137 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ production cross section at 95$\%$ CL is measured to be 0.67 fb, which corresponds to about 7.7 times the SM prediction. The quartic coupling ( ${{\mathit V}}{{\mathit V}}{{\mathit H}^{0}}{{\mathit H}^{0}}$ , ${{\mathit V}}$ = ${{\mathit W}}$ ,${{\mathit Z}}$ ) scaling factor ${{\mathit \kappa}_{{2V}}}$ (= ${{\mathit c}_{{2V}}}$ ) is measured to be $-1.3$ $<$ ${{\mathit \kappa}_{{2V}}}$ $<$ 3.5 at 95$\%$ CL.
2 AAD 2020C combine results of up to 36.1 fb${}^{-1}$ data at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV for ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \gamma}}{{\mathit \gamma}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ , ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit W}}{{\mathit W}^{*}}$ , ${{\mathit W}}{{\mathit W}^{*}}{{\mathit \gamma}}{{\mathit \gamma}}$ , ${{\mathit W}}{{\mathit W}^{*}}{{\mathit W}}{{\mathit W}^{*}}$ (AABOUD 2018CW, AABOUD 2018CQ, AABOUD 2019A, AABOUD 2019O, AABOUD 2018BU, and AABOUD 2019T).
3 AAD 2020E search for non-resonant ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \ell}}{{\mathit \nu}}{{\mathit \ell}}{{\mathit \nu}}$ , where one of the Higgs bosons decays to ${{\mathit b}}{{\overline{\mathit b}}}$ and the other decays to either ${{\mathit W}}{{\mathit W}^{*}}$ , ${{\mathit Z}}{{\mathit Z}^{*}}$ , or ${{\mathit \tau}}{{\mathit \tau}}$ , with data of 139 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production cross section at 95$\%$ CL is measured to be 1.2 pb, which corresponds to about 40 times the SM prediction.
4 AAD 2020X search for ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ process via VBF with data of 126 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the SM non-resonant ${{\mathit H}}{{\mathit H}}$ production cross section is 1460 fb at 95$\%$ CL, which corresponds to 840 times the SM prediction. The quartic coupling ( ${{\mathit V}}{{\mathit V}}{{\mathit H}^{0}}{{\mathit H}^{0}}$ , ${{\mathit V}}$ = ${{\mathit W}}$ ,${{\mathit Z}}$ ) scaling factor ${{\mathit \kappa}_{{2V}}}$ is excluded in the region of ${{\mathit \kappa}_{{2V}}}$ $<$ $-0.43$ or ${{\mathit \kappa}_{{2V}}}$ $>$ $2.56$ at 95$\%$ CL.
5 AABOUD 2019A search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ with data of 36.1 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ production cross section at 95$\%$ is measured to be 147 fb, which corresponds to about 12.9 times the SM prediction.
6 AABOUD 2019O search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit W}}{{\mathit W}^{*}}$ with data of 36.1 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production cross section at 95$\%$ CL is calculated to be 10 pb from the observed upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit W}}{{\mathit W}^{*}}$ production cross section of 2.5 pb assuming the SM branching fractions. The former corresponds to about 300 times the SM prediction.
7 AABOUD 2019T search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit W}}{{\mathit W}^{*}}{{\mathit W}}{{\mathit W}^{*}}$ with data of 36.1 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production cross section at 95$\%$ is measured to be 5.3 pb, which corresponds to about 160 times the SM prediction.
8 SIRUNYAN 2019 search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ with data of 35.9 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ production cross section at 95$\%$ CL is measured to be 2.0 fb, which corresponds to about 24 times the SM prediction. The effective Higgs boson self-coupling $\kappa _{\lambda }$ ( = $\lambda _{ {{\mathit H}} {{\mathit H}} {{\mathit H}} }$ $/$ $\lambda {}^{SM}_{ {{\mathit H}} {{\mathit H}} {{\mathit H}} }$) is constrained to be $-11$ $<$ $\kappa _{\lambda }$ $<$ $17$ at 95$\%$ CL assuming all other Higgs boson couplings are at their SM value.
9 SIRUNYAN 2019AB search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ , where 4 heavy flavor jets from two Higgs bosons are resolved, with data of 35.9 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ production cross section at 95$\%$ is measured to be 847 fb, which corresponds to about 75 times the SM prediction.
10 SIRUNYAN 2019BE combine results of 13 TeV 35.9 fb${}^{-1}$ data: SIRUNYAN 2019 , SIRUNYAN 2018A, SIRUNYAN 2019AB, SIRUNYAN 2019H, and SIRUNYAN 2018F.
11 SIRUNYAN 2019H search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ , where one of ${{\mathit b}}{{\overline{\mathit b}}}$ pairs is highly boosted and the other one is resolved, with data of 35.9 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ production cross section at 95$\%$ is measured to be 1980 fb, which corresponds to about 179 times the SM prediction.
12 AABOUD 2018BU search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit W}}{{\mathit W}^{*}}$ with the final state of ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit \ell}}{{\mathit \nu}}{{\mathit j}}{{\mathit j}}$ using data of 36.1 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production cross section at 95$\%$ CL is measured to be 7.7 pb, which corresponds to about 230 times the SM prediction. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit W}}{{\mathit W}^{*}}$ at 95$\%$ CL is measured to be 7.5 fb (see thier Table 6).
13 AABOUD 2018CQ search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ with data of 36.1 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ production cross section at 95$\%$ is measured to be 30.9 fb, which corresponds to about 12.7 times the SM prediction.
14 AABOUD 2018CW search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ with data of 36.1 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production cross section at 95$\%$ is measured to be 0.73 pb, which corresponds to about 22 times the SM prediction. The effective Higgs boson self-coupling $\kappa _{\lambda }$ is constrained to be $-8.2$ $<$ $\kappa _{\lambda }$ $<$ $13.2$ at 95$\%$ CL assuming all other Higgs boson couplings are at their SM value.
15 SIRUNYAN 2018A search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ with data of 35.9 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit g}}$ ${{\mathit g}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ production cross section is measured to be 75.4 fb, which corresponds to about 30 times the SM prediction. Limits on Higgs-boson trilinear coupling ${{\mathit \lambda}_{{HHH}}}$ and top Yukawa coupling ${{\mathit y}_{{t}}}$ are also given (see their Fig. 6).
16 SIRUNYAN 2018F search for non-resonant ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \ell}}{{\mathit \nu}}{{\mathit \ell}}{{\mathit \nu}}$ , where ${{\mathit \ell}}{{\mathit \nu}}{{\mathit \ell}}{{\mathit \nu}}$ is either ${{\mathit W}}$ ${{\mathit W}}$ $\rightarrow$ ${{\mathit \ell}}{{\mathit \nu}}{{\mathit \ell}}{{\mathit \nu}}$ or ${{\mathit Z}}$ ${{\mathit Z}}$ $\rightarrow$ ${{\mathit \ell}}{{\mathit \ell}}{{\mathit \nu}}{{\mathit \nu}}$ (${{\mathit \ell}}$ is ${{\mathit e}}$ , ${{\mathit \mu}}$ or a leptonically decaying ${{\mathit \tau}}$ ), with data of 35.9 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \ell}}{{\mathit \nu}}{{\mathit \ell}}{{\mathit \nu}}$ production cross section at 95$\%$ CL is measured to be 72 fb, which corresponds to about 79 times the SM prediction.
17 SIRUNYAN 2017CN search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ with data of 18.3 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. Results are then combined with the published results of the ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ and ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ , which use data of up to 19.7 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. The upper limit on the ${{\mathit g}}$ ${{\mathit g}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production cross section is measured to be 0.59 pb from ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ , which corresponds to about 59 times the SM prediction (gluon fusion). The combined upper limit is 0.43 pb, which is about 43 times the SM prediction. The quoted values are given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV.
18 AABOUD 2016I search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ with data of 3.2 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on the ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ production cross section is measured to be 1.22 pb. This result corresponds to about 108 times the SM prediction (gluon fusion), which is $11.3$ ${}^{+0.9}_{-1.0}$ fb (NNLO+NNLL) including top quark mass effects. The quoted values are given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV .
19 KHACHATRYAN 2016BQ search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ with data of 19.7 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. The upper limit on the ${{\mathit g}}$ ${{\mathit g}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ production is measured to be 1.85 fb, which corresponds to about 74 times the SM prediction and is translated into 0.71 pb for ${{\mathit g}}$ ${{\mathit g}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production cross section. Limits on Higgs-boson trilinear coupling $\lambda $ are also given.
20 AAD 2015CE search for ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production using ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ and ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit W}}{{\mathit W}}$ with data of 20.3 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. These results are then combined with the published results of the ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ and ${{\mathit H}^{0}}$ ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ , which use data of up to 20.3 fb${}^{-1}$ at $\mathit E_{{\mathrm {cm}}}$ = 8 TeV. The upper limits on the ${{\mathit g}}$ ${{\mathit g}}$ $\rightarrow$ ${{\mathit H}^{0}}{{\mathit H}^{0}}$ production cross section are measured to be 1.6 pb, 11.4 pb, 2.2 pb and 0.62 pb from ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit \tau}}{{\mathit \tau}}$ , ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit W}}{{\mathit W}}$ , ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit b}}{{\overline{\mathit b}}}$ and ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit b}}{{\overline{\mathit b}}}$ , respectively. The combined upper limit is 0.69 pb, which corresponds to about 70 times the SM prediction. The quoted results are given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125.4 GeV. See their Table 4.
SIRUNYAN 2021K
JHEP 2103 257 Search for nonresonant Higgs boson pair production in final states with two bottom quarks and two photons in proton-proton collisions at $ \sqrt{s} $ = 13 TeV
AAD 2020X
JHEP 2007 108 Search for the $HH \rightarrow b \bar{b} b \bar{b}$ process via vector-boson fusion production using proton-proton collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector
JHEP 2101 145 (errat.) Search for the $HH \rightarrow b \bar{b} b \bar{b}$ process via vector-boson fusion production using proton-proton collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector
AAD 2020E
PL B801 135145 Search for non-resonant Higgs boson pair production in the $bb\ell\nu\ell\nu$ final state with the ATLAS detector in $pp$ collisions at $\sqrt{s} = 13$ TeV
AAD 2020C
PL B800 135103 Combination of searches for Higgs boson pairs in $pp$ collisions at $\sqrt{s} = $13 TeV with the ATLAS detector
AABOUD 2019O
JHEP 1904 092 Search for Higgs boson pair production in the $b\bar{b}WW^{*}$ decay mode at $\sqrt{s}=13$ TeV with the ATLAS detector
AABOUD 2019A
JHEP 1901 030 Search for pair production of Higgs bosons in the $b\bar{b}b\bar{b}$ final state using proton-proton collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector
AABOUD 2019T
JHEP 1905 124 Search for Higgs boson pair production in the $WW^{(*)}WW^{(*)}$ decay channel using ATLAS data recorded at $\sqrt{s}=13$ TeV
SIRUNYAN 2019AB
JHEP 1904 112 Search for nonresonant Higgs boson pair production in the $\mathrm{b\overline{b}b\overline{b}}$ final state at $\sqrt{s} =$ 13 TeV
SIRUNYAN 2019
PL B788 7 Search for Higgs boson pair production in the $\gamma\gamma\mathrm{b\overline{b}}$ final state in pp collisions at $\sqrt{s}=$ 13 TeV
SIRUNYAN 2019H
JHEP 1901 040 Search for production of Higgs boson pairs in the four b quark final state using large-area jets in proton-proton collisions at $\sqrt{s}=$ 13 TeV
SIRUNYAN 2019BE
PRL 122 121803 Combination of searches for Higgs boson pair production in proton-proton collisions at $\sqrt{s} = $ 13 TeV
AABOUD 2018CW
JHEP 1811 040 Search for Higgs boson pair production in the $\gamma\gamma b\bar{b}$ final state with 13 TeV $pp$ collision data collected by the ATLAS experiment
AABOUD 2018BU
EPJ C78 1007 Search for Higgs boson pair production in the $\gamma\gamma WW^{*}$ channel using $pp$ collision data recorded at $\sqrt{s} = 13$ TeV with the ATLAS detector
AABOUD 2018CQ
PRL 121 191801 Search for resonant and non-resonant Higgs boson pair production in the ${b\bar{b}\tau^+\tau^-}$ decay channel in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector
SIRUNYAN 2018A
PL B778 101 Search for Higgs Boson Pair Production in Events with Two Bottom Quarks and Two Tau Leptons in Proton-Proton Collisions at $\sqrt {s }$ = 13 TeV
SIRUNYAN 2018F
JHEP 1801 054 Search for Resonant and Nonresonant Higgs Boson Pair Production in the ${\mathit {\mathit b}}{\mathit {\overline{\mathit b}}}{{\mathit \ell}}{{\mathit \nu}}{{\mathit \ell}}{{\mathit \nu}}$ Final State in Proton-Proton Collisions at $\sqrt {s }$ = 13 TeV
SIRUNYAN 2017CN
PR D96 072004 Search for Higgs Boson Pair Production in the ${\mathit {\mathit b}}{\mathit {\mathit b}}{{\mathit \tau}}{{\mathit \tau}}$ Final State in Proton-Proton Collisions at $\sqrt {s }$ = 8 TeV
AABOUD 2016I
PR D94 052002 Search for Pair Production of Higgs Bosons in the ${\mathit {\mathit b}}{\mathit {\overline{\mathit b}}}{\mathit {\mathit b}}{\mathit {\overline{\mathit b}}}$ Final State using Proton-Proton Collisions at $\sqrt {s }$ = 13 TeV with the ATLAS Detector
KHACHATRYAN 2016BQ
PR D94 052012 Search for Two Higgs Bosons in Final States Containing Two Photons and Two Bottom Quarks in Proton-Proton Collisions at 8 TeV
AAD 2015CE
PR D92 092004 Searches for Higgs Boson Pair Production in the ${{\mathit h}}$ ${{\mathit h}}$ $\rightarrow$ ${\mathit {\mathit b}}{\mathit {\mathit b}}{{\mathit \tau}}{{\mathit \tau}}$ , ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit W}}{{\mathit W}^{*}}$ , ${{\mathit \gamma}}{{\mathit \gamma}}{\mathit {\mathit b}}{\mathit {\mathit b}}$, ${\mathit {\mathit b}}{\mathit {\mathit b}}{\mathit {\mathit b}}{\mathit {\mathit b}}$ Channels with the ATLAS Detector | CommonCrawl |
Macaques are risk-averse in a freely moving foraging task
Benjamin R. Eisenreich*1,
Benjamin Y. Hayden1 &
Jan Zimmermann1
Scientific Reports volume 9, Article number: 15091 (2019)
Rhesus macaques (Macaca mulatta) appear to be robustly risk-seeking in computerized gambling tasks typically used for electrophysiology. This behavior distinguishes them from many other animals, which are risk-averse, albeit measured in more naturalistic contexts. We wondered whether macaques' risk preferences reflect their evolutionary history or derive from the less naturalistic elements of task design associated with the demands of physiological recording. We assessed macaques' risk attitudes in a task that is somewhat more naturalistic than many that have previously been used: subjects foraged at four feeding stations in a large enclosure. Patches (i.e., stations) provided either stochastically or non-stochastically depleting rewards. Subjects' patch residence times were longer at safe than at risky stations, indicating a preference for safe options. This preference was not attributable to a win-stay-lose-shift heuristic and reversed as the environmental richness increased. These findings highlight the lability of risk attitudes in macaques and support the hypothesis that the ecological validity of a task can influence the expression of risk preference.
Many animals, including humans, prefer sure things to gambles1. The tendency to minimize risk, i.e. unknowable and unpredictable variation, has been a topic of interest from behavioral ecology2,3 to economics4,5 and neuroscience6,7,8,9,10,11. Furthermore, cognitive processes related to decision making in risky contexts underlie many maladaptive behaviors such as addiction and problem gambling12,13. Consequently, understanding risk attitudes in varying contexts provides important insight into the evolutionary origin, and thus the psychological and neural mechanisms, of addiction and maladaptive choice14.
Theoretical and experimental work on risk preference in non-human animals has delineated risk-aversion as a default preference for many species1,15,16,17,18,19. However risk preferences may not be as rigidly fixed as we might imagine; several factors have been demonstrated to shift risk-preference. Internal factors related to energetic states and metabolic processes are facultative on risk-preferences17,20,21,22,23. When faced with the possibility of starvation, many species will increase their tolerance for risk17,21,24. Likewise external factors related to the environmental richness, that is how much food is readily available, shift risk tolerance16,25,26,27. Risk preferences are also sensitive to the reward rate, both the timing of delivery and overall size of rewards15,28,29,30,31. Lastly, whether risk is explicitly cued or learned through experience impacts the expression of risk-preference32,33,34.
Rhesus macaques, the predominant model in neuroscience for understanding human decision making, are robustly risk-seeking in a variety of contexts6,8,35,36,37,38,39. Risk-seeking in macaques persists even when factors known to shift risk preferences are manipulated. For example, altering the cost of engaging in risk-seeking by increasing the inter-trial-interval reduces, but does not eliminate, macaques' preference for risk35. In fact, only one study we know of has reported risk-aversion in rhesus macaques40.
Explanations for why macaques exhibit robust risk-seeking in experimental tasks come in two types. One type of explanation assumes that macaques' risk attitudes are an evolved reflection of their foraging history. This view is supported by observed patterns of risk-seeking in primate species across a variety of experimental methods41,42,43,44. Another possibility is that macaques' risk-seeking is a consequence of experimental tools typically used to measure their risk preferences. The manner in which macaques' risk attitudes are measured is generally different from methods used for other species3,45,46 but it remains unclear how influential that difference is on general risk attitudes. The majority of data on macaque risk preference comes from studies tailored to the needs of electrophysiology, not cross-species comparison. Thus they are tested with rapid trials, often as fast as three seconds per trial, extremely small stakes, abstract stimuli, immediate rewards, overtraining, oculomotor responses, and hundreds or thousands of trials in a few hours. It may be that one of these factors, or some combination thereof, motivates risky choice. Indeed, even humans can become risk-seeking when gambling for small rewards in conditions designed to be similar to those used in non-human primate experiments34,45.
For foraging animals, risk manifests as an embedded component of their environment15,28,47. A macaque foraging for fruit may experience risk as variation in the likelihood of encountering patches of fruit-bearing trees or as variation in the quality and quantity of fruit located at an individual tree. In the former, risk may affect the decision on where to search and which tree to climb, while the latter may impact the decision for how long to reside within a particular patch or tree. Risk is also mitigated or exacerbated by the local dynamics of the foraging environment. These can include both the environmental richness and the movement costs related to the spatial position of food patches48,49. When food is plentiful the energetic cost of engaging in riskier foraging strategies is minimized27. Evolutionary pressures are believed to have shaped the cognitive architecture of foragers to navigate risk within nature and the degree to which experimental tasks match onto the natural dynamics of environment likely impacts the expression of risk50,51,52.
We hypothesized that embedding the experience of risk within a more naturalistic setting would result in macaques expressing risk preferences opposite to the trend of robust risk-seeking. We designed a naturalistic foraging task based on the patch-leaving problem from foraging theory2,53,54. We tested subjects (n = 3) using a single subject design within a large enclosure that allowed for free movement between four different feeding stations. Our task design incorporates risk within the stochasticity of patch harvest rates. Thus, we are able to examine the influence of risk across the use of patch types in addition to within particular patches. We found that macaques were risk-averse under these foraging conditions. We are able to abolish risk-averse preferences by increasing the overall richness of the environment in relation to the amount of variation in a risky patch. Two of the same subjects exhibited risk-seeking in a standard risk task designed for physiological recording, indicating that their risk preferences are task-specific, not individual-specific. Taken together, our results demonstrate the effect of the environmental structure on the expression of risk attitudes in rhesus macaques and highlight the importance of using naturalistic tasks for studying cognitive processes.
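For background, the patch-leaving problem invoked here is classically formalized by the marginal value theorem (this is textbook foraging theory, not an equation taken from the present study): for an environment of identical patches with cumulative gain function $g(t)$ and travel time $\tau$ between patches, the rate-maximizing residence time $t^*$ satisfies
$$g'(t^*) = \frac{g(t^*)}{t^* + \tau},$$
that is, a forager should leave a patch once its instantaneous intake rate falls to the long-run average intake rate of the environment. Making the harvest at a station stochastic perturbs the left-hand side from moment to moment, which is one way to see how risk enters the residence-time decision studied here.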
Subjects and apparatus
Three male rhesus macaques served as subjects for the experiment. Two of the subjects (C and K) had previously served as subjects on standard neuroeconomic tasks, including a set shifting task55, a diet selection task56,57, intertemporal choice tasks58, and a juice gambling task10, while the third subject (Y) was naïve to all experimental procedures. All three subjects were fed ad libitum and pair housed within a light and temperature controlled colony room. Subjects were water restricted to 25 mL/kg for initial training, and readily worked to maintain 50 mL/kg throughout experimental testing. All research and animal care was conducted in accordance with University of Minnesota Institutional Animal Care and Use Committee approval and in accord with National Institutes of Health standards for the care and use of non-human primates.
Subjects were behaviorally tested in a large cage (~3 m × 3 m × 3 m) made from framed panels consisting of 5 cm wire mesh (Fig. 1). This allowed for free movement of the subjects within the cage in three dimensions. Five 208 L drum barrels weighted with sand were placed within the cage to serve as perches for the subjects to sit upon. Four juice feeders were placed, one at each of the four corners of the cage, in a rotationally symmetric alignment. Each juice feeder consisted of a 16 × 16 LED screen, a lever, a buzzer, and a solenoid (Parker Instruments), and was controlled via an Arduino Uno microcontroller. Data were collected in MatLab (Mathworks) via Bluetooth communication with each of the juice feeders.
Subjects were tested within a large wire mesh enclosure. Juice feeders, attached to the walls of the cage in each corner, provided experimental stimuli and rewards. Five barrels served as perches for subjects to sit on during experimental testing.
Previous training history for two of these subjects included two types of foraging tasks57,59, intertemporal choice tasks34,60, two types of gambling tasks10,61, attentional tasks similar to those in62, and two types of reward-based decision tasks63,64.
We first introduced subjects to the large cage and allowed them to acclimate to it. Acclimation consisted of placing subjects within the large cage for progressively longer periods of time over the course of about five weeks. To make the cage environment more positive, we provisioned the subjects with copious food rewards (chopped fruit and vegetables) placed throughout the enclosure. This process ensured that subjects were comfortable with the large cage. We then trained subjects to use the juice dispenser. All three subjects were initially trained to lever press for juice rewards in the testing enclosure. Acquisition of reliable lever pressing took about three weeks. We defined acquisition as obtaining juice rewards in excess of their daily water minimum. After completing lever training, we placed subjects onto the first experimental condition of the freely moving patch-leaving task.
Experimental testing
Working with captive non-human primate subjects imposes unavoidable practical limitations on the number of available subjects. We therefore structured our research design and analysis around a single-subject approach65. Formally, we used a multiple baseline approach for collecting and analyzing behavior in the freely moving patch-leaving task66. We tested subjects on the first experimental condition until five days of consistent behavior were observed. This training period also served as the initial learning period for the task contingencies. The criterion of five days was chosen a priori based on previous studies using foraging tasks59,67. This criterion ensured that the subjects were well trained and had ample opportunity to learn the task contingencies. We defined consistent behavior as similar allocation of lever presses at a juice feeder across days. We measured behavioral consistency as the total amount of juice collected at each feeder across days within a criterion of +/− 5 mL. After observing five days of consistent behavior, we then tested subjects for an additional five days. We then implemented the second experimental condition and repeated the same observation sequence. Throughout both experimental manipulations we used the same criterion of five days of consistent behavior as a metric for ensuring subjects understood the experimental contingencies within a condition and were at a stable state of responding. All subjects were tested in the same order, experiencing the standard environmental condition first and then the rich condition. Post hoc analyses of subject behavior revealed no significant changes across the five days of experimental testing. We performed all analyses on the five days of testing after establishing consistent behavior.
Behavioral tasks
Freely moving patch-leaving task
The freely moving patch-leaving task incorporates the dynamics of the natural environment by using multiple patches and a reward schedule designed to mimic the natural depletion of prey items from a patch the longer a subject forages from it57,67 (Fig. 2). Two of the four feeders diagonally across from each other were designated as variable (risky) feeders, while the other two served as safe feeders and had no variation in reward delivery. Feeders were visually identical, although they could be readily discriminated by their position relative to landmarks outside the cage. The feeder designations remained spatially fixed for each subject across experimental days. Each feeder displayed the total amount of juice available within the patch via a blue bar (8 × 16 LEDs). With each lever press, juice would be delivered and a portion of the blue bar would disappear, explicitly indicating its depletion status. Leaving a feeder to activate any of the other three feeders would cause the previously activated feeder to immediately fully replenish, cued by a full bar being displayed. Subjects were placed within the testing enclosure and allowed to forage freely between the four feeders for two hours each day.
Cartoon depiction of the freely moving patch-leaving risk task design and structure. (A) Subjects choose between four possible patches, two safe and two risky. Risk preferences manifest in the allocation of patch entries between the two patch types. (B) Once in a patch subjects receive rewards according to the predefined reward schedule and must choose to either leave or remain in the patch. Risk preferences at this state are expressed as different patch residence times between risky and safe patches.
Patch reward statistics and risk
Each feeder was programmed to deliver a base reward schedule that decreased by a specified amount. In the standard condition, each feeder delivered a base reward consisting of an initial 2 mL of juice that decreased by 0.125 mL with each subsequent delivery (turn). In the rich condition, the feeders provided 4 mL of juice that decreased by 0.25 mL each turn (Table 1). Risk, here defined as variation in reward amounts, was introduced by programming two of the juice feeders to randomly increase or decrease the juice delivery amount by 1 mL, in addition to the base reward schedule, with a probability of 0.5. Thus, on any given turn, a response at a risky feeder could produce more or less juice than a safe feeder, including no reward at all. Both feeder types delivered rewards following their respective schedules until the base value reached 0, at which point the patch was depleted and no further rewards were delivered. In practice, this depletion process results in identical gain functions over the majority of patch residence times; that is, the long-run expectations of the two feeder schedules are identical. However, because the schedule was bounded at 0 mL, the tail end of the gain function for risky patches does diverge from that of safe patches (Fig. 3).
Table 1 Reward schedules for safe patches and risky patches across the two environmental manipulations, standard and rich.
Gain function, rate as a function of residence time, for safe patches (blue line) and risky patches (red line). The black arrow denotes the abscissa point of the maximum intake rate, and thus the rate-maximizing strategy for both patch types. Due to the programmed variation in reward amounts, the gain function for risky patches diverges slightly from the safe patch at long residence times. This divergence arises because reward amounts are bounded at 0 seconds of solenoid open time.
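For concreteness, the reward schedules just described can be simulated in a few lines. The sketch below (in Python) is our own illustrative reconstruction from the parameters reported here (a 2 mL base decaying by 0.125 mL per press, a ±1 mL perturbation at probability 0.5, and deliveries bounded at 0 mL); it is not the code that ran the feeders.

```python
import random

def simulate_patch(base_start=2.0, decay=0.125, risky=False, rng=random):
    """One patch visit: return the juice (mL) delivered on each lever press
    until the base schedule reaches 0 and the patch is depleted."""
    rewards = []
    base = base_start
    while base > 0:
        amount = base
        if risky:
            # risky feeders add or subtract 1 mL with equal probability
            amount += rng.choice([1.0, -1.0])
        rewards.append(max(amount, 0.0))  # deliveries cannot go below 0 mL
        base -= decay
    return rewards

safe_total = sum(simulate_patch(risky=False))
risky_mean_total = sum(sum(simulate_patch(risky=True)) for _ in range(10_000)) / 10_000
# The two totals differ only because deliveries are bounded at 0 mL,
# which is what makes the risky gain function diverge at the tail (cf. Fig. 3).
print(safe_total, risky_mean_total)
```

Averaging many simulated risky visits shows the two schedules matching except for the divergence introduced by the 0 mL bound, which is the deviation visible at the tail of the risky gain function.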
Definition of risk preference
Our task design defines risk preference as the frequency of risky decisions made by an animal. We defined a proportion of patch entries into risky patches greater than chance as risk-seeking, and a proportion less than chance as risk aversion. Equal entry into both patch types was considered risk-neutral. For patch residence time, we defined risk-seeking as a significant tendency to stay longer in risky patches than safe ones, and risk aversion as the opposite tendency.
Coefficient of variation
Rich environments in which food sources are abundant have been demonstrated to increase risk-seeking foraging strategies27. The coefficient of variation describes this effect as the relationship between the experienced variation and the overall mean reward rate. In many species, risk-seeking increases as the coefficient of variation decreases25,26,68. We manipulated the coefficient of variation by increasing the overall rate of reward from 2 mL with a decay of 0.125 mL/lever press to 4 mL with a decay of 0.25 mL/lever press, while holding the variation constant at +/− 1 mL. Importantly, this manipulation does not change the overall expectation of the reward schedules; risky and safe patches remain matched.
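As a back-of-the-envelope check of this manipulation (our own arithmetic, not analysis code from the study): the ±1 mL perturbation has a standard deviation of 1 mL, so doubling the base reward halves the coefficient of variation of a risky delivery.

```python
# CV = sigma / mu for a single risky delivery; the +/-1 mL perturbation has sd = 1 mL
def coefficient_of_variation(base_ml, sd_ml=1.0):
    return sd_ml / base_ml

print(coefficient_of_variation(2.0))  # standard condition, first delivery: 0.50
print(coefficient_of_variation(4.0))  # rich condition, first delivery:     0.25
```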
Juice gambling task
Data from the juice gambling task (Fig. 4), which we used as a comparison, were previously collected for electrophysiology experiments10,69,70 and were only available for subjects C and K. In brief, the task consisted of paired choices presented rapidly (~3 sec duty cycle) while subjects sat in a specially designed chair (Christ Instruments, Hagerstown, MD). On each trial, offers were presented asynchronously. The first offer appeared for 400 ms either on the left or right with equal probability. A blank period of 600 ms followed. Then the second offer appeared for 400 ms, followed by another 600 ms blank period. Following a brief central fixation period, subjects expressed their choices with saccades to the presented offers. Offers were colored bars that indicated probability and stakes. The stakes were indicated by the color of the displayed bar, which corresponded to a base reward amount (red = 0 μL, grey = 125 μL, blue = 165 μL, green = 240 μL). The probability was drawn from a uniform distribution and indicated by the height of an overlapping red bar. For example, a blue bar covered halfway by a red bar represents a probability of 0.5 of receiving the reward corresponding to the color blue. Within the juice gambling task, risk is characterized as trial-to-trial variation in the probability of receiving a particular reward amount. Subjects were well trained on the task, having completed over 10,000 trials across many sessions before electrophysiological recording. For analysis we chose a random set of five days from the period of electrophysiological recording.
Timeline of the juice gambling task. Offers were presented asynchronously and signaled different gambles for water rewards. Offer stakes were represented by the rectangle's color (gray, blue, green), while probability was indicated by the size of an overlapping red bar.
We focused our analysis of the freely moving patch-leaving task on the five days of testing after the initial learning period in both the standard and rich conditions. Drawing from our experimental design, we restricted our analyses to changes within each individual subject's behavior. A key strength of this approach lies in our ability to rule out individual differences as an explanation for behavioral changes, as each subject serves as their own control. Furthermore, each subject serves as a replication of the previous one, differing only in the inter-subject domain. This allows us to infer strong causal relationships between our experimental manipulations and the subsequent behavior of our subjects.
Freely moving patch-leaving risk task
For the freely moving patch-leaving risk task we recorded lever presses at each of the four juice feeders throughout the 2-hour testing session. We defined patch entries as a lever press at a patch different from the previously recorded lever press. We defined consecutive lever presses or turns at a juice feeder as the patch residence time. Each daily session consisted of multiple patch entries at each of the four feeders of variable patch residence times. Data from daily sessions were combined across the five days following the initial learning period for each subject within the experimental condition.
We analyzed the differences in the proportion of subjects' patch entries between risky and safe patches across the two conditions of environmental richness using a 1-factor ANOVA. We investigated subjects' risk preferences on patch residence times across the manipulation of environmental richness using a 2-factor ANOVA (patch type × environmental richness). We analyzed differences in patch residence times between risky and safe patches using unpaired t-tests. To examine whether subjects used a win-stay/lose-leave strategy, we examined the effect of reward outcomes one turn back and two turns back from the end of each patch residency within risky patches.
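The sketch below illustrates these comparisons in Python using scipy and statsmodels. The data frame layout and file name are hypothetical stand-ins for the recorded lever-press data, and scipy's t-test returns a two-sided p-value by default, so it is meant as an illustration rather than a reproduction of the reported statistics.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per patch visit: 'residence' (turns), 'patch' ('safe'/'risky'),
# 'environment' ('standard'/'rich'). The file name is a hypothetical stand-in.
df = pd.read_csv("patch_visits_subject_C.csv")

# Unpaired t-test on residence times, safe vs. risky, within one environment
std_env = df[df.environment == "standard"]
t, p = stats.ttest_ind(std_env.loc[std_env.patch == "safe", "residence"],
                       std_env.loc[std_env.patch == "risky", "residence"])
print(t, p)  # note: two-sided p-value by default

# 2-factor ANOVA: patch type x environmental richness
model = ols("residence ~ C(patch) * C(environment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```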
Risk parameter estimation
A second way to categorize risk preferences is to examine the utility function derived from the expressed choices of subjects40,71. To analyze differences in risk preferences between the juice gambling task and our patch-leaving risk task, we fit each subject's choices of offer 1 in the juice gambling task, or decisions to stay in the current risky patch, to the two equations below (Eqs 1 and 2) using maximum likelihood estimation. Equations 1 and 2 produce expected utility curves whose shape is dictated by the parameter α. The parameter α functions as an index of risk preference such that α < 1 indicates risk-aversion, α > 1 indicates risk-seeking, and α = 1 risk-neutrality. Graphically, a value of α = 1 produces a straight line in which all reward amounts are equally weighted. Values of α < 1 produce a concave utility curve in which larger rewards undergo diminishing returns, while values of α > 1 produce convex utility curves in which larger rewards are given greater weight. The parameter b in both equations represents the slope of the sigmoid choice function around the point of indifference, p(choice) = 0.5. As such, b provides a measure of variation in choice.
$$\text{Juice Gambling Task:}\qquad p(\text{choice}\,|\,\text{offer 1})=\frac{1}{1+\exp\!\big(\big((p_1 \cdot v_1^{\alpha})-(p_2 \cdot v_2^{\alpha})\big)\cdot b\big)}$$
p1 = probability of offer 1
v1 = value of offer 1 (s)
v2 = value of offer 2(s)
α = risk preference index
b = measure of choice stochasticity
$$\text{Patch-leaving Risk Task:}\qquad p(\text{stay}\,|\,t)=\frac{1}{1+\exp\!\big(\big(\text{threshold}^{\alpha}-V(t)^{\alpha}\big)\cdot b\big)}$$
t = time measured in discrete lever presses
threshold = point of indifference between staying and leaving a patch
V(t) = current reward amount available given the time spent in the patch
α = risk preference index
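To make the fitting procedure concrete, the sketch below performs a minimal maximum-likelihood fit of Eq. 2 in Python. The threshold is treated as a fixed constant and the data are toy values; both are simplifications of the actual analysis, which fit subjects' recorded stay/leave decisions (and, for the gambling task, offer-1 choices via Eq. 1).

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, value, stayed, threshold):
    """Negative log-likelihood of Eq. 2 over a sequence of stay/leave decisions.
    value  : reward available V(t) on each turn
    stayed : 1 if the subject stayed on that turn, 0 if it left
    threshold is treated as a known constant here for simplicity."""
    alpha, b = params
    p_stay = 1.0 / (1.0 + np.exp((threshold ** alpha - value ** alpha) * b))
    p_stay = np.clip(p_stay, 1e-9, 1 - 1e-9)  # numerical safety
    return -np.sum(stayed * np.log(p_stay) + (1 - stayed) * np.log(1 - p_stay))

# Toy data, purely illustrative
value  = np.array([2.0, 1.75, 1.5, 1.25, 1.0, 0.75, 0.5])
stayed = np.array([1, 1, 1, 1, 1, 0, 0])

fit = minimize(neg_log_likelihood, x0=[1.0, 1.0],
               args=(value, stayed, 1.0), method="Nelder-Mead")
alpha_hat, b_hat = fit.x
print(alpha_hat, b_hat)  # alpha < 1 corresponds to a concave (risk-averse) utility curve
```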
Macaques spend more time in safe patches in a standard environment
We examined patch residence times in safe and risky patches defined as the number of turns spent at a feeder. Within the standard environment, all three subjects remained in the safe patches (turn means C: 9.10, K: 10.03, Y: 10.70) longer than in the risky (turn means C: 8.15, K: 8.67, Y: 9.52) ones (Fig. 5, unpaired t-test C: 0.9479 turns, t(248) = 2.198, p = 0.0144, d = 0.278; K: 1.35 turns, t(176) = 2.0289, p = 0.022, d = 0.304; Y: 1.17 turns, t(184) = 1.6842, p = 0.0469, d = 0.247). That is, all three subjects made more consecutive lever presses in safe patches than risky ones.
Histograms of recorded patch residence times for all subjects in safe (blue) and risky (red) patches for the standard environment condition. Residence time is indexed as the turn length or number of consecutive lever presses at a given patch before leaving (x-axis). The y-axis denotes the number of times a particular turn length occurred at the patch type. Solid lines indicate Gaussian fits to the observed leaving times. Patch residence times are significantly longer for safe than risky patches, indicating risk aversion.
No evidence for win-stay/lose-shift heuristic in guiding patch-leaving
It is possible that macaques' longer residence times in safe patches are due to a data censoring effect: perhaps they leave when any individual outcome is lower than some threshold. That is, they may obey a win-stay/lose-shift heuristic72,73,74,75. To determine whether subjects used this heuristic, we examined the likelihood of leaving a risky patch given the recent history of wins and losses. None of the three subjects exhibited a significant increase in patch-leaving immediately after losses (one sample t-test C: t(122) = 1.1740, p = 0.2427, K: t(82) = 0.5465, p = 0.5862, Y: t(85) = 0.6448, p = 0.5208). Nor did we observe any effect of harvest outcomes two steps back (ANOVA C: F(3,119) = 0.83, p = 0.8009, K: F(3,79) = 0.13, p = 0.9413; Y: F(3,82) = 1.44, p = 0.237).
Macaque risk preferences shift with the coefficient of variation
Shifting the environmental richness serves to alter the overall mean rate of reward for the environment. When the mean rate of reward increases and variation or risk remains constant the overall coefficient of variation decreases. In all three subjects we found a significant environment by patch type interaction on their patch residence times (2-factor ANOVA K: F(1,314) = 3.1928, p = 0.07; C: F(1,376) = 18.276, p < 0.001; Y: F(1,293) = 6.7078, p = 0.01). All three subjects exhibited shifts away from risk-aversion to risk-neutrality/seeking as the coefficient of variation decreased (turn mean risky C: 11.32, K: 7.59, Y: 5.89, turn mean safe C:8.75, K:6.49, Y:4.44, unpaired t-test C: t(117) = 3.3303, p = 0.0005, d = 0.605, K: t(99) = 1.2077, p = 0.115, d = 0.226, Y: t(94) = 1.7483, p = 0.0418, d = 0.351). Thus, subjects were willing to stay longer in risky patches as the overall magnitude of reward for the environment increased relative to the variation within risky patches (Fig. 6).
Histograms of recorded patch residence times for all subjects in the rich environment version of the task. Plots follow the same conventions as Fig. 5. Subjects resided longer in risky patches than safe patches when the entire reward schedule for all feeder types was increased while maintaining the same variance as used in the standard environment.
Macaques are indifferent between patch types
Foragers may choose to strategically engage with patches of a particular type as a way of avoiding variation. We found no evidence to support a preference for either patch type in any of our subjects for both standard and rich environmental conditions (1-factor ANOVA C: F(1,378) = −0.442, p = 0.51, K: F(1,316) = −0.034, p = 0.85, Y: F(1,295) = 0.01, p = 0.9374).
Two of these macaques are risk-prone in a computerized task
We next analyzed risky choice behavior in two subjects (C and K) in a standard (not foraging-based, not freely moving) juice gambling task10. Both subjects exhibited strong risk-seeking behavior. On trials with matched expected values, subject C chose the risky option 67% of the time (one sample t-test: t(1232) = 12.86, p < 0.0001), while subject K chose the risky option 66% of the time (one sample t-test: t(1437) = 12.55, p < 0.0001).
This preference can be quantified using the shape of the utility curve. Both subjects showed convex utility curves (Fig. 7, C: α = 2.284, 95% CI = 1.983–2.584; K: α = 3.632, 95% CI = 3.441–3.822). However, within the more naturalistic freely moving patch-leaving task, the same subjects exhibited concave utility curves indicative of strong risk aversion (Fig. 7, C: α = 0.550, 95% CI = 0.508–0.592; K: α = 0.743, 95% CI = 0.586–0.889).
Plotted utility functions for the two subjects who participated in both the freely moving patch task (lower panels) and a standard chaired economic task (upper panels). Dotted lines represent 95% CI. The same two macaques are risk-seeking in the standard task (convex utility curves) and risk-averse in the freely moving patch task (concave utility curves).
Risk is ubiquitous in the natural environment and foragers must develop strategies for dealing with it1. There's a general observation that animals are, for the most part, risk-averse. The earliest studies of the neurophysiology of macaque risk attitudes were problematic because they demonstrated clear risk-seeking8,36,72,75,76,77. In other words, macaques appeared to be different from other species. We hypothesize that this difference is not innate. Instead we believe it reflects the strategic adjustments macaques make when faced with the specific environment of the laboratory gambling task18,45,46.
To test this hypothesis, we sought to examine risk attitudes in a more complex, naturalistic task. To that end, we developed a large freely moving cage apparatus with four stations, trained our subjects to forage from variable and stable stations, and assessed their risk attitudes.
While we would expect risk preference to manifest as a preference for using one patch type over another, we did not observe this trend. This null result can be interpreted as an expression of risk neutrality (i.e. stochastic optimality) at the level of patch choice. Foraging primates have been shown to follow simple navigation rules for moving between patches of food, and within the spatial arrangement of our task these rules would manifest as risk indifference for patch entries78. Future research is needed to investigate the interplay between variation in the reward rates of a patch and the spatial arrangement of patches within the environment on patch choice. We did observe that subjects remained longer in safe patches than risky ones. This increased tolerance for safe rather than risky outcomes allows us to infer that subjects value safe patches more than they value risky ones, and demonstrates that risk attitudes are fundamentally labile. Moreover, these results suggest that the effort made to make the task naturalistic pays off in the form of behavior that more closely resembles that found in the wild.
Our subjects' willingness to stay longer in risky patches as the environmental richness increases indicates that a subjective weighting of the experienced variation of rewards influences the valuation of a patch. Had subjects followed rate maximization policies under a condition of information uncertainty, we would have observed risk preferences manifest as myopic short-term rate-maximization strategies producing a consistent censoring effect of early leaving from risky patches in both standard and rich environments15,47,71. One interesting question warranting further study is how the degree of information regarding the variance in reward influences the expression of risk between short-term maximization policies and the subjective weighting effects seen in conditions of pure risk.
Our results point to ostensibly minor task factors as a major component in the expression of macaque risk preferences3,18,46. These are the kinds of factors that tend to get ignored in economics-inspired models of risky choice. Our results suggest that risk attitudes are so labile that one must carefully consider all parameters of the task design when interpreting economic preferences79,80. More fundamentally, these results suggest that animals may not have such a thing as a stable risk attitude. Rather, we believe that each subject has a consistent but flexible cognitive repertoire that they use when encountering risk. In the case of rhesus macaques, their evolution and spread across diverse ecologies likely shaped their ability to adaptively shift choice strategies and preferences as environmental contingencies changed81. By considering how experimental tasks match onto the natural environment we can begin to fully elucidate how diverse cognitive functions such as memory, prospection, and estimation subserve choice.
Subjects' measured risk aversion likely does not reflect lack of training or intolerance for ambiguity. In our freely moving patch-leaving risk task, subjects were well trained. Reward schedules were fixed and subjects were fully trained in the reward contingencies before testing. This represents a case of "pure risk", in which the subject knows the reward statistics and can identify patches with variability from constant patches71, i.e. there is no additional ambiguity present. Furthermore, our manipulation of the coefficient of variation allows for a dissociation of reward rate strategies from subjective risk preferences in guiding patch usage, as the overall expectations of the reward schedules remain the same.
One limitation of all laboratory approaches arises out of constraints in sample size, and care should be taken with regard to any species-level conclusions regarding macaque risk preference. However, we are able to clearly demonstrate, at the single-subject level, a divergence in risk attitudes arising from the task structure. These results therefore constitute both an existence proof – that the effects we hypothesized can be observed in the members of the macaque species we tested – and motivate a prediction that further studies will demonstrate a species-wide generality of these effects. In this regard, it is worth emphasizing that we did not pre-select subjects for behavior; nor did we exclude subjects for any reason.
Finally, our results call for greater effort to mimic the natural structure of the environment in order to study the evolved cognitive faculties of animals82. Foraging animals evolved to make decisions between foreground and background options83,84. Their cognitive strategies are adapted for exploiting the regularities of their natural environment, e.g. depleting patches and clumpy resource distributions57,85,86,87. It is only by carefully considering the ecological validity of our tasks that we will begin to untangle the cognitive and neural processes underlying decision making28,60,88,89. In this vein we join many others in arguing for greater consideration of how the environment shapes cognition and behavior11,28,51,52,89.
All data collected and used in the analysis is available from the corresponding author upon reasonable request or can be found at www.haydenlab.com/www.zimmermannlab.com.
Kacelnik, A. & Bateson, M. Risky Theories: The effects of variance on foraging decisions. Am. Zool. 434, 402–434, https://doi.org/10.1093/icb/36.4.402 (1996).
Stephens, D. W. & Krebs, J. R. Foraging Theory. (Princeton University Press, 1986).
Heilbronner, S. R. Modeling risky decision-making in nonhuman animals: shared core features. Curr. Opin. Behav. Sci. 16, 23–29, https://doi.org/10.1016/j.cobeha.2017.03.001 (2017).
Kahneman, D. & Tversky, A. A. Prospect theory: an analysis of decision under risk. Econometrica 47, 263–291, https://doi.org/10.2307/1914185 (1979).
O'Donoghue, T. & Somerville, J. Modeling risk aversion in economics. J. Econ. Perspect. 32, 91–114, https://doi.org/10.1257/jep.32.2.91 (2018).
Genest, W., Stauffer, W. R. & Schultz, W. Utility functions predict variance and skewness risk preferences in monkeys. Proc. Natl. Acad. Sci. 113, https://doi.org/10.1073/pnas.1602217113 (2016).
Knutson, B. & Bossaerts, P. Neural Antecedents of financial decisions. J. Neurosci. 27, 8174–8177, https://doi.org/10.1523/JNEUROSCI.1564-07.2007 (2007).
McCoy, A. N. & Platt, M. L. Risk-sensitive neurons in macaque posterior cingulate cortex. Nat. Neurosci. 8, 1220–1227, https://doi.org/10.1038/nn1523 (2005).
Preuschoff, K., Quartz, S. R. & Bossaerts, P. Human insula activation reflects risk prediction errors as well as risk. J. Neurosci. 28, 2745–2752, https://doi.org/10.1523/JNEUROSCI.4286-07.2008 (2008).
Strait, C. E., Blanchard, T. C. & Hayden, B. Y. Reward value comparison via mutual inhibition in ventromedial prefrontal cortex. Neuron 82, 1357–1366, https://doi.org/10.1016/j.neuron.2014.04.032 (2014).
Calhoun, A. J. & Hayden, B. Y. The foraging brain. Curr. Opin. Behav. Sci. 5, 24–31, https://doi.org/10.1016/j.cobeha.2015.07.003 (2015).
Peters, S. K., Dunlop, K. & Downar, J. Cortico-striatal-thalamic loop circuits of the salience network: a central pathway in psychiatric disease and treatment. Front. Syst. Neurosci. 10, 1–23, https://doi.org/10.3389/fnsys.2016.00104 (2016).
Wilson, M. J. & Vassileva, J. Decision-making under risk, but not under ambiguity, predicts pathological gambling in discrete types of abstinent substance users. Front. Psychiatry 9, 1–10, https://doi.org/10.3389/fpsyt.2018.00239 (2018).
Santos, L. R. & Rosati, A. G. The evolutionary roots of human decision making. Annu. Rev. Psychol. 66, 321–347, https://doi.org/10.1146/annurev-psych-010814-015310 (2015).
McNamara, J. Optimal patch use in a stochastic environment. Theor. Popul. Biol., https://doi.org/10.1016/0040-5809(82)90018-1 (1982).
Kacelnik, A. & Abreu, F. B. E. Risky choice and Weber's law. J. Theor. Biol. 194, 289–298 (1998).
Kacelnik, A. & El Mouden, C. Triumphs and trials of the risk paradigm. Anim. Behav. 86, 1117–1129, https://doi.org/10.1016/j.anbehav.2013.09.034 (2013).
Farashahi, S., Azab, H., Hayden, B. & Soltani, A. On the flexibility of basic risk attitudes in monkeys. J. Neurosci. 38, 4383–4398, https://doi.org/10.1523/JNEUROSCI.2260-17.2018 (2018).
Farashahi, S., Donahue, C. H., Hayden, B. Y., Lee, D. & Soltani, A. Flexible combination of reward information across primates. Nat. Hum. Behav., https://doi.org/10.1038/s41562-019-0714-3 (2019).
Real, L. & Caraco, T. Risk and foraging in stochastic environments. Annu. Rev. Ecol. Syst. 17, 371–390, https://doi.org/10.1146/annurev.es.17.110186.002103 (1986).
Caraco, T. Energy budgets, risk and foraging preferences in dark-eyed juncos (Junco hyemalis). Behav. Ecol. Sociobiol. 8, 213–217, https://doi.org/10.1007/BF00299833 (1981).
McNamara, J. M. & Houston, A. I. Optimal foraging and learning. J. Theor. Biol. 117, 231–249, https://doi.org/10.1016/S0022-5193(85)80219-8 (1985).
Pietras, C. J., Locey, M. L. & Hackenberg, T. D. Human risky choice under temporal constraints: tests of an energy-budget model. J. Exp. Anal. Behav. 80, 59–75, https://doi.org/10.1901/jeab.2003.80-59 (2003).
Craft, B. B. Risk-sensitive foraging: Changes in choice due to reward quality and delay. Anim. Behav. 111, 41–47, https://doi.org/10.1016/j.anbehav.2015.09.030 (2016).
Shafir, S. Risk-sensitive foraging: The effect of relative variability. Oikos. https://doi.org/10.1034/j.1600-0706.2000.880323.x (2000).
Weber, E. U., Shafir, S. & Blais, A.-R. Predicting risk sensitivity in humans and lower animals: risk as variance or coefficient of variation. Psychol. Rev. 111, 430–445, https://doi.org/10.1037/0033-295X.111.2.430 (2004).
Gilby, I. C. & Wrangham, R. W. Risk-prone hunting by chimpanzees (Pan troglodytes schweinfurthii) increases during periods of high diet quality. Behav. Ecol. Sociobiol., https://doi.org/10.1007/s00265-007-0410-6 (2007).
Stephens, D. W. Decision ecology: foraging and the ecology of animal decision making. Cogn. Affect. Behav. Neurosci. 8, 475–484, https://doi.org/10.3758/CABN.8.4.475 (2008).
Caraco, T., Kacelnick, A., Mesnick, N. & Smulewitz, M. Short-term rate maximization when rewards and delay covary. Anim. Behav. 44, 441–47, https://doi.org/10.1017/CBO9781107415324.004 (1992).
Shapiro, M. S., Schuck-Paim, C. & Kacelnik, A. Risk sensitivity for amounts of and delay to rewards: adaptation for uncertainty or by-product of reward rate maximising? Behav. Processes 89, 104–114, https://doi.org/10.1016/j.beproc.2011.08.016 (2012).
Krebs, J. R. & Kacelnik, A. Time horizons of foraging animals. Ann. N. Y. Acad. Sci. 423, 278–291, https://doi.org/10.1111/j.1749-6632.1984.tb23437.x (1984).
Hertwig, R., Barron, G., Weber, E. U. & Erev, I. Decisions from experience and the effect of rare events in risky choice. Psychol. Sci., https://doi.org/10.1111/j.0956-7976.2004.00715.x (2004).
Hertwig, R. & Erev, I. The description-experience gap in risky choice. Trends in Cognitive Sciences, https://doi.org/10.1016/j.tics.2009.09.004 (2009).
Heilbronner, S. R. & Hayden, B. Y. The description-experience gap in risky choice in nonhuman primates. Psychon. Bull. Rev., https://doi.org/10.3758/s13423-015-0924-2 (2016).
Hayden, B. Y. & Platt, M. L. Temporal discounting predicts risk sensitivity in rhesus macaques. Curr. Biol. 17, 49–53, https://doi.org/10.1016/j.cub.2006.10.055 (2007).
O'Neill, M. & Schultz, W. Coding of reward risk by orbitofrontal neurons is mostly distinct from coding of reward value. Neuron 68, 789–800, https://doi.org/10.1016/j.neuron.2010.09.031 (2010).
So, N.-Y. & Stuphorn, V. Supplementary eye field encodes option and action value for saccades with variable reward. J. Neurophysiol. 104, 2634–2653, https://doi.org/10.1152/jn.00430.2010 (2010).
Stauffer, W. R. et al. Economic choices reveal probability distortion in macaque monkeys. J. Neurosci. 35, 3146–3154, https://doi.org/10.1523/JNEUROSCI.3653-14.2015 (2015).
Xu, E. R. & Kralik, J. D. Risky business: rhesus monkeys exhibit persistent preferences for risky options. Front. Psychol. 5, 1–12, https://doi.org/10.3389/fpsyg.2014.00258 (2014).
Yamada, H., Tymula, A., Louie, K. & Glimcher, P. W. Thirst-dependent risk preferences in monkeys identify a primitive form of wealth. Proc. Natl. Acad. Sci. 110, 15788–15793, https://doi.org/10.1073/pnas.1308718110 (2013).
Heilbronner, S. R., Rosati, A. G., Stevens, J. R., Hare, B. & Hauser, M. D. A fruit in the hand or two in the bush? Divergent risk preferences in chimpanzees and bonobos. Biol. Lett. 4, 246–249, https://doi.org/10.1098/rsbl.2008.0081 (2008).
De Petrillo, F., Ventricelli, M., Ponsi, G. & Addessi, E. Do tufted capuchin monkeys play the odds? Flexible risk preferences in Sapajus spp. Anim. Cogn. 18, 119–130, https://doi.org/10.1007/s10071-014-0783-7 (2015).
Rosati, A. G. & Hare, B. Decision making across social contexts: competition increases preferences for risk in chimpanzees and bonobos. Anim. Behav. 84, 869–879, https://doi.org/10.1016/j.anbehav.2012.07.010 (2012).
Rosati, A. G. & Hare, B. Chimpanzees and bonobos exhibit emotional responses to decision outcomes. PLoS One, https://doi.org/10.1371/journal.pone.0063058 (2013).
Hayden, B. Y. & Platt, M. L. Gambling for gatorade: risk-sensitive decision making for fluid rewards in humans. Anim. Cogn. 12, 201–207, https://doi.org/10.1007/s10071-008-0186-8 (2009).
Heilbronner, S. R. & Hayden, B. Y. Contextual factors explain risk-seeking preferences in rhesus monkeys. Front. Neurosci. 7, 1–7, https://doi.org/10.3389/fnins.2013.00007 (2013).
Oaten, A. Optimal foraging in patches: a case for stochasticity. Theor. Popul. Biol., https://doi.org/10.1016/0040-5809(77)90046-6 (1977).
Fauchald. Foraging in a hierarchical patch system. Am. Nat., https://doi.org/10.2307/2463618 (2017).
Searle, K. R., Vandervelde, T., Hobbs, N. T., Shipley, L. A. & Wunder, B. A. Spatial context influences patch residence time in foraging hierarchies. Oecologia, https://doi.org/10.1007/s00442-005-0285-z (2006).
Real, L. A. Animal choice behavior and the evolution of cognitive architecture. Science 253, https://doi.org/10.1126/science.1887231 (1990).
Todd, P. M. & Gigerenzer, G. Environments that make us smart. Curr. Dir. Psychol. Sci. 16, 167–171, https://doi.org/10.1111/j.1467-8721.2007.00497.x (2007).
Mallpress, D. E. W. W., Fawcett, T. W., Houston, A. I. & McNamara, J. M. Risk attitudes in a changing environment: an evolutionary model of the fourfold pattern of risk preferences. Psychol. Rev. 122, 364–375, https://doi.org/10.1037/a0038970 (2015).
Charnov, E. L. Optimal foraging, the marginal value theorem. Theor. Popul. Biol. 9, 129–136 (1976).
Nonacs, P. State dependent behavior and the marginal value theorem. Behav. Ecol. 12, 71–83, https://doi.org/10.1093/oxfordjournals.beheco.a000381 (2001).
Sleezer, B. J. & Hayden, B. Y. Differential contributions of ventral and dorsal striatum to early and late phases of cognitive set reconfiguration. J. Cogn. Neurosci., https://doi.org/10.1162/jocn_a_01011 (2016).
Blanchard, T. C. & Hayden, B. Y. Neurons in dorsal anterior cingulate cortex signal postdecisional variables in a foraging task. J. Neurosci. 34, 646–655, https://doi.org/10.1523/JNEUROSCI.3151-13.2014 (2014).
Blanchard, T. C. & Hayden, B. Y. Monkeys are more patient in a foraging task than in a standard intertemporal choice task. PLoS One 1–11, https://doi.org/10.1371/journal.pone.0117057 (2015).
Blanchard, T. C., Pearson, J. M. & Hayden, B. Y. Postreward delays and systematic biases in measures of animal temporal discounting. Proc. Natl. Acad. Sci. 1–6, https://doi.org/10.1073/pnas.1310446110 (2013).
Blanchard, T. C., Strait, C. E. & Hayden, B. Y. Ramping ensemble activity in dorsal anterior cingulate neurons during persistent commitment to a decision. J. Neurophysiol. 114, 2439–2449, https://doi.org/10.1152/jn.00711.2015 (2015).
Hayden, B. Y. Economic choice: the foraging perspective. Current Opinion in Behavioral Sciences 24, 1–6 (Elsevier, 2018).
Azab, H. & Hayden, B. Y. Correlates of decisional dynamics in the dorsal anterior cingulate cortex. Plos Biol. 15, 1–25, https://doi.org/10.1371/journal.pbio.2003091 (2017).
Hayden, B. Y. & Gallant, J. L. Working memory and decision processes in visual area V4. Front. Neurosci., https://doi.org/10.3389/fnins.2013.00018 (2013).
Sleezer, B. J., Castagno, M. D. & Hayden, B. Y. Rule encoding in orbitofrontal cortex and striatum guides selection. J. Neurosci., https://doi.org/10.1523/jneurosci.1766-16.2016 (2016).
Wang, M. Z. & Hayden, B. Y. Reactivation of associative structure specific outcome responses during prospective evaluation in reward-based choices. Nat. Commun. 8, 1–13, https://doi.org/10.1038/ncomms15821 (2017).
Skinner, B. F. The Behavior of Organisms: An Experimental Analysis. (D. Appleton Century Crofts, INC., 1938).
Shadish, W. R., Cook, T. D. & Campbell, D. T. Experimental and Quasi-Experimental Designs for Generalized Causal Inference, https://doi.org/10.1198/jasa.2005.s22 (2002).
Hayden, B. Y., Pearson, J. M. & Platt, M. L. Neuronal basis of sequential foraging decisions in a patchy environment. Nat. Neurosci. 14, 933–939, https://doi.org/10.1038/nn.2856 (2013).
Ludvig, E. A., Madan, C. R., Pisklak, J. M. & Spetch, M. L. Reward context determines risky choice in pigeons and humans. Biol. Lett. 10, https://doi.org/10.1098/rsbl.2014.0451 (2014).
Strait, C. E., Sleezer, B. J. & Hayden, B. Y. Signatures of value comparison in ventral striatum neurons. PLoS Biol., https://doi.org/10.1371/journal.pbio.1002173 (2015).
Blanchard, T. C. et al. Neuronal selectivity for spatial positions of offers and choices in five reward regions. J. Neurophysiol., https://doi.org/10.1152/jn.00325.2015 (2015).
Stephens, D. W. & Charnov, E. L. Optimal foraging: some simple stochastic models. Behav. Ecol. Sociobiol., https://doi.org/10.1007/BF00302814 (1982).
Hayden, B. Y., Nair, A. C., McCoy, A. N. & Platt, M. L. Posterior cingulate cortex mediates outcome-contingent allocation of behavior. Neuron 60, 19–25, https://doi.org/10.1016/j.neuron.2008.09.012 (2008).
Pearson, J. M., Hayden, B. Y., Raghavachari, S. & Platt, M. L. Report neurons in posterior cingulate cortex signal exploratory decisions in a dynamic multioption choice task. Curr. Biol. 19, 1532–1537, https://doi.org/10.1016/j.cub.2009.07.048 (2009).
Barraclough, D. J., Conroy, M. L. & Lee, D. Prefrontal cortex and decision making in a mixed- strategy game. Nat. Neurosci. 7, 404–410, https://doi.org/10.1038/nn1209 (2004).
Seo, H. & Lee, D. Temporal filtering of reward signals in the dorsal anterior cingulate cortex during a mixed-strategy game. J. Neurosci. 27, 8366–8377, https://doi.org/10.1523/JNEUROSCI.2369-07.2007 (2007).
Heilbronner, S. R., Hayden, B. Y. & Platt, M. L. Decision salience signals in posterior cingulate cortex. Front. Neurosci. 5, 1–9, https://doi.org/10.3389/fnins.2011.00055 (2011).
Hayden, B. Y., Heilbronner, S. R. & Platt, M. L. Ambiguity aversion in rhesus macaques. Front. Neurosci. 4, 1–7, https://doi.org/10.3389/fnins.2010.00166 (2010).
Teichroeb, J. A. & Smeltzer, E. A. Vervet monkey (Chlorocebus pygerythrus) behavior in a multi-destination route: evidence for planning ahead when heuristics fail. PLoS One 13, 1–18, https://doi.org/10.1371/journal.pone.0198076 (2018).
Stephens, D. W. & Anderson, D. The adaptive value of preference for immediacy: when shortsighted rules have farsighted consequences. Behav. Ecol., https://doi.org/10.1093/beheco/12.3.330 (2001).
Stephens, D. W., Kerr, B. & Fernández-Juricic, E. Impulsiveness without discounting: the ecological rationality hypothesis. Proc. R. Soc. 271, 2459–2465, https://doi.org/10.1098/rspb.2004.2871 (2004).
Richard, A. F., Goldstein, S. J. & Dewar, R. E. Weed macaques: the evolutionary implications of macaque feeding ecology. Int. J. Primatol. 10, 569–594 (1989).
Pearson, J. M., Watson, K. K. & Platt, M. L. Decision making: the neuroethological turn. Neuron 82, 950–965, https://doi.org/10.1016/j.neuron.2014.04.037 (2014).
Stephens, D. W. & Dunlap, A. S. Why do animals make better choices in patch-leaving problems? Behav. Processes 80, 252–260, https://doi.org/10.1016/j.beproc.2008.11.014 (2009).
Dunlap, A. S. & Stephens, D. W. Tracking a changing environment: optimal sampling, adaptive memory and overnight effects. Behav. Process. 89, 86–94, https://doi.org/10.1016/j.beproc.2011.10.005 (2012).
Wilke, A. & Barrett, H. C. The hot hand phenomenon as a cognitive adaptation to clumped resources. Evol. Hum. Behav. 30, 161–169, https://doi.org/10.1016/j.evolhumbehav.2008.11.004 (2009).
Blanchard, T. C., Wilke, A. & Hayden, B. Y. Hot-hand bias in rhesus monkeys. J. Exp. Psychol. Anim. Learn. Cogn. 40, 280–286, https://doi.org/10.1037/xan0000033 (2014).
Hammack, T., Cooper, J., Flach, J. M. & Houpt, J. Toward an ecological theory of rationality: debunking the hot hand "illusion". Ecol. Psychol. 29, 35–53, https://doi.org/10.1080/10407413.2017.1270149 (2017).
Krakauer, J. W., Ghazanfar, A. A., Gomez-marin, A., MacIver, M. A. & Poeppel, D. Neuroscience needs behavior: correcting a reductionist bias. Neuron 93, 480–490, https://doi.org/10.1016/j.neuron.2016.12.041 (2017).
Juavinett, A. L., Erlich, J. C. & Churchland, A. K. Decision-making behaviors: weighing ethology, complexity, and sensorimotor compatibility. Curr. Opin. Neurobiol. 49, 42–50, https://doi.org/10.1016/j.conb.2017.11.001 (2018).
This research was supported by a National Institute on Drug Abuse Grant R01 DA038106 to BYH, an NIH T32 to BRE, and the UMN DTI and AIRP to BYH and JZ.
Department of Neuroscience, Center for Magnetic Resonance Research, and Center for Neuroengineering University of Minnesota, Minneapolis, MN, 55455, USA
Benjamin R. Eisenreich, Benjamin Y. Hayden & Jan Zimmermann
B.R.E., B.Y.H. and J.Z. designed experimental protocols. B.R.E. collected all data and performed data analysis. B.R.E., B.Y.H. and J.Z. wrote the manuscript.
Correspondence to Benjamin R. Eisenreich.
Eisenreich, B.R., Hayden, B.Y. & Zimmermann, J. Macaques are risk-averse in a freely moving foraging task. Sci Rep 9, 15091 (2019). https://doi.org/10.1038/s41598-019-51442-z
This article is cited by:
The description–experience gap: a challenge for the neuroeconomics of decision-making under uncertainty. Basile Garcia, Fabien Cerrotti & Stefano Palminteri. Philosophical Transactions of the Royal Society B: Biological Sciences (2021).
Are the roots of human economic systems shared with non-human primates? Elsa Addessi, Michael J. Beran, Sacha Bourgeois-Gironde, Sarah F. Brosnan & Jean-Baptiste Leca. Neuroscience & Biobehavioral Reviews (2020).
Behavioural variability contributes to over-staying in patchy foraging. Tyler Cash-Padgett & Benjamin Hayden. Biology Letters (2020).
Automated markerless pose estimation in freely moving macaques with OpenMonkeyStudio. Praneet C. Bala, Benjamin R. Eisenreich, Seng Bum Michael Yoo, Benjamin Y. Hayden, Hyun Soo Park & Jan Zimmermann.
A Comedy of Error – Part I
In the 5.4.2 Rotation Analysis post, I mentioned that I was looking into some odd behavior in the SimC error statistics:
I'm actually doing a little statistical analysis on SimC results right now to investigate some deviations from this prediction, but that's enough material for another blog post, so I won't go into more detail yet. What it means for us, though, is that in practice I've found that when you run the sim for a large number of iterations (i.e. 50k or more) the reported confidence interval tends to be a little narrower than the observed confidence interval you get by calculating it from the data. So for example, at 250k iterations we regularly get a DPS Error of approximately 40. In theory that means we feel pretty confident that the DPS we found is within +/-40 of the true value. In practice, it might be closer to +/- 100 or so.
Over the past two weeks, I've been running a bunch of experiments to try to track down and correct the source of this effect. The good news is that with the help of two other SimC devs, we've fixed it, and future rotation analysis posts will be much more accurate as a result.
But before we discuss the solution, we have to identify the problem. And to do that, we need a little bit of statistics. I find that most people's understanding of statistical error is, humorously enough, rather erroneous. So in the interest of improving the level of discourse, let's take a few minutes and talk about exactly what it means to measure or report "error."
Disclaimer: While I'm 99.9% sure everything in this post is accurate, keep in mind that I am not a statistician. I just play one on the internet to do math about video games (and in real life to analyze experimental results). If I've made an error or misspoken, please point it out in the comments!
Lies, Damn Lies, and Statistics
Let's start out with a thought experiment. If we're given a pair of standard 6-sided dice, what's the probability of rolling a seven?
There's a number of ways to solve this problem, but the simplest is probably to do some basic math. Each die has 6 sides, so there are 6 x 6 = 36 possible combinations. Out of those combinations, how many give us a sum of seven? Well, there are three ways to do that with the numbers one through six: 1+6, 2+5, and 3+4. However, we have two dice, so either one could contribute the "1" in 1+6. If we decide on a convention of reporting the rolls in the format (die #1)+(die #2), then we could also have 4+3, 5+2, and 6+1. So that's six total ways to roll a seven with a pair of dice, out of thirty-six possible combinations; our probability of rolling a seven is 6/36=1/6=0.1667, or 16.67%.
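If you don't trust the counting argument, here's a quick sanity check (a throwaway Python snippet of mine, nothing to do with SimC) that just enumerates all 36 combinations:

```python
from itertools import product
from collections import Counter

# count how many of the 36 (die 1, die 2) combinations produce each total
totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))
print(totals[7], "out of", 6 * 6)  # 6 out of 36
print(totals[7] / 36)              # 0.1667, i.e. a 16.67% chance of a seven
```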
We could ask this same question for any other possible outcome, like 2, 5, 9, or 11. If we did that for every possible outcome (anything from 2 to 12), and then plotted the results, it would look like this:
The probability distribution that describes the results of rolling two six-sided dice.
This gives a visual interpretation of the numbers. It's clear from the plot that an 8 is less likely than a 7 (as it turns out, there are only five ways to roll an 8) and that rolling a 9 is even less likely (four ways) and that rolling a 2 or 12 is the least likely (one way each). What we have here is the probability distribution of the experiment. It tells us that on any given roll of the dice there's a ~2.78% chance of rolling a 2 or 12, a 5.56% chance of rolling a 3 or 11, and so on.
Now let's talk about two terms you've probably heard before: mean and standard deviation. These terms show up a lot in the discussion of error, so making sure we have a clear definition of them is a good foundation on which to build the discussion. The mean and the standard deviation describe a probability distribution, but provide slightly different information about that distribution.
The mean tells us about the center of the distribution. You're probably more familiar with it by another name: the average. Though both of those names are a bit ambiguous. "Average" can refer to several different metrics, though it's most commonly used to refer to the arithmetic mean. "Mean" is used slightly differently in different areas of math, but when we're talking about statistics it's used synonymously with the term "expected value." The Greek letter $\mu$ is commonly used to represent the mean. If you want the mathy details, it's calculated this way:
$$ \mu = \sum_k x_k P(x_k)$$
where $x_k$ is the outcome (i.e. "5") and $P(x_k)$ is the probability of that outcome (i.e. "11.11%" or 0.1111). For our purposes, though, it's enough to know that the mean tries to measure the middle of a distribution. If the data is perfectly symmetric (like ours is), it tells you what value is in the center. In the case of our dice, the mean is seven, which is what we'd expect the average to be if we made many rolls.
The standard deviation (usually represented by $\sigma$), on the other hand, describes the spread or width of the distribution. Its definition is a little more complicated than the mean:
$$ \sigma = \sqrt{\sum_k P(x_k) (x_k-\mu)^2} $$
But again, for our purposes it's enough to know that it's a measurement of how wide the distribution is, or how much it deviates from the mean. A distribution with a larger $\sigma$ is wider than a distribution with a smaller $\sigma$, which means that any given roll could be farther away from the mean. For our distribution, the standard deviation is about 2.42.
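If you want to verify those two numbers yourself, here's the same sort of throwaway Python snippet, just grinding through the formulas above over the two-dice distribution:

```python
from itertools import product
from collections import Counter

counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
probs = {x: n / 36 for x, n in counts.items()}   # P(x_k) for each outcome

mu = sum(x * p for x, p in probs.items())                         # 7.0
sigma = sum(p * (x - mu) ** 2 for x, p in probs.items()) ** 0.5   # ~2.42
print(mu, sigma)
```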
The thing I want you to note is that neither of these terms tell us anything about error. We aren't surprised if we roll the dice and get a 10 or 12 instead of a 7. We don't return them to the manufacturer as defective. The mean and standard deviation tell us a little bit about the range of results we can get when we roll two dice. To talk about error, we need to start looking at actual results of dice rolls, not just the theoretical probability distribution for two dice.
Things Start Getting Dicey
Okay, so let's pretend we have two dice, and we roll them 100 times. We keep track of the result each time, and plot them on a histogram like so:
The outcome of 100 rolls of two six-sided dice.
Now, this doesn't look quite the same as our expected distribution. For one thing, it's definitely not symmetric – there were more high rolls than low rolls. We could express that by calculating the sample mean $\mu_{\rm sample}$, which is the mean of a particular set of data (a "sample"). By calling this the sample mean, we can keep straight whether we're talking about the mean of the sample or about the mean of entire probability distribution (often called "population mean"). The sample mean of this data set is 7.40, as shown in the upper right hand corner of the plot, which is higher than our expected value of 7.00 by a fair amount.
We can also calculate a sample standard deviation $\sigma_{\rm sample}$ for the data, which again is just the standard deviation of our data set. The sample standard deviation for this run is 2.52, which is a bit higher than the expected 2.42 because the distribution is "broader." Note that the maximum extent isn't any wider – we don't have any rolls above 12 or below 2 – but because the distribution is a little "flatter" than usual, with more results than expected in some of the extremes and fewer in the middle, the sample standard deviation goes up a little.
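You can reproduce this kind of experiment in a couple of lines; your particular sample mean and sample standard deviation will, of course, wander around a bit from run to run:

```python
import random

rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(100)]

n = len(rolls)
sample_mean = sum(rolls) / n
# the usual n-1 ("sample") version of the standard deviation
sample_sd = (sum((x - sample_mean) ** 2 for x in rolls) / (n - 1)) ** 0.5
print(sample_mean, sample_sd)  # something in the neighborhood of 7 and 2.4 -- it varies
```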
But note that, by themselves, neither $\mu_{\rm sample}$ nor $\sigma_{\rm sample}$ tell us about the error! They're still just describing the probability distribution that the data in the sample represents. At best, we might be able to compare our results to the theoretical $\mu$ and $\sigma$ we found for the ideal case to identify how our results differ. But it's not at all clear that this tells us anything about error. Why?
Because maybe these dice aren't ideal. Maybe they differ in some way from our model. For example, maybe you've heard the term "weighted dice" before? What if one of them is heavier on one side? That might cause it to roll e.g. 6 more often than 1, and give us a slightly different distribution. You could call that an "error" in the manufacturing of the dice, perhaps, but that's not what we generally mean when we talk about statistical error.
So perhaps it's time we seriously considered what "error" means. After all, it's hard to identify an "error" if we haven't clearly defined what "error" is. Let's say that we perform an experiment – we make our 100 die rolls and keep track of the results, and generate a figure like the one above. And in addition, let's say we're primarily interested in the mean of this distribution; we want to know what the average result of rolling these particular two dice will be. We know that if they were ideal dice, it should be seven. But when we ran our experiment, we got a mean of 7.40.
What we really want to know is the answer to the question, "how accurate is that result of 7.40?" Do we trust it so much that we're sure these dice are non-standard in some way? Or was it just a fluke accident? Remember, there's absolutely no reason we couldn't roll 100 twelves in a row, because each dice roll is independent of the last, and it's a random process. It's just really unlikely. So how do we know this value we came up with isn't just bad luck?
So let's say the "error" in the sample mean is a measure of accuracy. In other words, we want to be able to say that we're pretty confident that the "true" value of the population mean $\mu$ happens to fall within the interval $\mu_{\rm sample}-E < \mu < \mu_{\rm sample} + E$, where $E$ is our measure of error. We could call that range our confidence interval, because we feel pretty confident that the actual mean $\mu$ of the distribution for our dice happens to be in that interval. We'll talk about exactly how confident we are a little bit later.
It should be clear now why comparing our distribution to the "ideal" distribution doesn't tell us anything about how reliable our results are. We might know that the sample mean differs from the ideal, but we don't know why. It could be that our dice are defective, but it could also just be a random fluctuation. But since nothing we've discussed so far tells us how accurate our measured sample mean is, we don't know for sure. To get that, we need to figure out how to represent $E$, the number that sets the bounds on our confidence interval.
It's a common misconception that $E$ should just be the sample standard deviation $\sigma_{\rm sample}$. You may have seen results presented like $\mu \pm \sigma$, or $7.40 \pm 2.52$, to suggest an interval of confidence. That is, generally speaking, not correct. Or at least, very misleading. Because that's not what the standard deviation means.
What we really want here is something called the standard error, though it's also commonly called the standard error of the mean. It's also sometimes (mistakenly or carelessly) called the "standard deviation of the mean," but we'll clarify the difference in a second. I like the term "standard error of the mean," because it makes it clear that this is a measurement of accuracy of the sample mean. As you might guess, it's closely related to the sample standard deviation, but not quite the same. It's calculated by dividing the sample standard deviation by the number of individual "trials," or dice rolls, $N$:
$${\rm SE_{\mu}} = \frac{\sigma_{\rm sample}}{\sqrt{N}}.$$
This, at long last, is a good measurement of error. It's worth noting that the standard deviation of the mean is defined similarly, but uses the true standard deviation of the distribution:
$${\rm SD_{\mu}} = \frac{\sigma}{\sqrt{N}}.$$
The reason the two are often used interchangeably is that we generally don't know what the actual distribution looks like, nor do we know the expected values of $\mu$ and $\sigma$. Sometimes we do, of course; if we have a theory describing the process we're measuring, then we can often calculate the theoretical values of $\mu$ and $\sigma$. But we don't always know if our experiment matches the theory as well as we'd like – for example, if one of the dice is weighted and rolls more sixes than ones.
And sometimes, we don't have a well-described theory at all, we just have a pile of data. This is the case for most Simulationcraft data runs, because we don't have an easy analytical function that accurately describes your DPS due to any number of factors: procs, avoidance, movement, and so on. In that sort of situation, we can never truly know $\sigma$, so the lines between ${\rm SE}_{\mu}$ and ${\rm SD}_{\mu}$ blur a little bit, and we tend to get sloppy with terminology.
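To put some numbers on it, here's the standard error for the 100-roll sample from earlier. Notice that the resulting interval of roughly ±2 standard errors still contains 7, so that one run by itself wouldn't justify declaring the dice defective:

```python
sample_mean = 7.40  # from the 100-roll example above
sample_sd = 2.52
n = 100

se = sample_sd / n ** 0.5
print(se)                                          # 0.252
print(sample_mean - 2 * se, sample_mean + 2 * se)  # about 6.90 to 7.90
```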
Now, we've thrown around a lot of terms that have "standard deviation" in them. It's no wonder the layperson is easily confused by statistics. So it's worth spending a moment to make the differences between these terms abundantly clear. Let's reiterate quickly why we use standard error to describe the accuracy of the sample mean rather than just using $\sigma$ or $\sigma_{\rm sample}$.
We have a theoretical probability distribution describing the result of rolling two 6-sided dice. Here's what each of the terms we've discussed so far tells us:
The mean (or "population mean") $\mu$ tells us the average value of a single roll.
The standard deviation $\sigma$ tells us about the fluctuations of any single dice roll. In other words, if we make a single roll, $\sigma$ tells us how much variation we can expect from the mean. When we make a single roll, we're not surprised if the result is $\sigma$ or $2\sigma$ away from the mean (ex: a roll of 9 or 11). The more $\sigma$s a roll is away from the mean, the less likely it is, and the more surprised we are. Our distribution here is finite, in that we can never roll less than two or more than 12, but in the general case a probability distribution could have non-zero probabilities farther out in the wings, such that talking about $4\sigma$ or $5\sigma$ is relevant.
The sample mean $\mu_{\rm sample}$ tells us the average value of a particular sample of rolls. In other words, we roll the dice 100 times and calculate the sample mean. This is an estimate of the population mean.
The sample standard deviation $\sigma_{\rm sample}$ tells us about the fluctuations of our particular sample of rolls. If we roll the dice 100 times, we can calculate the sample standard deviation by looking at the spread of the results. Again, this is an estimate of the population's standard deviation, and it tells us how much variation we should expect from a single dice roll.
The standard deviation of the mean $SD_{\mu}$ tells us about the fluctuations of the mean of an arbitrary sample. In other words, if we proposed an experiment where we rolled the dice 100 times, we would go into that experiment expecting to get a sample mean that's pretty close to (but not exactly) $\mu$. $SD_{\mu}$ tells us how close we'd expect to be. For example, under normal conditions we'd expect to get a result for $\mu_{\rm sample}$ that is between $\mu-2{\rm SD}_{\mu}$ and $\mu+2{\rm SD}_{\mu}$ about 95% of the time, and between $\mu-2.5{\rm SD}_{\mu}$ and $\mu+2.5{\rm SD}_{\mu}$ about 99% of the time.
The standard error of the mean $SE_{\mu}$ tells us about the fluctuations of the mean of our particular sample of rolls. Once we actually make those 100 rolls, and calculate the sample mean and sample standard deviation, we can state that we're 95% confident that the "true" population mean $\mu$ is between $\mu_{\rm sample}-2{\rm SE}_{\mu}$ and $\mu_{\rm sample}+2{\rm SE}_{\mu}$, and 99% confident that it's between $\mu_{\rm sample}-2.5{\rm SE}_{\mu}$ and $\mu_{\rm sample}+2.5{\rm SE}_{\mu}$.
You can see why this gets confusing. But the key is that the standard deviation and sample standard deviation are telling you about single rolls. If you roll the dice once, you expect to get a value between $\mu+2\sigma$ and $\mu-2\sigma$ about 95% of the time.
Whereas the standard deviation of the mean and standard error tell us about groups of rolls. If we make 100 rolls the sample mean should be a much better estimate of the population mean than if we made only a handful of rolls. And if we make 1000 rolls, we should get a better estimate than if we only made 100 rolls.
So we use the standard deviation of the mean to answer the question, "if we made 100 rolls, how close do we expect $\mu_{\rm sample}$ (our sample mean) to be to $\mu$ (the population mean)?" And we use the standard error to answer the related (but different!) question, "now that I've made 100 rolls, how accurately do I think my calculated $\mu_{\rm sample}$ (sample mean) approximates $\mu$ (the population mean)?"
You might wonder what voodoo tricks I played to get these "95%" and "99%" values. These come from analysis of the normal distribution, which is a probability distribution that comes up frequently in statistics. If your probability distribution is normal, then about 68% of the data will fall within one standard deviation in either direction. Put another way, the region from $\mu-\sigma$ to $\mu+\sigma$ contains 68% of the data. Likewise, the region from $\mu-2\sigma$ to $\mu+2\sigma$ contains about 95% of the data, and over 99.7% of the data will fall between $\mu-3\sigma$ to $\mu+3\sigma$.
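As a quick sanity check (again my own sketch, not something from the post), the standard normal CDF from SciPy reproduces those percentages directly:

```python
from scipy.stats import norm

for n_sigma in (1, 2, 3):
    frac = norm.cdf(n_sigma) - norm.cdf(-n_sigma)
    print(f"fraction within ±{n_sigma} sigma: {frac:.4f}")
# prints roughly 0.6827, 0.9545, 0.9973
```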
Our probability distribution isn't a normal distribution. First of all, it's truncated on either side, while the normal distribution goes on infinitely in either direction (we'll never be able to roll a one or 13 or 152 with our two dice). Second, it's a little too discrete to be a good normal distribution – there isn't quite enough granularity between 2 and 12 to flesh the distribution out sufficiently. It's really more of a triangle than a nice Gaussian, though it's not an awful approximation given the constraints. Luckily, none of that matters! As it turns out, the reason our distribution looks vaguely normal is closely related to the reason that we use the normal distribution to determine confidence intervals.
The Central Limit Theorem is the piece that completes our little puzzle. Quoth the Wikipedia,
the central limit theorem (CLT) states that, given certain conditions, the arithmetic mean of a sufficiently large number of iterates of independent random variables, each with a well-defined expected value and well-defined variance, will be approximately normally distributed.
That's a bit technical, so let's break that down and make it a bit clearer with an example. We start with a dice roll (a "random variable") that has some probability distribution that doesn't change from roll to roll ("a well-defined expected value and well-defined variance") and each roll doesn't depend on any of the previous ones ("independent"). Now we roll those dice 10 times and calculate the sample mean. And then roll another 10 times and calculate the sample mean. And then do it again. And again, and again, and… you get the idea ("a sufficiently large number of iterates"). If we do that, and plot the probability distribution of those sample means, we'll get a normal distribution centered on the population mean $\mu$.
The beautiful part of this is that it doesn't matter what the probability distribution you started with looks like. It could be our triangular dice roll distribution or a "top-hat" (uniform) distribution or some other weird shape. Because we're not interested in that; we're interested in the sample means of a bunch of different samples of that distribution. And those are normally distributed about the mean, as long as the CLT applies. Which means that when we find a sample mean, we can use the normal distribution to estimate the error, regardless of what probability distribution that the individual rolls obey.
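Here's a tiny simulation (my own illustrative sketch) of exactly that procedure: take many independent samples of 10 rolls each, compute each sample mean, and watch the means pile up around 7 with a spread of roughly $\sigma/\sqrt{10}$:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_samples, rolls_per_sample = 50_000, 10
rolls = (rng.integers(1, 7, size=(n_samples, rolls_per_sample))
         + rng.integers(1, 7, size=(n_samples, rolls_per_sample)))
sample_means = rolls.mean(axis=1)

print("mean of the sample means  :", round(sample_means.mean(), 3))        # ~ 7.0
print("spread of the sample means:", round(sample_means.std(ddof=1), 3))   # ~ 2.42/sqrt(10) ~ 0.76
```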
Now, there are two major caveats here that cause the CLT to break down if they aren't obeyed:
The random variables (rolls) need to be independent. In other words, the CLT will not necessarily be true if the result of the next roll depends on any of the previous rolls. Usually this is the case (and it is in our example), but not always. There are two WoW-related examples I can think of off the top of my head.
Quest items that drop from mobs aren't truly random, at least post-BC (and possibly post-Vanilla). Most quest mobs have a progressively increasing chance to drop quest items, such that the more of them you kill, the higher the chance of an item dropping. This prevents the dreaded "OMG I've killed 8000 motherf@$#ing boars and they haven't dropped a single tusk" effect (yes, that's the technical term for it).
Similarly, bonus rolls have a system where every failed bonus roll will cause a slight increase in the chance of success with your next bonus roll against that boss. So this would be another example where the CLT won't apply, because the rolls aren't truly independent.
The random variables need to be identically distributed. In other words, the probability distribution can't be changing in-between rolls. If we swapped one of our 6-sided dice out for an 8-sided or 10-sided die, all of the sudden our probability distribution would change and there would be no guarantee that the CLT would apply.
You might ask if you could cite either of the two examples of dependence here as examples of non-identical distributions. After all, in each case the probability distribution is changing between rolls. However, that change is due to dependence on previous effects – in a sense, the definition of dependence is "changing the probability distribution between rolls based on prior outcomes." So dependence is a more specific subset of this category.
If either of those things occur, then we can't be sure that the CLT is valid for our situation. Luckily, none of that applies to our dice-rolling example, so we can properly apply the CLT to estimate the error in our set of 100 rolls.
Keep Rollin' Rollin' Rollin' Rollin'
So now that we've talked a lot about deep probability theory, let's actually put it to work on our 100-roll experiment. The standard error of our 100-roll sample is,
$$ {\rm SE}_{\mu} = \sigma_{\rm sample}/\sqrt{N} = 2.52/\sqrt{100} = 0.252 $$
To get our 95% confidence interval (CI), we'd want to look at values between $\mu_{\rm sample}-2{\rm SE}_{\mu}$ and $\mu_{\rm sample}+2{\rm SE}_{\mu}$, or $7.40 \pm 0.504$. And sure enough, the actual value of the population mean (7.00) falls within that confidence interval. Though note that it didn't have to – there was still a 5% chance it wouldn't!
We could improve the estimate by increasing the number of dice rolls. For example, what if we made 1000 rolls instead? That might look something like this:
The outcome of 1000 rolls of two six-sided dice.
We see that our new sample mean is $\mu_{\rm sample}=6.95$ and our sample standard deviation is $\sigma_{\rm sample}=2.41$. But now $N=1000$, so our standard error is much smaller:
$$ {\rm SE}_{\mu} = \sigma_{\rm sample}/\sqrt{N} = 2.41/\sqrt{1000} = 0.0762$$
As before, we're 95% confident that our sample mean is within $\pm 2{\rm SE}_{\mu} = \pm 0.1524$ of the population mean, and sure enough it is.
Of course, we could keep going. Here's what 10000 rolls looks like:
The outcome of 10000 rolls of two six-sided dice.
And if we calculate our standard error for this distribution, we get:
$$ {\rm SE}_{\mu} = \sigma_{\rm sample}/\sqrt{N} = 2.43/\sqrt{10000} = 0.0243$$
So now we're pretty sure that the value of 7.01 is correct to within $\pm 0.0486$, again with 95% confidence. Like before, there's no guarantee that it will be – there's still that 5% chance it falls outside that range. But we can solve that by increasing our confidence interval (say, looking at $\pm 3{\rm SE}_{\mu}$) or by repeating the experiment a few times and thinking about the results. If we repeat it 100 times, we'd expect about 95 of them to cluster within $\pm 2{\rm SE}_{\mu}$ of 7.00.
You may have noticed that while the confidence interval is shrinking, it's not doing so as fast as it did going from 100 to 1000. That's because we're dividing by the square root of $N$, which means that to improve the standard error by a factor of $a$, we need to run $a^2$ times as many simulations. So if we want to increase our accuracy by a whole decimal place (a factor of 10), we need to make 100 times as many rolls. This is important stuff to know if you're designing an experiment, because you don't want your graduate thesis to rely on making five trillion dice rolls. Trust me.
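To make that scaling concrete, here's a quick sketch (my own, with arbitrary random seeds) that computes the standard error for progressively larger samples; each factor-of-10 increase in $N$ only shrinks the error by a factor of $\sqrt{10}\approx 3.16$:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
for N in (100, 1_000, 10_000, 100_000):
    rolls = rng.integers(1, 7, size=N) + rng.integers(1, 7, size=N)
    se = rolls.std(ddof=1) / np.sqrt(N)
    print(f"N = {N:>6d}   standard error ~ {se:.4f}")
```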
You probably also noticed that the more rolls we make, the more the sample probability distribution resembles the ideal "triangular" case we arrived at theoretically. That's to be expected – the more rolls we make, the better the sample approximates the real distribution. This is related to another law (the amusingly-named law of large numbers) that's important for the CLT, but I don't have time to go into that here. But it was worth mentioning just because "law of large numbers" is probably the best name for a mathematical law ever.
Finally, I mentioned that our "triangular" distribution for two dice looks vaguely normal, and that this relates to the CLT somehow. Here's how. Each die is essentially its own random variable with a "flat" or "uniform" probability distribution (you have an equal chance to roll any number on the die). So when we take two of them and calculate the sum, we're really performing two experiments and finding two sample means (with a sample size of 1 roll each). The sum of those two sample means, which is just twice the average of the sample means, is our result. This is exactly how we phrased our description of the CLT!
The reason we get a triangle rather than a nice Gaussian is that two dice is not "a sufficiently large number of iterates." There is, unfortunately, no clean closed-form expression for this probability distribution for arbitrary numbers of $s$-sided dice (something called the binomial distribution works when $s$=2, i.e. for coin flips). But if we rolled 5 dice or 10 dice instead of two, and added all of those up, we'd start to get a distribution that looked very much like a normal distribution. And in fact, if you read either of the articles linked in this paragraph, you'll see that they both become well-approximated by a normal distribution as you increase the number of experiments (die rolls).
World of Stat-craft?
Now that you've read through 4000 words on probability theory, you may ask where the damn World of Warcraft content is. The short answer: next blog post. But as a teaser, let's consider a graph that shows up in your Simulationcraft output:
A DPS distribution generated by Simulationcraft.
When you simulate a character in SimC, you run some number of iterations. Each iteration gives you an average DPS result, which is essentially one result of a random variable. In other words, each iteration is comparable to a single roll of the dice in our example experiment. If we run a simulation for 1000 iterations, that gives us 1000 different data points, from which we can calculate a sample mean (367.7k in this case), a sample standard deviation, and a standard error value.
And all of the same statistics apply here. This plot gives us the "DPS distribution function," which is equivalent to the triangular distribution in our experiment. The DPS distribution looks Gaussian/normal, but be aware that there's no reason it has to be. It generally will look close to normal just because each iteration is the result of a large number of "RNG rolls," many of which are independent. But some of those RNG rolls are not independent (for example, they may be contingent on the previous die roll succeeding and granting you a specific proc, like Grand Crusader). With certain character setups you can definitely generate DPS distributions that deviate significantly from a normal distribution (skewed heavily to one side, for example).
But again, because of the Central Limit Theorem, we don't care that much what this DPS distribution function looks like. As long as each iteration is independent, we can use the normal distribution to estimate the accuracy of the sample mean. So we can calculate the standard error and report that as a way of telling the user how confident they should be in the average DPS value of 367.7k DPS.
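As a rough illustration of that calculation (a sketch of my own, using made-up per-iteration DPS values rather than actual Simulationcraft output):

```python
import numpy as np

# Hypothetical per-iteration DPS results -- placeholder numbers only.
rng = np.random.default_rng(seed=3)
dps_per_iteration = rng.normal(loc=367_700, scale=20_000, size=1_000)

mean_dps = dps_per_iteration.mean()
se_dps = dps_per_iteration.std(ddof=1) / np.sqrt(dps_per_iteration.size)
print(f"mean DPS = {mean_dps:,.0f} ± {2 * se_dps:,.0f} (95% confidence)")
```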
At the very beginning of this post, I said I was looking into a strange deviation from the expected error. What I was finding was that my observed errors were larger than what Simulationcraft was reporting. Next time, we'll look a little more closely into how Simulationcraft reports error, and discuss the specifics of that effect – why it was happening, and how we fixed it.
6 Responses to A Comedy of Error – Part I
Çapncrunch says:
The Law of Large Numbers is definitely a great name for a mathematical theorem, but if we include theorems related to probability (as opposed to strictly mathematics) I think the prize goes to the Infinite Monkey Theorem (also I like the Law of Truly Large Numbers, for times when Large Numbers just aren't large enough 😛 )
Talarian says:
This was an enjoyable and excellent review of my statistics and probability course I took a decade ago. Looking forward to your next post in the series.
Caltiom says:
Excellent review! Probably one of the best texts to explain some fundamental statistical metrics, including the CLT.
You asked for corrections:
Section Keep Rollin' Rollin' Rollin' Rollin'
"SEμ=σsample/N−−√=2.52/100−−−√=0.252
To get our 95% confidence interval (CI), we'd want to look at values between μsample−2σsample and μsample+2σsample, or 7.40±0.504."
I think you meant CI = μsample +- SEμ
Regarding distributions in SimC potentially deviating from a normal distribution: You can very quickly get one by simulating a character going oom.
Yep, I did. Good catch.
Tyrunea says:
This is all like learning bits of a new language for me, I'm not huge with numbers. But this is still really interesting and I'm going to have to hunt for someone who does this for ret as well. On a WoW specific note: "This prevents the dreaded 'OMG I've killed 8000 motherf@$#ing boars and they haven't dropped a single tusk' effect (yes, that's the technical term for it)." I kinda miss this. I mean, yeah, it was aggravating, but the game felt slightly more challenging when you didn't have an increasing chance of success on everything you did. After you killed that eight-thousandth murloc in Southshore for the heads they seemingly don't have, it feels so much better than getting it done quick and easy. Just my two cents.
Pingback: A Comedy of Error – Part II | Sacred Duty
Mechanisms of blood homeostasis: lineage tracking and a neutral model of cell populations in rhesus macaques
Sidhartha Goyal, Sanggu Kim, Irvin SY Chen & Tom Chou
BMC Biology volume 13, Article number: 85 (2015)
How a potentially diverse population of hematopoietic stem cells (HSCs) differentiates and proliferates to supply more than 10^11 mature blood cells every day in humans remains a key biological question. We investigated this process by quantitatively analyzing the clonal structure of peripheral blood that is generated by a population of transplanted lentivirus-marked HSCs in myeloablated rhesus macaques. Each transplanted HSC generates a clonal lineage of cells in the peripheral blood that is then detected and quantified through deep sequencing of the viral vector integration sites (VIS) common within each lineage. This approach allowed us to observe, over a period of 4-12 years, hundreds of distinct clonal lineages.
While the distinct clone sizes varied by three orders of magnitude, we found that collectively, they form a steady-state clone size distribution with a distinctive shape. Steady-state solutions of our model show that the predicted clone size distribution is sensitive to only two combinations of parameters. By fitting the measured clone size distributions to our mechanistic model, we estimate both the effective HSC differentiation rate and the number of active HSCs.
Our concise mathematical model shows how slow HSC differentiation followed by fast progenitor growth can be responsible for the observed broad clone size distribution. Although all cells are assumed to be statistically identical, analogous to a neutral theory for the different clone lineages, our mathematical approach captures the intrinsic variability in the times to HSC differentiation after transplantation.
Around 10^11 new mature blood cells are generated in a human every day. Each mature blood cell comes from a unique hematopoietic stem cell (HSC). Each HSC, however, has tremendous proliferative potential and contributes a large number and variety of mature blood cells for a significant fraction of an animal's life. Traditionally, HSCs have been viewed as a homogeneous cell population, with each cell possessing equal and unlimited proliferative potential. In other words, the fate of each HSC (to differentiate or replicate) would be determined by its intrinsic stochastic activation and signals from its microenvironment [1, 2].
However, as first shown in Muller-Sieburg et al. [3], singly transplanted murine HSCs differ significantly in their long-term lineage (cell-type) output and in their proliferation and differentiation rates [4–7]. Similar findings have been found from examining human embryonic stem cells and HSCs in vitro [8, 9]. While cell-level knowledge of HSCs is essential, it does not immediately provide insight into the question of blood homeostasis at the animal level. More concretely, analysis of single-cell transplants does not apply to human bone marrow transplants, which involve millions of CD34-expressing primitive hematopoietic and committed progenitor cells. Polyclonal blood regeneration from such hematopoietic stem and progenitor cell (HSPC) pools is more complex and requires regulation at both the individual cell and system levels to achieve stable [10, 11] or dynamic [12] homeostasis.
To dissect how a population of HSPCs supplies blood, several high-throughput assay systems that can quantitatively track repopulation from an individual stem cell have been developed [6, 11, 13, 14]. In the experiment analyzed in this study, as outlined in Fig. 1, each individual CD34+ HSPC is distinctly labeled by the random incorporation of a lentiviral vector in the host genome before transplantation into an animal. All cells that result from proliferation and differentiation of a distinctly marked HSPC will carry identical markings defined by the location of the original viral vector integration site (VIS). By sampling nucleated blood cells and enumerating their unique VISs, one can quantify the cells that arise from a single HSPC marked with a viral vector. Such studies in humans [15] have revealed highly complex polyclonal repopulation that is supported by tens of thousands of different clones [15–18]; a clone is defined as a population of cells of the same lineage, identified here by a unique VIS. These lineages, or clones, can be distributed across all cell types that may be progeny of the originally transplanted HSC after it undergoes proliferation and differentiation. However, the number of cells of any VIS lineage across certain cell types may be different. By comparing abundances of lineages across blood cells of different types, for example, one may be able to determine the heterogeneity or bias of the HSC population or if HSCs often switch their output. This type of analysis remains particularly difficult in human studies since transplants are performed under diseased settings and followed for only 1 or 2 years.
Probing hematopoietic stem and progenitor cell (HSPC) biology through polyclonal analysis. a Mobilized CD34+ bone marrow cells from rhesus macaques are first marked individually with lentiviral vectors and transplanted back into the animal after nonlethal myeloablative irradiation [19]. Depending on the animal, 30–160 million CD34+ cells were transplanted, with a fraction ∼0.07–0.3 of them being lentivirus-marked. The clonal contribution of vector-marked HSPCs is measured from blood samples periodically drawn over a dozen years [19]. An average fraction f ∼0.03–0.1 of the sampled granulocytes and lymphocytes in the peripheral blood was found to be marked. This fraction is smaller than the fraction of marked CD34+ cells due probably to repopulation by surviving unmarked stem cells in the marrow after myeloablative conditioning. Within any post-transplant sample, S=1342–44,415 (average 10,026) viral integration sites of the marked cells were sequenced (see [14, 19] for details). b The fraction of all sequenced VIS reads belonging to each clone is shown by the thickness of the slivers. Small clones are not explicitly shown
We analyze here a systematic clone-tracking study that used a large number of HSPC clones in a transplant and competitive repopulation setting comparable to that used in humans [19]. In these nonhuman primate rhesus macaque experiments, lentiviral vector-marked clones were followed for up to a decade post-transplantation (equivalent to about 30 years in humans if extrapolated by average life span). All data are available in the supplementary information files of Kim et al. [19]. This long-term study allows one to distinguish clearly HSC clones from other short-term progenitor clones that were included in the initial pool of transplanted CD34+ cells. Hundreds to thousands of detected clones participated in repopulating the blood in a complex yet highly structured fashion. Preliminary examination of some of the clone populations suggests waves of repopulation with short-lived clones that first grow then vanish within the first 1 or 2 years, depending on the animal [19].
Subsequent waves of HSC clones appear to rise and fall sequentially over the next 4–12 years. This picture is consistent with recent observations in a transplant-free transposon-based tagging study in mice [20] and in human gene therapy [15, 16]. Therefore, the dynamics of a clonally tracked nonhuman primate HSPC repopulation provides rich data that can inform our understanding of regulation, stability, HSPC heterogeneity, and possibly HSPC aging in hematopoiesis.
Although the time-dependent data from clonal repopulation studies are rich in structure, in this study we focus on one specific aspect of the data: the number of clones that are of a certain abundance as described in Fig. 2. Rather than modeling the highly dynamic populations of each clone, our aim here is to develop first a more global understanding of how the total number of clones represented by specific numbers of cells arises within a mechanistically reasonable model of hematopoiesis. The size distributions of clones present in the blood sampled from different animals at different times are characterized by specific shapes, with the largest clones being a factor of 100–1000 times more abundant than the most rarely detected clones. Significantly, our analysis of renormalized data indicates that the clone size distribution (measuring the number of distinct lineages that are of a certain size) reaches a stationary state as soon as a few months after transplantation (see Fig. 4 below). To reconcile the observed stationarity of the clone size distributions with the large diversity of clonal contributions in the context of HSPC-mediated blood repopulation, we developed a mathematical model that treats three distinct cell populations: HSCs, transit-amplifying progenitor cells, and fully differentiated nucleated blood cells (Fig. 3). While multistage models for a detailed description of differentiation have been developed [21], we lump different stages of cell types within the transit-amplifying progenitor pool into one population, avoiding excess numbers of unmeasurable parameters. Another important feature of our model is the overall effect of feedback and regulation, which we incorporate via a population-dependent cell proliferation rate for progenitor cells.
Quantification of marked clones. a Assuming each transplanted stem cell is uniquely marked, the initial number of CD34+ cells representing each clone is one. b The pre-transplant clone size distribution is thus defined by the total number of transplanted CD34+ cells and is peaked at one cell. Post-transplant proliferation and differentiation of the HSC clones result in a significantly broader clone size distribution in the peripheral blood. The number of differentiated cells for each clone and the number of clones represented by exactly k cells, 5 years post-transplantation (corresponding to Fig. 1a), are overlaid in (a) and (b) respectively. c Clone size distribution (blue) and the cumulative normalized clone size distribution (red) of the pre-transplant CD34+ population. d After transplantation, clone size distributions in the transit-amplifying (TA) and differentiated peripheral cell pools broaden significantly (with clones ranging over four decades in size) but reach a steady state. The corresponding cumulative normalized distribution is less steep
Schematic of our mathematical model. Of the ∼10^6–10^7 CD34+ cells in the animal immediately after transplantation, C active HSCs are distinctly labeled through lentiviral vector integration. U HSCs are unlabeled because they were not mobilized, escaped lentiviral marking, or survived ablation. All HSCs asymmetrically divide to produce progenitor cells, which in turn replicate with an effective carrying capacity-limited rate r. Transit-amplifying progenitor cells die with rate μ_p or terminally differentiate with rate ω. The terminal differentiation of the progenitor cells occurs symmetrically with probability η or asymmetrically with probability 1−η. This results in a combined progenitor-cell removal rate μ = μ_p + ηω. The differentiated cells outside the bone marrow are assumed not to be subject to direct regulation but undergo turnover with a rate μ_d. The mean total numbers of cells in the progenitor and differentiated populations are denoted N_p and N_d, respectively. Finally, a small fraction ε ≪ 1 of differentiated cells is sampled, sequenced, and found to be marked. In this example, S = εN_d = 5. Because some clones may be lost as cells successively progress from one pool to the next, the total number of clones in each pool must obey C ≥ C_p ≥ C_d ≥ C_s. Analytic expressions for the expected total number of clones in each subsequent pool are derived in Additional file 1. HSC hematopoietic stem cell, TA transit-amplifying
Rescaled and renormalized data. a Individual clone populations (here, peripheral blood mononuclear cells of animal RQ5427) show significant fluctuations in time. For clarity, only clones that reach an appreciable frequency are plotted. b The corresponding normalized clone size distributions at each time point are rescaled by the sampled and marked fraction of blood, ν = (q/S)×f, where q is the number of reads of a particular clone within the sample. After an initial transient, the fraction of clones (dashed curves) as a function of relative size remains stable over many years. For comparison, the dot-dashed gray curves represent binomial distributions (with S = 10^3 and 10^4 and equivalent mean clone sizes) and underestimate low population clones
The effective proliferation rate will be modeled using a Hill-type suppression that is defined by the limited space for progenitor cells in the bone marrow. Such a regulation term has been used in models of cyclic neutropenia [22] but has not been explicitly treated in models of clone propagation in hematopoiesis. Our mathematical model is described in greater detail in the next section and in Additional file 1.
Our model shows that both the large variability and the characteristic shape of the clone size distribution can result from a slow HSC-to-progenitor differentiation followed by a burst of progenitor growth, both of which are generic features of hematopoietic systems across different organisms. By assuming a homogeneous HSC population and fitting solutions of our model to available data, we show that randomness from stochastic activation and proliferation and a global carrying capacity are sufficient to describe the observed clonal structure. We estimate that only a few thousand HSCs may be actively contributing toward blood regeneration at any time. Our model can be readily generalized to include the role of heterogeneity and aging in the transplanted HSCs and provides a framework for quantitatively studying physiological perturbations and genetic modifications of the hematopoietic system.
Mathematical Model
Our mathematical model explicitly describes three subpopulations of cells: HSCs, transit-amplifying progenitor cells, and terminally differentiated blood cells (see Fig. 3). We will not distinguish between myeloid or lymphoid lineages but will use our model to analyze clone size distribution data for granulocytes and peripheral blood mononuclear cells independently. Our goal will be to describe how clonal lineages, started from distinguishable HSCs, propagate through the amplification and terminal differentiation processes.
Often clone populations are modeled directly by dynamical equations for n_j(t), the number of cells of a particular clone j identified by its specific VIS [23]. Since all cells are identical except for their lentiviral marking, mean-field rate equations for n_j(t) are identical for all j. Assuming identical initial conditions (one copy of each clone), the expected populations n_j(t) would be identical across all clones j. This is a consequence of using identical growth and differentiation rates to describe the evolution of the mean number of cells of each clone.
Therefore, for cells in any specific pool, rather than deriving equations for the mean number n_j of cells of each distinct clone j (Fig. 2 a), we perform a hodograph transformation [24] and formulate the problem in terms of the number of clones that are represented by k cells, \(c_{k} = \sum _{j}\delta _{k,n_{j}}\) (see Fig. 2 b), where the Kronecker δ function \(\delta _{k,n_{j}}=1\) only when k = n_j and is 0 otherwise. This counting scheme is commonly used in the study of cluster dynamics in nucleation [25] and in other related models describing the dynamics of distributions of cell populations. By tracking the number of clones of different sizes, the intrinsic stochasticity in the times of cell division (especially that of the first differentiation event) and the subsequent variability in the clone abundances are quantified. Figure 2 a, b qualitatively illustrates n_j and c_k, pre-transplant and after 5 years, corresponding to the scenario depicted in Fig. 1 a. Cells in each of the three pools are depicted in Fig. 3, with different clones grouped according to the number of cells representing each clone.
The first pool (the progenitor-cell pool) is fed by HSCs through differentiation. Regulation of HSC differentiation fate is known to be important for efficient repopulation [26, 27] and control [28] and the balance between asymmetric and symmetric differentiation of HSCs has been studied at the microscopic and stochastic levels [29–32]. However, since HSCs have life spans comparable to that of an animal, we reasoned that the total number of HSCs changes only very slowly after the initial few-month transient after transplant. For simplicity, we will assume, consistent with estimates from measurements [33], that HSCs divide only asymmetrically. Therefore, upon differentiation, each HSC produces one partially differentiated progenitor cell and one replacement HSC. How symmetric HSC division might affect the resulting clone sizes is discussed in Additional file 1 through a specific model of HSC renewal in a finite-sized HSC niche. We find that the incorporation of symmetric division has only a small quantitative effect on the clone size distribution that we measure and ultimately analyze.
Next, consider the progenitor-cell pool. From Fig. 3, we can count the number of clones c_k represented by exactly k cells. For example, the black, red, green, and yellow clones are each represented by three cells, so c_3 = 4. Each progenitor cell can further differentiate with rate ω into a terminally differentiated cell. If progenitor cells undergo symmetric differentiation with probability η and asymmetric differentiation with probability 1−η, the effective rate of differentiation is 2ηω + (1−η)ω = (1+η)ω. In turn, fully differentiated blood cells (not all shown in Fig. 3) are cleared from the peripheral pool at rate μ_d, providing a turnover mechanism. Finally, each measurement is a small-volume sample drawn from the peripheral blood pool, as shown in the final panel in Fig. 3.
Note that the transplanted CD34+ cells contain both true HSCs and progenitor cells. However, we assume that at long times, specific clones derived from progenitor cells die out and that only HSCs contribute to long-lived clones. Since we measure the number of clones of a certain size rather than the dynamics of individual clone numbers, transplanted progenitor cells should not dramatically affect the steady-state clone size distribution. Therefore, we will ignore transplanted progenitor cells and assume that after transplantation, effectively only U unlabeled HSCs and C labeled (lentivirus-marked) HSCs are present in the bone marrow and actively asymmetrically differentiating (Fig. 3). Mass-action equations for the expected number of clones c_k of size k are derived from considering simple birth and death processes with immigration (HSC differentiation):
$$ \begin{aligned} \frac{\mathrm{d} c_{k}}{\mathrm{d} t} = \underbrace{ \alpha\left[c_{k-1} - c_{k}\right]}_{\textrm{HSC differentiation}} &+ \underbrace{r\left[(k-1)c_{k-1}-{kc}_{k}\right]}_{\textrm{progenitor birth}}\\ &+ \underbrace{\mu\left[(k+1)c_{k+1} - k c_{k}\right]}_{\textrm{progenitor death}}, \end{aligned} $$
(1)
where k=1,2,…,C and \(c_{0}(t) \equiv C - \sum _{k=1}^{\infty }c_{k}(t)\) is the number of clones that are not represented in the progenitor pool. Since C is large, and the number of clones that are of size comparable to C is negligible, we will approximate C→∞ in our mathematical derivations. We have suppressed the time dependence of c_k(t) for notational simplicity. The constant parameter α is the asymmetric differentiation rate of all HSCs, while r and μ are the proliferation and overall clearance rates of progenitor cells. In our model, HSC differentiation events that feed the progenitor pool are implicitly a rate-α Poisson process. The appreciable number of detectable clones (Fig. 1 b) implies the initial number C of HSC clones is large enough that asymmetric differentiation of individual HSCs is uncorrelated. The alternative scenario of a few HSCs undergoing synchronized differentiation would not lead to appreciably different results since the resulting distribution c_k is more sensitive to the progenitor cells' unsynchronized replication and death than to the statistics of the immigration by HSC differentiation.
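As an illustration (not part of the original analysis), Eq. 1 can be integrated numerically after truncating at a maximum clone size k_max. The sketch below uses SciPy with arbitrary placeholder rates and a constant growth rate r; the regulated rate r(N_p) introduced later in Eq. 3 could be substituted without changing the structure.

```python
import numpy as np
from scipy.integrate import solve_ivp

C = 500                          # labeled, actively differentiating HSC clones (placeholder)
alpha, r, mu = 0.01, 0.9, 1.0    # placeholder rates; r < mu keeps clone sizes bounded
k_max = 200                      # truncation of the maximum clone size

def rhs(t, c):
    # c[k-1] = c_k, the number of clones represented by exactly k progenitor cells
    c0 = C - c.sum()                          # clones not yet represented in the pool
    k = np.arange(1, k_max + 1)
    c_km1 = np.concatenate(([c0], c[:-1]))    # c_{k-1}
    c_kp1 = np.concatenate((c[1:], [0.0]))    # c_{k+1}, zero beyond the truncation
    return (alpha * (c_km1 - c)               # HSC differentiation
            + r * ((k - 1) * c_km1 - k * c)   # progenitor birth
            + mu * ((k + 1) * c_kp1 - k * c)) # progenitor death

sol = solve_ivp(rhs, (0.0, 500.0), np.zeros(k_max), method="LSODA")
print("steady-state c_1, c_2, c_3 ~", sol.y[:3, -1])
```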
The final differentiation from progenitor cell to peripheral blood cell can occur through symmetric or asymmetric differentiation, with probabilities η and 1−η, respectively. If parent progenitor cells are unaffected after asymmetric terminal differentiation (i.e., they die at the normal rate μ_p), the dynamics are feed-forward and the progenitor population is not influenced by terminal differentiation. Under symmetric differentiation, a net loss of one progenitor cell occurs. Thus, the overall progenitor-cell clearance rate can be decomposed as μ = μ_p + ηω. We retain the factor η in our equations for modeling pedagogy, although in the end it is subsumed in effective parameters and cannot be independently estimated from our data.
The first term in Eq. 1 corresponds to asymmetric differentiation of each of the C active clones, of which c_k are in lineages with k cells already represented in the progenitor pool. Differentiation of this subset of clones will add another cell to these specific lineages, reducing c_k. Similarly, differentiation of HSCs in lineages that are represented by k−1 progenitor cells adds cells to these lineages and increases c_k. Note that Eq. 1 are mean-field rate equations describing the evolution of the expected number of clones of size k. Nonetheless, they capture the intrinsic dispersion in lineage sizes that make up the clone size distribution. While all cells are assumed to be statistically identical, with equal rates α, p, and μ, Eq. 1 directly model the evolution of the distribution c_k(t) that arises ultimately from the distribution of times for each HSC to differentiate or for the progenitor cells to replicate or die. Similar equations have been used to model the evolving distribution of virus capsid sizes [34].
Since the equations for c_k(t) describe the evolution of a distribution, they are sometimes described as master equations for the underlying process [34, 35]. Here we note that the solution to Eq. 1, c_k(t), is the expected distribution of clone sizes. Another level of stochasticity could be used to describe the evolution of a probability distribution \(P_{b}(\textbf {b};t) = P_{b}(b_{0}, b_{1},\ldots,b_{N_{\mathrm {p}}};t)\) over the integer numbers b_k. This density represents the probability that at time t, there are b_0 unrepresented lineages, b_1 lineages represented by one cell in the progenitor pool, b_2 lineages represented by two cells in the progenitor pool, and so on. Such a probability distribution would obey an N_p-dimensional master equation rather than a one-dimensional equation, like Eq. 1, and once known, can be used to compute the mean \(c_{k}(t) = \sum _{\textbf {b}} b_{k}P(\textbf {b};t)\). To consider the entire problem stochastically, the variability described by the probability distribution P_b would have to be propagated forward to the differentiated cell pool as well. Given the modest number of measured data sets and the large numbers of lineages that are detectable in each, we did not attempt to use the data as samples of the distribution P_b, and instead directly model the mean values c_k. Variability from both intrinsic stochasticity and sampling will be discussed in Additional file 1.
After defining u(t) as the number of unlabeled cells in the progenitor pool, and \(N_{\mathrm {p}}(t) = u(t)+\sum _{k=1}^{\infty }{kc}_{k}(t)\) as the total number of progenitor cells, we find \(\dot {u} = (r - \mu) u + \alpha U\) and
$$ \frac{\mathrm{d} N_{\mathrm{p}}(t)}{\mathrm{d} t} = \alpha \left(U+C\right)+\left(r-\mu \right)N_{\mathrm{p}}(t). $$
Without regulation, the total population N_p(t→∞) will either reach N_p ≈ α(U+C)/(μ−r) for μ > r or will exponentially grow without bound for r > μ. Complex regulation terms have been employed in deterministic models of differentiation [28] and in stochastic models of myeloid/lymphoid population balance [36]. For the purpose of estimating macroscopic clone sizes, we assume regulation of cell replication and/or spatial constraints in the bone marrow can be modeled by a simple effective Hill-type growth law [22, 37]:
$$ r = r(N_{\mathrm{p}}) \equiv \frac{pK}{N_{\mathrm{p}}+K} $$
where p is the intrinsic replication rate of an isolated progenitor cell. We assume that progenitor cells at low density have an overall positive growth rate p > μ. The parameter K is the progenitor-cell population in the bone marrow that corresponds to the half-maximum of the effective growth rate. It can also be interpreted as a limit to the bone marrow size that regulates progenitor-cell proliferation to a value determined by K, p, and μ and is analogous to the carrying capacity in logistic models of growth [38]. For simplicity, we will denote K as the carrying capacity in Eq. 3 as well. Although our data analysis is insensitive to the precise form of regulation used, we chose the Hill-type growth suppression because it avoids negative growth rates that confuse physiological interpretation. An order-of-magnitude estimate of the bone marrow size (or carrying capacity) in the rhesus macaque is K ∼ 10^9. Ultimately, we are interested in how a limited progenitor pool influences the overall clone size distribution, and a simple, single-parameter (K) approximation to the progenitor-cell growth constraint is sufficient.
Upon substituting the growth law r(N_p) described by Eq. 3 into Eq. 2, the total progenitor-cell population N_p(t→∞) at long times is explicitly shown in Additional file 1: Eq. A19 to approach a finite value that depends strongly on K. Progenitor cells then differentiate to supply peripheral blood at rate (1+η)ω so that the total number of differentiated blood cells obeys
$$ \frac{\mathrm{d} N_{\mathrm{d}}(t)}{\mathrm{d} t} = (1+\eta)\omega N_{\mathrm{p}} - \mu_{\mathrm{d}}N_{\mathrm{d}}. $$
At steady state, the combined peripheral nucleated blood population is estimated to be N_d ∼ 10^9–10^10 [39], setting an estimate of N_d/N_p ≈ (1+η)ω/μ_d ∼ 1–10. Moreover, as we shall see, the relevant factor in our steady-state analysis will be the numerical value of the effective growth rate r, rather than its functional form. Therefore, the chosen form for regulation will not play a role in the mathematical results in this paper except to define parameters (such as K) explicitly in the regulation function itself.
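A short numerical sketch (with illustrative placeholder parameter values, not fitted ones) of Eqs. 2–4 together with the Hill-type regulation of Eq. 3 shows N_p saturating near the level set by the carrying capacity K:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, p, mu = 1.0, 2.5, 1.0        # HSC differentiation, progenitor birth and clearance (placeholders)
U_plus_C = 5_000                     # total number of active HSCs (unlabeled + labeled)
K = 1e9                              # carrying capacity of the progenitor pool
eta, omega, mu_d = 0.5, 1.0, 1.0     # terminal differentiation and peripheral turnover

def rhs(t, y):
    N_p, N_d = y
    r = p * K / (N_p + K)                        # Eq. (3), Hill-type regulation
    dN_p = alpha * U_plus_C + (r - mu) * N_p     # Eq. (2)
    dN_d = (1 + eta) * omega * N_p - mu_d * N_d  # Eq. (4)
    return [dN_p, dN_d]

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0], method="LSODA")
N_p_ss, N_d_ss = sol.y[:, -1]
print(f"N_p ~ {N_p_ss:.3g}, N_d ~ {N_d_ss:.3g}, N_d/N_p ~ {N_d_ss / N_p_ss:.2f}")
```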
To distinguish and quantify the clonal structure within the peripheral blood pool, we define \(y_{n}^{(k)}\) to be the number of clones that are represented by exactly n cells in the differentiated pool and k cells in the progenitor pool. For example, in the peripheral blood pool shown in Fig. 3, \(y_{1}^{(3)} = y_{2}^{(3)} = y_{4}^{(3)} = y_{6}^{(3)} = 1\). This counting of clones across both the progenitor and peripheral blood pools is necessary to balance progenitor-cell differentiation rates with peripheral blood turnover rates. The evolution equations for \(y_{n}^{(k)}\) can be expressed as
$$ \frac{\mathrm{d} y_{n}^{(k)}}{\mathrm{d} t} = (1+\eta)\omega k \left(y_{n-1}^{(k)} - y_{n}^{(k)}\right) + (n+1) \mu_{\mathrm{d}}\,y_{n+1}^{(k)} - n \mu_{\mathrm{d}}\, y_{n}^{(k)}, $$
where \(y_{0}^{(k)} \equiv c_{k} - \sum _{n=1}^{\infty }y_{n}^{(k)}\) represents the number of progenitor clones of size k that have not yet contributed to peripheral blood. The transfer of clones from the progenitor population to the differentiated pool arises through \(y_{0}^{(k)}\) and is simply a statement that the number of clones in the peripheral blood can increase only by differentiation of a progenitor cell whose lineage has not yet populated the peripheral pool. The first two terms on the right-hand side of Eq. 5 represent immigration of clones represented by n−1 and n differentiated cells conditioned upon immigration from only those specific clones represented by k cells in the progenitor pool. The overall rate of addition of clones from the progenitor pool is thus (1+η)ωk, in which the frequency of terminal differentiation is weighted by the stochastic division factor (1+η). By using the Hill-type growth term r(N_p) from Eq. 3, Eq. 1 can be solved to find c_k(t), which in turn can be used in Eq. 5 to find \(y_{n}^{(k)}(t)\). The number of clones in the peripheral blood represented by exactly n differentiated cells is thus \(y_{n}(t) = \sum _{k=1}^{\infty }y_{n}^{(k)}(t)\).
As we mentioned, Eqs. 1 and 5 describe the evolution of the expected clone size distribution. Since each measurement represents one realization of the distributions c_k(t) and y_n(t), the validity of Eqs. 1 and 5 relies on a sufficiently large C such that the marked HSCs generate enough lineages and cells to allow the subsequent peripheral blood clone size distribution to be sampled adequately. In other words, measurement-to-measurement variability described by, e.g., \(\langle c_{k}(t)c_{k^{\prime }}(t)\rangle - \langle c_{k}(t)\rangle \langle c_{k^{\prime }}(t)\rangle \) is assumed negligible (see Additional file 1). Our modeling approach would not be applicable to single-HSC transplant studies [4–6] unless the measured clone sizes from multiple experiments are aggregated into a distribution.
Finally, to compare model results with animal blood data, we must consider the final step of sampling small aliquots of the differentiated blood. As derived in Additional file 1: Eq. A11, if S marked cells are drawn and sequenced successfully (from a total differentiated cell population N_d), the expected number of clones 〈m_k(t)〉 represented by k cells is given by
$$ \langle m_{k}(t)\rangle = F(k,t)-F(k-1,t) = \sum_{\ell =0}^{\infty }\mathrm{e}^{-\ell \varepsilon}\frac{\left(\ell \varepsilon \right)^{k}}{k!}\,y_{\ell}(t), $$
where ε ≡ S/N_d ≪ 1 and \(F(q,t) \equiv \sum _{k=0}^{q}\langle m_{k}(t)\rangle \) is the sampled, expected cumulative size distribution. Upon further normalization by the total number of detected clones in the sample, C_s(t) = F(S,t)−F(0,t), we define
$$ Q(q,t) \equiv \frac{F(q, t) - F(0,t)}{F(S,t)-F(0,t)} $$
as the fraction of the total number of sampled clones that are represented by q or fewer cells. Since the data represented in terms of Q will be seen to be time-independent, explicit expressions for \(c_{k}, y_{n}^{(k)}\), 〈m_k〉, and Q(q) can be derived. Summarizing, the main features and assumptions used in our modeling include:
A neutral-model framework [40] that directly describes the distribution of clone sizes in each of the three cell pools: progenitor cells, peripheral blood cells, and sampled blood cells. The cells in each pool are statistically identical.
A constant asymmetric HSC differentiation rate α. The appreciable numbers of unsynchronized HSCs allow the assumption of Poisson-distributed differentiation times of the HSC population. The level of differentiation symmetry is found to have little effect on the steady-state clone size distribution (see Additional file 1). The symmetry of the terminal differentiation step is also irrelevant for understanding the available data.
A simple one-parameter (K) growth regulation model that qualitatively describes the finite maximum size of the progenitor population in the bone marrow. Ultimately, the specific form for the regulation is unimportant since only the steady-state value of the growth parameter r affects the parameter fitting.
Using only these reasonable model features, we are able to compute clone size distributions and compare them with data. An explicit form for the expected steady-state clone size distribution 〈m_k〉 is given in Additional file 1: Eq. A32, and the parameters and variables used in our analysis are listed in Table 1.
Table 1 Model parameters and variables. Estimates of steady-state values are provided where available. We assume little prior knowledge on all but a few of the more established parameters. Nonetheless, our modeling and analysis place constraints on combinations of parameters, allowing us to fit data and provide estimates for steady-state values of U+C ∼ 10^3–10^4 and α(N_p+K)/(pK) ∼ 0.002–0.1
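As an illustration of the sampling step in Eqs. 6 and 7, the sketch below applies Poisson thinning with sampling fraction ε to an arbitrary placeholder peripheral clone size distribution y_ℓ (an exponential shape chosen only for demonstration, not real data) to obtain the expected sampled distribution 〈m_k〉 and the cumulative fraction Q:

```python
import numpy as np
from scipy.stats import poisson

eps = 1e-3                                  # sampling fraction eps = S / N_d (placeholder)
L = 20_000
l = np.arange(1, L + 1)
y = 1e3 * np.exp(-l / 5e3)                  # placeholder peripheral clone size distribution y_l

k = np.arange(0, 101)                       # sampled clone sizes of interest
m = poisson.pmf(k[:, None], l[None, :] * eps) @ y   # Eq. (6): <m_k> = sum_l Pois(k; l*eps) y_l

F = np.cumsum(m)                            # F(q) = sum_{k <= q} <m_k>
Q = (F - F[0]) / (F[-1] - F[0])             # Eq. (7): fraction of detected clones of size <= q
print("expected number of detected clones:", round(F[-1] - F[0], 1))
```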
In this section, we describe how previously published data (the number of cells of each detected clone in a sample of the peripheral blood, which are available in the supplementary information files of Kim et al. [19]) are used to constrain parameter values in our model. We emphasize that our model is structurally different from models used to track lineages and clone size distributions in retinal and epithelial tissues [41, 42]. Rather than tracking only the lineages of stem cells (which are allowed to undergo asymmetric differentiation, symmetric differentiation, or symmetric replication), our model assumes a highly proliferative population constrained by a carrying capacity K and slowly fed at rate α by an asymmetrically dividing HSC pool of C fixed clones. We have also included terminal differentiation into peripheral blood and the effects of sampling on the expected clone size distribution. These ingredients yield a clone size distribution different from those previously derived [41, 42], as described in more detail below.
Stationarity in time
Clonal contributions of the initially transplanted HSC population have been measured over 4–12 years in four different animals. As depicted in Fig. 4 a, populations of individual clones of peripheral blood mononuclear cells from animal RQ5427, as well as all other animals, show significant variation in their dynamics. Since cells of any detectable lineage will number in the millions, this variability in lineage size across time cannot be accounted for by the intrinsic stochasticity of progenitor-cell birth and death. Rather, these rises and falls of lineages likely arise from a complicated regulation of HSC differentiation and lineage aging. However, in our model and analysis, we do not keep track of lineage sizes n_i. Instead, we define Q(ν) as the fraction of clones arising with relative frequency ν ≡ fq/S or less (here, q is the number of VIS reads of any particular clone in the sample, f is the fraction of all sampled cells that are marked, and S is the total number of sequencing reads of marked cells in a sample). Figure 4 b shows data analyzed in this way and reveals that Q(ν) appears stationary in time.
The observed steady-state clone size distribution is broad, consistent with the mathematical model developed above. The handful of most populated clones constitutes up to 1–5 % of the entire differentiated blood population. These dominant clones are followed by a large number of clones with fewer cells. The smallest clones sampled in our experiment correspond to a single read q = 1, which yields a minimum measured frequency ν_min = f/S. A single read may comprise only 10^−4–10^−3 % of all differentiated blood cells. Note that the cumulative distribution Q(ν) exhibits higher variability at small sizes simply because fewer clones lie below these smaller sizes.
Although engraftment occurs within a few weeks and total blood populations N_p and N_d (and often immune function) re-establish themselves within a few months after successful HSC transplant [43, 44], it is still surprising that the clone size distribution is relatively static within each animal (see Additional file 1 for other animals). Given the observed stationarity, we will use the steady-state results of our mathematical model (explicitly derived in Additional file 1) for fitting data from each animal.
Implications and model predictions
By using the exact steady-state solution for c_k (Additional file 1: Eq. A21) in Additional file 1: Eq. A18, we can explicitly evaluate the expected clone size distribution 〈m_k〉 using Eq. 6, and the expected cumulative clone fraction Q(q) using Eq. 7. In the steady state, the clone size distribution of progenitor cells can also be approximated by a gamma distribution with parameters a ≡ α/r and \(\bar {r} \equiv r/\mu \): \(c_{k} \sim \bar {r}^{k} k^{-1+a}\) (see Additional file 1: Eq. A27). In realistic steady-state scenarios near carrying capacity, r = r(N_p) ≲ μ, as calculated explicitly in Additional file 1: Eq. A20. By defining \(\bar {r}=r/\mu = 1-\delta \), we find that δ is inversely proportional to the carrying capacity:
$$ \delta \approx \frac{\alpha}{\mu} \frac{\mu}{p-\mu} \frac{U+C}{K} \ll 1. $$
The dependencies of 〈m_q〉 on δ and a = α/r are displayed in Fig. 5 a, in which we have defined w ≡ (1+η)ω/μ_d.
Clone size distributions and total number of sampled clones. a Expected clone size distributions C^{-1}〈m_q〉 derived from the approximation in Additional file 1: Eq. A32 are plotted for various a and δ/(εw) [where w ≡ (1+η)ω/μ_d]. The nearly coincident solid and dashed curves indicate that variations in a mainly scale the distribution by a multiplicative factor. In contrast, the combination δ/(εw) controls the weighting at large clone sizes through the population cut-off imposed by the carrying capacity. Of the two controlling parameters, the steady-state clone size distribution is most sensitive to the combination δ/(εw) (equivalently, to R ≈ εw/δ). The dependence of data-fitting on these two parameters is derived in Additional file 1 and discussed in the next section. b For εw = 5×10^−5, the expected fraction C_s/C of active clones sampled as a function of ln δ and α. The expected number of clones sampled increases with carrying capacity K, HSC differentiation rate a = α/r, and the combined sampling and terminal differentiation rate εw
Although our equations form a mean-field model for the expected number of measured clones of any given size, randomness resulting from the stochastic differentiation times of individual HSCs (all with the same rate α) is taken into account.
This is shown in Additional file 1: Eqs. A36–A39, where we explicitly consider the fully stochastic population of a single progenitor clone that results from the differentiation of a single HSC. Since independent unsynchronized HSCs differentiate at times that are exponentially distributed (with rate α), we construct the expected clone size distribution from the birth–death–immigration process [45] to find a result equivalent to that derived from our original model (Eq. 1 and Additional file 1: Eq. A21). Thus, we conclude that if a=α/r is small, the shape of the expected clone size distribution is mainly determined at short times by the initial repopulation of the progenitor-cell pool.
Our model also suggests that the expected number of sampled clones relative to the number of active transplanted clones (see Additional file 1: Eq. A24) can be expressed as:
$$ \begin{aligned} \frac{C_{\mathrm{s}}}{C} & \approx \left[1-\left(\frac{\delta}{1-(1-\delta)e^{-\varepsilon w}}\right)^{a}\right] \\ & \approx \frac{\alpha}{r}\ln \left(\frac{\varepsilon w}{\delta}+1\right), \end{aligned} $$
where the last approximation is accurate for ε w≪1 and C s/C≪1. The clonal diversity one expects to measure in the peripheral blood sample is sensitive to the combination of biologically relevant parameters and rates δ and a=α/r. Figure 5 b shows the explicit dependence of the fraction of active clones on a and the combination of parameters defining δ, for ε w=ε(1+η)ω/μ d=5×10−5.
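To make the dependence on a and δ concrete, the short sketch below (an added illustration assuming Python/NumPy, not code from the study) evaluates both the exact expression and the small-εw approximation for C_s/C; the parameter values are placeholders chosen in the ranges quoted in the text (εw = 5×10⁻⁵, a = 0.01):

```python
import numpy as np

def sampled_clone_fraction(a, delta, eps_w):
    """Expected fraction of active clones detected in a peripheral blood sample."""
    exact = 1.0 - (delta / (1.0 - (1.0 - delta) * np.exp(-eps_w))) ** a
    approx = a * np.log(eps_w / delta + 1.0)   # valid when eps_w << 1 and Cs/C << 1
    return exact, approx

for delta in (1e-4, 1e-5, 1e-6):
    print(delta, sampled_clone_fraction(a=0.01, delta=delta, eps_w=5e-5))
```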
Our analysis shows how scaled measurable quantities such as C s/C and C −1〈m q 〉 depend on just a few combinations of experimental and biological parameters. This small domain of parameter sensitivity reduces the number of parameters that can be independently extracted from clone size distribution data. For example, the mode of terminal differentiation described by η clearly cannot be extracted from clonal tracking measurements. Similarly, more detailed models of the complex regulation processes would introduce additional parameters that are not resolved by these experiments. Nonetheless, we shall fit our data, together with known information contained in the experimental protocol, to our model to estimate biologically relevant parameters, such as the total number of activated HSCs U+C, and thus indirectly C.
Model fitting
Our mathematical model for 〈m k 〉 (and F(q) and Q(q)) includes numerous parameters associated with the processes of HSC differentiation, progenitor-cell amplification, progenitor-cell differentiation, peripheral blood turnover, and sampling. Data fitting is performed using clone size distributions derived separately from the read counts from both the left and right ends of each VIS (see [14] for details on sequencing). Even though we fit our data to 〈m k 〉 using three independent parameters, a=α/r, \(\bar {r}= r/\mu \), and ε w, we found that within the relevant physiological regime, all clone distributions calculated from our model are most sensitive to just two combinations of parameters (see Additional file 1 for an explicit derivation):
$$ a \equiv \frac{\alpha}{r}\quad \text{and} \quad R \equiv \frac{\varepsilon w}{\ln \left(1/\bar{r}\right)}\approx \frac{\varepsilon w}{\delta} = \frac{(1+\eta)\omega S}{N_{\mathrm{d}}\mu_{\mathrm{d}}\delta}, $$
where the last approximation for R is valid when \(1-\bar {r} = \delta \ll 1\). While the fits are rather insensitive to ε w, this parameter can fortunately be approximated from estimates of S and the typical turnover rate of differentiated blood. Consequently, we find two maximum likelihood estimates (MLEs), for a and R, at each time point. It is important to note that fitting our model to steady-state clone size distributions does not determine all of the physiological parameters arising in our equations. Rather, the fits provide only two constraints that relate their values.
For ease of presentation, henceforth we will show all data and comparisons with our model equations in terms of the fraction Q(ν) or Q(q) (Figs. 4 b and 6 a, b). Figure 6 a, b shows MLE fitting to the raw data 〈m k 〉 plotted in terms of the normalized but unrescaled data Q(q) for two different peripheral blood cell types from two animals (RQ5427 and RQ3570). Data from all other animals are shown and fitted in Additional file 1, along with overall goodness-of-fit metrics. Raw cell count data are given in Kim et al. [19].
Data fitting. a Fitting raw (not rescaled, as shown in Figure 4) clone size distribution data to 〈m k 〉 from Eq. 6 at two time points for animal RQ5427. The maximum likelihood estimates (MLEs) are (a ∗≈0.01, R ∗≈70) and (a ∗≈0.0025, R ∗≈400) for data taken at 32 (blue) and 67 (red) months post-transplant, respectively. Note that the MLE values for different samples vary primarily due to different values of S (and hence ε) used in each measurement. b For animal RQ3570, the clone fractions at 32 (blue) and 38 (red) months yield (a ∗≈0.04, R ∗≈30) and (a ∗≈0.1, R ∗≈60), respectively. For clarity, we show the data and fitted models in terms of Q(q). c Estimated number of HSCs U+C (circles) and normalized differentiation rate a (squares) for animal RQ5427. d U+C and a for animal RQ3570. Note the temporal variability (but also long-term stability) in the estimated number of contributing HSCs. Additional details and fits for other animals are qualitatively similar and given in Additional file 1. HSC hematopoietic stem cell, PBMC peripheral blood mononuclear cell, Grans granulocytes
HSC asymmetric differentiation rate
The MLE for a=α/r, a ∗, was typically in the range 10^−2–10^−1. Given realistic parameter values, this quantity mostly provides an estimate of the HSC relative differentiation rate a ∗∼α/(μ p+η ω). The smallness of a ∗ indicates slow HSC differentiation relative to the progenitor turnover rate μ p and the final differentiation rate η ω, consistent with the dominant role of progenitor cells in populating the total blood tissue. Note that besides the intrinsic insensitivity to ε w, the goodness-of-fit is also somewhat insensitive to small values of a ∗ due to the weak dependence of c k ∼1/k^{1−a} on a (see Additional file 1). The normalized relative differentiation rates estimated from two animals are shown by the squares (right axis) in Fig. 6 c, d.
Number of HSCs
The stability of blood repopulation kinetics is also reflected in the number of estimated HSCs that contribute to blood (shown in Fig. 6 c, d). The total number of HSCs is estimated by expressing U+C in terms of the effective parameters, R and a, which in turn are functions of microscopic parameters (α,p,μ p,μ d,w, and K) that cannot be directly measured. In the limit of small sample size, S≪R ∗ K, however, we find U+C≈S/(R ∗ a ∗) (see Additional file 1), which can then be estimated using the MLEs a ∗ and R ∗ obtained by fitting the clone size distributions. The corresponding values of U+C for two animals are shown by the circles (left axis) in Fig. 6 c, d. Although variability in the MLEs exists, the fluctuations appear stationary over the course of the experiment for each animal (see Additional file 1).
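As a worked example (added here, not taken from the study), the back-of-the-envelope arithmetic for U+C ≈ S/(R ∗ a ∗) can be written out directly. The sample size S below is a hypothetical placeholder (it is not reported in this excerpt), while a ∗ and R ∗ are the MLEs quoted in the Fig. 6 caption for animal RQ5427 at 32 months:

```python
# Hypothetical inputs: S is an assumed sample size used only to show the arithmetic.
S = 2_000                       # assumed number of sampled cells (placeholder value)
a_star, R_star = 0.01, 70.0     # MLEs quoted for RQ5427 at 32 months post-transplant

U_plus_C = S / (R_star * a_star)   # valid in the small-sample limit S << R* K
print(f"Estimated number of active HSC clones U+C ~ {U_plus_C:,.0f}")
```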
Our clonal tracking analysis revealed that individual clones of HSCs contributed differently to the final differentiated blood pool in rhesus macaques, consistent with mouse and human data. Carefully replotting the raw data (clone sizes) in terms of the normalized, rescaled cumulative clone size distribution (the fraction of all detected clones that are of a certain size or less) shows that these distributions reach steady state a few months after transplantation. Our results carry important implications for stem cell biology. Maintaining homeostasis of the blood is a critical function for an organism. Following a myeloablative stem cell transplant, the hematopoietic system must repopulate rapidly to ensure the survival of the host. Not only do individual clones rise and fall temporally, as previously shown [19], but as any individual clone of a certain frequency declines, it is replaced by another of similar frequency. This exchange-correlated mechanism of clone replacement may provide a mechanism by which overall homeostasis of hematopoiesis is maintained long term, thus ensuring continued health of the blood system.
To understand these observed features and the underlying mechanisms of stem cell-mediated blood regeneration, we developed a simple neutral population model of the hematopoietic system that quantifies the dynamics of three subpopulations: HSCs, transit-amplifying progenitor cells, and fully differentiated nucleated blood cells. We also include the effects of global regulation by assuming a Hill-type growth rate for progenitor cells in the bone marrow but ignore cell-to-cell variation in differentiation and proliferation rates of all cells.
Even though we do not include possible HSC heterogeneity, variation in HSC activation, progenitor-cell regulation, HSC and progenitor-cell aging (progenitor bursting), niche- and signal molecule-mediated controls, or intrinsic genetic and epigenetic differences, solutions to our simple homogeneous HSC model are remarkably consistent with observed clone size distributions. As a first step, we focus on how the intrinsic stochasticity in just the cellular birth, death, and differentiation events gives rise to the progenitor clone size distribution.
To a large extent, the exponentially distributed first HSC differentiation times and the growth and turnover of the progenitor pool control the shape of the expected long-time clone size distribution. Upon constraining our model to the physiological regime relevant to the experiments, we find that the calculated shapes of the clone size distributions are sensitive to effectively only two composite parameters. The HSC differentiation rate α sets the scale of the expected clone size distribution but has little effect on the shape. Parameters, including carrying capacity K, active HSCs U+C, and birth and death rates p,ω,μ p,μ d, influence the shape of the expected clone size distribution 〈m q 〉 only through the combination R, and only at large clone sizes.
Our analysis allowed us to estimate other combinations of model parameters quantitatively. Using an MLE, we find values for the effective HSC differentiation rate a ∗∼10^−2–10^−1 and the number of HSCs that are contributing to blood within any given time frame U+C∼10^3–10^4. Since the portion of HSCs that contribute to blood may vary across their typical life span L∼25 years, the total number of HSCs can be estimated by (U+C)×L/τ, where τ∼1 year [19]. Our estimate of a total count of ∼3×10^4–3×10^5 HSCs is about 30-fold higher than the estimate of Abkowitz et al. [33] but is consistent with Kim et al. [19]. Note that the ratio of C to the total number of initially transplanted CD34+ cells provides a measure of the overall potency of the transplant towards blood regeneration. In the extreme case in which one HSC is significantly more potent (through, e.g., a faster differentiation rate), this ratio would be smaller. An example of this type of heterogeneity would be an HSC with one or more cancer-associated mutations, allowing it to out-compete other transplanted normal HSCs. Hence, our clonal studies and the associated mathematical analysis can provide a framework for characterizing normal clonal diversity as well as deviations from it, which may provide a metric for early detection of cancer and other related pathologies.
Several simplifying assumptions have been invoked in our analysis. Crucially, we assumed HSCs divided only asymmetrically and ignored instances of symmetric self-renewal or symmetric differentiation. The effects of symmetric HSC division can be quantified in the steady-state limit. In previous studies, the self-renewal rate for HSCs in primates is estimated as 4–9 months [46, 47], which is slightly longer than the short timescale (∼2–4 months) on which we observe stabilization of the clone size distribution. Therefore, if the HSC population slowly increases in time through occasional symmetric division, the clone size distribution in the peripheral blood will also shift over long times. The static nature of the clone distributions over many years suggests that size distributions are primarily governed by mechanisms operating at shorter timescales in the progenitor pool. For an HSC population (such as cancerous or precancerous stem cells [48]) that has already expanded through early replication, the initial clone size distribution within the HSC pool can be quantified by assuming an HSC pool with separate carrying capacity K HSC. Such an assumption is consistent with other analyses of HSC renewal [49]. All our results can be used (with the replacement C→K HSC) if the number of transplanted clones C≥K HSC because replication is suppressed in this limit. When K HSC≫C≫1, replicative expansion generates a broader initial HSC clone size distribution (see Additional file 1). The resulting final peripheral blood clone size distribution can still be approximated by our result (Eq. 6) if the normalized differentiation rate a≪1, exhibiting the insensitivity of the differentiated clone size distribution to a broadened clone size distribution at the HSC level. However, if HSC differentiation is sufficiently fast (a≪̸1), the clonal distribution in the progenitor and differentiated pools may be modified.
To understand the temporal dynamics of clone size distributions, a more detailed numerical study of our full time-dependent neutral model is required. Such an analysis can be used to investigate the effects of rapid temporal changes in the HSC division mode [41]. Temporal models would also allow investigation into the evolution of HSC mutations and help unify concepts of clonal stability (as indicated by the stationarity of rescaled clone size distributions) with ideas of clonal succession [10, 11] or dynamic repetition [12] (as indicated by the temporal fluctuations in the estimated number U+C of active HSCs). Predictions of the time-dependent behavior of clone size distributions will also prove useful in guiding future experiments in which the animals are physiologically perturbed via e.g., myeloablation, hypoxiation, and/or bleeding. In such experimental settings, regulation may also occur at the level of HSC differentiation (α) and a different mathematical model may be more appropriate.
We have not addressed the temporal fluctuations in individual clone abundances evident in our data (Fig. 4 a) or in the wave-like behavior suggested by previous studies [19]. Since the numbers of detectable cells of each VIS lineage in the whole animal are large, we believe these fluctuations do not arise from intrinsic cellular stochasticity or sampling. Rather, they likely reflect slow timescale HSC transitions between quiescent and active states and/or HSC aging [50]. Finally, subpopulations of HSCs that have different intrinsic rates of proliferation, differentiation, or clearance could then be explicitly treated. As long as each subtype in a heterogeneous HSC or progenitor-cell population does not convert into another subtype, the overall aggregated clone size distribution 〈m k 〉 will preserve its shape. Although steady-state data are insufficient to provide resolution of cell heterogeneity, more resolved temporal data may allow one to resolve different parameters associated with different cell types. Such extensions will allow us to study the temporal dynamics of individual clones and clone populations in the context of cancer stem cells and will be the subject of future work.
HSC: hematopoietic stem cell
HSPC: hematopoietic stem and progenitor cell
MLE: maximum likelihood estimate
VIS: viral vector integration site
Enver T, Heyworth CM, Dexter TM. Do stem cells play dice? Blood. 1998; 92:348–52.
Hoang T. The origin of hematopoietic cell type diversity. Oncogene. 2004; 23:7188–98.
Muller-Sieburg CE, Cho RH, Thoman M, Adkins B, Sieburg HB. Deterministic regulation of hematopoietic stem cell self-renewal and differentiation. Blood. 2002; 100:1302–9.
Copley MR, Beer PA, Eaves CJ. Hematopoietic stem cell heterogeneity takes center stage. Cell Stem Cell. 2012; 10:690–7.
Muller-Sieburg CE, Sieburg HB, Bernitz JM, Cattarossi G. Stem cell heterogeneity: implications for aging and regenerative medicine. Blood. 2012; 119:3900–7.
Lu R, Neff NF, Quake SR, Weissman IL. Tracking single hematopoietic stem cells in vivo using high-throughput sequencing in conjunction with viral genetic barcoding. Nat Biotechnol. 2011; 29:928–33.
Huang S. Non-genetic heterogeneity of cells in development: more than just noise. Development. 2009; 136:3853–62.
Osafune K, Caron L, Borowiak M, Martinez RJ, Fitz-Gerald CS, Sato Y, et al. Marked differences in differentiation potential among human embryonic stem cell lines. Nat Biotechnol. 2008; 26:313–15.
Pang WW, Price EA, Sahoo D, Beerman I, Maloney WJ, Rossi DJ, et al. Human bone marrow hematopoietic stem cells are increased in frequency and myeloid-biased with age. Proc Natl Acad Sci USA. 2011; 108:20012–17.
Harrison DE, Astle CM, Lerner C. Number and continuous proliferative pattern of transplanted primitive immunohematopoietic stem cells. Proc Natl Acad Sci USA. 1988; 85:822–6.
Verovskaya E, Broekhuis MJC, Zwart E, Ritsema M, van Os R, de Haan G, et al. Heterogeneity of young and aged murine hematopoietic stem cells revealed by quantitative clonal analysis using cellular barcoding. Blood. 2013; 122:523–32.
Takizawa H, Regoes RR, Boddupalli CS, Bonhoeffer S, Manz MG. Dynamic variation in cycling of hematopoietic stem cells in steady state and inflammation. J Exp Med. 2011; 208:273–84.
Gerrits A, Dykstra B, Kalmykowa OJ, Klauke K, Verovskaya E, Broekhuis MJC, et al. Cellular barcoding tool for clonal analysis in the hematopoietic system. Blood. 2010; 115:2610–18.
Kim S, Kim N, Presson AP, An DS, Mao SH, Bonifacino AC, et al. High-throughput, sensitive quantification of repopulating hematopoietic stem cell clones. J Virol. 2010; 84:11771–80.
Biffi A, Montini E, Lorioli L, Cesani M, Fumagalli F, Plati T, et al. Lentiviral hematopoietic stem cell gene therapy benefits metachromatic leukodystrophy. Science. 2013; 341:1233158.
Aiuti A, Biasco L, Scaramuzza S, Ferrua F, Cicalese MP, Baricordi C, et al. Lentiviral hematopoietic stem cell gene therapy in patients with Wiskott–Aldrich syndrome. Science. 2013; 341:1233151.
Cavazzana-Calvo M, Payen E, Negre O, Wang G, Hehir K, Fusil F, et al. Transfusion independence and HMGA2 activation after gene therapy of human β-thalassaemia. Nature. 2010; 467:318–22.
Cartier N, Hacein-Bey-Abina S, Bartholomae CC, Veres G, Schmidt M, Kutschera I, et al. Hematopoietic stem cell gene therapy with a lentiviral vector in X-linked adrenoleukodystrophy. Science. 2009; 326:818–23.
Kim S, Kim N, Presson AP, Metzger ME, Bonifacino AC, Sehl M, et al. Dynamics of HSPC repopulation in nonhuman primates revealed by a decade-long clonal-tracking study. Cell Stem Cell. 2014; 14:473–85.
Sun J, Ramos A, Chapman B, Johnnidis JB, Le L, Ho YJ, et al. Clonal dynamics of native haematopoiesis. Nature. 2014; 514:322–7.
Loeffler M, Roeder I. Tissue stem cells: definition, plasticity, heterogeneity, self-organization and models – a conceptual approach. Cells Tissue Organs. 2002; 171:8–26.
Bernard S, Belair J, Mackey MC. Oscillations in cyclical neutropenia: new evidence based on mathematical modeling. J Theor Biol. 2003; 223:283–98.
Dingli D, Pacheco JM. Modeling the architecture and dynamics of hematopoiesis. Wiley Interdiscip Rev Syst Biol Med. 2010; 2:235–44.
Courant R. Differential and integral calculus. Vol. II. London: Blackie & Son; 1936.
D'Orsogna MR, Lakatos G, Chou T. Stochastic self-assembly of incommensurate clusters. J Chem Phys. 2012; 136:084110.
Marciniak-Czochra A, Stiehl T, Ho AD, Jager W, Wagner W. Modeling of asymmetric cell division in hematopoietic stem cells – regulation of self-renewal is essential for efficient repopulation. Stem Cells Dev. 2009; 18:377–85.
Kent DG, Li J, Tanna H, Fink J, Kirschner K, Pask DC, et al. Self-renewal of single mouse hematopoietic stem cells is reduced by JAK2V617F without compromising progenitor cell expansion. PLoS Biol. 2013; 11:1001576.
Lander AD, Gokoffski KK, Wan FYM, Nie Q, Calof AL. Cell lineages and the logic of proliferative control. PLoS Biol. 2009; 7:1000015.
Hoffmann M, Chang HH, Huang S, Ingber DE, Loeffler M, Galle J. Noise-driven stem cell and progenitor population dynamics. PLoS One. 2008; 3:2922.
Roshan A, Jones PH, Greenman CD. Exact, time-independent estimation of clone size distributions in normal and mutated cells. J Roy Soc Interface. 2014; 11:20140654.
McHale PT, Lander A. The protective role of symmetric stem cell division on the accumulation of heritable damage. PLoS Comput Biol. 2014; 10:1003802.
Antal T, Krapivsky PL. Exact solution of a two-type branching process: Clone size distribution in cell division kinetics. J Stat Mech. 2010; P07028.
Abkowitz JL, Caitlin SN, McCallie MT, Guttorp P. Evidence that the number of hematopoietic stem cells per animal is conserved in mammals. Blood. 2002; 100:2665–7.
Morozov AY, Bruinsma R, Rudnick J. Assembly of viruses and the pseudo-law of mass action. J Chem Phys. 2009; 131:155101.
Krapivsky PL, Ben-Naim E, Redner S. Statistical physics of irreversible processes. Cambridge, UK: Cambridge University Press; 2010.
Székely T Jr, Burrage K, Mangel M, Bonsall MB. Stochastic dynamics of interacting haematopoietic stem cell niche lineages. PLoS Comput Biol. 2014; 10:1003794.
Mackey MC. Unified hypothesis for the origin of aplastic anemia and periodic hematopoiesis. Blood. 1978; 51:941–56.
Edelstein-Keshet L. Mathematical models in biology. New York, NY: SIAM; 2005.
Wolfensohn S, Lloyd M. Handbook of laboratory animal management and welfare, 3rd ed. Oxford: Blackwell Publishing; 2003.
Kimura M. Population genetics, molecular evolution, and the neutral theory: selected papers In: Takahata N, editor. Chicago, IL: University of Chicago Press: 1995.
He J, Zhang G, Almeida AD, Cayoutte M, Simons BD, Harris WA. How variable clones build an invariant retina. Neuron. 2012; 75:786–98.
Blanpain C, Simons BD. Unravelling stem cell dynamics by lineage tracing. Nat Rev Mol Cell Biol. 2013; 14:489–502.
Guillaume T, Rubenstein DB, Symann M. Immune reconstitution and immunotherapy after autologous hematopoietic stem cell transplantation. Blood. 1998; 92:1471–90.
Tzannou I, Leen AM. Accelerating immune reconstitution after hematopoietic stem cell transplantation. Clin Transl Immunol. 2014; 3:11.
Allen LJS. An introduction to stochastic processes with applications to biology. Upper Saddle, NJ: Pearson Prentice Hall; 2003.
Shepherd BE, Guttorp P, Lansdorp PM, Abkowitz JL. Estimating human hematopoietic stem cell kinetics using granulocyte telomere lengths. Exp Hematol. 2004; 32:1040–50.
Shepherd BE, Kiem HP, Lansdorp PM, Dunbar CE, Aubert G, LaRochelle A, et al. Hematopoietic stem-cell behavior in nonhuman primates. Blood. 2007; 110:1806–13.
Driessens G, Beck B, Caauwe A, Simons BD, Blanpain C. Defining the mode of tumour growth by clonal analysis. Nature. 2012; 488:527–30.
Sieburg HB, Cattarossi G, Muller-Sieburg CE. Lifespan differences in hematopoietic stem cells are due to imperfect repair and unstable mean-reversion. PLoS Comput Biol. 2013; 9:1003006.
Weiss GH. Equations for the age structure of growing populations. Bull Math Biophys. 1968; 30:427–35.
Catlin SN, Busque L, Gale RE, Guttorp P, Abkowitz JL. The replication rate of human hematopoietic stem cells in vivo. Blood. 2011; 117:4460–6.
DeBoer RJ, Mohri H, Ho DD, Perelson AS. Turnover rates of B cells, T cells, and NK cells in simian immunodeficiency virus-infected and uninfected rhesus macaques. J Immunol. 2003; 170:2479–87.
Pillay J, den Braber I, Vrieskoop N, Kwast LM, de Boer RJ, Borghans JAM, et al. In vivo labeling with 2H2O reveals a human neutrophil lifespan of 5.4 days. Blood. 2010; 116:625–7.
This work was supported by grants from the National Institutes of Health (R01AI110297 and K99HL116234), the California Institute of Regenerative Medicine (TRX-01431), the University of California, Los Angeles, AIDS Institute/Center for AIDS Research (AI28697), the National Science Foundation (PHY11-25915 KITP/UCSB), and the Army Research Office (W911NF-14-1-0472). The authors also thank B Shraiman and RKP Zia for helpful discussions.
Department of Physics, University of Toronto, St George Campus, Toronto, Canada
Sidhartha Goyal
Department of Microbiology, Immunology, and Molecular Genetics, UCLA, Los Angeles, USA
Sanggu Kim & Irvin SY Chen
UCLA AIDS Institute and Department of Medicine, UCLA, Los Angeles, USA
Irvin SY Chen
Departments of Biomathematics and Mathematics, UCLA, Los Angeles, USA
Tom Chou
Correspondence to Tom Chou.
TC and SG designed and developed the mathematical modeling and data analysis. TC, SG, and SK wrote the manuscript. SK and IC participated in study design and data interpretation. All authors read and approved the final manuscript.
Additional file 1
Mathematical appendices and data fitting. (PDF 327 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Goyal, S., Kim, S., Chen, I.S. et al. Mechanisms of blood homeostasis: lineage tracking and a neutral model of cell populations in rhesus macaques. BMC Biol 13, 85 (2015). https://doi.org/10.1186/s12915-015-0191-8
Received: 09 June 2015
Stem cell clones
Lineage tracking
Mathematical modeling
Beyond Mendel: modeling in biology
Would SHA-256(SHA-256(x)) produce collisions?
Was reviewing some Bitcoin public-key hash literature and the use of RIPEMD-160 and the SHA-256 as below:
RIPEMD160(SHA256(ECDSA_publicKey))
The Proof-of-work on the other hand uses SHA256 two times (instead of RIPEMD-160).
There are some notes on why RIPEMD-160 was chosen (here).
Considering the 256-bit output space of SHA-256, what would happen (theoretically) if one were to use SHA-256 on a SHA-256 output? For example:
SHA256(SHA256(x))
Would this be a bijective mapping, or a surjective mapping?
Can such mapping be used, in any way, to break the SHA-256?
Since SHA-256 is supposed to be a one-to-one function, there is no way SHA256(SHA256(x)) could be an injective function (since the input space and output space are both 256 bits). But if it is not injective, then SHA-256 cannot be a one-to-one function for longer messages (>256-bit input). How is this contradiction worked out in the algorithm?
sha-256 group-theory one-way-function cryptocurrency
Gopalakrishna Palem
Also relevant: crypto.stackexchange.com/q/58542 – Squeamish Ossifrage Sep 21 '19 at 18:47
@nissimabehcera SHA-256 has 256-bit output size. – kelalaka Sep 21 '19 at 19:10
@kelalaka Which questions are not addressed in the duplicate? – Squeamish Ossifrage Sep 22 '19 at 2:12
@SqueamishOssifrage the OP has many questions, from basic one-to-oneness to not knowing about the real input size, etc. Only one has been answered, in fgrieu's answer: Can such mapping be used, in any way, to break the SHA-256? – kelalaka Sep 22 '19 at 6:56
First of all, note that SHA-256 operates on a minimum of one 512-bit block: the message is always padded to a multiple of 512 bits (see padding below). For double SHA256(SHA256(m)), after the first hash, the 256-bit result is padded to 512 bits.
padding: The SHA-256 message format |L|1|0..0|message size in 64 bits|. L is the original message bits to be hashed, it is followed by 1, and many zeros except the last 64-bit so that the padded message is multiple of 512-bit, minimally. The last 64-bit is the message size. The maximal message that can fit into one 512-bit hash block is 447-bit.
So, if $x = \operatorname{SHA256}(m) $ the it will be padded as
| x (256-bit) | 1 | 0…0 (191-bit) | size of x (64-bit) |
for the next SHA-256 calculation.
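As a side note (an added illustration, not part of the original answer), the double hash is easy to reproduce with Python's standard hashlib, which makes the point above concrete: the outer hash always receives a fixed 32-byte (256-bit) input, which the padding scheme expands to a single 512-bit block:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """SHA256(SHA256(x)), as used in Bitcoin's proof of work."""
    inner = hashlib.sha256(data).digest()   # 32 bytes = 256 bits
    return hashlib.sha256(inner).digest()   # outer pass hashes a fixed-length 256-bit input

msg = b"hello"
print(hashlib.sha256(msg).hexdigest())
print(sha256d(msg).hex())
```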
Now, the input and output spaces will be exactly 256 bits. In this case, we don't know whether it is one-to-one or not; the space is far too huge for direct calculation. If it is one-to-one then it will be a permutation, too. There are $2^{256}!$ permutations and there are $(2^{256})^{(2^{256})}$ functions. It would be amazing if it were a permutation. For simplicity, take 5 bits as an example: there are $32!$ permutations (about $2^{118}$) and $32^{32}$ functions ($2^{160}$). If we consider that the restricted SHA-256 is a randomly selected function, then the probability of it being a permutation is around $\frac{1}{2^{42}}$. See a glimpse from WolframAlpha on a logarithmic scale.
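The 5-bit toy numbers can be checked with a few lines of Python (an illustration added here, using the log-gamma function to avoid huge integers):

```python
import math

n_bits = 5
N = 2 ** n_bits                                        # 32 possible values
log2_permutations = math.lgamma(N + 1) / math.log(2)   # log2(32!) ~ 117.7
log2_functions = N * n_bits                            # log2(32**32) = 160
print(log2_permutations, log2_functions)
print(log2_permutations - log2_functions)   # ~ -42.3, i.e. P(permutation) ~ 2^-42
```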
Since SHA-256 is supposed to be a one-to-one function
SHA-256 is not a one-to-one function. It is a one-way function, i.e. you cannot invert it. Since the (padded) input is at least 512 bits and the output size is always 256 bits, there is no way for it to be one-to-one.
It would be surjective mapping.
But if it is not injective, then SHA-256 cannot be one-to-one function for longer message (>256-bit input).
It is not one-to-one.
If we consider that you are talking about hashing bitcoin public keys, it has 33 bytes compressed and 65 bytes uncompressed public keys.
If the key is uncompressed, it has 520 bits; therefore, by the pigeonhole principle, there will be collisions.
If the key is compressed, it has 264 bits; again, by the pigeonhole principle, there will be collisions, since the output is 256 bits.
Note that SHA-256(SHA-256(x)) will be still collision-resistant.
See this question Weaknesses" in SHA-256d? for the nice answer of FGrieu.
kelalaka
nitpick: It is widely believed that, when limited to 256-bit inputs, SHA-256 has collisions, but no one can prove this currently. – Meir Maor Sep 21 '19 at 16:48
@MeirMaor thanks. I considered including that (I'm expecting this, too), but couldn't find a proper reference. Do you know one? – kelalaka Sep 21 '19 at 16:51
A reference for what? Our inability to show SHA-256 has no known collision, or that we have not even a non-constructive proof of one existing? In general, symmetric cryptography building blocks tend to come with very little in the way of proof. Even if we were magically given a 256-bit pseudo-random function, it still has an infinitesimal but non-zero chance of being collision free. – Meir Maor Sep 21 '19 at 17:01
Not exactly a paper, not from a random person like me :) – kelalaka Sep 21 '19 at 17:03
It would be rather surprising if SHA-256 limited to 256-bit inputs were a surjective mapping. If it were, it would necessarily also be injective, and therefore a permutation on 256-bit strings, so SHA-256(SHA-256(x)) would also be a permutation on 256-bit strings, and neither of them would have any collisions among 256-bit inputs. – Squeamish Ossifrage Sep 21 '19 at 18:44
SHA-256 is almost certainly not injective on 256-bit inputs, so it is almost certainly not a bijection or a surjection onto 256-bit outputs either. And if SHA-256 is not injective, then applying it twice can't be injective—if $x \ne x'$ are distinct preimages of $h$ under SHA-256, then they are preimages of $\operatorname{SHA256}(h)$ under the composition.
Why do I say SHA-256 is almost certainly not injective? A reasonable model for SHA-256 is a uniform random function. The vast majority of functions from 256-bit strings to 256-bit strings are not injective. Only the permutations of 256-bit strings are injective. There are $F = \bigl(2^{256}\bigr)^{2^{256}}$ functions from 256-bit strings to 256-bit strings, and only $P = 2^{256}!$ permutations of 256-bit strings, which by Stirling's approximation is roughly $$P = 2^{256}! \approx \sqrt{2\pi 2^{256}} \bigl(2^{256}/e\bigr)^{2^{256}} \!= \sqrt{2\pi 2^{256}} e^{-2^{256}} \bigl(2^{256}\bigr)^{2^{256}} \!= \sqrt{2\pi}\,2^{128} e^{-2^{256}} F.$$ That is, the fraction of functions which are permutations—which is the probability that a uniform random function is actually a permutation—is $$P/F \approx \sqrt{2\pi}\,e^{128 \log 2 - 2^{256}} \approx 1/2^{2^{256}}$$ which is so staggeringly improbable that it is roughly comparable to flipping a coin for every atom in the Milky Way galaxy—about $1.5 \times 10^{12}$ solar masses by recent estimates, with one solar mass equal to about $2 \times 10^{30}\,\mathrm{kg}$ based on the solar mass parameter $G \cdot M_S \approx 1.327\,124 \times 10^{20}\,\mathrm{m^3\,s^{-2}}$ and the gravitational constant $G \approx 6.674 \times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}}$ reported by the IAU NSFA Current Best Estimates; assuming it consists entirely of hydrogen atoms at $1.67 \times 10^{-27}\,\mathrm{kg}$ a pop, that's a total of about $2 \times 10^{69}$ atoms—and having them all come up heads. And having the entire population of Shanghai, about thirty million people, repeat the experiment with the same all-heads outcomes.
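As an added numerical aside (not part of the original answer), the same Stirling estimate can be evaluated in floating point without ever forming the enormous factorials:

```python
import math

N = 2.0 ** 256                       # number of 256-bit strings, as a float
# Stirling: ln N! ~ N ln N - N + 0.5 ln(2 pi N)
log2_P = (N * math.log(N) - N + 0.5 * math.log(2 * math.pi * N)) / math.log(2)
log2_F = N * 256                     # log2 of (2^256)^(2^256)
print(log2_P - log2_F)               # ~ -1.67e77, i.e. P/F ~ 2^(-2^256 / ln 2)
```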
That said, just because there almost certainly are collisions doesn't mean we have a way to find them.
No. If it could then we would consider SHA-256 to be broken. However, protocols that use $\operatorname{SHA256}(\operatorname{SHA256}(x))$ may be broken even if SHA-256 is not.
SHA-256 is almost certainly not a one-to-one function. Rather, it is conjectured to be collision-resistant, meaning that nobody has found a way to find two distinct messages $x \ne x'$ that SHA-256 maps to the same hash, short of a generic search (i.e., a search that treats SHA-256 as a black box) that would take longer than humanity has left before it roasts the planet. Which admittedly is not a very long time, but the generic search would take much longer than that anyway even if you spent all humanity's available energy on running the generic search in parallel.
Nice usage of Stirling's formula, which I had forgotten – kelalaka Sep 22 '19 at 17:28
January 2014, 19(1): 27-53. doi: 10.3934/dcdsb.2014.19.27
Spectral minimal partitions of a sector
Virginie Bonnaillie-Noël 1, and Corentin Léna 2,
IRMAR, ENS Cachan Bretagne, Univ. Rennes 1, CNRS, UEB, av Robert Schuman, F-35170 Bruz
Laboratoire de Mathématiques d'Orsay, Université Paris-Sud, Bât. 425, F-91405 Orsay Cedex, France
Received December 2012 Revised July 2013 Published December 2013
In this article, we are interested in determining spectral minimal $k$-partitions for angular sectors. We first deal with the nodal cases for which we can determine explicitly the minimal partitions. Then, in the case where the minimal partitions are not nodal domains of eigenfunctions of the Dirichlet Laplacian, we analyze the possible topologies of these minimal partitions. We first exhibit symmetric minimal partitions by using a mixed Dirichlet-Neumann Laplacian and then use a double covering approach to catch non-symmetric candidates. In this way, we improve the known estimates of the energy associated with the minimal partitions.
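As an illustrative aside (not part of the article), the Dirichlet eigenvalues that generate the nodal partitions of a unit-radius sector can be obtained by separation of variables: for opening angle ω they are the squared positive zeros of the Bessel functions J_{mπ/ω}. The sketch below assumes Python with SciPy and is only a minimal check of that classical formula, with an arbitrary choice of ω:

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import brentq

def bessel_zeros(nu, count, step=0.1, x_max=200.0):
    """First `count` positive zeros of the Bessel function J_nu (sign-change scan)."""
    zeros, x = [], max(nu, 1e-3)       # J_nu has no positive zeros below ~nu
    f_prev = jv(nu, x)
    while len(zeros) < count and x < x_max:
        x_next = x + step
        f_next = jv(nu, x_next)
        if f_prev * f_next < 0:
            zeros.append(brentq(lambda t: jv(nu, t), x, x_next))
        x, f_prev = x_next, f_next
    return zeros

omega = 3 * np.pi / 4                  # illustrative sector opening angle
eigs = sorted((z ** 2, m, n + 1)
              for m in range(1, 4)
              for n, z in enumerate(bessel_zeros(m * np.pi / omega, 3)))
for lam, m, n in eigs[:5]:
    print(f"lambda = {lam:8.3f}  (angular index m = {m}, radial zero n = {n})")
```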
Keywords: nodal domains, numerical simulations, spectral theory, finite element method, Aharonov-Bohm Hamiltonian, minimal partitions.
Mathematics Subject Classification: Primary: 35B05, 35J05, 49M25, 65F15, 65N25; Secondary: 65N3.
Citation: Virginie Bonnaillie-Noël, Corentin Léna. Spectral minimal partitions of a sector. Discrete & Continuous Dynamical Systems - B, 2014, 19 (1) : 27-53. doi: 10.3934/dcdsb.2014.19.27
# What's this?
$$f(k) = \frac{-1}{\ln(1-p)} \; \frac{p^k}{k}, \quad s.t. \quad k \geq 1 \quad \mathrm{and} \quad 0 < p < 1$$

# Distributions
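A quick sanity check (added here; assumes Python with SciPy): the pmf above is the logarithmic (log-series) distribution, available as `scipy.stats.logser`:

```python
import numpy as np
from scipy.stats import logser

p = 0.6
k = np.arange(1, 8)
pmf_formula = -(p ** k) / (k * np.log(1.0 - p))    # f(k) as written above
print(np.allclose(pmf_formula, logser.pmf(k, p)))  # True
print(logser.mean(p), logser.rvs(p, size=5))       # mean and a few random draws
```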
The Universe And Its Origin: my personal attempts to understand it
"Give me a derivation of a UTM from nothingness, and I don't need your physics to understand the origin of Universe..."
[2015-03-28]: An idea, and a more serious followup discussion on Everything-List.
When I was 2.7 years old, immediately after the death of my grandfather and the birth of my brother 11 days later, I created the dream of my life. The dream was "to understand the Universe", thinking that understanding it would let me know all the answers, including the answer to the question of how to escape death.
Lying in my little bed that evening, I nearly cried because of the intensity of my sense of curiosity. I wanted to know everything. Unexpectedly I saw visions which appeared violet, white, blue and yellow (similar to this). I felt as if I had seen a glimpse of the Truth about the Universe.
Later in my life I was mainly interested in astronomy, familiarized myself with great distances, timescales, densities, temperatures, energies and gravitational singularities in space, which helped me to accept the ideas of the Big Bang model and become a fan of it, because the model was (and is) supported by the most comprehensive and accurate explanations from current scientific evidence and observation.
The Big Bang model explains a great deal about the structural aspects of the universe from about 10^-12 seconds after the Big Bang onward, but because of our inability to accelerate particles indefinitely, this approach is limited by our technological advancement.
In spite of the limitations, I still wanted to get a picture of the earliest moments of the universe, and I was not satisfied with the uncertainties. I wanted the precise picture. I continued to ponder many years by myself.
When I got access to the Internet in 2001, I found that several websites, including www.the-origin.org by Roger Ellman, suggested that "Nothingness" must be the best candidate for the initial state of the Universe, as it doesn't require any additional explanation. Anything else requires an explanation of its own existence. So I took the idea and tried to develop upon it. I recognized that the essential thing to explore is the change from "Nothing" to "Something", and since changes are a subject of mathematical analysis, I became interested in mathematics in order to explore the concept of change. However, after trying to estimate the changes resulting from the assumption of "Nothingness", I had few results.
Later, in 2004, I found Stephen Wolfram's book "A New Kind of Science", about simple rules (called cellular automata) that sometimes yield complex patterns when repeatedly applied. It was exactly what I was looking for - something that could explain a complex pattern with a simple rule. For example, the seashell species Conus textile is said to have a pattern resembling the Rule 30 cellular automaton. This demonstrates how a simple rule can model a complex natural pattern. The fact that some rules are equivalent to universal Turing machines (can in theory calculate anything that any computer can) looked sufficient for me to believe in Konrad Zuse's hypothesis that the entire Universe is being computed on a computer.
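For illustration (my own minimal sketch, not from the book), an elementary cellular automaton such as Rule 30 takes only a few lines of Python, and already produces the kind of complex pattern described above:

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton with periodic boundaries."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

width, generations = 63, 20
row = [0] * width
row[width // 2] = 1                     # start from a single live cell
for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```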
Consequently, in 2006 I had an idea to search for simple rules that explain the CMBR pattern. However, I was told that the CMBR pattern observed was very nearly homogeneous, to such an extent that up until recently it was not possible to measure the fraction that is not homogeneous via the COBE. I was also told that CMB dates from a time when the Universe was already quite old and large in the context of its smallest structures, and that any patterns we can see in the CMB are going to involve very large-scale variations, and thus it is unlikely to tell us much about the underlying simple rule.
It temporarily discouraged me from the idea, but in 2007 I found Stephen Anastasi, who claimed to be working on a non-axiomatic [set] theory uniting mathematics, physics and philosophy, and who explained his ideas bottom-up from the Cartesian argument in his weblog. The similarity to my initial thoughts encouraged me to persist in thinking about it, and became the source of my interest in mathematics (mainly mathematical logic and axiomatic set theory, which can be considered one of the possible kinds of deterministic rules (simple programs) from which definite conclusions can be drawn).
In addition, I have found a short article by Dr. Ulvi Yurtsever (JPL's Quantum Computing Technologies group) stating that it follows from the existence of entangled quantum states for spatially separated composite systems, and the fact that the Universe is large and expanding, that assuming the possibility of faster-than-light communication, it is reasonable to believe that the observed Universe could have evolved from simple initial conditions with simple, deterministic rules.
So, I became encouraged once again: if the rule that governs the Universe is rather simple, then we might simply discover it, perhaps by comparing the essential features of simple-rule-generated computational data with the features of the observable universe.
For example, some of the features of the Universe are already summarized and formulated as physical laws, while others might be less well documented. One feature that looks interesting to me is that the objects of our Universe appear to be three-dimensional at macroscopic scale, and it's interesting to consider what data patterns could give rise to the impression of three spatial dimensions. Although this question is partly answered, I am not sure whether it is used as an essential feature of the space to classify data generated by simple rules.
Taking data generated by simple rules and searching for traits equivalent to the physical laws of our Universe could potentially result in the discovery of a rule that precisely models our Universe (i.e., allows us to simulate our own Universe, provides the precise pictures of its birth).
Another way that this question could be answered, I think is the following possibility:
If we took a rule that is equivalent to a universal Turing machine, and then discovered how this rule could have undoubtedly formed out of the assumption of nothingness, then we would have a good reason to believe that it is the generating rule of the Universe.
I would call the discovery of such a transition the greatest discovery ever, because it would allow us to precisely simulate the beginning of the Universe, and be certain about it.
Mindey, 2009-12-01
[2012]: https://www.davidhbailey.com/dhbpapers/normality-digits-pi.pdf
[2020-04-14]: Stephen Wolfram discovers and introduces (www.wolframphysics.org) a generalization of these simple rules that looks like hypergraph grammars, which seems like the right direction for thinking about and exploring computational universes.
Transforming geographic scale: a comparison of combined population and areal weighting to other interpolation methods
Elaine Hallisey ORCID: orcid.org/0000-0002-9733-96111,
Eric Tai2,
Andrew Berens1,
Grete Wilt1,
Lucy Peipins2,
Brian Lewis1,
Shannon Graham1,
Barry Flanagan1 &
Natasha Buchanan Lunsford2
Transforming spatial data from one scale to another is a challenge in geographic analysis. As part of a larger, primary study to determine a possible association between travel barriers to pediatric cancer facilities and adolescent cancer mortality across the United States, we examined methods to estimate mortality within zones at varying distances from these facilities: (1) geographic centroid assignment, (2) population-weighted centroid assignment, (3) simple areal weighting, (4) combined population and areal weighting, and (5) geostatistical areal interpolation. For the primary study, we used county mortality counts from the National Center for Health Statistics (NCHS) and population data by census tract for the United States to estimate zone mortality. In this paper, to evaluate the five mortality estimation methods, we employed address-level mortality data from the state of Georgia in conjunction with census data. Our objective here is to identify the simplest method that returns accurate mortality estimates.
The distribution of Georgia county adolescent cancer mortality counts mirrors the Poisson distribution of the NCHS counts for the U.S. Likewise, zone value patterns, along with the error measures of hierarchy and fit, are similar for the state and the nation. Therefore, Georgia data are suitable for methods testing. The mean absolute value arithmetic differences between the observed counts for Georgia and the five methods were 5.50, 5.00, 4.17, 2.74, and 3.43, respectively. Comparing the methods through paired t-tests of absolute value arithmetic differences showed no statistical difference among the methods. However, we found a strong positive correlation (r = 0.63) between estimated Georgia mortality rates and combined weighting rates at zone level. Most importantly, Bland–Altman plots indicated acceptable agreement between paired arithmetic differences of Georgia rates and combined population and areal weighting rates.
This research contributes to the literature on areal interpolation, demonstrating that combined population and areal weighting, compared to other tested methods, returns the most accurate estimates of mortality in transforming small counts by county to aggregated counts for large, non-standard study zones. This conceptually simple cartographic method should be of interest to public health practitioners and researchers limited to analysis of data for relatively large enumeration units.
The challenge of transforming spatial data collected at one scale to another scale, often referred to as areal interpolation or cross-area estimation, has long been recognized in spatial analysis [1]. In many cases, geographic boundaries, such as counties, are unsuitable in terms of the units needed for meaningful data analysis. This spatial misalignment of data is referred to as the change-of-support problem, which is concerned with inferences about the value of any particular variable at an enumeration unit different from that at which data were collected [2, 3]. Researchers and practitioners sometimes require estimates for non-standard geographic areas, i.e. target zones, to be derived from existing source zones, i.e. the zones from which the data are obtained. For example, an analyst who requires data for a non-standard enumeration unit, say a zone surrounding a U.S. hospital (target zone), must transform data collected at another zone level, such as a group of U.S. census tracts (source zones), to match the boundaries of the zone surrounding the hospital. With the growth of available data and geographic information systems that can integrate these data, there has been a parallel increase in the development of methods to address this problem.
Geospatial techniques, well documented in texts and the literature, are widely used to deal with transformation between scales [1,2,3,4,5,6,7,8]. Examples of methods include centroid assignment, areal weighting, dasymetric, regression, and geostatistical (or surface-generating).
For simple geographic centroid assignment, counts of some phenomenon are summed for a source zone, and allocated to the geographic centroid, that is, the areal center of gravity of the zone [9, 10]. Values assigned to zone centroids that fall within a target zone are then summed to estimate a count for the target zone. The binary nature of this technique means centroid assignment is either completely in or out of the zone, in other words, an all-or-nothing operation. Additionally, the geometry of the zone's polygon affects the positioning of the geographic centroid. Automated centroid placement is likely to be different depending upon the selection of input zone polygons.
Areal weighting, often used to disaggregate populations, is a cartographic overlay method that preserves volume, meaning subdivided populations sum to the original population. Weights are determined from the size of the overlapping source and target zone areas. For example, if a source zone (e.g., a census tract) with a population of 4000 is split so that 25% of the area falls in target zone A, and 75% falls in target zone B, 1000 individuals are allocated to target zone A and 3000 individuals to target zone B. A limitation is that areal weighting assumes an even distribution of population within each source zone [6, 8].
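A minimal sketch of this allocation step (illustrative Python, not code from the study) makes the volume-preserving property explicit; the zone labels and the 25%/75% split reproduce the example in the text:

```python
def areal_weighting(source_pop, overlap_fractions):
    """Allocate a source-zone count to target zones in proportion to overlapping area."""
    return {zone: source_pop * frac for zone, frac in overlap_fractions.items()}

allocated = areal_weighting(4000, {"A": 0.25, "B": 0.75})
print(allocated)                    # {'A': 1000.0, 'B': 3000.0}
print(sum(allocated.values()))      # 4000.0 -- the original population is preserved
```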
Methods exist to estimate prospective error in areal weighting and, as they are relevant to this paper, are discussed here. Simpson describes two measures to express the amount of estimation involved in the transformation from source to target zones: the degree of hierarchy, and the degree of fit [11]. The degree of hierarchy, or nesting, for an entire study area is the proportion of all source zones that fall completely within any of the target zones. The degree of hierarchy for an individual target zone is the proportion of source zones that fall completely within that target zone. Degree of hierarchy is calculated as:
$$H = \left( {\frac{{\mathop \sum \nolimits_{s,t} \left( {w_{st} = 1} \right)}}{{\mathop \sum \nolimits_{s} \left( 1 \right)}}} \right)$$
where: \(H\) is the degree of hierarchy; \(s\) is a source zone; \(t\) is a target zone; and \(w_{st}\) is the areal overlap of the source zone with the target zone.
The degree of fit, or overlap, for the entire study area sums the maximum proportion, or weight, of each source zone as a proportion of all source zones. The degree of fit for a single target zone sums the weights of each source zone as a proportion of all source zones within the target zone. Degree of fit is calculated as:
$$F = \left( {\frac{{\mathop \sum \nolimits_{s} \left( {\hbox{max} \, w_{st} } \right)}}{{\mathop \sum \nolimits_{s} \left( 1 \right)}}} \right)$$
where: \(F\) is the degree of fit; \(s\) is a source zone; \(t\) is a target zone; and \(w_{st}\) is the areal overlap of the source zone with the target zone.
Degree of hierarchy and degree of fit are usually multiplied by 100 to be expressed as percentages. The closer the output of these measures to 100%, the better the transformation estimate; accuracy increases as nesting increases and as the number of target zones decreases [12]. Researchers and practitioners, particularly in population geography, have used Simpson's measures to estimate potential error in cartographic areal interpolation [13, 14].
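Both measures can be computed directly from the source-to-target areal weights. The Python sketch below is a minimal illustration with hypothetical weights; the variable names are ours, not from the cited work.

```python
# Degree of hierarchy (H) and degree of fit (F) for a study area.
# w[s][t] = proportion of source zone s's area falling in target zone t (hypothetical values).
w = {
    "tract1": {"A": 1.0},            # nests completely within zone A
    "tract2": {"A": 0.6, "B": 0.4},  # split between zones A and B
    "tract3": {"B": 0.9, "C": 0.1},
}

n_sources = len(w)
H = sum(1 for s in w if any(abs(p - 1.0) < 1e-9 for p in w[s].values())) / n_sources
F = sum(max(w[s].values()) for s in w) / n_sources

print(f"degree of hierarchy: {100 * H:.1f}%")  # 33.3% -- only tract1 nests completely
print(f"degree of fit:       {100 * F:.1f}%")  # (1.0 + 0.6 + 0.9) / 3 = 83.3%
```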
Dasymetric techniques use various ancillary data, such as cadastral, land cover, remotely-sensed, or fine resolution population data, to inform data disaggregation [15,16,17,18,19,20,21,22]. Applying a process conceptually similar to a dasymetric approach in the first step of their population-weighted interpolation, Wilson and Mansfield transformed county-level mortality rates to congressional districts (CDs) [18]. They used ancillary population data at census block level, census blocks nesting completely within both counties and CDs. For each county, the researchers first assigned the same mortality rate to each of the census blocks within the county. They then multiplied each block rate by block population count as a proportion of the total CD population and finally summed all the population-weighted block rates to estimate a CD mortality rate. As well as improving area-to-area transformation, ancillary data can, for instance, also be applied to point-level data to generate population-weighted centroids.
The cartographic methods described above have generally been used to transform large populations and rates. However, regression and geostatistical methods can accommodate small counts as well. Global or regional regression approaches use ancillary data as explanatory variables to develop models that predict population distribution in the source zones to better estimate populations in the target zones. These models assume a relationship exists among the population and other variables, such as land cover or parcel data [6, 8, 23]. Regression models offer the ability to refine estimates with the incorporation of covariates and to measure uncertainty. However, they also introduce complexity [22], require transformation of covariate geography, and generally do not handle changing relationships across space, i.e., non-stationarity, as well as do dasymetric methods, for which estimates are locally fitted to each source zone [6].
Geostatistical methods are used to model spatial data to produce estimates where data are unavailable [2, 24,25,26]. Either a smooth prediction surface or a probability surface, created from points derived from source polygons, is aggregated back to target polygons. As with simple areal weighting, geostatistical analysis assumes smooth distribution changes across the landscape, which is not usually the case. In addition, building a valid model can be difficult, as complex geostatistical techniques are often applied inappropriately [27].
The analysis discussed in this paper is part of a larger ecologic research project to determine a possible association between distance to pediatric cancer facilities and cancer mortality among adolescents, ages 15 through 19. Children's Oncology Group (COG) institutions provide specialized cancer care for children through clinical trials and research. Whereas most children 14 years of age and younger are treated in a COG, the majority of adolescents are referred to adult oncology centers that have less access to clinical trials and thus less improvement in survival [28,29,30]. To examine mortality rates by sex, race, and ethnicity within zones at varying distances from these facilities we needed to estimate adolescent cancer mortality rates for four, multipart zones surrounding 191 COG facilities across the United States (Fig. 1). In this paper, we used Georgia adolescent cancer mortality data, examining mortality rates by sex by zone, to test the methods.
Children's Oncology Group Institutions and Zones. The primary study encompasses the entire United States. This paper focuses on the validation of methods using Georgia adolescent cancer mortality data
The four zones represent an effort to define each COG institution's city core, an inner suburban ring, an outer suburban/exurban ring, and the balance of land beyond. Zone A encircles an area within 10 miles of any COG. Zones B and C are concentric rings with distances from a COG of >10 to 25 miles and >25 to 50 miles, respectively. Zone D comprises the remaining United States. Data available for the primary study included census tract level demographic data for rate denominators and U.S.-wide, county-level National Center for Health Statistics (NCHS) Compressed Mortality File (CMF) data for rate numerators. Although the tracts aggregate to counties, the four zones coincide with neither tracts nor counties. For this methods paper, we used residential address-level mortality data from the state of Georgia along with tract population data to evaluate methods to transform county mortalities (source zones), to the four study zones (target zones).
We sought to identify the simplest interpolation method that returned satisfactory mortality estimates. Given the large geographic scope of the primary research, i.e., zones encompassing the entire U.S., we aimed for straightforward methods with workable data requirements. In other words, we required a conceptually simple technique with readily available, statistically robust, nationwide data.
In this paper, we examine and discuss the results of five interpolation methods. We explored four approaches commonly used in research and practice: geographic centroid assignment, population-weighted centroid assignment, simple areal weighting, and geostatistical areal interpolation. We also developed and tested a conceptually simple technique, combined population and areal weighting, which merges a dasymetric population weighting with areal weighting. We chose not to examine regression to estimate mortality because the sole intent of the primary study was to examine the association between adolescent cancer mortality and distance to a COG and we wanted to avoid the complexities of U.S.-wide regression models using multiple covariates. We believe cartographically-focused estimation techniques are more appropriate for this methods paper.
Data sources for the primary study included U.S. Census 2000 and 2010 100% population counts at the tract level as well as 1999–2011 county-level cancer mortality data for those aged 15 through 19 from the NCHS CMF, which are compiled from individual state death certificates [31,32,33]. To preserve confidentiality, NCHS provides mortality data at the county level only, upon a substantiated request and signed data use agreement (Footnote 1). However, some states consider death certificates public record and share residence-level point data. We therefore obtained point-level, adolescent cancer mortality data from Georgia, a state that releases mortality data for research, also upon a substantiated request and signed data use agreement, to assess the accuracy of our methods in this paper [34].
Inasmuch as the four COG study zones, A, B, C, and D, are independent of any standard enumeration unit, we estimated numerators and denominators for each zone. Numerator and denominator estimation were tied to census year because of the differing 2000 and 2010 geographies, particularly at the tract level. Though the census years fell at equal positions along the study's time span of years 1999 through 2011, we could not "split" mortality data for the study's mid-point year, 2005, because we did not have month of death. For that reason, we chose to use 7 years (1999 through 2005) of mortality data with Census 2000 geographies and populations and 6 years (2006 through 2011) of mortality data with Census 2010 geographies and populations. The mortality rate was calculated as the number of deaths over the 13-year study period for a specified population subgroup, such as males (numerator), divided by the total population, or person-years at risk, of that specific subgroup (denominator). We weighted the denominator population by census year:
$$\text{13-year death total} \; / \; \left( \left( \text{2000 population} \times 7 \right) + \left( \text{2010 population} \times 6 \right) \right)$$
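A minimal Python sketch of this person-years-weighted rate follows; the death and population counts are hypothetical, and the per-100,000 scaling shown at the end is our assumption rather than a value taken from the text above.

```python
# Person-years-weighted mortality rate for one study zone and population subgroup.
deaths_13yr = 42          # hypothetical deaths, 1999-2011
pop_2000 = 95_000         # hypothetical zone population estimate, Census 2000
pop_2010 = 110_000        # hypothetical zone population estimate, Census 2010

person_years = pop_2000 * 7 + pop_2010 * 6  # 7 years on 2000 geography, 6 on 2010
rate = deaths_13yr / person_years
print(rate)               # deaths per person-year
print(rate * 100_000)     # per 100,000 person-years (scaling assumed, not stated above)
```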
Denominator (population count) estimation
For our testing, we estimated Georgia mortality for males, females, and the total population, aged 15 through 19. To approximate population for study zones surrounding a COG (i.e. zone A, B, C, and D) for the denominator, we used the Population Estimator tool, developed by CDC's Geospatial Research, Analysis, and Services Program (GRASP), which performs simple areal weighting [35]. The area of overlap of the census tract (source zone) with the study zone was divided by the area of the entire census tract to obtain the proportion, or areal weight, of the tract area within the study zone. The population of interest for each tract (male, female, or overall) was then multiplied by the areal weight for that study zone as follows:
$$E_{pt} = \left( {\frac{{A_{zt} }}{{A_{t} }}} \right)*P_{t}$$
where: \(E_{pt}\) is the areal-weighted population estimate for the tract, or tract portion, within the study zone; \(A_{zt}\) is the geographic overlap area of the tract and study zone; \(A_{t}\) is the geographic area of the entire tract; and \(P_{t}\) is the tract population.
The resulting areal-weighted populations were summed to estimate a population total for the study zone for census years 2000 and 2010 (Fig. 2). We then calculated a weighted sum, as expressed in (3) above, to estimate a total 13-year population for the denominator. This process was repeated for each study zone, A, B, C, and D.
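The estimation and summation can be sketched in a few lines, as below; the areas and populations are hypothetical and this is not code from the GRASP Population Estimator tool.

```python
# Areal-weighted population estimate for one study zone, summed over intersecting tracts.
# Each record: (area of tract within the zone, total tract area, tract population) -- hypothetical.
tracts = [
    (2.0, 4.0, 3200),  # half of the tract lies inside the zone
    (1.5, 1.5, 4100),  # tract nests completely within the zone
    (0.5, 5.0, 2800),
]

zone_population = sum((a_zt / a_t) * p_t for a_zt, a_t, p_t in tracts)
print(zone_population)  # 1600.0 + 4100.0 + 280.0 = 5980.0
```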
Denominator estimation for a hypothetical part of study zone A. The population for those aged 15 through 19 for each tract (\(P_{t}\)) is multiplied by the proportion of the tract, or areal weight (\(A_{zt}/A_{t}\)), in the study zone. The output for each tract (\(E_{pt}\)) in the entire zone is summed to obtain a population estimate for the study zone. Note: For graphic simplicity, only a subset of zones are shown in the figures. Methods are the same for each of the four study zones, A, B, C, and D
Numerator (death count) estimation
Source zones for the numerator were counties with small numbers of deaths relative to the denominator populations. We tested five numerator estimation methods: (1) geographic centroid assignment, (2) population-weighted centroid assignment, (3) simple areal weighting, (4) combined population and areal weighting, and (5) geostatistical areal interpolation. For all five methods, we used Esri's ArcGIS 10.3.1™ software. For the geostatistical method, we also used Esri's Geostatistical Analyst extension in ArcMap.
Method 1: Geographic centroid assignment
For geographic centroid assignment, we attributed Georgia Department of Public Health (GADPH) mortality counts to each county's geographic centroid. County deaths assigned to centroids that fall within a study zone were summed, by sex and year, to estimate the number of deaths for that zone (Fig. 3).
Geographic centroid assignment. Each county centroid is attributed a county mortality count for the population of interest. Mortality counts for centroids falling within each study zone are summed to estimate mortality, as a whole number, by zone. In this hypothetical example, zones A and B are assigned zero deaths, despite the overlap of three counties on zone A (two potential deaths) and five on zone B (four potential deaths). Zone C is assigned four deaths, but has the possibility of more
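A sketch of this workflow using GeoPandas (our choice of library, not necessarily the software used in the study) is shown below; the file and column names are hypothetical, and older GeoPandas releases take op="within" instead of predicate="within".

```python
# Geographic centroid assignment with GeoPandas (file and column names are hypothetical).
import geopandas as gpd

counties = gpd.read_file("georgia_counties_with_deaths.shp")  # includes a "deaths" column
zones = gpd.read_file("cog_study_zones.shp")                  # includes a "zone_id" column

centroids = counties.copy()
centroids["geometry"] = centroids.geometry.centroid           # areal centers of gravity

# Point-in-polygon join: keep only centroids that fall within a study zone.
joined = gpd.sjoin(centroids, zones[["zone_id", "geometry"]], predicate="within")

# Sum the county death counts assigned to each zone.
print(joined.groupby("zone_id")["deaths"].sum())
```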
Method 2: Population-weighted centroid assignment
For population-weighted centroid assignment, we attributed census tract populations of males and females aged 15 through 19, for years 2000 and 2010, to tract centroids. For each of Georgia's 159 counties, we used the tract centroids to calculate mean centers, weighted by the tract-level population of interest, for each year. County deaths assigned to population-weighted centroids that fall within a study zone were summed, by sex and year, to estimate the number of deaths for that zone (Fig. 4).
Population-weighted centroid assignment. Each tract centroid is attributed the population of interest. County centroids are placed using the mean center of tract centroids weighted by the tract population. Mortality counts for centroids falling within each study zone are summed to estimate mortality by zone. Results for zones A and B in this example, zero deaths for both, are the same as those for geographic centroid assignment. Zone C is assigned five deaths because the centroid in the northeast, with a value of "1," is now positioned within zone C
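Placing a county's population-weighted centroid reduces to a weighted mean of tract-centroid coordinates, as in the short sketch below (hypothetical projected coordinates and populations).

```python
# Population-weighted mean center of one county from its tract centroids (hypothetical values).
import numpy as np

tract_x = np.array([205_000.0, 208_500.0, 211_200.0])    # tract centroid eastings (projected CRS)
tract_y = np.array([1_310_000.0, 1_305_400.0, 1_312_700.0])
tract_pop = np.array([3800, 5200, 1500])                  # tract population of interest

weighted_x = np.average(tract_x, weights=tract_pop)
weighted_y = np.average(tract_y, weights=tract_pop)
print(weighted_x, weighted_y)  # the county death count is then attributed to this point
```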
Method 3: Simple areal weighting
Simple areal weighting is the same technique used for the denominator estimates, as described above. In this case, the area of overlap of the county source zone with the COG target zone was divided by the area of the entire county to obtain the proportion, or areal weight, of the county area within the study zone. The number of deaths for each county was then multiplied by the corresponding areal weight for that county. The resulting areal-weighted mortalities were summed to estimate the number of deaths for the study zone. Figure 2 illustrates the method, substituting deaths by county source zones for the population by tract source zones shown.
Method 4: Combined population and areal weighting
To estimate numerators for each COG study zone we (1) used population weighting, a conceptually dasymetric approach similar to that of Wilson and Mansfield, to disaggregate mortality from county level counts to tract level estimates; then (2) weighted each tract mortality estimate by its geographic area within the study zone; and finally (3) aggregated the combined population- and areal-weighted tract mortality estimates for each zone. In contrast to Wilson and Mansfield, who used population-weighted interpolation to estimate rates for standard enumeration units (CDs), we estimated mortality counts using population subgroup proportions for non-standard study zones. In addition, unlike Wilson and Mansfield who transformed census blocks with 100% hierarchy and fit both from county and to CD, we performed the second step, areal weighting, because our non-standard COG target study zones split the source census tracts.
We detail the combined population and areal weighting process here. Because we had numbers of deaths by county-level only, we took advantage of the county/tract hierarchy and assigned each tract a population-weighted mortality estimate as follows:
$$E_{mt} = \left( {\frac{{P_{t} }}{{P_{c} }}} \right)*M_{c}$$
where: \(E_{mt}\) is the population-weighted mortality estimate for the tract; \(P_{t}\) is the tract population; \(P_{c}\) is the county population; and \(M_{c}\) is the number of deaths in the county.
The output of Eq. (5) was multiplied by the geographic proportion of the tract that falls within the study zone, in other words, the areal weight (Fig. 5). This processing assumes an even distribution of tract population. We summed the resulting population and areal-weighted mortalities, by sex and year, to estimate the number of deaths for the zone. Expressed in its entirety, the study zone death count is estimated as:
$$M_{z} = \sum_{t = 1}^{n} \left( \frac{A_{zt}}{A_{t}} \times E_{mt} \right)$$
where: \(M_{z}\) is the study zone mortality count estimate; \(\sum_{t=1}^{n}\) sums results for all tracts, or tract portions; \(A_{zt}\) is the geographic overlap area of the tract and study zone; \(A_{t}\) is the geographic area of the entire tract; and \(E_{mt}\) is the population-weighted mortality estimate for the tract.
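The two weighting steps can be combined in a few lines, as in the sketch below for the tract pieces of a single county intersecting one study zone; all values are hypothetical.

```python
# Combined population and areal weighting for the tract pieces of one county in one study zone.
county_deaths = 2        # hypothetical county death count
county_pop = 18_000      # hypothetical county population of interest

# Each record: (tract population, tract area inside the zone, total tract area).
tracts = [
    (4000, 3.0, 3.0),    # tract lies entirely inside the zone
    (6000, 1.0, 4.0),    # one quarter of the tract lies inside the zone
    (8000, 0.0, 5.0),    # tract lies entirely outside the zone
]

zone_deaths = 0.0
for p_t, a_zt, a_t in tracts:
    e_mt = (p_t / county_pop) * county_deaths  # population-weighted tract mortality estimate
    zone_deaths += (a_zt / a_t) * e_mt         # areal weight applied to the tract estimate
print(round(zone_deaths, 3))  # 0.444 + 0.167 + 0.0 = 0.611 estimated deaths in the zone
```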
Combined population and areal weighting. The geographic area of the tract within the zone, the areal weight (\(A_{zt}/A_{t}\)), is multiplied by the population-weighted mortality estimate for the tract (\(E_{mt}\)). The output for each tract is then summed to estimate the number of deaths for the zone. We demonstrate, in this example, how estimates for portions of zones A and B are calculated. Note: As illustrated in Figs. 3 and 4, except for two counties, with two deaths each, the remaining counties within zones A and B recorded zero deaths for the population of interest; to simplify the illustration, we omitted counties with zero deaths. Also, because we show only portions of zones A and B, the estimates are technically only a portion of \(M_{z}\) for zones A and B
Method 5: Geostatistical areal interpolation
To determine how geostatistical methods of interpolation compared to the cartographic methods described above, Georgia mortality counts were interpolated from county level data using one geostatistical interpolation model from among multiple explored, over-dispersed Poisson areal kriging, as described by Krivoruchko et al. [26], and implemented in ArcMap 10.3.1's Geostatistical Wizard. We interpolated mortality count data for adolescent males and females separately. Using visual variography, we fitted a stable kriging interpolation model to a plot of empirical covariance versus distance, creating a continuous surface depicting the probability of event occurrence in the study area. The geostatistical method we used produced standardized root mean square error values of 1.02 for females and 1.12 for males, for which an ideal value would be 1.0. During variography we used a lattice spacing of 1000 m, a lag size of 5000 m, and 18 lags. The continuous probability surface was then used to estimate the mortality event counts for the COG zones, providing a numerator to determine a mortality rate for each zone based on the previously calculated population.
Statistical analyses to assess the methods included: (1) the distribution of county mortality counts, (2) measures of potential transformation error among numerator, denominator, and zones in terms of degrees of hierarchy and fit, and (3) absolute value arithmetic differences from observed Georgia mortality counts, t-tests on absolute value arithmetic differences among the five methods to check for statistical difference, Pearson's r correlations between Georgia rates and estimated rates, and Bland–Altman plots depicting 95% level of agreement between Georgia mortality rates and those of the five methods [36,37,38,39].
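The sketch below illustrates, with hypothetical rate estimates and SciPy, how the absolute differences, paired t-test, and Pearson correlation can be computed; it is not the analysis code used in the study.

```python
# Comparison statistics for two methods against the observed Georgia rates (hypothetical values).
import numpy as np
from scipy import stats

georgia = np.array([3.37, 2.71, 2.95, 3.10, 2.60, 3.42, 2.88, 3.05])  # eight zone-by-sex rates
method3 = np.array([3.60, 2.40, 3.30, 2.80, 2.90, 3.10, 2.60, 3.35])
method4 = np.array([3.30, 2.80, 2.90, 3.15, 2.55, 3.40, 2.95, 3.00])

abs_diff3 = np.abs(georgia - method3)
abs_diff4 = np.abs(georgia - method4)
print(abs_diff3.mean(), abs_diff4.mean())              # mean absolute differences per method

t_stat, p_val = stats.ttest_rel(abs_diff3, abs_diff4)  # paired t-test between two methods
r, r_p = stats.pearsonr(georgia, method4)              # Pearson's r for one method
print(t_stat, p_val, r, r_p)
```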
Distribution of adolescent cancer county mortality counts: Georgia versus the U.S
Histograms of the distribution of county mortality counts reveal a pattern in Georgia similar to that of the U.S. (Fig. 6). The histogram of the Georgia mortality counts (N = 238) demonstrates a Poisson distribution, strongly right skewed. Of 159 counties, 80 (50%) record zero mortalities for the 13-year period. Seventy-two counties (45%) report between one and five deaths and fewer than 5% of counties (n = 7) record more than five deaths. The mean number of deaths by county for Georgia is 1.50. The histogram of the U.S. mortality counts (N = 7687) demonstrates a Poisson distribution, strongly right skewed. Of 3143 counties, 1478 (47%) record zero mortalities for the 13-year period. Forty-four percent of counties (1374) report between one and five deaths and 9% of counties record more than five deaths (n = 291). The mean number of deaths by county for the U.S. is 2.45.
Distribution of adolescent cancer county mortality counts. Adolescent cancer mortality counts from the GADPH were appropriate for testing the methods. The distribution of county mortality counts for Georgia mirror those of the U.S. Likewise, patterns of zone values are roughly similar for the state and the nation
Transformation error: degrees of hierarchy and fit
As discussed above, the degree of hierarchy (nesting) and the degree of fit (overlap) are two measures to express the amount of estimation, or error, involved in the transformation from source to target zones, particularly affecting the cartographic methods. The closer the output of either of these measures to 100%, the better the transformation estimate should be. Table 1 shows the degrees of hierarchy and fit, in percentages, for both the Georgia and U.S. denominators, which use census tract source zones for populations, and numerators, which use county source zones for numbers of deaths. Denominator percentages for hierarchy, and particularly for fit, are high, with overall hierarchy at 81.7% for Georgia and 83.7% for the U.S., and overall fit at 96.6% for Georgia and 97% for the U.S. Numerator percentages for all measures are much lower than those for denominators, meaning the error is higher for numerator estimation. Overall hierarchy is 52.2% for Georgia and 45.1% for the U.S. Overall fit is 88.7% for Georgia and 87.2% for the U.S. Of note is the zone A degree of hierarchy for Georgia; a zero value means that none of the counties nest completely within zone A. Patterns of zone values are roughly similar for Georgia and the U.S. For example, most zone D measures indicate less potential for error than those of the other zones, because it is large relative to other zones, with little change-of-support.
Table 1 Measures of potential error: degrees of hierarchy and fit
Comparisons between observed and estimated mortality measures
Table 2 shows comparisons between observed 1999–2011 Georgia adolescent cancer mortality and estimated mortality, by method and zone. For the death counts (i.e., numerators), the "Georgia total" row illustrates the concept of volume preservation. That is, each of the four cartographic methods maintained overall counts, unlike the geostatistical method. The arithmetic differences between the observed counts and those for the methods become apparent in the zone estimations. The mean absolute value arithmetic differences between the observed Georgia mortality counts and their paired count estimates, were 5.50, 5.00, 4.17, 2.84, and 3.43 for each of the five methods, respectively. Standard deviations of these means decrease progressively for the cartographic methods 1 through 4. Geostatistical method 5, however, has a standard deviation higher than method 4, but slightly lower than method 3. The largest absolute arithmetic difference for method 4 was less than five, whereas for methods 1, 2, 3, and 5, the largest arithmetic differences were much greater, at 16, 11, 8.59, and 7.85, respectively. Comparing the methods through paired t-tests of absolute value arithmetic differences, however, showed no statistical difference among the methods, with no method a statistically significantly closer estimator than any other method.
Table 2 Comparisons between observed 1999–2011 Georgia adolescent cancer mortality and estimated mortality, by method and zone
Table 2 also displays the robust denominator estimates as well as rates by method and zone. The mean of arithmetic differences from paired Georgia death rates are −0.12, −0.10, 0.10, 0.01, and 0.15 for the methods, 1 through 5, with method 4 closest to zero and method 5 furthest from zero. As with the counts, the standard deviations of these means decrease progressively for the cartographic methods, with method 4 the lowest at 0.33. For method 5, however, the standard deviation of the mean of the arithmetic differences from paired Georgia rates, at 0.42, falls between those of methods 3 and 4.
We calculated the Pearson product moment correlation coefficients (Pearson's r) for the rates. For methods 1 through 5, the r values were 0.184, 0.191, 0.327, 0.627, and 0.413 respectively. In social science research, methods 1 and 2 demonstrate weak positive correlations, methods 3 and 5 suggest moderate positive correlations, and method 4 a strong positive correlation with the Georgia rates.
For each of the five methods, we used Bland–Altman plots, a tool to compare methods estimating the same variable, to visualize the agreement between arithmetic differences of paired Georgia and method rates (Fig. 7). Usually Bland–Altman plots measure equipment performance against a known standard. We apply them here to assess geographic data processing methods as compared to known data values. The plots display the means of each pair of rate estimates (x value), versus the arithmetic differences between the paired estimates (y value). For example, the estimated Georgia mortality rate for males in zone A is 3.371, whereas for method 1 the estimated rate is 3.984 (see Table 2). The mean of these values is 3.667 and the difference is −0.613. This point (3.667, −0.613) is displayed as the rightmost square on the method 1 plot of Fig. 7. The plots also display the mean of the arithmetic differences between the Georgia estimates and each paired estimate, known as the bias, as a red horizontal line. Limits of agreement, confidence intervals at the 95% confidence level, are drawn as black lines. For the method to be a good match with the Georgia estimated rates, all the plotted points must fall within the limits of agreement, close to the bias. Of the five plots, method 4 most closely replicates the Georgia estimates; all the plotted points are within the limits of agreement, which is also the smallest of the five methods, and the mean of arithmetic differences is closest to zero.
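The Bland–Altman quantities reduce to a few array operations, as sketched below; only the first pair of rates (the Georgia and method 1 values for males in zone A) is taken from the example above, the remaining values are hypothetical, and the 1.96 multiplier for the 95% limits of agreement is the conventional choice.

```python
# Bland-Altman quantities for one method versus the Georgia rate estimates.
import numpy as np

georgia = np.array([3.371, 2.71, 2.95, 3.10, 2.60, 3.42, 2.88, 3.05])
method1 = np.array([3.984, 2.40, 3.30, 2.80, 2.90, 3.10, 2.60, 3.35])

pair_means = (georgia + method1) / 2   # x values of the plot
pair_diffs = georgia - method1         # y values of the plot

bias = pair_diffs.mean()                       # mean of the differences (red line)
sd = pair_diffs.std(ddof=1)
limits = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement (black lines)
print(bias, limits)
```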
Bland–Altman plots to compare Georgia rates with the five method rates. The Bland–Altman plots compare 1999–2011 Georgia adolescent mortality rate estimates to estimated rates for methods 1 through 5. Method 4 demonstrates the greatest agreement
Among the five methods tested for numerator estimation, method 4, the combined population and areal weighting technique, had the lowest mean absolute value arithmetic difference between the estimation and observed Georgia death counts. Method 4 also generated the only strongly positive correlation with the estimated Georgia rates. However, correlation tests, i.e. Pearson's r, which support the selection of method 4 as the best method, are inadequate to completely assess the accuracy of an estimation method. A strong correlation may exist, but the output measurements could, theoretically, be consistently different. A more definitive measure of method performance is that of agreement. To visualize agreement, we used Bland–Altman plots which display the means of each pair of estimates—the Georgia rates compared to each of the five methods—against the arithmetic difference between the estimates. Method 4 again produced the best results, with each of the eight plotted points falling within small 95% limits of agreement.
Examining the other methods, we observe several reasons for their weaker performance. Although method 1 is easy to perform, the county centroid location is based solely upon the county's geographic center of gravity, with no accounting for the distribution of the study populations. This binary "all or nothing" condition means that mortality assignment could be 100% incorrect (or 100% correct or any percentage in between). Method 1 therefore returned the least accurate results. Population-weighted centroid assignment, method 2, improved centroid placement, but was still limited by the binary nature of the potential error as exemplified in method 1. The two centroid methods generated the highest absolute arithmetic differences from the Georgia counts, weak positive correlations with the Georgia rates, and displayed—via Bland–Altman plots—a lack of agreement with Georgia rates. Method 3, simple areal weighting, is superior to the centroid methods, indicating an intermediate absolute value difference from Georgia counts as well as a moderate positive correlation with the Georgia rates. However, method 3 failed the agreement test, most likely because the affected population was not taken into account inasmuch as simple areal weighting assumes an evenly distributed population.
Geostatistical areal interpolation, method 5, showed slightly stronger positive correlation with the Georgia rates than method 3. However, the geostatistical method still failed the agreement test. This lack of agreement may be the result of the nonstationary nature of the source data. Mortality count data should vary in a similar way to population, which is known to be somewhat nonstationary. The violation of the stationarity assumption makes fitting model parameters much more difficult, and limits the accuracy of the probability surfaces produced. In addition, geostatistical areal interpolation does not preserve volume, as do the cartographic methods tested.
There is also a conceptual problem with method 5. Count data are inherently discrete rather than continuous. As geostatistical methods are surface generating, i.e. they create continuous data, the use of geostatistics to interpolate counts is tenuous. While we would have preferred to interpolate mortality rates, the high number of counties with zero mortalities (80 of the 159 Georgia counties had no adolescent cancer deaths during the time of the study) precluded rate interpolation as the model invariably assigned a rate of zero across the study region. However, in the case of event interpolation, the data structure mismatch is solved by producing a continuous probability surface, rather than a prediction surface, from which to estimate COG zone counts. The surface generated represents the probability of an event occurring based upon the number of times that event occurred in each of the original geographies, mortality count by county in this study. This type of interpolation may be problematic if something other than the underlying distribution of counts affects the probability of observing the event, e.g. if different counties had different reporting practices.
Method 5 also presented a unique challenge that could make its application difficult for those without expert knowledge of geostatistical methods. Aside from the difficulty associated with the visual variography required when using the Geostatistical Wizard in ArcMap, geostatistical areal interpolation can be sensitive to data structuring. For this project, shapefiles used for the COG target zones had to be preprocessed so that aggregation of the probability surface to the target zones would produce accurate results. Specifically, zone D, shown in Fig. 1, posed a problem. In the state of Georgia, zone D encompassed an area of roughly 103,000 km², whereas the next largest zone covered only about 15,000 km². Although this large land expanse with little change-of-support produces good results for the cartographic methods, the size disparity led to the overestimation of mortality counts and the prediction of a high standard error in zone D when the geostatistical probability surface was aggregated to the COG study zones. To reduce predicted error, we split zone D into nine smaller polygons, bringing the largest individual polygon down in size to roughly 18,000 km² and reducing the predicted standard error for male mortality counts from 40.99 in the combined zone D to a mean of 3.62 and sum of 32.59 for the nine polygons that make up zone D. The corresponding standard errors for female mortality counts were 35.68, 3.16, and 28.42, respectively. Summing the estimated counts in these nine zones provided reasonably accurate results, shown in Table 2, especially as compared to the estimated counts when zone D was not split (77.86 for males, 56.82 for females). We expect this size disparity between zone D and the other study zones to require even more preprocessing for a national-scale geostatistical analysis.
The most effective method, method 4, incorporated ancillary census tract data to weight deaths by the at-risk populations to estimate mortality, the intent being to reduce the error associated with assuming an evenly distributed population across county source zones. In essence, disaggregation using population weighting is analogous to locally fitting the distribution of each source zone. Additionally, in combined population and areal weighting, unlike centroid methods or simple areal weighting, error is distributed across the target zones by allocating "mortality" weighted by population and area. Although it is more processing-intensive than the other cartographic methods described here, the processing can be automated. Further, method 4 is conceptually simple, particularly in contrast to the geostatistical techniques of method 5.
All spatial disaggregation techniques generate error. Because of confidentiality requirements, we were limited to county resolution for the NCHS numerator mortality data as opposed to tract-level resolution for the denominator populations. Denominator estimation was straightforward and stable because the tract source zones were small relative to the larger target zones surrounding the COGs, the degrees of hierarchy and fit were large, and the populations large.
In contrast, numerator estimation was more challenging. The Wilson and Mansfield population-weighting technique, which informed the population-weighting component of our combined population and areal weighting method, transformed mortality rates from one set of standard zones (counties) to another set of standard zones (congressional districts) both built from perfectly nested census blocks with 100% hierarchy and fit. In contrast, we required numerator mortality counts to be transformed from counties to non-standard study zones. We therefore combined population, in a conceptually dasymetric approach, and areal weighting, to estimate numbers of deaths for the numerators of our study zones.
Source zones for the numerator were counties within which census tracts nest hierarchically. Counties were therefore, by definition, larger than the tract source zones used for denominator estimation, with the rare exception of counties consisting of a single tract. Lower degrees of hierarchy and fit reflect this dichotomy between counties and tracts. Small numbers of deaths per county also led to less stable results for numerator estimation. In sum, low hierarchy and fit values for the numerators, along with smaller numerator counts, showed greater error in numerator estimation, in contrast to the high hierarchy and fit measures, as well as much larger counts, for the denominators.
Adolescent cancer mortality counts from the GADPH were appropriate for testing the methods explored. The distribution of county mortality counts for Georgia mirror those of the U.S. Likewise, patterns of zone values are roughly similar for the state and the nation. In terms of area, however, medium-sized Georgia has some of the smallest counties in the country (N = 159) and therefore may not be representative of other U.S. states. As noted, the mean number of mortalities per county is 1.50 versus 2.45 for the U.S. as a whole. It may be that counties with smaller geographic areas return better results than larger counties for the five tested methods. However, as method 4 employs combined weighting, which distributes error across study zones, we would still expect to observe improved estimation over the centroid methods in regions of the country with larger counties. With Georgia's smaller counties, improvements over the other methods in this study should be seen as conservative.
One potential limitation involves the relationship between census tract population and geographic area. The optimal population for a tract is 4000; therefore, less densely populated counties are likely to have fewer tracts, though with larger geographic areas. Georgia counties have higher population densities and smaller tracts than many counties in other states, so error cannot be distributed elsewhere at as fine a level of granularity as in Georgia. For our own primary research, however, counties with small numbers of tracts were not a major concern because those counties are located in zone D, which has limited change-of-support.
Another limitation was the small number of statistical data points available, eight (four zones by two sexes) for each method. Examining these four methods in other states would provide additional data points along with an opportunity to study the effects of larger or less densely populated counties on estimation methods. Another approach to increase statistical data points for method validation would be to explore Bland–Altman plots of additional zone configurations within the state of Georgia, e.g. random region delineations.
We chose not to examine regression to estimate mortality because the purpose of the primary study was solely to examine the association between adolescent cancer mortality and distance to a COG. Other than population distribution by sex, we avoided a priori assumptions in our estimation of the COG proximity zone mortality patterns. We also wanted to avoid the complexities of U.S.-wide regression models using multiple covariates. Given the satisfactory results we obtained from population and areal weighting, simple in concept and practice, we did not see the need to include multivariate regression in our preliminary analysis. Nonetheless, race, ethnicity, poverty, and lack of health insurance, among other factors, influence adolescent cancer mortality distribution. These factors vary geographically and will be considered in future exploration of potential explanatory variables in the primary study.
This research demonstrates that combined population and areal weighting, compared to cartographic centroid and simple areal weighting methods, and a geostatistical method, returns more accurate estimates of mortality in transforming small counts by county to aggregated counts for large target zones that do not conform to standard enumeration units. Weighting by ancillary population data to take into account at-risk population, in conjunction with the allocation of weighted mortalities, which eliminates the "all or nothing" problem inherent in centroid methods, distributes error across study zones, thus improving estimates. Furthermore, practitioners without the resources of geospatial statisticians and software may find this simpler cartographic method more accessible and just as effective in transforming county-level source zone counts to larger, non-standard target zones. This methodology should be of interest to practitioners and researchers limited to analysis of count data for relatively large enumeration source units, such as NCHS county-level mortality counts, among other data sources. We expect to observe increased support for using combined population and areal weighting estimates, particularly over other cartographic overlay methods.
Although NCHS CMF users are permitted to estimate sub-national counts and rates for their own analyses, they cannot report any sub-national count or rate based on totals less than 10. NCHS CMF users, as well as users of any confidential data sets, must ensure they comply with data use agreements.
CMF:
Compressed Mortality File
COG:
Children's Oncology Group
NCHS:
National Center for Health Statistics
GRASP:
Geospatial Research, Analysis, and Services Program
GADPH:
Georgia Department of Public Health
GA:
Georgia
Gotway CA, Young LJ. Combining incompatible spatial data. J Am Stat Assoc. 2002;97:632–48.
Gelfand AE, Zhu L, Carlin BP. On the change of support problem for spatio-temporal data. Biostatistics. 2001;2:31–45.
Cai Q, Rushton G, Bhaduri B, Bright E, Coleman P. Estimating small-area populations by age and sex using spatial interpolation and statistical inference methods. Trans GIS. 2006;10:577–98.
Goodchild MF, Lam NS-N. Areal interpolation: a variant of the traditional spatial problem. Geo Process. 1980;1:297–312.
Lam NS-N. Spatial interpolation methods: a review. Am Cartogr. 1983;10:129–49.
Langford M. Obtaining population estimates in non-census reporting zones: an evaluation of the 3-class dasymetric method. Comput Environ Urban Syst. 2006;30:161–80.
Chiang A. Evaluating the performance of a filtered area weighting method in population estimation for public health studies. Atlanta: Georgia State University; 2013. http://scholarworks.gsu.edu/geosciences_theses/62/. Accessed 25 Apr 2016.
Li T, Pullar D, Corcoran J, Stimson R. A comparison of spatial disaggregation techniques as applied to population estimation for South East Queensland (SEQ), Australia. Appl GIS. 2007;3:1–16.
de Smith MJGM, Longley PA. Centroids and centers. In: Geospatial analysis, 5th edn. 2015. http://www.spatialanalysisonline.com/HTML/index.html?centroids_and_centers.htm. Accessed 12 Jan 2017.
Centroid. Wikipedia. Accessed Jan 2017.
Simpson L. Geography conversion tables: a framework for conversion of data between geographical units. Int J Popul Geogr. 2002;8:69–82.
Gregory IN, Ell PS. Breaking the boundaries: Geographical approaches to integrating 200 years of the census. J R Stat Soc Ser A Stat Soc. 2005;168:419–37.
Brown L, Cunningham N. The inner geographies of a migrant gateway: mapping the built environment and the dynamics of caribbean mobility in Manchester, 1951–2011. Soc Sci Hist. 2016;40:93–120.
Norman P, Rees P, Boyle P. Achieving data compatibility over space and time: creating consistent geographical zones. Int J Popul Geogr. 2003;9:365–86.
Maantay JA, Maroko AR, Porter-Morgan H. A new method for mapping population and understanding the spatial dynamics of disease in urban areas: asthma in the Bronx, New York. Urban Geogr. 2008;29:724–38.
Holt JB, Lo CP, Hodler TW. Dasymetric estimation of population density and areal interpolation of census data. Cartogr Geogr Inf Sci. 2004;31:103–21.
Hao Y, Ward EM, Jemal A, Pickle LW, Thun MJ. U.S. congressional district cancer death rates. Int J Health Geogr. 2006;5:1–13.
Wilson JL, Mansfield CJ. Disease, death, and the body politic: an areal interpolation example for political epidemiology. Int J Appl Geospat Res. 2010;1:49–68.
Mennis J. Generating surface models of population using dasymetric mapping. Prof Geogr. 2003;55:31–42.
Mennis J, Hultgren T. Intelligent daysymetric mapping and its application to areal interpolation. Cartogr Geogr Inf Sci. 2006;33:179–94.
Tapp A. Areal interpolation and dasymetric mapping methods using local ancillary data sources. Cartogr Geogr Inf Sci. 2010;37:215–28.
Eicher CL, Brewer CA. Dasymetric mapping and areal interpolation: implementation and evaluation. Cartogr Geogr Inf Sci. 2001;28:125–38.
Zhang X, Holt JB, Lu H, Wheaton AG, Ford ES, Greenlund KJ, Croft JB. Multilevel regression and poststratification for small-area estimation of population health outcomes: a case study of chronic obstructive pulmonary disease prevalence using the behavioral risk factor surveillance system. Am J Epidemiol. 2014;179:1025–33.
Goovaerts P. Geostatistical analysis of disease data: accounting for spatial support and population density in the isolpleth mapping of cancer mortality risk using area-to-point Poisson kriging. Int J Health Geogr. 2006;5. doi:10.1186/1476-072X-5-52.
Goovaerts P. Geostatistical analysis of county-level lung cancer mortality rates in the southeastern United States. Geogr Anal. 2010;42:32–52.
Krivoruchko K, Gribov A, Krause E. Multivariate areal interpolation for continuous and count data. Procedia Environ Sci. 2011;3:14–9.
Diem JE. A critical examination of ozone mapping from a spatial-scale perspective. Environ Pollut. 2003;125:369–83.
Tai E, Buchanan N, Westervelt L, Elimam D, Lawvere S. Treatment setting, clinical trial enrollment, and subsequent outcomes among adolescents with cancer: a literature review. Pediatrics. 2014;133(Suppl 3):S91–7.
Rauck AM, Fremgen AM, Hutchison CL, Grovas AC, Ruymam FB, Menck HR. #50 Adolescent cancers in the United States: a national cancer data base (NCDB) report. J Pediatr Hematol Oncol. 1999;21:310.
Howell DL, Ward KC, Austin HD, Young JL, Woods WG. Access to pediatric cancer care by age, race, and diagnosis, and outcomes of cancer treatment in pediatric and adolescent patients in the state of Georgia. J Clin Oncol. 2007;25:4610–5.
U.S. Census Bureau. 2000 Census, Summary File 1. In: American Factfinder. 2000 ed. American Factfinder: U.S. Census Bureau; 2000. https://factfinder.census.gov/faces/nav/jsf/pages/index.xhtml. Accessed 29 Jan 2016.
National Center for Health Statistics. Compressed mortality file. Hyattsville: NCHS; 1999–2011.
Georgia Department of Public Health. Georgia adolescent cancer mortality data. Atlanta: Georgia Department of Public Health, OHIP; 2015.
Chiang A, Henry J. GRASP population estimator tool. CDC/ATSDR/GRASP; 2014.
Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;327:307–10.
Bland JM, Altman DG. Measuring agreement in method comparison studies. Stat Methods Med Res. 1999;8:135–60.
McLaughlin P. Testing agreement between a new method and the gold standard—how do we test? J Biomech. 2013;46:2757–60.
Giavarina D. Understanding bland altman analysis. Biochem Med. 2015;25:141–51.
EH performed spatial and some statistical analysis, produced most of the figures and tables, and drafted the manuscript. ET was the principal investigator on the primary project and edited the manuscript. AB performed geostatistical analysis, contributed some figures, and wrote portions of the methods and discussion sections. BL and GW performed some statistical analyses, produced some figures, and assisted with manuscript development. LP, SG, BF, and NBL assisted in the research design and provided significant editing of the manuscript. All authors read and approved the final manuscript.
Special thanks to Gordon Freymann and Robert Attaway of the Georgia Department of Public Health/Office of Health Indicators for Planning for their assistance.
The ideas expressed in the articles are those of the authors and do not necessarily reflect the official position of the Centers for Disease Control and Prevention. The National Center for Health Statistics, the U.S. Census, and the Georgia Department of Public Health are only responsible for providing initial data. Analyses, interpretations, and conclusions are those of the authors.
The data that support the findings of this study are available upon reasonable request from the National Center for Health Statistics and from the Georgia Department of Public Health. Restrictions apply to the availability of these data, which were used under license for the current study, and so the authors cannot distribute the data directly.
All funding was provided by Centers for Disease Control and Prevention.
Agency for Toxic Substances and Disease Registry, Geospatial Research, Analysis, and Services Program, Centers for Disease Control and Prevention, 4770 Buford Highway, MS F09, Atlanta, GA, 30341-3717, USA
Elaine Hallisey, Andrew Berens, Grete Wilt, Brian Lewis, Shannon Graham & Barry Flanagan
Division of Cancer Prevention and Control, Centers for Disease Control and Prevention, Atlanta, GA, USA
Eric Tai, Lucy Peipins & Natasha Buchanan Lunsford
Correspondence to Elaine Hallisey.
Hallisey, E., Tai, E., Berens, A. et al. Transforming geographic scale: a comparison of combined population and areal weighting to other interpolation methods. Int J Health Geogr 16, 29 (2017). https://doi.org/10.1186/s12942-017-0102-z
Areal interpolation
Areal weighting
Population weighting
Disaggregation
Geographic scale
Adolescent cancer | CommonCrawl |
Configuration Synthesis and Performance Analysis of 9-Speed Automatic Transmissions
Huafeng Ding, Changwang Cai, Ziming Chen, Tao Ke, Bowen Mao
In the automotive industry, vehicles equipped with automatic transmissions (ATs) have significant advantages, including simple operation, smooth gear shifts and long service life. Moreover, the corresponding driving performance and ride comfort are significantly improved compared to cars with manual transmissions. The first cars equipped with Hydra-Matic ATs were developed and put on the market by General Motors in the 1940s. Since then, cars equipped with automatic transmissions have attracted a vast number of consumers and quickly captured a large share of the automotive market, especially in the United States, European countries and Japan [1]. Along with the continuous development of the automotive industry, the innovation and development of ATs is of great significance.
In the past few decades, many innovations have been proposed in the area of automatic transmissions. Nowadays, the epicyclic-type ATs are the most widely applied ATs, which originates from its remarkable superiorities, including compact structure, large gear ratios, strong bearing capacity and long-life operation. The first step in the conceptual design phase of the AT is the selection of kinematic configurations to provide desired gear ratios [ 2 ]. An epicyclic-type AT mechanism typically consists of the hydraulic transmission and the mechanical transmission parts. The hydraulic transmission part mainly includes a torque converter, while the mechanical transmission part includes an epicyclic gear train (EGT) and a set of shifting elements such as clutches and brakes. In order to obtain kinematic configurations of ATs, scholars have proposed synthesis methods for EGTs. Johnson and Towfigh [ 3 ] utilized the synthetic approach of linkage type kinematic chains and proposed a synthesis method for EGTs. Then they synthesized gear mechanisms with one degree of freedom (DOF) with up to 8 links. Moreover, Buchsbaum and Freudenstein [ 4 ] and Freudenstein [ 5 ] applied the graph theory into the synthesis process of geared kinematic chains (GKCs), and obtained epicyclic gear chains with up to 5 links. Ravisankar and Mruthyunjaya [ 6 ] applied the graph theory and matrix to synthesize GKCs and obtained all 1-DOF EGTs with up to 4 fundamental loops. Furthermore, Tsai et al. [ 7 – 10 ] proposed an isomorphism detection method by computing characteristic polynomial of EGTs, the genetic graph approach to synthesize non-isomorphic EGTs with up to 6 links and non-fractionated 2-DOF EGTs with up to 7 links, and the canonical graph representation of GKCs to solve the pseudo isomorphic problem. Kim and Kwak [ 11 ] employed a recursive method and synthesized EGTs with up to 7 links. Hsu and Lam [ 12 ] presented a new graph representation for planetary gear trains (PGTs). Then Hsu [ 13 ] proposed a synthesis method for GKCs based on the new graph representation. Based on certain functional constraints, Castillo [ 14 ] proposed a synthesis method and then he applied the proposed method to synthesize 1-DOF PGTs with up to 9 links. Moreover, Ngo and Yan [ 15 ] generated 48 configurations and synthesized series–parallel hybrid transmissions for vehicles in the public transportation. Rajasri [ 16 ] proposed an approach in accordance with the hamming number and generated EGTs with up to 7 links. Kamesh [ 17 ] applied the vertex incidence polynomial and proposed a synthesis method. Then he reported all 1-DOF non-isomorphic EGTs with up to 6 links. Shanmukhasundaram et al. [ 18 ] applied the concept of kinematic unit in the representation method of rotation and displacement graphs to synthesize EGTs with up to 7 links. Moreover, scholars [ 19 – 21 ] focused on the configurations synthesis of ATs, which is normally used for hybrid electric vehicles. It should be indicated that these devices are expected to play an important role in the energy and environmental conservations in the near future. Reviewing the literature indicates that diverse EGTs have been proposed so far, which provide numerous kinematic configurations for innovative ATs. Nowadays, 6-, 7- and 8-speed ATs are mainly products in the automotive industry, while some 9-speed ATs are under test and further development. Along with the development of the AT technology, EGTs with more links are highly demanded to design ATs with more gears.
On the other hand, it is of significant importance to evaluate the feasibility and performance of AT mechanisms from different aspects, including the kinematic and dynamic analysis. To this end, different methods such as the relative velocity method [ 22 ], the lever analogy method [ 23 ] and the topology-based method [ 24 ] have been proposed. These methods can predict the speed, speed ratio and force condition of the candidate AT as an appropriate evaluation index. Reviewing the literature indicates that the lever analogy method is widely applied to analyze speeds and force condition of EGTs. Then the power flow and power loss are analyzed to obtain another evaluation index, called the transmission efficiency [ 2 , 25 – 28 ]. In fact, the torque method [ 29 ], applying the concept of the ratio sensitivity [ 30 , 31 ], is a simple and universal scheme, which can be applied to all structures.
Ding et al. [ 32 – 36 ] presented unified topological representation models for planar kinematic chains. Then they synthesized EGTs with up to 9 links and established a topological graph database for EGTs. In the present study, it is intended to propose a synthesis method based on the topological graph database. Then the proposed method was applied to investigate the kinematic configurations of different ATs, covering existing designs and new designs. In order to illustrate the synthesis process, 9-speed ATs were synthesized and four mechanisms were proposed. Then, the lever analogy method was applied to conduct the kinematic and mechanic analyses of the proposed configurations, as well as the power flow analysis. Moreover, the transmission efficiencies were calculated through the torque method. In order to evaluate the feasibility and performance of the proposed mechanisms, comparative analysis was carried out. Finally, the prototype of one of mechanisms with the best performance was manufactured and the speed test experiment was conducted.
2 Configuration Synthesis of 9-speed AT Mechanisms
Studies show that as the number of gear ratios in an AT increases, the fuel consumption rate decreases and ride comfort improves [37]. Nowadays, ATs with 6–8 gears are mainstream products in the automotive industry. Meanwhile, 9-speed ATs are under test and development. Figure 1 illustrates configurations of different 9-speed automatic transmissions. Among the presented schemes, the ZF 9HP consists of four deceleration gears, one direct gear, four over-speed gears and one reverse gear, while the Benz 9G-Tronic consists of five deceleration gears, one direct gear, three over-speed gears and one reverse gear. Moreover, the GM 9T50E consists of six deceleration gears, one direct gear, two over-speed gears and one reverse gear.
Configuration of different existing 9-speed ATs
The configuration of the EGT, as the key component of an AT mechanism, is of significant importance for designing ATs. Based on the unified topological representation models of GKCs proposed by Ding [32], EGTs used in ATs can be represented by double bicolor graphs (DBGs). For EGTs with one main shaft, the number of vertices of the DBG is one more than the number of links of the corresponding EGT. Moreover, the mobility of the DBG is one more than that of the corresponding EGT. For example, the structure diagrams of the ATs shown in Figure 1 are obtained from Refs. [38–40], and the corresponding EGTs are converted into DBGs, where the results are presented in Figure 2. It is observed that the EGT used in the ZF 9HP is 2-DOF and 11-link, while the corresponding DBG is 3-DOF and has 12 vertices.
Structure diagrams of different ATs and the corresponding DBGs
Ding et al. [ 33 ] established a topological graph database for EGTs. They synthesized EGTs with up to 11 links and presented them through corresponding DBGs in the database. EGTs for ZF 9HP, Benz 9G-Tronic and GM 9T50E schemes have 11 links, 12 links and 10 links, respectively. Considering the number of vertices and the mobility of the corresponding DBGs, EGTs used in ZF 9HP and GM 9T50E can be obtained from the topological graph database as shown in Figure 3. It should be indicated that there are some DBGs similar to the existing EGTs in topological features, which may be used to obtain AT mechanisms having similar kinematic structures with the existing AT products.
DBGs of EGTs in different automatic transmissions
It should be indicated that AT mechanisms with certain requirements can be obtained from scratch through the topological graph database. For EGTs used in AT mechanisms, only coaxial links, namely central gears (sun and ring gears) and carriers, can be used as the input, output, or fixed components to obtain the desired gear ratios [41]. For a 1-DOF AT mechanism with n_c coaxial links, the number of all possible gear ratios, including the direct drive, is denoted by m. When the output is specified, m can be calculated through the following expression:
$$m = (n_{c} - 1)(n_{c} - 2) + 1.$$
The mobility of an AT mechanism is defined as the number of components that can rotate freely when all shifting elements are in the separated state. In an AT mechanism, a shifting element is called a clutch when applied to connect two different components, or a brake when applied to fix one component to the housing. Engaging a shifting element reduces the mobility of the AT mechanism from F to (F−1) and the number of coaxial links from n_c to (n_c−1). Therefore, for an AT mechanism with F degrees of freedom, (F−1) shifting elements are required to reduce the mobility of the mechanism to 1. Meanwhile, the entire movement of the AT mechanism is determined for a certain input speed so that the desired gear ratio can be obtained [42].
In order to obtain a 9-speed AT mechanism, the number of desired gear ratios, including the reverse gear, is 10. Then, the number of coaxial links n_c in a 1-DOF 9-speed AT mechanism should satisfy the following inequalities:
$$(n_{c} - 1)(n_{c} - 2) + 1 \ge 10,$$
$$n_{c} \ge 5.$$
In order to obtain 9-speed AT mechanisms with F degrees of freedom, the number of coaxial links should be no fewer than (4 + F). It should be indicated that, based on the abovementioned definition of the AT mechanism mobility and the Chebychev-Grübler-Kutzbach criterion for planar mechanisms [43], the mobility of the EGT used in an F-DOF AT mechanism is (F−1). In the present study, EGTs satisfying the relationship between their mobility and the number of coaxial links are synthesized, and shifting elements are added to obtain the corresponding configurations for designing 9-speed AT mechanisms.
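As a quick check of Eq. (1) and the two inequalities above, the short Python sketch below (not from the paper) computes the smallest number of coaxial links for a required number of gear ratios and a given mobility; generalizing to F degrees of freedom by adding the (F−1) links consumed by the additional engaged shifting elements is how the (4 + F) bound is read in this note.

```python
# Minimal sketch (not from the paper): reproduce the coaxial-link count rule.

def max_gear_ratios(n_c: int) -> int:
    """Number of selectable gear ratios (direct drive included) for a
    1-DOF mechanism with n_c coaxial links, Eq. (1)."""
    return (n_c - 1) * (n_c - 2) + 1

def min_coaxial_links(required_ratios: int, mobility: int = 1) -> int:
    """Smallest n_c giving at least `required_ratios` gear ratios; the extra
    (mobility - 1) links are an assumption of this note (one per added
    engaged shifting element)."""
    n_c = 3
    while max_gear_ratios(n_c) < required_ratios:
        n_c += 1
    return n_c + (mobility - 1)

if __name__ == "__main__":
    # 9 forward gears + 1 reverse gear = 10 required ratios.
    print(min_coaxial_links(10, mobility=1))  # -> 5
    print(min_coaxial_links(10, mobility=3))  # -> 7, as used for the 3-DOF designs
```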
For convenience of control and smooth shifting, two shifting elements are applied simultaneously in the mechanism, and the "clutch-clutch" shifting principle is considered: during a shift, one engaged shifting element remains engaged while the other disengages, and at the same time one of the remaining shifting elements is brought into the working state. In the present study, 9-speed AT mechanisms with 3-DOF are considered to illustrate the synthesis process. To this end, the number of coaxial links should be no fewer than 7.
At the first step, the mobility and the number of vertices of the corresponding DBGs are set in accordance with the graph database, and the possible link assortments are obtained. For example, when 2-DOF EGTs with 11 links are considered, the corresponding 3-DOF DBGs have 12 vertices. In this case, the number of vertices and the mobility in the topological graph database are set to 12 and 3, respectively, and 103 possible link assortments are obtained, as illustrated in Figure 4.
Link assortments of 12-vertex 3-DOF DBGs obtained from the topological graph database
At the second step, the maximum vertex degree of the DBGs in each link assortment is examined. When this value is equal to or greater than (4 + F), the DBGs in the link assortment are synthesized. Figure 5 illustrates some of the obtained DBGs for 11-link 2-DOF EGTs with 7 coaxial links.
Obtained DBGs for 11-link 2-DOF EGTs with 7 coaxial links
Considering the characteristics of topology graphs and practical applications of EGTs, DBGs with the following conditions should be eliminated.
DBGs with rigid sub-chains;
Non-planar DBGs;
DBGs with vertices connecting to a hollow vertex through dash lines;
DBGs with vertices, which are only adjacent to dash lines;
DBGs in which a vertex representing a planet gear is adjacent to fewer than two dash lines.
DBGs satisfying the foregoing constraints are eliminated. Then, four DBGs are selected as examples as shown in Figure 6.
Four DBGs selected from link assortments [2; 9; 0; 0; 0; 1] and [3; 7; 1; 0; 0; 1]
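Such rules are straightforward to automate once a DBG is stored as an attributed graph. The sketch below is only an illustration of this filtering step, not the authors' implementation: the encoding (node attributes `hollow` and `planet`, edge attribute `dashed`) and the reading of dash lines as gear pairs are assumptions of this note, the rigid sub-chain test (rule 1) is left out because it requires a dedicated structural check, and only the planarity test uses an existing networkx routine.

```python
# Illustrative filter for the elimination rules above (assumed DBG encoding,
# not the authors' code): nodes carry boolean attributes "hollow" and "planet",
# edges carry the boolean attribute "dashed".
import networkx as nx

def violates_rules(g: nx.Graph) -> bool:
    # Rule 2: non-planar DBGs are eliminated.
    if not nx.check_planarity(g)[0]:
        return True
    # Rule 3: a dash line must not end at a hollow vertex.
    for u, v, data in g.edges(data=True):
        if data.get("dashed") and (g.nodes[u].get("hollow") or g.nodes[v].get("hollow")):
            return True
    for n in g.nodes:
        incident = list(g.edges(n, data=True))
        dashed = sum(1 for _, _, d in incident if d.get("dashed"))
        # Rule 4: a vertex adjacent only to dash lines is eliminated.
        if incident and dashed == len(incident):
            return True
        # Rule 5: a planet-gear vertex needs at least two adjacent dash lines.
        if g.nodes[n].get("planet") and dashed < 2:
            return True
    return False  # rule 1 (rigid sub-chains) must be checked separately
```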
At the third step, in order to conduct the performance analysis and the arrangement of shifting elements, it is necessary to draw function diagrams of EGTs corresponding to the DBGs. In the present study, the Buchsbaum-Freudenstein method [ 4 ] is applied for converting DBGs to function diagrams of EGTs. Then, the carriers, planet gears, sun gears and ring gears are determined in accordance with the correspondences between DBGs and EGTs. The DBGs shown in Figure 6 are converted to the function diagrams as shown in Figure 7.
Function diagrams of four DBGs presented in Figure 6
Figure 7 indicates that EGTs (1) and (2) are structurally similar. Furthermore, EGTs (3) and (4) are structurally similar. It should be indicated that structurally similar EGTs for any existing AT can be found from the database.
Finally, configurations of 9-speed AT mechanisms can be obtained by adding shifting elements through the lever analogy method [23]. This can be carried out as follows:
Transform the function diagrams of EGTs into equivalent lever diagrams through the lever analogy method.
Determine the output member, which is normally the carrier or the ring gear.
Determine the input member, which is normally the sun gear or the carrier.
Determine the position of clutches. Clutches have two operating modes: the first connects components of two different planetary gear sets (PGSs), namely the simple planetary gear trains defined by Lévai [22]; the second connects two different components of one PGS, so that all components of that PGS rotate as a whole (called atresia). The main purpose of the first mode is to reduce the mobility of the EGT, while the second mode is always employed to achieve the direct gear.
Determine the position of brakes. Brakes cannot be added on the input or the output member.
In the present study, the foregoing steps are applied for the conversion. The DBGs presented in Figure 6 are converted into AT mechanisms, and the corresponding structure diagrams are shown in Figure 8.
The structure diagrams of AT mechanisms converted from DBGs presented in Figure 6
A review of the literature indicates that these schemes have not been proposed before; therefore, they have been patented or are in the process of being patented [44–47].
3 Performance Analysis of Proposed AT Mechanisms
Studies show that the variation range of gear ratios, the interval of gear ratios and the transmission efficiency are important indices to evaluate the performance of AT mechanisms. On the other hand, the mechanical performance of an AT mechanism has an important influence on the performance and service life of the system. In this section, the performance of the proposed AT mechanisms shown in Figure 8 is analyzed. To this end, the AT mechanism (1) is taken as an example to illustrate the analysis process. Figure 9 illustrates the structure of the AT mechanism (1) in detail.
Structural diagram of the AT mechanism (1)
Figure 9 indicates that the AT mechanism mainly consists of a hydraulic transmission part and a mechanical transmission part, both installed in the housing. The hydraulic transmission part mainly consists of a torque converter, while the mechanical transmission part consists of an EGT and six shifting elements. The EGT is made up of four PGSs (PGS 1–PGS 4) and five interconnecting components (IC 1–IC 5). Furthermore, the shifting elements include two clutches (A, B) and four brakes (C, D, E, F). Power from the input shaft (I) is transmitted through the hydraulic and mechanical transmission parts to the output shaft (O), so that the vehicle moves with the expected speed.
Figure 9 indicates that each PGS consists of four members: the sun gear (S), ring gear (R), planet gear (P) and carrier (PC). These members are connected to each other through ICs and shifting elements. Each IC operates like a coupling and permanently connects two members of different PGSs into a single component, whereas a shifting element connects or separates components in different gears. Different gear ratios can be obtained by selectively engaging or disengaging the shifting elements. It should be indicated that the proposed AT mechanism has ten different gear ratios (including a reverse gear), so it is categorized as a 9-speed transmission.
3.1 Kinematic Analysis
The kinematic analysis mainly includes the calculation of the gear ratio and the rotational speeds of the moving components at each gear so that the range and interval of gear ratios can be obtained. The gear ratio of the AT mechanism refers to the ratio of the input shaft speed to the output shaft speed. The absolute value of the gear ratio indicates the size, while the corresponding sign indicates the correlation between the rotation direction of the input and the output shafts.
At present, the relative velocity method and the lever analogy are widely applied to analyze the AT from the kinematic points of view. The lever analogy employs the equivalent lever diagram which is more intuitive and is beneficial to the arrangement of gear ratios. In the present study, the lever analogy is applied to analyze the proposed AT mechanisms. Figure 10 illustrates the equivalent lever diagram of the proposed AT mechanism.
Equivalent lever diagram of the proposed AT mechanism (1)
Figure 10 indicates that there are six shifting elements in the proposed AT mechanism, two of which must be engaged simultaneously to obtain a certain gear ratio. Considering the practical application of the AT system, the number of engaged brakes for each PGS should not exceed one. Accordingly, there are ten combination modes of clutches and brakes in the proposed AT mechanism. The characteristic parameter of each of the four PGSs, equal to the ratio of the ring gear tooth number to the sun gear tooth number, is denoted by K_n (n = 1, 2, 3, 4).
3.1.1 Combination Mode 1: D and F Engaged
In this mode, brakes D and F are engaged, and components R1R2 and PC3R4 are connected to the housing; hence R1R2 and PC3R4 are stationary and their speeds are 0. By merging fulcrums with the same speed into a single fulcrum, the AT mechanism is transformed into an equivalent lever with six fulcrums, as shown in Figure 11.
Equivalent lever speed diagram when D and F engaged
Fulcrums (1), (2) and (3) denote the input member S1S2, member PC1 and component PC2S3, respectively, while fulcrums (4), (5) and (6) represent component R1R2PC3R4, output member R3PC4 and member S4, respectively. The rectangular coordinate system O-XY is established as shown in Figure 11: the axis of the equivalent lever with six fulcrums is taken as the Y-axis, with the positive direction from fulcrum (1) to fulcrum (6); an arbitrary point below fulcrum (1) on the Y-axis is selected as the origin O, and the straight line through O perpendicular to the Y-axis is taken as the X-axis, with the rightward direction positive.
The fulcrums on the Y-axis represent components of the EGT. For convenience of calculation, it is assumed that the length between fulcrums (4) and (5) is 1. Subsequently, the lengths between the other fulcrums can be obtained and expressed in terms of the characteristic parameters K_n (n = 1, 2, 3, 4).
The X-axis represents the rotational speed of each component. For convenience of calculation, it is assumed that the rotational speed of the input member (fulcrum (1)) is 1 r/min, so the coordinate of point a is (1, Y(1)); the rotational speed of the fixed member (fulcrum (4)) is 0 r/min. The speed line ab of the AT mechanism is then obtained by connecting point (1, Y(1)) to point (0, Y(4)); point b is the intersection of the speed line with the horizontal line through fulcrum (5). The X-coordinates of the intersections of the speed line with the horizontal lines through the fulcrums represent the rotational speeds of the corresponding components; a positive or negative sign indicates that a component rotates in the same or the opposite direction as the input member, respectively. For example, the X-coordinate of point b is negative, indicating that the rotating direction of the output member R3PC4 is opposite to that of the input member.
According to the basic properties of similar triangles, expressions of the rotational speed can be obtained for all components. For example, the triangle consisting of fulcrums (4), (5) and point b is similar to the triangle consisting of fulcrums (4), (1) and point a. The X-coordinate of the b point, namely the rotational speed of the output member R 3PC 4, can be derived accordingly:
$$\frac{{n_{{{\text{R}}_{ 3} {\text{PC}}_{ 4} }} }}{{n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} }} }} = \frac{{x_{b} }}{{x_{a} }}{ = } - \frac{{l_{45} }}{{l_{14} }} = - \frac{1}{{(1 + K_{2} )K_{3} }},$$
$$n_{{{\text{R}}_{ 3} {\text{PC}}_{ 4} }} = x_{b} = - \frac{1}{{(1 + K_{2} )K_{3} }},$$
where \(n_{{{\text{S}}_{1} {\text{S}}_{2} }}\) and \(n_{{{\text{R}}_{3} {\text{PC}}_{4} }}\) denote the rotational speeds of the input member S1S2 and the output member R3PC4, respectively; x_b and x_a are the X-coordinates of points b and a, respectively; and l_{ij} (i, j = 1, 2, …, 6) denotes the distance between fulcrums (i) and (j).
Similarly, the rotational speed of other components can be obtained. Calculated speeds are presented in Table 1.
Rotational speed of each component when D and F engaged (r/min)
S1S2 (input): 1
PC1: \(\frac{1}{1 + K_{1} }\)
PC2S3: \(\frac{1}{1 + K_{2} }\)
R1R2PC3R4 (fixed): 0
R3PC4 (output): \(- \frac{1}{(1 + K_{2} )K_{3} }\)
S4: \(- \frac{1 + K_{4} }{(1 + K_{2} )K_{3} }\)
X-coordinates of points a and b represent the rotational speed of the input and the output members, respectively. Based on the defined parameters, the gear ratio can be mathematically expressed as follows:
$$i_{\text{DF}} = \frac{{n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} }} }}{{n_{{{\text{R}}_{ 3} {\text{PC}}_{ 4} }} }}{ = } - (1 + K_{2} )K_{3} ,$$
where \(i_{\text{DF}}\) denotes the gear ratio of the AT mechanism when D and F are engaged.
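The similar-triangle argument is simply a linear interpolation along the lever: once the speeds of two fulcrums are known, the speed of any other fulcrum follows from its position on the Y-axis. The following Python sketch (not from the paper) reproduces the reverse-gear result; the fulcrum positions are built from l45 = 1, as assumed above, and l14 = (1 + K2)K3, which follows from Eq. (4), while placing fulcrum (1) at the origin is simply a convenience of this note.

```python
# Minimal sketch of the lever-analogy "speed line" as linear interpolation.
def speed_at(y, y_a, n_a, y_b, n_b):
    """Rotational speed at lever position y, interpolated from two fulcrums
    whose positions (y_a, y_b) and speeds (n_a, n_b) are known."""
    return n_a + (n_b - n_a) * (y - y_a) / (y_b - y_a)

K2, K3 = 3.0, 1.4
l45 = 1.0                     # assumed unit length between fulcrums (4) and (5)
l14 = (1 + K2) * K3           # from Eq. (4): l45 / l14 = 1 / ((1 + K2) K3)

y1, y4, y5 = 0.0, l14, l14 + l45          # positions of fulcrums (1), (4), (5)
n_out = speed_at(y5, y1, 1.0, y4, 0.0)    # input at 1 r/min, fixed member at 0
print(round(n_out, 4), round(1.0 / n_out, 2))   # -0.1786 r/min, i_DF = -5.6
```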
3.1.2 Combination Mode 2: C and F Engaged
In this mode, brakes C and F are engaged, and member PC1 and component PC3R4 are connected to the housing; hence PC1 and PC3R4 are stationary and their speeds are 0. Figure 12 illustrates the equivalent lever with six fulcrums into which the AT mechanism is transformed.
Equivalent lever speed diagram when C and F engaged
In this figure, the fulcrums (1), (2) and (3) denote the input member S 1S 2, member S 4 and output member R 3PC 4, respectively. Moreover, fulcrums (4), (5) and (6) represent components PC 1PC 3R 4, PC 2S 3 and R 1R 2, respectively. Figure 12 indicates that the instantaneous speed line ab of the AT mechanism can be obtained by connecting the coordinates (1, Y(1)) and (0, Y(4)).
Then expressions of the rotational speeds can be obtained for the components through the basic properties of similar triangles. Table 2 illustrates the calculated results.
Rotational speed of each component when C and F engaged (r/min)
S1S2 (input): 1
S4: \(\frac{(1 + K_{4} )(K_{2} - K_{1} )}{(1 + K_{2} )K_{1} K_{3} }\)
R3PC4 (output): \(\frac{K_{2} - K_{1} }{(1 + K_{2} )K_{1} K_{3} }\)
PC1PC3R4 (fixed): 0
PC2S3: \(- \frac{K_{2} - K_{1} }{(1 + K_{2} )K_{1} }\)
R1R2: \(- \frac{1}{K_{1} }\)
Based on the defined parameters, the gear ratio in this mode can be mathematically expressed as the following:
$$i_{\text{CF}} = \frac{{n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} }} }}{{n_{{{\text{R}}_{ 3} {\text{PC}}_{ 4} }} }} = \frac{{(1 + K_{2} )K_{1} K_{3} }}{{K_{2} - K_{1} }},$$
where \(i_{\text{CF}}\) denotes the gear ratio of the AT mechanism when C and F are engaged.
3.1.3 Combination Mode 3–6: A Engaged
In this mode, the clutch A is engaged and member S 4 and component S 1S 2 are connected to each other. Figure 13 illustrates the equivalent lever with six fulcrums transformed by the AT mechanism.
Equivalent lever speed diagram when the clutch A engaged
Figure 13 indicates that fulcrums (1), (2) and (3) denote input member S 1S 2S 4, output member R 3PC 4 and component PC 3R 4, respectively. Moreover, fulcrums (4), (5) and (6) represent member PC 1, component PC 2S 3 and component R 1R 2, respectively.
There are four modes corresponding to the engagement of the four different brakes; therefore, four different gear ratios can be obtained accordingly.
In Figure 13, lines a1b1 and a2b2 represent the speed lines of the AT mechanism when brakes F and C are engaged, respectively, while a3b3 and a4b4 denote the speed lines when brakes E and D are engaged, respectively. The rotational speed expressions of the components for each engaged brake, obtained from the basic properties of similar triangles, are listed in Table 3.
Rotational speed of each component when A engaged (r/min)
(The table lists, for each of the components S1S2S4 (input), R3PC4 (output), PC3R4, PC1, PC2S3 and R1R2, its rotational speed expression in terms of K1–K4 under each of the four brake engagements F, C, E and D.)
Component S 1S 2S 4 is the input member, while R 3PC 4 is the output member. According to the definition of the gear ratio, the expressions are described as follows.
Gear ratio expression when brake F is engaged is described as follows:
$$i_{\text{AF}} = \frac{{n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{S}}_{4} }} }}{{n_{{{\text{R}}_{ 3} {\text{PC}}_{ 4} }} }} = 1 + K_{4} .$$
Gear ratio expression when brake C is engaged is described as follows:
$$i_{\text{AC}} = \frac{{n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{S}}_{4} }} }}{{n_{{{\text{R}}_{ 3} {\text{PC}}_{ 4} }} }}{ = }\frac{{(1 + K_{3} + K_{4} )(1 + K_{2} )K_{1} }}{{((1 + K_{2} )(1 + K_{3} ) + K_{4} )K_{1} - K_{2} K_{4} }}.$$
Gear ratio expression when brake E is engaged is described as follows:
$$i_{\text{AE}} = \frac{{n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{S}}_{4} }} }}{{n_{{{\text{R}}_{ 3} {\text{PC}}_{ 4} }} }}{ = 1 + }\frac{{K_{4} }}{{1 + K_{3} }}.$$
Gear ratio expression when brake D is engaged is described as follows:
$$i_{\text{AD}} = \frac{{n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{S}}_{4} }} }}{{n_{{{\text{R}}_{ 3} {\text{PC}}_{ 4} }} }} = \frac{{(1 + K_{2} )(1 + K_{3} + K_{4} )}}{{1 + K_{2} + K_{3} + K_{4} + K_{2} K_{3} }},$$
where \(i_{\text{AF}}\), \(i_{\text{AC}}\), \(i_{\text{AE}}\) and \(i_{\text{AD}}\) denote the gear ratio of the AT mechanism when A and F are engaged, gear ratio when A and C are engaged, gear ratio when A and E are engaged and the gear ratio when A and D are engaged, respectively. Moreover, \(n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{S}}_{4} }}\) represents the rotational speed of the input member S 1S 2S 4.
3.1.4 Combination Mode 7–9: B Engaged
In this mode, the clutch B is engaged and components PC 3R 4 and S 1S 2 are connected to each other. Figure 14 shows the equivalent lever with six fulcrums transformed by the AT mechanism.
Equivalent lever speed diagram when clutch B engaged
It should be indicated that the fulcrums (1), (2) and (3) represent the member S 4, output member R 3PC 4 and input member S 1S 2PC 3R 4, respectively. Moreover, the fulcrums (4), (5) and (6) represent the member PC 1, component PC 2S 3, and component R 1R 2, respectively.
Different gear ratios are obtained by engaging different brakes. In Figure 14, lines a1b1, a2b2 and a3b3 represent the speed lines of the AT mechanism when brakes D, E and C are engaged, respectively. The rotational speed expressions of the components for each engaged brake, obtained from the basic properties of similar triangles, are listed in Table 4.
Rotational speed of each component when B engaged (r/min)
(The table lists, for each of the components S4, R3PC4 (output), S1S2PC3R4 (input), PC1, PC2S3 and R1R2, its rotational speed expression in terms of K1–K4 under each of the three brake engagements D, E and C.)
Component S 1S 2PC 3R 4 is the input member, while R 3PC 4 is the output member. According to the definition of the gear ratio, the expressions can be obtained as follows.
When the brake D is engaged, the corresponding gear ratio can be expressed as the following:
$$i_{\text{BD}} = \frac{{n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{PC}}_{3} {\text{R}}_{ 4} }} }}{{n_{{{\text{R}}_{3} {\text{PC}}_{4} }} }} = \frac{{(1 + K_{2} )K_{3} }}{{K_{2} + K_{3} (1 + K_{2} )}}.$$
When the brake E is engaged, the mathematical expression for the gear ratio is:
$$i_{\text{BE}} = \frac{{n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{PC}}_{3} {\text{R}}_{ 4} }} }}{{n_{{{\text{R}}_{3} {\text{PC}}_{4} }} }} = \frac{{K_{3} }}{{1 + K_{3} }}.$$
When the brake C is engaged, the expression for the gear ratio is in the form below:
$$i_{\text{BC}} = \frac{{n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{PC}}_{3} {\text{R}}_{ 4} }} }}{{n_{{{\text{R}}_{3} {\text{PC}}_{4} }} }} = \frac{{(1 + K_{2} )K_{1} K_{3} }}{{(1 + K_{1} )K_{2} + (1 + K_{2} )K_{1} K_{3} }},$$
where \(i_{\text{BD}}\), \(i_{\text{BE}}\) and \(i_{\text{BC}}\) denote the gear ratio of the AT mechanism when B and D, B and E, and B and C are engaged, respectively. Moreover, \(n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{PC}}_{3} {\text{R}}_{4} }}\) represents the rotational speed of the input member S 1S 2PC 3R 4.
3.1.5 Combination Mode 10: A and B Engaged
In this mode, clutches A and B are engaged together, and components PC3R4, S1S2 and member S4 are connected to each other. Meanwhile, PGS 4 rotates as a whole to form an atresia, which transmits the input motion directly to the output member and realizes the direct gear. As shown in Figure 15, the speed line a1a2 of the AT mechanism is obtained by connecting the coordinates (1, Y(1)) and (1, Y(3)).
Equivalent lever speed diagram when A and B engaged together
In this mode, rotational speeds of all components are the same, which is equal to that of the input member, namely, 1 r/min. This can be mathematically expressed as the following:
$$n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{S}}_{4} }} = n_{{{\text{R}}_{3} {\text{PC}}_{4} }} = n_{{{\text{PC}}_{3} {\text{R}}_{ 4} }} = n_{{{\text{PC}}_{ 1} }} = n_{{{\text{PC}}_{ 2} {\text{S}}_{3} }} = n_{{{\text{R}}_{ 1} {\text{R}}_{ 2} }} = 1.$$
The X-coordinates of points a1 and a2 both represent the rotational speed of the input member, and the speed line likewise gives the rotational speed of the output member. Then, the gear ratio can be obtained as follows:
$$i_{\text{AB}} = \frac{{n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{S}}_{4} }} }}{{n_{{{\text{R}}_{ 3} {\text{PC}}_{ 4} }} }} = 1,$$
where \(i_{\text{AB}}\) and \(n_{{{\text{S}}_{ 1} {\text{S}}_{ 2} {\text{S}}_{4} }}\) denote the gear ratio of the AT mechanism when A and B are engaged and the rotational speed of the input member S 1S 2S 4, respectively.
3.2 Mechanical Analysis
The mechanical analysis of the AT mechanism refers to calculating the torques at the gear meshing points, including the external torques of the EGT mechanism and the internal torques of the PGSs. As the key part of the AT mechanism, the torque applied on the members of the mechanical transmission remarkably affects the working performance and service life [48]. To perform the analysis, it is assumed that there is no friction in the AT mechanism and that it moves uniformly. The mechanical analysis of the AT mechanism can then be simplified into a problem of equilibrium equations of parallel forces, which can be resolved by the lever analogy method.
If a member transmits power, the PGS containing that member is said to be active. Taking the force condition under the reverse gear as an example: when brakes D and F are engaged, two PGSs are active, while the other two PGSs do not participate in the power transmission. Figure 16 shows the force condition. The input and output members are the sun gear S2 and the ring gear R3, respectively, and the fixed members are the ring gear R2 and the carrier PC3.
Torque analysis diagram when D and F engaged
In Figure 16, T I, T O and T b denote the input torque, output torque and the brake torque, respectively.
3.2.1 External Torque Analysis
Generally, the input torque T I is given and acts on the input member S 2. Moreover, the output torque T O acting on the output member R 3 can be calculated in the form below, based on the gear ratio obtained in Section 3.1.
$$T_{\text{O}} = - i_{\text{DF}} T_{\text{I}} = (1 + K_{2} )K_{3} T_{\text{I}} .$$
The brake torque T b acting on fixed members R 2 and PC 3 can be calculated in accordance with the balance of external torques from the horizontal direction:
$$T_{\text{b}} = - T_{\text{I}} - T_{\text{O}} = - (1 + K_{3} + K_{2} K_{3} )T_{\text{I}} .$$
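A short numerical check of Eqs. (17) and (18) for the reverse gear, using the sample parameters of Section 3.5 (K2 = 3, K3 = 1.4) and an assumed unit input torque; this is a sketch of the external torque balance, not code from the paper:

```python
# External torque balance for the reverse gear (brakes D and F engaged).
K2, K3 = 3.0, 1.4
T_I = 1.0                       # assumed input torque on S2, N·m
i_DF = -(1 + K2) * K3           # reverse-gear ratio, Eq. (6)
T_O = -i_DF * T_I               # Eq. (17): output torque on R3 -> 5.6 N·m
T_b = -T_I - T_O                # Eq. (18): brake torque on R2 and PC3 -> -6.6 N·m
print(T_O, T_b)
```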
3.2.2 Internal Torque Analysis
The internal torque of PGS refers to the applied torque by the planet gear to central gears meshing with it or the carrier supporting it. According to the equilibrium equation of torques applied by central gears and carrier on the planet gear in a PGS, the following correlation holds between torques:
$$\frac{{T_{\text{S}} }}{1} = \frac{{T_{\text{R}} }}{K} = \frac{{T_{\text{PC}} }}{ - (1 + K)}.$$
The member that carries a known external torque and only one internal torque is analyzed first; such a member participates in the motion of only one PGS. According to Newton's third law, its internal torque is equal in magnitude and opposite in direction to the external torque acting on it.
Figure 16 shows that the input member S 2 only participates in the motion of PGS 2. Then, the internal torque of S 2 can be obtained as the following:
$$T_{{{\text{S}}_{2} }} = - T_{\text{I}} .$$
Based on Eq. ( 19), the internal torque of R 2 can be obtained as:
$$T_{{{\text{R}}_{2} }} = K_{2} T_{{{\text{S}}_{2} }} = - K_{2} T_{\text{I}} .$$
Then, the internal torque of PC 2 can be expressed as the following:
$$T_{{{\text{PC}}_{2} }} = - T_{{{\text{S}}_{2} }} - T_{{{\text{R}}_{2} }} = (1 + K_{2} )T_{\text{I}} .$$
On the other hand, the output member R 3 only participates in the motion of PGS 3 so that the internal torque of R 3 can be obtained as follows:
$$T_{{{\text{R}}_{3} }} = - T_{\text{O}} = - (1 + K_{2} )K_{3} T_{\text{I}} .$$
Based on Eq. ( 19), the internal torque of S 3 can be expressed as:
$$T_{{{\text{S}}_{3} }} = \frac{1}{{K_{3} }}T_{{{\text{R}}_{3} }} = - (1 + K_{2} )T_{\text{I}}.$$
Then, the internal torque of PC 3 can be obtained in the form below:
$$T_{{{\text{PC}}_{3} }} = - T_{{{\text{S}}_{3} }} - T_{{{\text{R}}_{3} }} = (1 + K_{2} )(1 + K_{3} )T_{\text{I}} .$$
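The internal torques of the two active PGSs follow directly from Eq. (19) once one member of each PGS with a known torque is identified, as done above. A small Python sketch (same sample values, unit input torque; an illustration, not the authors' code) reproduces Eqs. (20)–(25):

```python
# Internal torques of the active PGSs under the reverse gear.
K2, K3 = 3.0, 1.4
T_I = 1.0

def pgs_torques(known_member, known_torque, K):
    """Distribute internal torques in one PGS using
    T_S : T_R : T_PC = 1 : K : -(1 + K), i.e., Eq. (19)."""
    share = {"S": 1.0, "R": K, "PC": -(1.0 + K)}
    scale = known_torque / share[known_member]
    return {m: round(scale * s, 4) for m, s in share.items()}

pgs2 = pgs_torques("S", -T_I, K2)                    # S2 carries -T_I, Eq. (20)
pgs3 = pgs_torques("R", -(1 + K2) * K3 * T_I, K3)    # R3 carries -T_O, Eq. (23)
print(pgs2)   # {'S': -1.0, 'R': -3.0, 'PC': 4.0}  -> Eqs. (20)-(22)
print(pgs3)   # {'S': -4.0, 'R': -5.6, 'PC': 9.6}  -> Eqs. (23)-(25)
```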
The analysis process of other gears is similar to that of the reverse gear. Therefore, they are not discussed one by one in this article. Under the condition of given input torque or certain load, the torque of each member can be calculated to check the working state of members and evaluate the performance and service life.
3.3 Power Flow Analysis
For a certain combination mode, the path of the power transmission inside the AT mechanism can be described clearly by the power flow analysis, which is beneficial for the observation of the circulating power and plays an important role in the accurate efficiency evaluation [ 25 ]. The rotational speed and internal torque of each member are derived based on the kinematics and mechanical analysis. Then, the power transmitted by each member is described by the following equation:
$$P_{\text{X}} = T_{\text{X}} \frac{{2{{\pi }}n_{\text{X}} }}{60} = \frac{{{\pi }}}{30}T_{\text{X}} n_{\text{X}},$$
where P_X, T_X and n_X denote the power, the internal torque and the rotational speed of member X, respectively.
Assuming the direction of the input torque and input rotational speed as the positive direction, the power direction is judged by the following rules:
If P_X > 0, power flows into member X, so member X is a driven member;
If P_X < 0, power flows out of member X, so member X is a driving member;
If P_X = 0, no power is transmitted by member X.
For members in a PGS, the power flows from the driving member to the driven member. Moreover, for the members connected by ICs or shifting elements, the power flows from the driven member to the driving member. It should be indicated that arrows are applied in the equivalent lever diagram to indicate the direction of the power. Then, the power flow diagrams under each gear ratio can be obtained to express the paths of the power transmission.
If the power transmitted through any component exceeds the input power, circulating power occurs. The circulating power is harmful and reduces the transmission efficiency, especially when the circulating power is too high. Therefore, the circulating power should be considered at the design stage of AT mechanisms.
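Combining the speeds of Section 3.1 with the internal torques of Section 3.2, Eq. (26) and the sign rules above can be evaluated member by member, and the circulating-power check reduces to comparing each |P_X| with the input power. The sketch below (not from the paper) does this for the reverse gear with the sample parameters; the unit input torque and speed are assumptions of this note, and the PC2S3 speed 1/(1 + K2) follows from the planetary kinematics of PGS 2.

```python
import math

def member_power(torque_Nm, speed_rpm):
    """Eq. (26): power transmitted by a member, in W."""
    return torque_Nm * 2.0 * math.pi * speed_rpm / 60.0

# Reverse gear, K2 = 3, K3 = 1.4, input 1 N·m at 1 r/min (assumed values).
members = {                      # member: (internal torque, rotational speed)
    "S2 (input)":       (-1.0,  1.0),
    "PC2 (PGS2 side)":  ( 4.0,  0.25),        # speed 1 / (1 + K2)
    "S3 (PGS3 side)":   (-4.0,  0.25),
    "R3 (output)":      (-5.6, -1.0 / 5.6),
    "R2, PC3 (fixed)":  (-3.0,  0.0),
}
P_in = member_power(1.0, 1.0)
for name, (T, n) in members.items():
    P = member_power(T, n)
    role = "driven" if P > 0 else ("driving" if P < 0 else "no power")
    note = ", circulating" if abs(P) > abs(P_in) + 1e-9 else ""
    print(f"{name:18s} P = {P:+.4f} W ({role}{note})")
```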
3.4 Transmission Efficiency Analysis
The transmission efficiency of the AT mechanism is an important parameter to evaluate the performance of the mechanism. For the convenience of calculation, the following assumptions are made in the analysis [ 42 ]:
Only the gear meshing loss is considered and other losses, such as bearing loss and splash loss are ignored.
Assume that there is no loss caused by the implicated motion. Moreover, implicated motion does not cause the gear mesh transmission.
Assume that the total transmission loss of the PGS is caused by the gear meshing loss in the relative motion. Moreover, the gear meshing loss caused by the relative motion is the same as the fix axle transmission.
Based on the abovementioned assumptions, the gear meshing loss is actually the torque loss caused by the friction at gear pairs. Therefore, the torque method is used to solve the transmission efficiency in the present study. The torque method is suitable for all structures of AT mechanisms and the derivation process is simple. The calculation equation is as follows:
$$\eta = \frac{{\hat{i}}}{i},$$
where η denotes the transmission efficiency of the AT mechanism, \(i = f(K_{1} ,K_{2} , \cdots ,K_{n} )\) denotes the ideal gear ratio, and n denotes the number of PGSs in the mechanism. \(\hat{i} = f(K_{1} \eta_{\text{c}}^{{x_{1} }} ,K_{2} \eta_{\text{c}}^{{x_{2} }} , \cdots ,K_{n} \eta_{\text{c}}^{{x_{n} }} )\) denotes the real torque ratio, where η_c indicates the meshing efficiency of a PGS with the carrier fixed, taken as 0.97. The value of x_m (m = 1, 2, …, n) depends on the power flow direction of PGS m and is calculated by the following equation:
$$x_{m} = {\text{sign}}\left( {\frac{\partial \ln i}{{\partial K_{m} }}} \right).$$
Namely, x_m = +1 when \(\frac{\partial \ln i}{{\partial K_{m} }} > 0\), while x_m = −1 when \(\frac{\partial \ln i}{{\partial K_{m} }} < 0\).
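Since Eq. (28) only needs the sign of ∂ln i/∂K_m, the torque method can be evaluated with a few lines of code once the ideal ratio is available as a function of the characteristic parameters. The sketch below (not from the paper; it uses a numerical derivative, whereas the text works the derivatives analytically) reproduces the reverse-gear efficiency 0.9482 worked out in Section 3.5.1.

```python
import math

ETA_C = 0.97  # meshing efficiency of a PGS with the carrier fixed, as in the text

def torque_method_efficiency(i_func, K, eps=1e-6):
    """eta = i_hat / i, Eqs. (27)-(28); i_func(K) returns the ideal gear ratio."""
    i = i_func(K)
    K_hat = []
    for m in range(len(K)):
        Kp = list(K); Kp[m] += eps
        d = (math.log(abs(i_func(Kp))) - math.log(abs(i))) / eps
        x_m = 1 if d > 0 else (-1 if d < 0 else 0)   # Eq. (28); 0 if K_m is absent
        K_hat.append(K[m] * ETA_C ** x_m)
    return i_func(K_hat) / i

# Reverse gear of mechanism (1): i_DF = -(1 + K2) K3, Eq. (6).
i_DF = lambda K: -(1 + K[1]) * K[2]
print(round(torque_method_efficiency(i_DF, [1.4, 3.0, 1.4, 2.2]), 4))  # 0.9482
```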
3.5 Numerical Example and Comparative Analysis
Considering the rationality of the radial size, the range of the characteristic parameter K of a single-planet PGS is generally 4/3–4. In order to improve the shift comfort of AT mechanisms, the interval of gear ratios should be as small as possible, within the range of 1.1–1.6. The transmission efficiency of the forward gears should be no less than 0.925, although 0.87 is acceptable for rarely used gears such as the first gear and the reverse gear [49].
3.5.1 Numerical Example
In order to obtain a series of characteristic parameters and the corresponding gear ratio sets, the characteristic parameters of the four PGSs are considered as variables, while the variation range of the characteristic parameters and the interval of gear ratios are considered as the cyclic interval and the constraint condition, respectively. One of the characteristic parameter sets is taken as an example: K1 = 1.4, K2 = 3, K3 = 1.4 and K4 = 2.2. Table 5 shows the gear ratios, intervals of gear ratios and gear ranking of the AT mechanism.
Gear ratios, interval of gear ratio and gear ranking of the novel AT mechanism (1)
(Columns: combination mode, gear ratio, interval of gear ratios and gear ranking; mode DF gives the reverse gear with a ratio of −5.60, and the nine forward gears are ranked 1st–9th.)
Table 5 shows that the interval of gear ratios between each gear approaches the empirical value, and the characteristic parameters meet the application requirements.
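The entries of Table 5 follow directly from the closed-form gear ratio expressions of Section 3.1. The sketch below (not from the paper) evaluates all ten ratios for the sample parameter set and prints the step between consecutive forward gears; ranking the forward gears by descending ratio is an assumption of this note rather than something read from the table.

```python
K1, K2, K3, K4 = 1.4, 3.0, 1.4, 2.2

ratios = {
    "DF (reverse)": -(1 + K2) * K3,                                    # Eq. (6)
    "CF": (1 + K2) * K1 * K3 / (K2 - K1),                              # Eq. (7)
    "AF": 1 + K4,                                                      # Eq. (8)
    "AC": (1 + K3 + K4) * (1 + K2) * K1
          / (((1 + K2) * (1 + K3) + K4) * K1 - K2 * K4),               # Eq. (9)
    "AE": 1 + K4 / (1 + K3),                                           # Eq. (10)
    "AD": (1 + K2) * (1 + K3 + K4) / (1 + K2 + K3 + K4 + K2 * K3),     # Eq. (11)
    "AB (direct)": 1.0,                                                # Eq. (16)
    "BD": (1 + K2) * K3 / (K2 + K3 * (1 + K2)),                        # Eq. (12)
    "BE": K3 / (1 + K3),                                               # Eq. (13)
    "BC": (1 + K2) * K1 * K3 / ((1 + K1) * K2 + (1 + K2) * K1 * K3),   # Eq. (14)
}

print(f"reverse (DF): {ratios['DF (reverse)']:.2f}")                   # -5.60
forward = sorted(((v, k) for k, v in ratios.items() if v > 0), reverse=True)
prev = None
for rank, (v, k) in enumerate(forward, start=1):
    step = f"{prev / v:.3f}" if prev else "  -  "
    print(f"gear {rank}: {k:12s} i = {v:6.3f}   step = {step}")
    prev = v
```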
Assume that the rotational speed of the input member is 1 r/min and the external torque applied to the input member is 1 N·m. Ignoring the power loss, the power passing through each member can be obtained from Eq. (26), as shown in Table 6.
Power passing through each member of the novel AT mechanism (1) (W)
The power flow diagrams under each gear ratio can be obtained according to the sign of powers shown in Table 6. Taking the reverse gear and 1st gear as examples, the power flow diagrams are shown in Figure 17.
Power flow diagrams under the reverse gear and the 1st gear
It is observed that circulating power occurs under the 1st gear. The value of the circulating power is equal to the power of member S2, namely 0.875 W, which is not excessive; therefore, the AT mechanism is still feasible.
The transmission efficiency of the AT mechanism can be calculated based on the torque method. Taking the reverse gear as example and according to Eqs. ( 6) and ( 28), the following equations are obtained:
$$\frac{{\partial \ln ( - (1 + K_{2} )K_{3} )}}{{\partial K_{2} }} = \frac{1}{{1 + K_{2} }} = \frac{1}{4} > 0,$$
$$\frac{{\partial \ln ( - (1 + K_{2} )K_{3} )}}{{\partial K_{3} }} = \frac{1}{{K_{3} }} = \frac{1}{1.4} > 0.$$
According to Eqs. ( 28)–( 30), it is calculated that x 2 = + 1, x 3 = + 1. Then, the real torque transformation can be obtained as the following:
$$\hat{i} = - (1 + K_{2} \times 0.97^{{x_{2} }} ) \times (K_{3} \times 0.97^{{x_{3} }} ) = - 5.31.$$
Then, the transmission efficiency is obtained as the following:
$$\eta = \frac{{\hat{i}}}{i} = \frac{ - 5.31}{ - 5.6} = 0.9482.$$
Similarly, the transmission efficiencies of other gears are calculated, and the calculation results are shown in Table 7.
Transmission efficiency under each gear
Table 7 shows that the transmission efficiencies approach the empirical values, although those of the reverse gear, 1st gear and 3rd gear are slightly lower.
Moreover, the kinematics and transmission efficiency of the other three novel AT mechanisms are analyzed. Tables 8, 9 and 10 show the gear ratios, interval of gear ratios and the transmission efficiency of each AT mechanism, respectively.
Gear ratios, interval of gear ratio and transmission efficiency of the novel AT mechanism (2)
3.5.2 Comparative Analysis
The gear ratios and intervals of gear ratios of the existing AT mechanisms are obtained from Refs. [ 38 – 40 ]. Moreover, the transmission efficiencies of the existing AT mechanisms are calculated based on the lever analogy method and the torque method, which are shown in Tables 11, 12 and 13.
The gear ratios, interval of gear ratio and transmission efficiency of ZF 9HP
The gear ratios, interval of gear ratio and transmission efficiency of Benz 9G-Tronic
The gear ratios, interval of gear ratio and transmission efficiency of GM 9T50E
Figure 18 presents the comparative analysis of ranges of gear ratios of the seven AT mechanisms. It is observed that the range of gear ratios of the 9T50E is the lowest, while that of the novel AT mechanism (3) is the highest. Moreover, it is found that there is no big difference in the range of gear ratios between the four novel ATs and the three existing ATs, which means that the ranges of gear ratios of the novel ATs meet the practical application requirements.
Comparative analysis of the ranges of gear ratios of the seven AT mechanisms
Figure 19 shows the comparative analysis of the intervals of gear ratios of the seven AT mechanisms. It is observed that the intervals of gear ratios of the novel ATs (3) and (4) fluctuate greatly, which means that their shift performance is poor. The novel ATs (2), (3) and (4) and the existing 9HP and 9G-Tronic have some intervals of gear ratios larger than 1.6 or smaller than 1.1. Only the intervals of gear ratios of the novel AT (1) and the 9T50E lie completely within the empirical range of 1.1–1.6, which is beneficial to smooth shifting and comfortable driving.
Comparative analysis of the intervals of gear ratios of the seven AT mechanisms
Figure 20 shows the comparative analysis of the transmission efficiencies of the seven AT mechanisms. The transmission efficiencies of all ATs satisfy the constraint conditions. Apart from the reverse gear and the 1st gear, the transmission efficiencies of the other gears of the novel ATs are relatively high and not very different from those of the existing ATs.
Comparative analysis of the transmission efficiencies of the seven AT mechanisms
The comparative analysis of the range of gear ratios, the interval of gear ratios and the transmission efficiencies shows that the four novel ATs are not very different from the existing ATs, which means that they are suitable for practical application. The transmission efficiency under the reverse gear and the 1st gear will be optimized in future work. Among the novel ATs, only mechanism (1) completely meets all of the constraint conditions; therefore, the novel AT mechanism (1) is further analyzed and its prototype is manufactured for the speed test.
4 Prototype Test
The teeth numbers of the gears, shown in Table 14, are determined according to the definition of the characteristic parameters of the PGSs and the values selected in Section 3.5.1. Standard spur gears are used; the modulus and the pressure angle of the gears are 2 and 20°, respectively.
Teeth number of gears in the PGSs
Figure 21 shows the 3D model and the prototype of the AT mechanism. The structure of the prototype is simplified, and the shifting elements are replaced with simple devices having the same functions: the clutch plate performs the function of clutch A when working with link A1 and of clutch B when working with link B1, while components C2, D2, E2 and F2, working with links C1, D1, E1 and F1, perform the functions of brakes C, D, E and F, respectively. It should be indicated that gear shifting in the prototype is provisionally actuated by hand.
3D model and prototype of the novel AT mechanism (1)
All of the shifting elements are disengaged at the beginning of the prototype test. The rotational speed of the input member is 18 r/min. According to the gear ranking in Table 5, the corresponding brakes and clutches are engaged and released to shift from the reverse gear to the ninth gear in turn. Figure 22 shows the working conditions of the prototype under each gear.
Working conditions of the prototype under each gear
It should be indicated that the rotational speed of the output member under each gear is measured by a photoelectric velocimeter. Figure 23 shows that the measurement results are basically consistent with the theoretical computing results. The obtained results prove the accuracy of the theoretical calculation and the feasibility of the AT mechanism.
Comparative analysis of computing results and test results for the rotational speed of the output member
5 Conclusions
In the present study, 9-speed ATs are synthesized based on the topological graph database of EGTs, and four novel AT mechanisms that achieve nine forward gears and one reverse gear are proposed. Their kinematics are analyzed theoretically based on the lever analogy method, and the gear ratios are obtained. The mechanical analysis is carried out with the lever analogy method as well, which provides a theoretical basis for fault diagnosis and the development of subsequent products. Furthermore, the power flow analysis is conducted based on the results of the kinematic and mechanical analyses, and the transmission efficiencies are calculated with the torque method.
The range of gear ratios, the interval of gear ratios and the transmission efficiency of the four novel mechanisms are compared with those of three existing ones. The results show that the four novel mechanisms have the potential to be used in automatic transmissions. The novel AT mechanism with the best performance is selected, its 3D model is established for a sample characteristic parameter set, and a prototype is manufactured. A speed test experiment is conducted, and the measured results are basically consistent with the theoretical computing results, which proves the accuracy of the theoretical calculation and the feasibility of the novel AT mechanisms. The dynamic analysis and optimal design of the prototype will be considered in future work.
References
[1] W Gründler, H Mozer, F Sauter. Efficient torque converter automatic transmission for commercial vehicles. ATZ Worldwide, 2017, 119(6): 36–39.
[2] A Kahraman, H Ligata, K Kienzle, et al. A kinematics and power flow analysis methodology for automatic transmission planetary gear trains. Journal of Mechanical Design, 2004, 126(6): 1071–1081.
[3] R C Johnson, K Towfigh. Creative design of epicyclic gear trains using number synthesis. Journal of Engineering for Industry, 1967, 89(2): 309–314.
[4] F Buchsbaum, F Freudenstein. Synthesis of kinematic structure of geared kinematic chains and other mechanisms. Journal of Mechanisms, 1970, 5(3): 357–392.
[5] F Freudenstein. An application of Boolean algebra to the motion of epicyclic drives. Journal of Engineering for Industry, 1971, 93(1): 176–182.
[6] R Ravisankar, T S Mruthyunjaya. Computerized synthesis of the structure of geared kinematic chains. Mechanism and Machine Theory, 1985, 20(5): 367–387.
[7] L W Tsai. An application of the linkage characteristic polynomial to the topological synthesis of epicyclic gear trains. Journal of Mechanisms, Transmissions, and Automation in Design, 1987, 109(3): 329–336.
[8] L W Tsai, C C Lin. The creation of nonfractionated, two-degree-of-freedom epicyclic gear trains. Journal of Mechanisms, Transmissions, and Automation in Design, 1989, 111(4): 524–529.
[9] G Chatterjee, L W Tsai. Enumeration of epicyclic-type automatic transmission gear trains. SAE Technical Paper 941012, 1994: 153–164.
[10] L W Tsai. Mechanism design: enumeration of kinematic structures according to function. London: CRC Press, 2000.
[11] J U Kim, B M Kwak. Application of edge permutation group to structural synthesis of epicyclic gear trains. Mechanism and Machine Theory, 1990, 25(5): 563–574.
[12] C H Hsu, K T Lam. A new graph representation for the automatic kinematic analysis of planetary spur-gear trains. Journal of Mechanical Design, 1992, 114(1): 196–200.
[13] C H Hsu, J J Hsu. An efficient methodology for the structural synthesis of geared kinematic chains. Mechanism and Machine Theory, 1997, 32(8): 957–973.
[14] J M del Castillo. Enumeration of 1-DOF planetary gear train graphs based on functional constraints. Journal of Mechanical Design, 2002, 124(4): 723–732.
[15] H T Ngo, H S Yan. Configuration synthesis of series–parallel hybrid transmissions. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, 2015, 230(5): 664–678.
[16] I Rajasri, A V S S K S Guptha, Y V D Rao. Generation of EGTs: Hamming number approach. Procedia Engineering, 2016, 144: 537–542.
[17] V V Kamesh, K MallikarjunaRao, A B Srinivasa Rao. Topological synthesis of epicyclic gear trains using vertex incidence polynomial. Journal of Mechanical Design, 2017, 139(6): 062304.
[18] V R Shanmukhasundaram, Y V D Rao, S P Regalla. Enumeration of displacement graphs of epicyclic gear train from a given rotation graph using concept of building of kinematic units. Mechanism and Machine Theory, 2019, 134: 393–424.
[19] T Barhoumi, D Kum. Automatic enumeration of feasible kinematic diagrams for split hybrid configurations with a single planetary gear. Journal of Mechanical Design, 2017, 139(8): 083301.
[20] X Y Xu, H Q Sun, Y F Liu, et al. Automatic enumeration of feasible configuration for the dedicated hybrid transmission with multi-degree-of-freedom and multiplanetary gear set. Journal of Mechanical Design, 2019, 141(9): 093301.
[21] T T Ho, S J Hwang. Configuration synthesis of two-mode hybrid transmission systems with nine-link mechanisms. Mechanism and Machine Theory, 2019, 142: 103615.
[22] Z Lévai. Structure and analysis of planetary gear trains. Journal of Mechanisms, 1968, 3(3): 131–148.
[23] H L Benford, M B Leising. The lever analogy: a new tool in transmission analysis. SAE Technical Paper 810102, 1981: 1–10.
[24] M F Gao, J B Hu. Kinematic analysis of planetary gear trains based on topology. Journal of Mechanical Design, 2018, 140(1): 012302.
[25] F C Yang, J X Feng, H C Zhang. Power flow and efficiency analysis of multi-flow planetary gear trains. Mechanism and Machine Theory, 2015, 92: 86–99.
[26] F Yang, J Feng, F Du. Design and power flow analysis for multi-speed automatic transmission with hybrid gear trains. International Journal of Automotive Technology, 2016, 17(4): 629–637.
[27] E L Esmail, E Pennestrì, A Hussein Juber. Power losses in two-degrees-of-freedom planetary gear trains: a critical analysis of Radzimovsky's formulas. Mechanism and Machine Theory, 2018, 128: 191–204.
[28] Y H Cui, J Gao, X M Ji, et al. The multi-attribute topological graph method and its application on power flow analysis in closed planetary gear trains. Advances in Mechanical Engineering, 2018, 10(8): 1–9.
[29] K Arnaudov, D Karaivanov. The torque method used for studying coupled two-carrier planetary gear trains. Transactions of FAMENA, 2013, 37(1): 49–61.
[30] R A Lloyd. Power flow and ratio sensitivity in differential systems. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, 1991, 205(1): 59–67.
[31] J M del Castillo. The analytical expression of the efficiency of planetary gear trains. Mechanism and Machine Theory, 2002, 37(2): 197–214.
[32] H F Ding, J Zhao, Z Huang. Unified topological representation models of planar kinematic chains. Journal of Mechanical Design, 2009, 131(11): 114503.
[33] H F Ding, S Liu, P Huang, et al. Automatic structural synthesis of epicyclic gear trains with one main shaft. ASME 2015 International Design Engineering Technical Conference & Computers and Information in Engineering Conference, Boston, Massachusetts, USA, 2–5 August 2015.
[34] W J Yang, H F Ding, B Zi, et al. New graph representation for planetary gear trains. Journal of Mechanical Design, 2018, 140(1): 012303.
[35] W J Yang, H F Ding. The perimeter loop-based method for the automatic isomorphism detection in planetary gear trains. Journal of Mechanical Design, 2018, 140(12): 123302.
[36] W J Yang, H F Ding. The complete set of one-degree-of-freedom planetary gear trains with up to nine links. Journal of Mechanical Design, 2019, 141(4): 043301.
[37] J X Liu, L D Yu, Q L Zeng, et al. Synthesis of multi-row and multi-speed planetary gear mechanism for automatic transmission. Mechanism and Machine Theory, 2018, 128: 616–627.
[38] M F You, G Q Hou, M Y Wang. The analysis of the transmission scheme of the 9 speed automatic transmission based on the expend lever method. Advanced Materials Research, 2014, 945–949: 811–817.
[39] C Dörr, H Kalczynski, A Rink, et al. Nine-speed automatic transmission 9G-Tronic by Mercedes-Benz. ATZ Worldwide, 2014, 116(1): 20–25.
[40] T Martin, J Hendrickson. General Motors Hydra-Matic 9T50 automatic transaxle. SAE Technical Paper 2018-01-0391, 2018: 1–9.
[41] L W Tsai, E R Maki, T Liu, et al. The categorization of planetary gear trains for automatic transmissions according to kinematic topology. SAE Technical Paper Series 885062, 1988: 1513–1521.
[42] Z Y Huang. Theory and design for modern AT. Shanghai: Tongji University Press, 2006. (in Chinese)
[43] P Johansen, D B Roemer, T O Andersen, et al. Morphological topology generation of a digital fluid power displacement unit using Chebychev-Grübler-Kutzbach constraint. IEEE International Conference on Fluid Power and Mechatronics, Harbin, China, 5–7 August 2015: 227–230.
[44] H F Ding, C W Cai, W J Han, et al. Improved nine-gear transmission: CN, 201510424619.X. 2015-07-17.
[45] H F Ding, P Huang, C K Zhang, et al. Nine-gear speed changer: CN, 201510424662.6. 2015-07-17.
[46] H F Ding, C W Cai, H B Li. Nine-gear speed changer: CN, 201910138581.8. 2019-02-25.
[47] H F Ding, C W Cai, H B Li. A nine-gear speed changer: CN, 201910137992.5. 2019-02-25.
[48] M Li, L Y Xie, H Y Li, et al. Life distribution transformation model of planetary gear system. Chinese Journal of Mechanical Engineering, 2018, 31(1): 24–31.
[49] Y Q Wan, T L Liu. Planetary transmission scheme selection theory and optimization. Beijing: National Defense Industry Press, 1997. (in Chinese)
Huafeng Ding
Changwang Cai
Ziming Chen
Tao Ke
Bowen Mao
https://doi.org/10.1186/s10033-020-00466-y
An Overview of Bearing Candidates for the Next Generation of Reusable Liquid Rocket Turbopumps
Parallel Distributed Compensation /H∞ Control of Lane-keeping System Based on the Takagi-Sugeno Fuzzy Model
Anomalies in Special Permutation Flow Shop Scheduling Problems
Running-In Behavior of Wet Multi-plate Clutches: Introduction of a New Test Method for Investigation and Characterization
Analysis of the Microstructure and Mechanical Properties during Inertia Friction Welding of the Near-α TA19 Titanium Alloy
Cavitation of a Submerged Jet at the Spherical Valve Plate/Cylinder Block Interface for Axial Piston Pump | CommonCrawl |
David Stutz
PhD student at Max Planck Institute for Informatics; working on adversarial robustness; blog davidstutz.de.
Thwarting Adversarial Examples: An L_0-Robust Sparse Fourier Transform
Bafna, Mitali and Murtagh, Jack and Vyas, Nikhil
Neural Information Processing Systems Conference - 2018 via Local Bibsonomy
[link] Summary by David Stutz 1 year ago
Bafna et al. show that iterative hard thresholding results in $L_0$-robust Fourier transforms. In particular, as shown in Algorithm 1, iterative hard thresholding assumes a signal $y = x + e$, where $x$ is assumed to be sparse in the Fourier basis and the noise $e$ is assumed to be sparse in the signal domain. This translates to noise $e$ that is bounded in its $L_0$ norm, corresponding to common adversarial attacks such as adversarial patches in computer vision. Using their algorithm, the authors can provably reconstruct the signal, specifically the top-$k$ coordinates of a $k$-sparse signal, which can subsequently be fed to a neural network classifier. In experiments, the classifier is always trained on sparse signals, and at test time the sparse signal is reconstructed prior to the forward pass. This way, on MNIST and Fashion-MNIST, the algorithm is able to recover large parts of the original accuracy.
https://i.imgur.com/yClXLoo.jpg
Algorithm 1 (see paper for details): The iterative hard thresholding algorithm resulting in provable robustness against $L_0$ attack on images and other signals.
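For intuition, here is a minimal NumPy sketch of the iterative hard thresholding idea; it is not the authors' exact Algorithm 1, and the sparsity levels, iteration count and the simple top-$k$/top-$t$ thresholding schedule are my own assumptions:

```python
import numpy as np

def iht_fourier(y, k, t, iters=20):
    """Sketch: alternate between keeping the top-k Fourier coefficients of
    y - e_hat (the sparse signal estimate) and keeping the top-t entries of
    the residual y - x_hat (the L_0-bounded noise estimate)."""
    x_hat = np.zeros_like(y, dtype=complex)
    e_hat = np.zeros_like(y, dtype=complex)
    for _ in range(iters):
        X = np.fft.fft(y - e_hat)
        X_thr = np.zeros_like(X)
        keep = np.argsort(np.abs(X))[-k:]        # top-k Fourier coefficients
        X_thr[keep] = X[keep]
        x_hat = np.fft.ifft(X_thr)
        r = y - x_hat
        e_hat = np.zeros_like(r)
        keep_e = np.argsort(np.abs(r))[-t:]      # top-t residual entries as noise
        e_hat[keep_e] = r[keep_e]
    return x_hat.real

# Toy usage: a 4-sparse (conjugate-symmetric) Fourier signal, corrupted at 5 coordinates.
n = 128
spectrum = np.zeros(n, dtype=complex)
spectrum[3] = spectrum[-3] = 10.0
spectrum[17] = spectrum[-17] = 5.0
x = np.fft.ifft(spectrum).real
e = np.zeros(n)
e[np.random.choice(n, 5, replace=False)] = 1.0
x_rec = iht_fourier(x + e, k=4, t=5)
```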
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
Low Frequency Adversarial Perturbation
Chuan Guo and Jared S. Frank and Kilian Q. Weinberger
Abstract: Adversarial images aim to change a target model's decision by minimally perturbing a target image. In the black-box setting, the absence of gradient information often renders this search problem costly in terms of query complexity. In this paper we propose to restrict the search for adversarial images to a low frequency domain. This approach is readily compatible with many existing black-box attack frameworks and consistently reduces their query cost by 2 to 4 times. Further, we can circumvent image transformation defenses even when both the model and the defense strategy are unknown. Finally, we demonstrate the efficacy of this technique by fooling the Google Cloud Vision platform with an unprecedented low number of model queries.
Guo et al. propose to augment black-box adversarial attacks with low-frequency noise to obtain low-frequency adversarial examples as shown in Figure 1. To this end, the boundary attack as well as the NES attack are modified to sample from a low-frequency Gaussian distribution instead from Gaussian noise directly. This is achieved through an inverse discrete cosine transform as detailed in the paper.
https://i.imgur.com/fejvuw7.jpg
Figure 1: Example of a low-frequency adversarial example.
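A rough sketch of how such low-frequency noise can be sampled via an inverse DCT; the frequency ratio, noise scale and per-channel treatment are assumptions on my side, the paper gives the exact construction:

```python
import numpy as np
from scipy.fft import idct

def sample_low_frequency_noise(h, w, ratio=0.25, sigma=1.0):
    """Sample Gaussian noise only in the lowest DCT frequencies and map it
    back to image space with a 2D inverse discrete cosine transform."""
    coeffs = np.zeros((h, w))
    rh, rw = int(h * ratio), int(w * ratio)
    coeffs[:rh, :rw] = sigma * np.random.randn(rh, rw)
    # 2D inverse DCT = 1D inverse DCT applied along both axes.
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

# Per-channel low-frequency noise for a 224x224 RGB image.
noise = np.stack([sample_low_frequency_noise(224, 224) for _ in range(3)], axis=-1)
```

Such a sampler can then replace the plain Gaussian proposal distribution in black-box attacks such as the boundary attack or NES.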
Semantic Adversarial Examples
Hossein Hosseini and Radha Poovendran
Conference on Computer Vision and Pattern Recognition - 2018 via Local CrossRef
Hosseini and Poovendran propose semantic adversarial examples by randomly manipulating hue and saturation of images. In particular, in an iterative algorithm, hue and saturation are randomly perturbed and projected back to their valid range. If this results in mis-classification, the perturbed image is returned as the adversarial example and the algorithm is finished; if not, another iteration is run. The result is shown in Figure 1. As can be seen, the structure of the images is retained while hue and saturation change, resulting in mis-classified images.
https://i.imgur.com/kFcmlE3.jpg
Figure 1: Examples of the computed semantic adversarial examples.
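A simplified sketch of such a random hue/saturation search; the shift ranges, the iteration budget and the `predict` interface are assumptions, not the authors' exact parameters:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def semantic_attack(image, predict, label, max_iters=100):
    """Randomly shift hue and scale saturation of an RGB image in [0,1]
    until predict(image) no longer returns the original label."""
    hsv = rgb_to_hsv(image)
    for _ in range(max_iters):
        candidate = hsv.copy()
        candidate[..., 0] = (hsv[..., 0] + np.random.uniform()) % 1.0                  # hue shift
        candidate[..., 1] = np.clip(hsv[..., 1] * np.random.uniform(0.5, 1.5), 0, 1)   # saturation
        adv = hsv_to_rgb(candidate)
        if predict(adv) != label:
            return adv   # structure preserved, colors shifted, label flipped
    return None          # no semantic adversarial example found within the budget
```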
LaVAN: Localized and Visible Adversarial Noise
Karmon, Danny and Zoran, Daniel and Goldberg, Yoav
International Conference on Machine Learning - 2018 via Local Bibsonomy
Karmon et al. propose a gradient-descent-based method for obtaining adversarial-patch-like localized adversarial examples. In particular, after selecting a region of the image to be modified, several iterations of gradient descent are run in order to maximize the probability of the target class and simultaneously minimize the probability of the true class. After each iteration, the perturbation is masked to the patch and projected onto the valid range of [0,1] for images. On ImageNet, the authors show that these adversarial examples are effective against a normal, undefended network.
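A rough PyTorch sketch of such a localized attack; the sign-based update, the step size and the binary `mask` interface are my assumptions rather than the authors' exact optimization:

```python
import torch
import torch.nn.functional as F

def localized_attack(model, x, y_true, y_target, mask, steps=200, lr=0.05):
    """Perturb only the pixels where mask == 1 (shape broadcastable to x),
    maximizing the target class while minimizing the true class."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        x_adv = torch.clamp(x + delta * mask, 0.0, 1.0)   # perturbation restricted to the patch
        logits = model(x_adv)
        loss = F.cross_entropy(logits, y_target) - F.cross_entropy(logits, y_true)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # descend: raise target prob., lower true prob.
            delta *= mask                     # project back onto the patch region
            delta.grad.zero_()
    return torch.clamp(x + delta * mask, 0.0, 1.0).detach()
```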
Adversarial camera stickers: A physical camera-based attack on deep learning systems
Li, Juncheng and Schmidt, Frank R. and Kolter, J. Zico
arXiv e-Print archive - 2019 via Local Bibsonomy
Li et al. propose camera stickers that, when computed adversarially and physically attached to the camera, lead to mis-classification. As illustrated in Figure 1, these stickers are realized using circular patches of uniform color. These individual circular stickers are computed in a gradient-descent fashion by optimizing their location, color and radius. The influence of the camera on these stickers is modeled realistically in order to guarantee success.
https://i.imgur.com/xHrqCNy.jpg
Figure 1: Illustration of adversarial stickers on the camera (left) and the effect on the taken photo (right).
Local Gradients Smoothing: Defense Against Localized Adversarial Attacks
Muzammal Naseer and Salman Khan and Fatih Porikli
2019 IEEE Winter Conference on Applications of Computer Vision (WACV) - 2019 via Local CrossRef
Naseer et al. propose to smooth local gradients as a defense against adversarial patches. In particular, as illustrated in Figure 1, the local image gradient is computed through convolution. Then, in local, overlapping windows, the gradients are set to zero if the total sum of absolute gradient values stays below a specific threshold. The remaining gradient map is then supposed to indicate regions where it is likely that adversarial patches can be found. Using this gradient map, the image is smoothed, i.e., blurred, in these regions afterwards. In experiments, the authors show that this reduces the impact of adversarial patches.
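As a loose sketch of the idea for a grayscale image (non-overlapping windows, made-up threshold and blur strength; the paper uses overlapping windows and a normalized gradient map):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_gradients_smoothing(image, window=16, threshold=0.1, sigma=3.0):
    """Mark windows with a large total gradient magnitude as suspicious
    (likely patch regions) and blur the image only there."""
    gy, gx = np.gradient(image)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    mask = np.zeros_like(image)
    h, w = image.shape
    for i in range(0, h, window):
        for j in range(0, w, window):
            if grad[i:i + window, j:j + window].sum() > threshold * window * window:
                mask[i:i + window, j:j + window] = 1.0
    blurred = gaussian_filter(image, sigma=sigma)
    return image * (1.0 - mask) + blurred * mask   # smooth only suspicious regions

smoothed = local_gradients_smoothing(np.random.rand(224, 224))
```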
Exploiting the Inherent Limitation of L0 Adversarial Examples
Zuo, Fei and Yang, Bokai and Li, Xiaopeng and Zeng, Qiang
USENIX Association RAID - 2019 via Local Bibsonomy
Zuo et al. propose a two-stage system for detecting $L_0$ adversarial examples. Their system is based on the following two observations: (a) $L_0$ adversarial examples often result in very drastic changes of individual pixels and (b) these pixels are usually isolated and scattered over the image. Thus, they propose to train a siamese network that takes both the input and a pre-processed version of the input in order to detect adversarial examples. The pre-processing is assumed to influence benign images only slightly; in their case, an inpainting mechanism is used. Specifically, pixels where one color channel exhibits extremely small or large values are inpainted using any state-of-the-art approach, as shown in Figure 1. The siamese network then learns to detect adversarial examples based on the differences between input images and inpainted images.
https://i.imgur.com/gsgWuin.jpg
Figure 1: Examples of inpainted $L_0$ adversarial examples.
Towards Robust, Locally Linear Deep Networks
Lee, Guang-He and Alvarez-Melis, David and Jaakkola, Tommi S.
International Conference on Learning Representations - 2019 via Local Bibsonomy
Lee et al. propose a regularizer to increase the size of linear regions of rectified deep networks around training and test points. Specifically, they assume piece-wise linear networks, in their most basic form consisting of linear layers (fully connected layers, convolutional layers) and ReLU activation functions. In these networks, linear regions are determined by activation patterns, i.e., patterns indicating which neurons have values greater than zero. Then, the goal is to compute, and later to increase, the size $\epsilon$ such that the $L_p$-ball of radius $\epsilon$ around a sample $x$, denoted $B_{\epsilon,p}(x)$, is contained within one linear region (corresponding to one activation pattern). Formally, letting $S(x)$ denote the set of feasible inputs $x$ for a given activation pattern, the task is to determine
$\hat{\epsilon}_{x,p} = \max_{\epsilon \geq 0, B_{\epsilon,p}(x) \subset S(x)} \epsilon$.
For $p = 1, 2, \infty$, the authors show how $\hat{\epsilon}_{x,p}$ can be computed efficiently. For $p = 2$, for example, it results in
$\hat{\epsilon}_{x,p} = \min_{(i,j) \in I} \frac{|z_j^i|}{\|\nabla_x z_j^i\|_2}$.
Here, $z_j^i$ corresponds to the $j$th neuron in the $i$th layer of a multi-layer perceptron with ReLU activations; and $I$ contains all the indices of hidden neurons. This analytical form can then used to add a regularizer to encourage the network to learn larger linear regions:
$\min_\theta \sum_{(x,y) \in D} \left[\mathcal{L}(f_\theta(x), y) - \lambda \min_{(i,j) \in I} \frac{|z_j^i|}{\|\nabla_x z_j^i\|_2}\right]$
where $f_\theta$ is the neural network with parameters $\theta$. In the remainder of the paper, the authors propose a relaxed version of this training procedure that resembles a max-margin formulation and discuss efficient computation of the involved derivatives $\nabla_x z_j^i$ without too many additional forward/backward passes.
https://i.imgur.com/jSc9zbw.jpg
Figure 1: Visualization of locally linear regions for three different models on toy 2D data.
On toy data and datasets such as MNIST and CalTech-256, it is shown that the training procedure is effective in the sense that larger linear regions around training and test points are learned. For example, on a 2D toy dataset, Figure 1 visualizes the linear regions for the optimal regularizer as well as the proposed relaxed version.
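For the $p = 2$ case, $\hat{\epsilon}_{x,2}$ can be computed naively by backpropagating through every pre-activation; the sketch below does exactly that for a small ReLU MLP and is purely illustrative (the paper derives far more efficient estimators, and all layers' pre-activations are included here for simplicity):

```python
import torch
import torch.nn as nn

def l2_region_radius(linear_layers, x):
    """min_{i,j} |z_j^i| / ||grad_x z_j^i||_2 for an MLP that applies a ReLU
    after every linear layer; x has shape (1, d)."""
    x = x.clone().requires_grad_(True)
    radius, h = None, x
    for layer in linear_layers:
        z = layer(h)                                   # pre-activations of this layer
        for j in range(z.shape[1]):
            g, = torch.autograd.grad(z[0, j], x, retain_graph=True)
            r = z[0, j].abs() / (g.norm() + 1e-12)
            radius = r if radius is None else torch.min(radius, r)
        h = torch.relu(z)
    return radius

layers = [nn.Linear(2, 16), nn.Linear(16, 16), nn.Linear(16, 3)]
print(l2_region_radius(layers, torch.randn(1, 2)))
```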
DPATCH: An Adversarial Patch Attack on Object Detectors
Liu, Xin and Yang, Huanrui and Liu, Ziwei and Song, Linghao and Chen, Yiran and Li, Hai
Liu et al. propose DPatch, adversarial patches against state-of-the-art object detectors. Similar to existing adversarial patches, where a patch with fixed pixels is placed in an image in order to evade (or change) classification, the authors compute their DPatch using an optimization procedure. During optimization, the patch to be optimized is placed at random locations on all images of, e.g., PASCAL VOC 2007, and the pixels are updated in order to maximize the loss of the classifier (either in a targeted or in an untargeted setting). In experiments, this approach is able to fool several different detectors using small $40\times40$ pixel patches, as illustrated in Figure 1.
https://i.imgur.com/ma6hGNO.jpg
Figure 1: Illustration of the use case of DPatch.
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Salman, Hadi and Li, Jerry and Razenshteyn, Ilya P. and Zhang, Pengchuan and Zhang, Huan and Bubeck, Sébastien and Yang, Greg
Salman et al. combine randomized smoothing with adversarial training based on an attack specifically designed against smoothed classifiers. Specifically, they consider the formulation of randomized smoothing by Cohen et al. [1]; here, Gaussian noise around the input (adversarial or clean) is sampled and the classifier takes a simple majority vote. In [1], Cohen et al. show that this results in good bounds on robustness. In this paper, Salman et al. propose an adaptive attack against randomized smoothing. Essentially, they use a simple PGD attack against the smoothed classifier, i.e., they maximize the cross entropy loss of the smoothed classifier. To make the objective tractable, Monte Carlo samples are used in each iteration of the PGD optimization. Based on this attack, they do adversarial training, with adversarial examples computed against the smoothed (and adversarially trained) classifier. In experiments, this approach outperforms the certified robustness by Cohen et al. on several datasets.
[1] Jeremy M. Cohen, Elan Rosenfeld and J. Zico Kolter. Certified Adversarial Robustness via Randomized Smoothing. ArXiv, 1902.02918, 2019.
Interpolated Adversarial Training: Achieving Robust Neural Networks Without Sacrificing Too Much Accuracy
Lamb, Alex and Verma, Vikas and Kannala, Juho and Bengio, Yoshua
ACM AISec@CCS - 2019 via Local Bibsonomy
Lamb et al. propose interpolated adversarial training to increase robustness against adversarial examples. Particularly, a $50\%/50\%$ variant of adversarial training is used, i.e., in each iteration the batch consists of $50\%$ clean and $50\%$ adversarial examples. The loss is then computed on these both parts, encouraging the network to predict the correct labels on the adversarial examples, and averaged afterwards. In interpolated adversarial training, the loss is adapted according to the Mixup strategy. Here, instead of computing the loss on the selected input-output pair, a second input-output pair is selected at random from the dataset. Then, a random linear interpolation between both inputs is considered; this means that the loss is computed as
$\lambda \mathcal{L}(f(x'), y_i) + (1 - \lambda)\mathcal{L}(f(x'), y_j)$
where $f$ is the neural network and $x'$ the interpolated input $x' = \lambda x_i + (1 - \lambda)x_j$ corresponding to the two input-output pairs $(x_i, y_i)$ and $(x_j, y_j)$. In a variant called Manifold Mixup, the interpolation is performed within a hidden layer instead of the input space. This strategy is applied on both the clean and the adversarial examples and leads, according to the experiments, to the same level of robustness while improving the test accuracy.
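A compact PyTorch sketch of the resulting training step; the Beta-distributed interpolation coefficient is the standard Mixup choice, and the interface (adversarial examples computed beforehand, shared labels) is a simplifying assumption on my side:

```python
import torch
import torch.nn.functional as F

def mixup_loss(model, x, y, alpha=1.0):
    """Mixup: interpolate the batch with a permuted copy of itself and
    combine the two cross entropy losses with the same coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    logits = model(x_mix)
    return lam * F.cross_entropy(logits, y) + (1.0 - lam) * F.cross_entropy(logits, y[perm])

def interpolated_adversarial_step(model, optimizer, x_clean, x_adv, y):
    """One step of the 50%/50% scheme: Mixup loss on the clean half plus
    Mixup loss on the adversarial half, averaged."""
    optimizer.zero_grad()
    loss = 0.5 * (mixup_loss(model, x_clean, y) + mixup_loss(model, x_adv, y))
    loss.backward()
    optimizer.step()
    return loss.item()
```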
For Valid Generalization the Size of the Weights is More Important than the Size of the Network
Bartlett, Peter L.
Bartlett shows that improved generalization bounds for multi-layer perceptrons with limited sizes of the weights can be found using the so-called fat-shattering dimension. Similar to the classical VC dimension, the fat-shattering dimension quantifies the expressiveness of hypothesis classes in machine learning. Specifically, considering a sequence of points $x_1, \ldots, x_d$, a hypothesis class $H$ is said to shatter this sequence if, for any label assignment $b_1, \ldots, b_d \in \{-1,1\}$, a function $h \in H$ exists that correctly classifies the sequence, i.e. $\text{sign}(h(x_i)) = b_i$. The VC dimension is the largest $d$ for which this is possible. The VC dimension has been studied for a wide range of machine learning models (i.e., hypothesis classes). Thus, it is well known that multi-layer perceptrons with at least two layers (and unbounded size) have infinite VC dimension, which seems natural as two-layer perceptrons are universal approximators. As a result, most bounds on the generalization performance of multi-layer networks (and, thus, also of more general deep networks) do not apply as the VC dimension is infinite.
The fat-shattering dimension, in contrast, does not strictly require the sequence $x_1,\ldots, x_d$ to be correctly classified into the labels $b_1,\ldots,b_d$. Instead, the sequence is said to be $\gamma$-shattered if real values $r_1,\ldots,r_d$ exist such that for every labeling $b_1,\ldots,b_d$, some $h \in H$ satisfies $(h(x_i) - r_i)b_i \geq \gamma$. Note that the values $r_i$ are fixed across labelings, i.e., are chosen "before" knowing the labels. The fat-shattering dimension is the largest $d$ for which this is possible. As a result, the fat-shattering dimension relaxes the VC dimension in that the models in $H$ are allowed some "slack" (for lack of a better word). Note that $H$ contains real-valued functions.
Based on this definition, Bartlett shows that the fat-shattering dimension of multi-layer perceptrons in which all layers have weights $w$ constrained as $\|w\|_1 \leq A$ scales with $A^{l(l + 1)}$, where $l$ denotes the number of layers. More importantly, however, the fat-shattering dimension is finite. Thus, generalization bounds based on the fat-shattering dimension apply and are discussed by Bartlett; I refer to the paper for details on the bound.
Exploring the Hyperparameter Landscape of Adversarial Robustness
Duesterwald, Evelyn and Murthi, Anupama and Venkataraman, Ganesh and Sinn, Mathieu and Vijaykeerthy, Deepak
- 2019 via Local Bibsonomy
Keywords: adversarial, robustness
Duesterwald et al. study the influence of hyperparameters on adversarial training and the resulting robustness and accuracy. As shown in Figure 1, the two chosen parameters, the ratio of adversarial examples per batch and the allowed perturbation $\epsilon$, allow controlling the trade-off between adversarial robustness and accuracy. Even for larger $\epsilon$, at least on MNIST and SVHN, using only few adversarial examples per batch increases robustness significantly while only incurring a small loss in accuracy.
https://i.imgur.com/nMZNpFB.jpg
Figure 1: Robustness (red) and accuracy (blue) depending on the two hyperparameters $\epsilon$ and ratio of adversarial examples per batch. Robustness is measured in adversarial accuracy.
CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks
Marchisio, Alberto and Nanfa, Giorgio and Khalid, Faiq and Hanif, Muhammad Abdullah and Martina, Maurizio and Shafique, Muhammad
Marchisio et al. propose a black-box adversarial attack on Capsule Networks. The main idea of the attack is to select pixels based on their local standard deviation. Given a window of allowed pixels to be manipulated, these are sorted based on standard deviation and possible impact on the predicted probability (i.e., gap between target class probability and maximum other class probability). A subset of these pixels is then manipulated by a fixed noise value $\delta$. In experiments, the attack is shown to be effective for CapsuleNetworks and other networks.
The Space of Transferable Adversarial Examples
Tramèr, Florian and Papernot, Nicolas and Goodfellow, Ian J. and Boneh, Dan and McDaniel, Patrick D.
Tramer et al. study adversarial subspaces, i.e., subspaces of the input space that are spanned by multiple, orthogonal adversarial examples. This is achieved by iteratively searching for orthogonal adversarial examples relative to a specific test example. This can, for example, be done using classical second- or first-order optimization methods for finding adversarial examples with the additional constraint of orthogonality. However, the authors also consider different attack strategies that work on discrete input features. In practice, on MNIST, this allows finding, on average, 44 orthogonal directions per test example. This finding indicates that adversarial examples indeed span large adversarial subspaces. Additionally, adversarial examples from these subspaces seem to transfer reasonably well to other models. The remainder of the paper links this ease of transferability to a similarity in the decision boundaries learnt by different models from the same hypothesis set.
Efficient Evaluation-Time Uncertainty Estimation by Improved Distillation
Englesson, Erik and Azizpour, Hossein
Englesson and Azizpour propose an adapted version of knowledge distillation to improve confidence calibration on out-of-distribution examples, including adversarial examples. In contrast to vanilla distillation, they make the following changes: First, high-capacity student networks are used, for example, by increasing depth or width. Then, the target distribution is "sharpened" using the true label by reducing the distribution's overall entropy. Finally, for wrong predictions of the teacher model, they propose an alternative target distribution with maximum mass on the correct class, while not losing the information provided on the incorrect label.
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
Hendrycks, Dan and Dietterich, Thomas G.
Hendrycks and Dietterich propose ImageNet-C and ImageNet-P benchmarks for corruption and perturbation robustness evaluation. Both datasets come in various sizes, and corruptions always come in different difficulties. The used corruptions include many common, realistic noise types such as various types of blur and random noise, brightness changes and compression artifacts. ImageNet-P differs from ImageNet-C in that sequences of perturbations are generated. This means, for a specific perturbation type, 30 different frames are generated; thus, less corruption types in total are used. The remainder of the paper introduces various evaluation metrics; these are usually based on the fact that the label of the corrupted image did not change. Finally, they also highlight some approaches to obtain more "robust" models against these corruptions. The list includes a variant of histogram equalization that is used to normalize the input images, the use of multi-scale or feature aggregation architectures and, surprisingly, adversarial logit pairing. Examples of ImageNet-C images can be found in Figure 1.
https://i.imgur.com/YRBOzrH.jpg
Figure 1: Examples of images in ImageNet-C.
On Norm-Agnostic Robustness of Adversarial Training
Li, Bai and Chen, Changyou and Wang, Wenlin and Carin, Lawrence
Li et al. evaluate adversarial training using both $L_2$ and $L_\infty$ attacks and propose a second-order attack. The main motivation of the paper is to show that adversarial training cannot increase robustness against both $L_2$ and $L_\infty$ attacks at the same time. Using their second-order adversarial attack, they experimentally show that ensemble adversarial training can partly solve the problem.
Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation
Lopes, Raphael Gontijo and Yin, Dong and Poole, Ben and Gilmer, Justin and Cubuk, Ekin D.
Lopes et al. propose patch-based Gaussian data augmentation to improve accuracy and robustness against common corruptions. Their approach is intended to be an interpolation between Gaussian noise data augmentation and CutOut. During training, random patches on images are selected and random Gaussian noise is added to these patches. With increasing noise level (i.e., its standard deviation) this results in CutOut; with increasing patch size, this results in regular Gaussian noise data augmentation. On ImageNet-C and Cifar-C, the authors show that this approach improves robustness against common corruptions while also improving accuracy slightly.
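A minimal NumPy sketch of this augmentation, assuming images in $[0,1]$; in the paper, patch size and noise level are hyper-parameters that are sampled randomly:

```python
import numpy as np

def patch_gaussian(image, patch_size=25, sigma=0.3):
    """Add Gaussian noise only inside a randomly centered square patch and
    clip back to [0, 1]; a patch covering the whole image recovers plain
    Gaussian augmentation, a very large sigma approaches CutOut."""
    h, w = image.shape[:2]
    cy, cx = np.random.randint(h), np.random.randint(w)
    y0, y1 = max(0, cy - patch_size // 2), min(h, cy + patch_size // 2)
    x0, x1 = max(0, cx - patch_size // 2), min(w, cx + patch_size // 2)
    out = image.copy()
    out[y0:y1, x0:x1] += sigma * np.random.randn(y1 - y0, x1 - x0, *image.shape[2:])
    return np.clip(out, 0.0, 1.0)

augmented = patch_gaussian(np.random.rand(32, 32, 3))
```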
MNIST-C: A Robustness Benchmark for Computer Vision
Mu, Norman and Gilmer, Justin
Mu and Gilmer introduce MNIST-C, an MNIST-based corruption benchmark for out-of-distribution evaluation. The benchmark includes various corruption types including random noise (shot and impulse noise), blur (glass and motion blur), (affine) transformations, "striping" or occluding parts of the image, using Canny images or simulating fog. These corruptions are also shown in Figure 1. The transformations have been chosen to be semantically invariant, meaning that the true class of the image does not change. This is important for evaluation as models can easily be tested on whether they still predict the correct labels on the corrupted images.
https://i.imgur.com/Y6LgAM4.jpg
Figure 1: Examples of the used corruption types included in MNIST-C.
Bayesian Uncertainty Estimation for Batch Normalized Deep Networks
Teye, Mattias and Azizpour, Hossein and Smith, Kevin
Teye et al. show that neural networks with batch normalization can be used to give uncertainty estimates through Monte Carlo sampling. In particular, instead of using the test mode of batch normalization, where the statistics (mean and variance) of each batch normalization layer are fixed, these statistics are computed per batch, as in training mode. To this end, for a specific query image, random batches from the training set are sampled, and prediction uncertainty is estimated using Monte Carlo sampling to compute mean and variance. This is summarized in Algorithm 1, depicting the proposed Monte Carlo Batch Normalization method. In the paper, this approach is further interpreted as approximate inference in Bayesian models.
https://i.imgur.com/nRdOvzs.jpg
Algorithm 1: Monte Carlo approach for using batch normalization for uncertainty estimation.
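A sketch of this Monte Carlo procedure in PyTorch; the way the query is concatenated with a training batch and the fixed number of samples are simplifications on my side:

```python
import torch

def mcbn_predict(model, x, train_loader, num_samples=32):
    """Keep batch normalization in training mode so its statistics come from
    a random training batch, and average softmax predictions over samples."""
    model.train()                        # BN uses batch statistics, not running averages
    preds = []
    loader = iter(train_loader)
    with torch.no_grad():
        for _ in range(num_samples):
            try:
                batch_x, _ = next(loader)
            except StopIteration:
                loader = iter(train_loader)
                batch_x, _ = next(loader)
            joint = torch.cat([x, batch_x], dim=0)               # query + training batch
            probs = torch.softmax(model(joint)[: x.size(0)], dim=1)
            preds.append(probs)
    preds = torch.stack(preds)           # (num_samples, batch, classes)
    return preds.mean(0), preds.var(0)   # predictive mean and variance
```

Note that `model.train()` also affects dropout layers, which may or may not be desired; the sketch assumes a network whose only stochastic training-mode component is batch normalization.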
Layer Normalization
Jimmy Lei Ba and Jamie Ryan Kiros and Geoffrey E. Hinton
Keywords: stat.ML, cs.LG
Abstract: Training state-of-the-art, deep neural networks is computationally expensive. One way to reduce the training time is to normalize the activities of the neurons. A recently introduced technique called batch normalization uses the distribution of the summed input to a neuron over a mini-batch of training cases to compute a mean and variance which are then used to normalize the summed input to that neuron on each training case. This significantly reduces the training time in feed-forward neural networks. However, the effect of batch normalization is dependent on the mini-batch size and it is not obvious how to apply it to recurrent neural networks. In this paper, we transpose batch normalization into layer normalization by computing the mean and variance used for normalization from all of the summed inputs to the neurons in a layer on a single training case. Like batch normalization, we also give each neuron its own adaptive bias and gain which are applied after the normalization but before the non-linearity. Unlike batch normalization, layer normalization performs exactly the same computation at training and test times. It is also straightforward to apply to recurrent neural networks by computing the normalization statistics separately at each time step. Layer normalization is very effective at stabilizing the hidden state dynamics in recurrent networks. Empirically, we show that layer normalization can substantially reduce the training time compared with previously published techniques.
Ba et al. propose layer normalization, normalizing the activations of a layer by its mean and standard deviation. In contrast to batch normalization, this scheme does not depend on the current batch; thus, it performs the same computation at training and test time. The general scheme, however, is very similar. Given the $l$-th layer of a multi-layer perceptron,
$a_i^l = (w_i^l)^T h^l$ and $h_i^{l + 1} = f(a_i^l + b_i^l)$
with $W^l$ being the weight matrix, the activations $a_i^l$ are normalized by mean $\mu_i^l$ and standard deviation $\sigma_i^l$. For batch normalization these are estimated over the current mini batch:
$\mu_i^l = \mathbb{E}_{p(x)} [a_i^l]$ and $\sigma_i^l = \sqrt{\mathbb{E}_{p(x)} [(a_i^l - \mu_i^l)^2]}$.
However, this estimation depends heavily on the batch size; additionally, models change during training and test time (at test time, these statistics are estimated over the training set). For layer normalization, instead, these statistics are evaluated over the activations in the same layer:
$\mu^l = \frac{1}{H}\sum_{i = 1}^H a_i^l$ and $\sigma^l = \sqrt{\frac{1}{H}\sum_{i = 1}^H (a_i^l - \mu^l)^2}$.
Thus, the normalization no longer depends on the batch size. Additionally, layer normalization is invariant to scaling and shifts of the weight matrix (for batch normalization, this only holds for the columns of the matrix). In experiments, this approach is shown to work well for a variety of tasks including models with attention mechanisms and recurrent neural networks. For convolutional neural networks, the authors state that layer normalization does not outperform batch normalization, but performs better than using no normalization at all.
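The difference between the two sets of statistics is easy to see in code; a minimal sketch for a fully-connected layer with summed inputs of shape (batch, H):

```python
import torch

def layer_norm(a, gain, bias, eps=1e-5):
    """Layer normalization: statistics over the H units, per sample."""
    mu = a.mean(dim=1, keepdim=True)
    sigma = a.var(dim=1, keepdim=True, unbiased=False).sqrt()
    return gain * (a - mu) / (sigma + eps) + bias

def batch_norm_train_mode(a, gain, bias, eps=1e-5):
    """Batch normalization (training mode): statistics over the batch, per unit."""
    mu = a.mean(dim=0, keepdim=True)
    sigma = a.var(dim=0, keepdim=True, unbiased=False).sqrt()
    return gain * (a - mu) / (sigma + eps) + bias

a = torch.randn(8, 100)
out = layer_norm(a, gain=torch.ones(100), bias=torch.zeros(100))
```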
Sensitivity and Generalization in Neural Networks: an Empirical Study
Roman Novak and Yasaman Bahri and Daniel A. Abolafia and Jeffrey Pennington and Jascha Sohl-Dickstein
Keywords: stat.ML, cs.AI, cs.LG, cs.NE
Abstract: In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models. In this work, we investigate this tension between complexity and generalization through an extensive empirical exploration of two natural metrics of complexity related to sensitivity to input perturbations. Our experiments survey thousands of models with various fully-connected architectures, optimizers, and other hyper-parameters, as well as four different image classification datasets. We find that trained neural networks are more robust to input perturbations in the vicinity of the training data manifold, as measured by the norm of the input-output Jacobian of the network, and that it correlates well with generalization. We further establish that factors associated with poor generalization $-$ such as full-batch training or using random labels $-$ correspond to lower robustness, while factors associated with good generalization $-$ such as data augmentation and ReLU non-linearities $-$ give rise to more robust functions. Finally, we demonstrate how the input-output Jacobian norm can be predictive of generalization at the level of individual test points.
Novak et al. study the relationship between neural network sensitivity and generalization. Here, sensitivity is measured either in terms of the Frobenius norm of the input-output Jacobian of the network's probabilities (which does not depend on the true label) or based on a coding scheme of activations. The latter is intended to quantify transitions between linear regions of the piece-wise linear model. To this end, all activations are assigned either $0$ or $1$ depending on their ReLU output. Along a path between two or more input examples, the difference in this coding scheme is an estimator of how many linear regions have been "traversed". Both metrics are illustrated in Figure 1, showing that they are low for test and training examples, or in regions within the same class, and high otherwise. The second metric is also illustrated in Figure 2. The authors show that these metrics correlate with the generalization gap, meaning that the sensitivity of the network and its generalization performance seem to be inherently connected.
https://i.imgur.com/iRt3ADe.jpg
Figure 1: For a network trained on MNIST, illustrations of a possible trajectory (left) and the corresponding sensitivity metrics (middle and right). I refer to the paper for details.
https://i.imgur.com/0G8su3K.jpg
Figure 2: Linear regions for a random 2-dimensional slice of the pre-logit space before and after training.
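The first metric can be sketched in a few lines of PyTorch; the per-class loop over backward passes is the naive way to obtain the full Jacobian and is an implementation choice of mine:

```python
import torch
import torch.nn as nn

def jacobian_frobenius_norm(model, x):
    """Frobenius norm of the Jacobian of the softmax probabilities with
    respect to the input x of shape (1, d)."""
    x = x.clone().requires_grad_(True)
    probs = torch.softmax(model(x), dim=1)
    sq_norm = torch.zeros(())
    for c in range(probs.shape[1]):
        g, = torch.autograd.grad(probs[0, c], x, retain_graph=True)
        sq_norm = sq_norm + (g ** 2).sum()
    return sq_norm.sqrt()

mlp = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
print(jacobian_frobenius_norm(mlp, torch.randn(1, 10)))
```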
Instance Normalization: The Missing Ingredient for Fast Stylization
Dmitry Ulyanov and Andrea Vedaldi and Victor Lempitsky
Abstract: In this paper we revisit the fast stylization method introduced in Ulyanov et. al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to applying the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code is made available on github.
In the context of stylization, Ulyanov et al. propose to use instance normalization instead of batch normalization. In detail, instance normalization does not compute the mean and standard deviation used for normalization over the current mini-batch in training. Instead, these statistics are computed per instance individually. This also has the benefit of having the same training and test procedure, meaning that normalization is the same in both cases – in contrast to batch normalization.
Group Normalization
Yuxin Wu and Kaiming He
Keywords: cs.CV, cs.LG
Abstract: Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries.
Wu and He propose group normalization as alternative to batch normalization. Instead of computing the statistics used for normalization based on the current mini-batch, group normalization computes these statistics per instance but in groups of channels (for convolutional layers). Specifically, given activations $x_i$ with $i = (i_N, i_C, i_H, i_W)$ indexing along batch size, channels, height and width, batch normalization computes
$\mu_i = \frac{1}{|S|}\sum_{k \in S} x_k$ and $\sigma_i = \sqrt{\frac{1}{|S|} \sum_{k \in S} (x_k - \mu_i)^2 + \epsilon}$
where the set $S$ holds all indices for a specific channel (i.e., across samples, height and width). For group normalization, in contrast, $S$ holds all indices of the current instance and group of channels, meaning the statistics are computed across height, width and the current group of channels. Here, the channels can be divided into groups arbitrarily; in the paper, groups of $32$ channels are used on ImageNet. Then, Figure 1 shows that for a batch size of 32, group normalization performs on par with batch normalization, although the validation error is slightly larger. This is attributed to the stochastic element of batch normalization that leads to regularization. Figure 2 additionally shows the influence of the batch size on batch normalization and group normalization.
https://i.imgur.com/lwP5ycw.jpg
Figure 1: Training and validation error for different normalization schemes on ImageNet.
https://i.imgur.com/0c3CnEX.jpg
Figure 2: Validation error for different batch sizes.
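A sketch of the normalization itself, without the learnable affine parameters, for an input of shape (N, C, H, W); PyTorch also ships this as `nn.GroupNorm`:

```python
import torch

def group_norm(x, num_groups=32, eps=1e-5):
    """Normalize per sample over each group of C / num_groups channels and
    all spatial positions."""
    n, c, h, w = x.shape
    x = x.view(n, num_groups, c // num_groups, h, w)
    mu = x.mean(dim=(2, 3, 4), keepdim=True)
    var = x.var(dim=(2, 3, 4), keepdim=True, unbiased=False)
    x = (x - mu) / torch.sqrt(var + eps)
    return x.view(n, c, h, w)

out = group_norm(torch.randn(4, 64, 8, 8), num_groups=32)
```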
Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
Zhang, Zhilu and Sabuncu, Mert R.
Zhang and Sabuncu propose a generalized cross entropy loss for robust learning on noisy labels. The approach is based on the work by Ghosh et al. [1] showing that the mean absolute error can be robust to label noise. Specifically, they show that a symmetric loss, under specific assumptions on the label noise, is robust. Here, symmetry corresponds to
$\sum_{j=1}^c \mathcal{L}(f(x), j) = C$ for all $x$ and $f$
where $c$ is the number of classes and $C$ some constant. The cross entropy loss is not symmetric, while the mean absolute error is. The mean absolute error, however, usually results in slower learning and may reach lower accuracy. As an alternative, the authors propose
$\mathcal{L}(f(x), e_j) = \frac{1 - f_j(x)^q}{q}$.
Here, $f$ is the classifier, which is assumed to contain a softmax layer at the end. For $q \rightarrow 0$ this reduces to the cross entropy and for $q = 1$ it reduces to the mean absolute error. As shown in Figure 1, this loss (or a slightly adapted version, see the paper) may obtain better performance on noisy labels. To this end, the label noise is assumed to be uniform, meaning that $p(\tilde{y} = j|y = j, x) = 1 - \eta$ and $p(\tilde{y} = k|y = j, x) = \frac{\eta}{c - 1}$ for $k \neq j$, where $\tilde{y}$ is the perturbed label.
https://i.imgur.com/HRQ84Zv.jpg
Figure 1: Performance of the proposed loss for different $q$ and noise rate $\eta$ on Cifar-10. A ResNet-34 is used.
[1] Aritra Ghosh, Himanshu Kumar, P. S. Sastry. Robust loss functions under label noise for deep neural networks. AAAI, 2017.
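The loss itself is a one-liner; a PyTorch sketch (the default $q = 0.7$ is an assumption, chosen here only as a plausible working value):

```python
import torch

def generalized_cross_entropy(logits, targets, q=0.7):
    """L_q loss (1 - f_y(x)^q) / q on softmax outputs; q -> 0 recovers the
    cross entropy, q = 1 gives the mean absolute error (up to scaling)."""
    probs = torch.softmax(logits, dim=1)
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_true ** q) / q).mean()

loss = generalized_cross_entropy(torch.randn(16, 10), torch.randint(0, 10, (16,)))
```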
A Research Agenda: Dynamic Models to Defend Against Correlated Attacks
Goodfellow, Ian J.
Goodfellow motivates the use of dynamic models as a "defense" against adversarial attacks that violate both the identically-distributed and the independence assumptions in machine learning. Specifically, he argues that machine learning is mostly based on the assumption that the data is sampled identically and independently from a data distribution. Evasion attacks, meaning adversarial examples, mainly violate the assumption that they come from the same distribution: adversarial examples computed within an $\epsilon$-ball around test examples basically correspond to an adversarial distribution that is larger than (but entails) the original data distribution. In this article, Goodfellow argues that we should also consider attacks violating the independence assumption. This means, as a simple example, that the attacker can also use the same attack over and over again. This yields the idea of correlated attacks as mentioned in the paper's title. Against this more general threat model, Goodfellow argues that dynamic models are required, meaning the model needs to change (or evolve) and become a moving target that is harder to attack.
On Correlation of Features Extracted by Deep Neural Networks
Babajide O. Ayinde and Tamer Inanc and Jacek M. Zurada
2019 International Joint Conference on Neural Networks (IJCNN) - 2019 via Local CrossRef
Ayinde et al. study the impact of network architecture and weight initialization on learning redundant features. To empirically estimate the number of redundant features, the authors use an agglomerative clustering approach to cluster features based on their cosine similarity. Essentially, given a set of features, these are merged as long as their (average) cosine similarity is within some threshold $\tau$. Then, this number is compared across network architectures. Figure 1, for example, shows the number of redundant features for different depths of the network and using different activation functions on MNIST. As can be seen, ReLU activations avoid redundant features, while depth of the network usually encourages redundant features.
https://i.imgur.com/ICcCL2u.jpg
Figure 1: Number of redundant features $n_r$ for networks with $n' = 1000$ hidden units, computed using the given threshold $\tau$. Experiments with different depths and activation functions are shown.
Sharp Minima Can Generalize For Deep Nets
Dinh, Laurent and Pascanu, Razvan and Bengio, Samy and Bengio, Yoshua
Dinh et al. show that it is unclear whether flat minima necessarily generalize better than sharp ones. In particular, they study several notions of flatness, both based on the local curvature and based on the notion of "low change in error". The authors show that the parameterization of the network has a significant impact on the flatness; this means that parameterizations leading to the same prediction function (i.e., being indistinguishable based on their test performance) might have largely varying flatness around the obtained minima, as illustrated in Figure 1. In conclusion, while networks that generalize well usually correspond to flat minima, it is not necessarily true that flat minima generalize better than sharp ones.
https://i.imgur.com/gHfolEV.jpg
Figure 1: Illustration of the influence of parameterization on the flatness of the obtained minima.
Adversarial Examples Are a Natural Consequence of Test Error in Noise
Ford, Nic and Gilmer, Justin and Carlini, Nicholas and Cubuk, Ekin Dogus
Ford et al. show that the existence of adversarial examples can be directly linked to test error on noise and other types of random corruption. Additionally, obtaining models robust against random corruptions is difficult, and even adversarially robust models might not be entirely robust against these corruptions. Furthermore, many "defenses" against adversarial examples show poor performance on random corruption, showing that some defenses do not result in robust models but merely make attacking the model using gradient-based attacks more difficult (gradient masking).
Adversarially Robust Distillation
Goldblum, Micah and Fowl, Liam and Feizi, Soheil and Goldstein, Tom
Goldblum et al. show that distilling robustness is possible, however, depends on the teacher model and the considered dataset. Specifically, while classical knowledge distillation does not convey robustness against adversarial examples, distillation with a robust teacher model might increase robustness of the student model – even if trained on clean examples only. However, this seems to depend on both the dataset as well as the teacher model, as pointed out in experiments on Cifar100. Unfortunately, from the paper, it does not become clear in which cases robustness distillation does not work. To overcome this limitation, the authors propose to combine adversarial training and distillation and show that this recovers robustness; the student model's robustness might even exceed the teacher model's robustness. This, however, might be due to the additional adversarial examples used during distillation.
A Spectral View of Adversarially Robust Features
Garg, Shivam and Sharan, Vatsal and Zhang, Brian Hu and Valiant, Gregory
Garg et al. propose adversarially robust features based on a graph interpretation of the training data. In this graph, training points are connected based on their distance in input space. Robust features are obtained using the eigenvectors of the Laplacian of the graph. It is theoretically shown that these features are robust, based on some assumptions on the graph. For example, the bound obtained on robustness depends on the gap between second and third eigenvalue.
Regularizing by the Variance of the Activations' Sample-Variances
Littwin, Etai and Wolf, Lior
Littwin and Wolf propose an activation variance regularizer that is shown to have a similar, or even better, effect than batch normalization. The proposed regularizer is based on an analysis of the variance of activation values; the idea is that the measured variance of these variances is low if the activation values come from a distribution with few modes. Thus, the intention of the regularizer is to encourage distributions of activations with only few modes. This is achieved using the regularizer
$\mathbb{E}[(1 - \frac{\sigma_s^2}{\sigma^2})^2]$
where $\sigma_s^2$ is the measured variance of activation values and $\sigma^2$ is the true variance of activation values. The estimate $\sigma^2_s$ is mostly influenced by the mini-batch used for training. In practice, the regularizer is replaced by
$(1 - \frac{\sigma_{s_1}^2}{\sigma_{s_2}^2 + \beta})^2$
which can be estimated on two different batches, $s_1$ and $s_2$, during training; $\beta$ is a learnable parameter that mainly handles the case where the variance is close to zero. In the paper, the authors provide some theoretical bounds and also make a connection to batch normalization, discussing in which cases and why the regularizer might be a better alternative. These claims are supported by experiments on Cifar and Tiny ImageNet.
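A sketch of the relaxed regularizer for a single layer's activations, with $\beta$ kept as a fixed constant instead of a learned parameter:

```python
import torch

def variance_regularizer(act_batch_1, act_batch_2, beta=1e-3):
    """Penalize (1 - sigma_{s1}^2 / (sigma_{s2}^2 + beta))^2 per unit, using
    the activation variances of two different mini-batches, and average."""
    var_1 = act_batch_1.var(dim=0, unbiased=False)
    var_2 = act_batch_2.var(dim=0, unbiased=False)
    return ((1.0 - var_1 / (var_2 + beta)) ** 2).mean()

reg = variance_regularizer(torch.randn(32, 128), torch.randn(32, 128))
```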
Robustness and generalization
Huan Xu and Shie Mannor
Machine Learning - 2012 via Local CrossRef
[link] Summary by David Stutz 2 years ago
Xu and Mannor provide a theoretical paper on robustness and generalization where their notion of robustness is based on the idea that the difference in loss should be small for samples that are close. This implies that, e.g., for a test sample close to a training sample, the loss on both samples should be similar. The authors formalize this notion as follows:
Definition: Let $A$ be a learning algorithm and $S \subset Z$ be a training set such that $A(S)$ denotes the model learned on $S$ by $A$; the algorithm $A$ is $(K, \epsilon(S))$-robust if $Z$ can be partitioned into $K$ disjoint sets, denoted $C_i$ such that $\forall s \in S$ it holds:
$s,z \in C_i \rightarrow |l(A(S), s) - l(A(S), z)| \leq \epsilon(S)$.
In words, this means that we can partition the space $Z$ (which is $X \times Y$ for a supervised problem) into a finite set of subsets and whenever a sample falls into the same partition as a training sample, the learned model should have nearly the same loss on both samples. Note that this notion does not entirely match the notion of adversarial robustness as commonly referred to nowadays. The main difference is that the partition can be chosen, while for adversarial robustness, the "partition" (usually in form of epsilon-balls around training and testing samples) is fixed.
Based on the above notion of robustness, the authors provide PAC bounds for robust algorithms, i.e., the generalization performance of $A$ is linked to its robustness. Furthermore, in several examples, common machine learning techniques such as SVMs and neural networks are shown to be robust under specific conditions. For neural networks, for example, an upper bound on the $L_1$ norm of the weights and the requirement of Lipschitz continuity are enough. This actually relates to work on adversarial robustness, where Lipschitz continuity and weight regularization are also studied.
Second-Order Adversarial Attack and Certifiable Robustness
Li et al. propose an adversarial attack motivated by second-order optimization and use input randomization as defense. Based on a Taylor expansion, the optimal adversarial perturbation should be aligned with the dominant eigenvector of the Hessian matrix of the loss. As the eigenvectors of the Hessian cannot be computed efficiently, the authors propose an approximation; this is mainly based on evaluating the gradient under Gaussian noise. The gradient is then normalized before taking a projected gradient step. As defense, the authors inject random noise on the input (clean example or adversarial example) and compute the average prediction over multiple iterations.
Certified Robustness to Adversarial Examples with Differential Privacy
Mathias Lecuyer and Vaggelis Atlidakis and Roxana Geambasu and Daniel Hsu and Suman Jana
Keywords: stat.ML, cs.AI, cs.CR, cs.LG
Abstract: Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best effort and have been shown to be vulnerable to sophisticated attacks. Recently a set of certified defenses have been introduced, which provide guarantees of robustness to norm-bounded attacks, but they either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired formalism, that provides a rigorous, generic, and flexible foundation for defense.
Lecuyer et al. propose a defense against adversarial examples based on differential privacy. Their main insight is that a differential private algorithm is also robust to slight perturbations. In practice, this amounts to injecting noise in some layer (or on the image directly) and using Monte Carlo estimation for computing the expected prediction. The approach is compared to adversarial training against the Carlini+Wagner attack.
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Geirhos, Robert and Rubisch, Patricia and Michaelis, Claudio and Bethge, Matthias and Wichmann, Felix A. and Brendel, Wieland
Keywords: deep-learning, machine-learning, stable, foundations, robustness, theory
Geirhos et al. show that state-of-the-art convolutional neural networks put too much importance on texture information. This claim is confirmed in a controlled study comparing convolutional neural network and human performance on variants of ImageNet images with removed texture (silhouettes) or reduced to edges. Additionally, networks only considering local information can perform nearly as well as other networks. To avoid this bias, they propose a stylized ImageNet variant where textures are replaced randomly, forcing the network to put more weight on global shape information.
Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
Brendel, Wieland and Bethge, Matthias
Brendel and Bethge show empirically that state-of-the-art deep neural networks on ImageNet rely to a large extent on local features, without any notion of interaction between them. To this end, they propose a bag-of-local-features model by applying a ResNet-like architecture on small patches of ImageNet images. The predictions of these local features are then averaged and a linear classifier is trained on top. Due to the locality, this model allows to inspect which areas in an image contribute to the model's decision, as shown in Figure 1. Furthermore, these local features are sufficient for good performance on ImageNet. Finally, they show, on scrambled ImageNet images, that regular deep neural networks also rely heavily on local features, without any notion of spatial interaction between them.
https://i.imgur.com/8NO1w0d.png
Figure 1: Illustration of the heat maps obtained using BagNets, the bag-of-local-features model proposed in the paper. Here, different sizes for the local patches are used.
Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Zhang, Huan and Chen, Hongge and Xiao, Chaowei and Li, Bo and Boning, Duane S. and Hsieh, Cho-Jui
Zhang et al. combine interval bound propagation and CROWN, both approaches to obtain bounds on a network's output, to efficiently train robust networks. Both interval bound propagation (IBP) and CROWN allow bounding a network's output for a specific set of allowed perturbations around clean input examples. These bounds can be used for adversarial training. The motivation to combine CROWN and IBP stems from the fact that training using IBP bounds usually results in instabilities, while training with CROWN bounds usually leads to over-regularization.
Efficient Neural Network Robustness Certification with General Activation Functions
Zhang, Huan and Weng, Tsui-Wei and Chen, Pin-Yu and Hsieh, Cho-Jui and Daniel, Luca
Zhang et al. propose CROWN, a method for certifying adversarial robustness based on bounding activation functions using linear functions. Informally, the main result can be stated as follows: if the activation functions used in a deep neural network can be bounded above and below by linear functions (the activation function may also be segmented first), the network output can also be bounded by linear functions. These linear functions can be computed explicitly, as stated in the paper. Then, given an input example $x$ and a set of allowed perturbations, usually constrained in an $L_p$ norm, these bounds can be used to obtain a lower bound on the robustness of the network.
Generalization in Deep Networks: The Role of Distance from Initialization
Nagarajan, Vaishnavh and Kolter, J. Zico
Nagarajan and Kolter show that neural networks are implicitly regularized by stochastic gradient descent to have small distance from their initialization. This implicit regularization may explain the good generalization performance of over-parameterized neural networks; specifically, more complex models usually generalize better, which contradicts the general trade-off between expressivity and generalization in machine learning. On MNIST, the authors show that the distance of the network's parameters to the original initialization (as measured using the $L_2$ norm on the flattened parameters) reduces with increasing width, and increases with increasing sample size. Additionally, the distance increases significantly when fitting corrupted labels, which may indicate that memorization requires to travel a larger distance in parameter space.
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
Sven Gowal and Krishnamurthy Dvijotham and Robert Stanforth and Rudy Bunel and Chongli Qin and Jonathan Uesato and Relja Arandjelovic and Timothy Mann and Pushmeet Kohli
Keywords: cs.LG, cs.CR, stat.ML
Abstract: Recent work has shown that it is possible to train deep neural networks that are verifiably robust to norm-bounded adversarial perturbations. Most of these methods are based on minimizing an upper bound on the worst-case loss over all possible adversarial perturbations. While these techniques show promise, they remain hard to scale to larger networks. Through a comprehensive analysis, we show how a careful implementation of a simple bounding technique, interval bound propagation (IBP), can be exploited to train verifiably robust neural networks that beat the state-of-the-art in verified accuracy. While the upper bound computed by IBP can be quite weak for general networks, we demonstrate that an appropriate loss and choice of hyper-parameters allows the network to adapt such that the IBP bound is tight. This results in a fast and stable learning algorithm that outperforms more sophisticated methods and achieves state-of-the-art results on MNIST, CIFAR-10 and SVHN. It also allows us to obtain the first verifiably robust model on a downscaled version of ImageNet.
Gowal et al. propose interval bound propagation to obtain certified robustness against adversarial examples. In particular, given a neural network consisting of linear layers and monotonic increasing activation functions, a set of allowed perturbations is propagated to obtain upper and lower bounds at each layer. These lead to bounds on the logits of the network; these are used to verify whether the network changes its prediction on the allowed perturbations. Specifically, Gowal et al. consider an $L_\infty$ ball around input examples; the initial bounds are, thus, $\underline{z}_0 = x - \epsilon$ and $\overline{z}_0 = x + \epsilon$. For each layer, the bounds are defined as
$\underline{z}_{k,i} = \min_{\underline{z}_{k – 1} \leq z_{k – 1} \leq \overline{z}_{k-1}} e_i^T h_k(z_{k – 1})$
and the analogous maximization problem for the upper bound; here, $h_k$ denotes the applied layer. For linear layers and monotonic activation functions, these problems are easy to solve, as shown in the paper. Moreover, computing these bounds is very efficient, requiring only roughly twice the computation of a single forward pass. During training, a combination of a clean loss and an adversarial loss is used:
$\kappa l(z_K, y) + (1 - \kappa) l(\hat{z}_K, y)$
where $z_K$ are the logits of the input $x$, and $\hat{z}_K$ are the adversarial logits computed as
$\hat{z}_{K,y'} = \begin{cases} \overline{z}_{K,y'} & \text{if } y' \neq y\\\underline{z}_{K,y} & \text{otherwise}\end{cases}$
Both $\epsilon$ and $\kappa$ are annealed during training. In experiments, it is shown that this method results in quite tight bounds on robustness.
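As an illustration of the bound computation for affine layers and monotonic activations, the following is a minimal numpy sketch in the center/radius form commonly used for interval bound propagation; it is not the authors' implementation:

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate box bounds through z = W x + b:
    center mu' = W mu + b, radius r' = |W| r."""
    mu = (upper + lower) / 2.0
    r = (upper - lower) / 2.0
    mu_out = W @ mu + b
    r_out = np.abs(W) @ r
    return mu_out - r_out, mu_out + r_out

def ibp_monotonic(lower, upper, act=np.tanh):
    """Elementwise monotonically increasing activations map the
    bounds directly."""
    return act(lower), act(upper)

# Starting bounds for an L_inf ball of radius eps around x:
# lower, upper = x - eps, x + eps
```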
Batch Normalization is a Cause of Adversarial Vulnerability
Galloway, Angus and Golubeva, Anna and Tanay, Thomas and Moussa, Medhat and Taylor, Graham W.
Galloway et al. argue that batch normalization reduces robustness against noise and adversarial examples. On various vision datasets, including SVHN and ImageNet, with popular self-trained and pre-trained models, they empirically demonstrate that networks with batch normalization show reduced accuracy on noise and adversarial examples. As noise, they consider additive Gaussian noise as well as the different noise types included in the Cifar-C dataset. Similarly, for adversarial examples, they consider $L_\infty$ and $L_2$ PGD and BIM attacks; I refer to the paper for details and hyper-parameters. On noise, all networks perform worse with batch normalization, even though batch normalization increases clean accuracy slightly. Against PGD attacks, the provided experiments also suggest that batch normalization reduces robustness; however, the attacks only use 20 iterations and do not manage to reduce the adversarial accuracy to near zero, as is commonly reported. Thus, it is questionable whether batch normalization indeed makes a significant difference regarding adversarial robustness. Finally, the authors argue that replacing batch normalization with weight decay can recover some of the advantage in terms of accuracy and robustness.
Radial basis function neural networks: a topical state-of-the-art survey
Dash, Ch. Sanjeev Kumar and Behera, Ajit Kumar and Dehuri, Satchidananda and Cho, Sung-Bae
Open Computer Science - 2016 via Local Bibsonomy
Dash et al. present a reasonably recent survey on radial basis function (RBF) networks. RBF networks can be understood as two-layer perceptrons, consisting of an input layer, a hidden layer and an output layer. Instead of using a linear operation to compute the hidden units, RBF kernels are used; as a simple example, the hidden units are computed as
$h_i = \phi_i(x) = \exp\left(-\frac{\|x - \mu_i\|^2}{2\sigma_i^2}\right)$
where $\mu_i$ and $\sigma_i^2$ are parameters of the kernel. In a clustering interpretation, the $\mu_i$'s correspond to the kernels' centers and the $\sigma_i^2$'s to the kernels' bandwidths. The hidden units are then summed with weights $w_i$; for one output $y \in \mathbb{R}$ this can be written as
$y = \sum_i w_i h_i$.
Originally, RBF networks were trained in a "clustering"-fashion in order to find the centers $\mu_i$; the bandwidths are often treated as hyper-parameters. Dash et al. show several alternative approaches based on clustering or orthogonal least squares; I refer to the paper for details.
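As a minimal sketch of the forward pass of such a two-layer RBF network (single output, parameters assumed given):

```python
import numpy as np

def rbf_forward(x, centers, bandwidths, weights):
    """Gaussian RBF hidden units followed by a weighted sum.
    x: (d,), centers: (m, d), bandwidths: (m,), weights: (m,)."""
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    h = np.exp(-sq_dist / (2.0 * bandwidths ** 2))
    return float(weights @ h)
```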
How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
Ahmad, Subutai and Scheinkman, Luiz
Ahmad and Scheinkman propose a simple sparse layer in order to improve robustness against random noise. Specifically, considering a general linear network layer, i.e.
$\hat{y}^l = W^l y^{l-1} + b^l$ and $y^l = f(\hat{y}^l)$
where $f$ is an activation function, the weights are first initialized using a sparse distribution; then, the activation function (commonly ReLU) is replaced by a top-$k$ ReLU version where only the top-$k$ activations are propagated. In experiments, this is shown to improve robustness against random noise on MNIST.
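A minimal sketch of such a top-$k$ ReLU is given below; tie handling and other details may differ from the authors' implementation:

```python
import numpy as np

def topk_relu(z, k):
    """Apply ReLU, then keep only the k largest activations and set
    all others to zero."""
    out = np.maximum(z, 0.0)
    if k < out.size:
        kth_largest = np.partition(out, -k)[-k]
        out[out < kth_largest] = 0.0
    return out
```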
Deep-RBF Networks Revisited: Robust Classification with Rejection
Zadeh, Pourya Habib and Hosseini, Reshad and Sra, Suvrit
Zadeh et al. propose a layer similar to radial basis functions (RBFs) to increase a network's robustness against adversarial examples by rejection. Based on a deep feature extractor, the RBF units compute
$d_k(x) = \|A_k^Tx + b_k\|_p^p$
with parameters $A$ and $b$. The decision rule remains unchanged, but the output does not resemble probabilities anymore. The full network, i.e., feature extractor and RBF layer, is trained using an adapted loss that resembles a max margin loss:
$J = \sum_i (d_{y_i}(x_i) + \sum_{j \neq y_i} \max(0, \lambda - d_j(x_i)))$
where $(x_i, y_i)$ is a training example with its label. The loss essentially minimizes the output corresponding to the true class while pushing the outputs for all other classes up to a specified margin. Additionally, noise examples are injected during training. For these noise examples,
$\sum_j \max(0, \lambda - d_j(x))$
is minimized to enforce that these examples are treated as negatives in a rejection setting, where samples not corresponding to the data distribution (or adversarial examples) can be rejected by the model. In experiments, the proposed method seems to be more robust against FGSM and iterative attacks (as evaluated with Foolbox).
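A sketch of the per-example losses as described above, assuming the RBF outputs $d_j(x)$ have already been computed; the handling of noise examples follows the rejection reading given above:

```python
import numpy as np

def rbf_margin_loss(d, y, lam):
    """d: RBF outputs d_j(x) for all classes, y: true label.
    Minimize the true-class output, push all others above lam."""
    others = np.delete(d, y)
    return d[y] + np.sum(np.maximum(0.0, lam - others))

def noise_loss(d, lam):
    """For injected noise examples: push all outputs above lam so
    the sample is treated as negative for every class."""
    return np.sum(np.maximum(0.0, lam - d))
```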
Neural Networks with Structural Resistance to Adversarial Attacks
Luca de Alfaro
Keywords: stat.ML, cs.CR, cs.LG, cs.NE
Abstract: In adversarial attacks to machine-learning classifiers, small perturbations are added to input that is correctly classified. The perturbations yield adversarial examples, which are virtually indistinguishable from the unperturbed input, and yet are misclassified. In standard neural networks used for deep learning, attackers can craft adversarial examples from most input to cause a misclassification of their choice. We introduce a new type of network units, called RBFI units, whose non-linear structure makes them inherently resistant to adversarial attacks. On permutation-invariant MNIST, in absence of adversarial attacks, networks using RBFI units match the performance of networks using sigmoid units, and are slightly below the accuracy of networks with ReLU units. When subjected to adversarial attacks, networks with RBFI units retain accuracies above 90% for attacks that degrade the accuracy of networks with ReLU or sigmoid units to below 2%. RBFI networks trained with regular input are superior in their resistance to adversarial attacks even to ReLU and sigmoid networks trained with the help of adversarial examples. The non-linear structure of RBFI units makes them difficult to train using standard gradient descent. We show that networks of RBFI units can be efficiently trained to high accuracies using pseudogradients, computed using functions especially crafted to facilitate learning instead of their true derivatives. We show that the use of pseudogradients makes training deep RBFI networks practical, and we compare several structural alternatives of RBFI networks for their accuracy.
De Alfaro proposes a deep radial basis function (RBF) network to obtain robustness against adversarial examples. In contrast to "regular" RBF networks, which usually consist of only one hidden layer containing RBF units, de Alfaro proposes to stack multiple layers with RBF units. Specifically, a Gaussian unit utilizing the $L_\infty$ norm is used:
$\exp\left(-\max_i (u_i(x_i - w_i))^2\right)$
where $u_i$ and $w_i$ are parameters and $x_i$ are the inputs to the unit, i.e., the network inputs or the outputs of the previous hidden layer. This unit can be understood as computing a soft AND operation; therefore, an alternative OR operation
$1 - \exp\left(-\max_i (u_i(x_i - w_i))^2\right)$
is used as well. These two units are used alternatingly in hidden layers in the conducted experiments. Based on these units, de Alfaro argues that the model is less sensitive to adversarial examples, compared to linear operations as commonly used in ReLU networks.
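A sketch of the two unit types as described, with per-input parameters $u$ and $w$:

```python
import numpy as np

def rbfi_and(x, u, w):
    """Soft AND: close to 1 only if every u_i * (x_i - w_i) is small."""
    return float(np.exp(-np.max((u * (x - w)) ** 2)))

def rbfi_or(x, u, w):
    """Soft OR: the complementary unit."""
    return 1.0 - rbfi_and(x, u, w)
```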
For training a deep RBF-network, pseudo gradients are used for both the maximum operation and the exponential function. This is done for simplifying training; I refer to the paper for details.
In the experiments on MNIST, a multi-layer perceptron with the proposed RBFI units is used. The network consists of 512 AND units, 512 OR units, 512 AND units and finally 10 OR units. Robustness against FGSM and I-FGSM as well as PGD attacks seems to improve. However, the PGD attack used seems to be weaker than usual: it does not manage to reduce the adversarial accuracy of a normal network to near zero.
Adversarial Examples Are Not Bugs, They Are Features
Ilyas, Andrew and Santurkar, Shibani and Tsipras, Dimitris and Engstrom, Logan and Tran, Brandon and Madry, Aleksander
Keywords: adversarial
Ilyas et al. present a follow-up work to their paper on the trade-off between accuracy and robustness. Specifically, given a feature $f(x)$ computed from input $x$, the feature is considered predictive if
$\mathbb{E}_{(x,y) \sim \mathcal{D}}[y f(x)] \geq \rho$;
similarly, a predictive feature is robust if
$\mathbb{E}_{(x,y) \sim \mathcal{D}}\left[\inf_{\delta \in \Delta(x)} yf(x + \delta)\right] \geq \gamma$.
This means that a feature is considered robust if its worst-case correlation with the label exceeds some threshold $\gamma$; here, the worst case is taken over a pre-defined set of allowed perturbations $\Delta(x)$ relative to the input $x$. Obviously, there also exist predictive features which are not robust according to the above definition. In the paper, Ilyas et al. present two simple algorithms for obtaining adapted datasets which contain only robust or only non-robust features. The main idea of these algorithms is that an adversarially trained model only utilizes robust features, while a standard model utilizes both robust and non-robust features. Based on these datasets, they show that non-robust, predictive features are sufficient to obtain high accuracy; similarly, training a standard model on a robust dataset also leads to reasonable accuracy while additionally increasing robustness. Experiments were done on Cifar10. These observations are supported by a theoretical toy dataset consisting of two overlapping Gaussians; I refer to the paper for details.
Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search
Rakin, Adnan Siraj and He, Zhezhi and Fan, Deliang
Rakin et al. introduce the bit-flip attack, which aims to degrade a network's performance by flipping only a few weight bits. On Cifar10 and ImageNet, common architectures such as ResNets or AlexNet are quantized to 8 bits per weight value (or fewer). Then, on a subset of the validation set, gradients with respect to the training loss are computed and, in each layer, bits are selected based on their gradient magnitude. Afterwards, the layer which incurs the maximum increase in training loss is selected. This way, a network's performance can be degraded to chance level with as few as 17 flipped bits (on ImageNet, using AlexNet).
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle and Michael Carbin
Keywords: cs.LG, cs.AI, cs.NE
Abstract: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
Frankle and Carbin discover so-called winning tickets, subsets of the weights of a neural network that are sufficient to obtain state-of-the-art accuracy. The lottery ticket hypothesis states that dense networks contain subnetworks, the winning tickets, that can reach the same accuracy when trained in isolation, from scratch. The key insight is that these subnetworks seem to have received a particularly favorable initialization. Concretely, given a trained network for, e.g., Cifar10, weights are pruned based on their absolute value, i.e., weights with small absolute value are pruned first. The remaining network is trained from scratch using the original initialization and reaches competitive performance using less than 10% of the original weights. As soon as the subnetwork is re-initialized, however, these results cannot be reproduced. This suggests that these subnetworks obtained some sort of "optimal" initialization for learning.
Certified Adversarial Robustness via Randomized Smoothing
Jeremy M Cohen and Elan Rosenfeld and J. Zico Kolter
Keywords: cs.LG, stat.ML
Abstract: We show how to turn any classifier that classifies well under Gaussian noise into a new classifier that is certifiably robust to adversarial perturbations under the $\ell_2$ norm. This "randomized smoothing" technique has been proposed recently in the literature, but existing guarantees are loose. We prove a tight robustness guarantee in $\ell_2$ norm for smoothing with Gaussian noise. We use randomized smoothing to obtain an ImageNet classifier with e.g. a certified top-1 accuracy of 49% under adversarial perturbations with $\ell_2$ norm less than 0.5 (=127/255). No certified defense has been shown feasible on ImageNet except for smoothing. On smaller-scale datasets where competing approaches to certified $\ell_2$ robustness are viable, smoothing delivers higher certified accuracies. Our strong empirical results suggest that randomized smoothing is a promising direction for future research into adversarially robust classification. Code and models are available at http://github.com/locuslab/smoothing.
Cohen et al. study robustness bounds for randomized smoothing, a region-based classification scheme where the prediction is averaged over Gaussian samples around the test input. Specifically, given a test input, the predicted class is the class whose decision region has the largest overlap with a normal distribution of pre-defined variance centered at the input. The intuition of this approach is that, for small perturbations, this overlap cannot change too much. In practice, randomized smoothing is applied using Monte Carlo samples. In the paper, Cohen et al. show that this approach conveys certified robustness within radii $R$ that depend on the confidence difference between the actual class and the "runner-up" class. In practice, the radii also depend on the number of samples used.
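A Monte Carlo sketch of the smoothed prediction is shown below; the certification step with statistical confidence bounds is omitted, and `base_classify` is a hypothetical hard classifier:

```python
import numpy as np

def smoothed_predict(base_classify, x, sigma, n_samples=1000, seed=0):
    """Return the class predicted most often by the base classifier
    under additive Gaussian noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n_samples):
        label = base_classify(x + sigma * rng.standard_normal(x.shape))
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)
```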
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
Shiyu Liang and Yixuan Li and R. Srikant
Abstract: We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in- and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on the DenseNet (applied to CIFAR-10) when the true positive rate is 95%.
Liang et al. propose a perturbation-based approach for detecting out-of-distribution examples using a network's confidence predictions. In particular, the approach is based on the observation that neural networks make more confident predictions on images from the original data distribution (in-distribution examples) than on examples taken from a different distribution, i.e., a different dataset (out-of-distribution examples). This effect can be amplified further by using a temperature-scaled softmax, i.e.,
$ S_i(x, T) = \frac{\exp(f_i(x)/T)}{\sum_{j = 1}^N \exp(f_j(x)/T)}$
where $f_i(x)$ are the predicted logits and $T$ a temperature parameter. Based on these softmax scores, perturbations $\tilde{x}$ are computed using
$\tilde{x} = x - \epsilon \text{sign}(-\nabla_x \log S_{\hat{y}}(x;T))$
where $\hat{y}$ is the predicted label of $x$. This is similar to a "one-step" adversarial example; however, in contrast to minimizing the confidence in the true label, the confidence in the predicted label is maximized. Applying this perturbation to in-distribution and out-of-distribution examples, as illustrated in Figure 1, is meant to emphasize the difference in confidence between the two. Afterwards, in- and out-of-distribution examples can be distinguished using simple thresholding on the predicted confidence, as shown in various experiments, e.g., on Cifar10 and Cifar100.
https://i.imgur.com/OjDVZ0B.png
Figure 1: Illustration of the proposed perturbation to amplify the difference in confidence between in- and out-distribution examples.
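A sketch of the scoring and thresholding step; the input perturbation requires gradient access to the model and is omitted here, and the temperature value is a placeholder:

```python
import numpy as np

def odin_score(logits, T=1000.0):
    """Maximum temperature-scaled softmax probability."""
    z = logits / T
    z = z - z.max()  # numerical stability
    p = np.exp(z) / np.sum(np.exp(z))
    return float(p.max())

def is_in_distribution(logits, threshold, T=1000.0):
    """Flag the input as in-distribution if the score exceeds a
    chosen threshold."""
    return odin_score(logits, T) >= threshold
```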
Adding Gradient Noise Improves Learning for Very Deep Networks
Arvind Neelakantan and Luke Vilnis and Quoc V. Le and Ilya Sutskever and Lukasz Kaiser and Karol Kurach and James Martens
Abstract: Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. This success is partially attributed to architectural innovations such as convolutional and long short-term memory networks. The main motivation for these architectural innovations is that they capture better domain knowledge, and importantly are easier to optimize than more basic architectures. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we discuss a low-overhead and easy-to-implement technique of adding gradient noise which we find to be surprisingly effective when training these very deep architectures. The technique not only helps to avoid overfitting, but also can result in lower training loss. This method alone allows a fully-connected 20-layer deep network to be trained with standard gradient descent, even starting from a poor initialization. We see consistent improvements for many complex models, including a 72% relative reduction in error rate over a carefully-tuned baseline on a challenging question-answering task, and a doubling of the number of accurate binary multiplication models learned across 7,000 random restarts. We encourage further application of this technique to additional complex modern architectures.
Neelakantan et al. study gradient noise for improving neural network training. In particular, they add Gaussian noise to the gradients in each iteration:
$\tilde{\nabla}f = \nabla f + \mathcal{N}(0, \sigma^2)$
where the variance $\sigma^2$ is adapted throughout training as follows:
$\sigma^2 = \frac{\eta}{(1 + t)^\gamma}$
where $\eta$ and $\gamma$ are hyper-parameters and $t$ is the current iteration. In experiments, the authors show that gradient noise has the potential to improve accuracy, especially for deep or complex architectures and poor initializations.
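A sketch of the annealed gradient noise as it would be used inside a training loop; the hyper-parameter values are placeholders:

```python
import numpy as np

def noisy_gradient(grad, t, eta=0.3, gamma=0.55, rng=None):
    """Add Gaussian noise with variance eta / (1 + t)^gamma to the
    gradient at iteration t."""
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(eta / (1.0 + t) ** gamma)
    return grad + sigma * rng.standard_normal(grad.shape)
```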
Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
Lee, Kimin and Lee, Honglak and Lee, Kibok and Shin, Jinwoo
Lee et al. propose a generative model for obtaining confidence-calibrated classifiers. Neural networks are known to be overconfident in their predictions – not only on examples from the task's data distribution, but also on other examples taken from different distributions. The authors propose a GAN-based approach to force the classifier to predict uniform predictions on examples not taken from the data distribution. In particular, in addition to the target classifier, a generator and a discriminator are introduced. The generator generates "hard" out-of-distribution examples; ideally these examples are close to the in-distribution, i.e., the data distribution of the actual task. The discriminator is intended to distinguish between out- and in-distribution. The overall algorithm, including the necessary losses, is given in Algorithm 1. In experiments, the approach is shown to allow detecting out-distribution examples nearly perfectly. Examples of the generated "hard" out-of-distribution samples are given in Figure 1.
https://i.imgur.com/NmF0fpN.png
Algorithm 1: The proposed joint training scheme of the out-distribution generator $G$, the in-/out-distribution discriminator $D$ and the original classifier providing $P_\theta(y|x)$ with parameters $\theta$.
https://i.imgur.com/kAclSQz.png
Figure 1: A comparison of a regular GAN (a and c) to the proposed framework (c and d). Clearly, the proposed approach generates out-of-distribution samples (i.e., no meaningful digits) close to the original data distribution.
The Limitations of Adversarial Training and the Blind-Spot Attack
Zhang, Huan and Chen, Hongge and Song, Zhao and Boning, Duane S. and Dhillon, Inderjit S. and Hsieh, Cho-Jui
Zhang et al. search for "blind spots" in the data distribution and show that blind-spot test examples can be used to find adversarial examples easily. On MNIST, the data distribution is approximated using kernel density estimation, where the distance metric is computed in a dimensionality-reduced feature space (of an adversarially trained model). For dimensionality reduction, t-SNE is used. Blind spots are found by slightly shifting pixels or changing the gray value of the background. Based on these blind spots, adversarial examples can easily be found for MNIST and Fashion-MNIST.
A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples
Beilun Wang and Ji Gao and Yanjun Qi
Keywords: cs.LG, cs.CR, cs.CV
Abstract: Most machine learning classifiers, including deep neural networks, are vulnerable to adversarial examples. Such inputs are typically generated by adding small but purposeful modifications that lead to incorrect outputs while imperceptible to human eyes. The goal of this paper is not to introduce a single method, but to make theoretical steps towards fully understanding adversarial examples. By using concepts from topology, our theoretical analysis brings forth the key reasons why an adversarial example can fool a classifier ($f_1$) and adds its oracle ($f_2$, like human eyes) in such analysis. By investigating the topological relationship between two (pseudo)metric spaces corresponding to predictor $f_1$ and oracle $f_2$, we develop necessary and sufficient conditions that can determine if $f_1$ is always robust (strong-robust) against adversarial examples according to $f_2$. Interestingly our theorems indicate that just one unnecessary feature can make $f_1$ not strong-robust, and the right feature representation learning is the key to getting a classifier that is both accurate and strong-robust.
Wang et al. discuss an alternative definition of adversarial examples that takes into account an oracle classifier. Adversarial perturbations are usually constrained in their norm (e.g., the $L_\infty$ norm for images); however, the main goal of this constraint is to ensure label invariance: if the image did not change notably, the label did not change either. As an alternative formulation, the authors consider an oracle for the task, e.g., humans for image classification tasks. Then, an adversarial example is defined as a slightly perturbed input whose predicted label changes but whose true label (i.e., the oracle's label) does not. Additionally, the perturbation can be constrained in some norm; specifically, the perturbation can be constrained on the true manifold of the data, as represented by the oracle classifier. Based on this notion of adversarial examples, Wang et al. argue that deep neural networks are not robust because they utilize over-complete feature representations.
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Luis Muñoz-González and Battista Biggio and Ambra Demontis and Andrea Paudice and Vasin Wongrassamee and Emil C. Lupu and Fabio Roli
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security - AISec '17 - 2017 via Local CrossRef
Munoz-Gonzalez et al. propose a multi-class data poisoning attack against deep neural networks based on back-gradient optimization. They consider the common poisoning formulation stated as follows:
$ \max_{D_c} \min_w \mathcal{L}(D_c \cup D_{tr}, w)$
where $D_c$ denotes a set of poisoned training samples and $D_{tr}$ the corresponding clean dataset. Here, the loss $\mathcal{L}$ used for training is minimized as the inner optimization problem. As a result, as long as learning itself does not have a closed-form solution, e.g., for deep neural networks, the problem is computationally infeasible. To resolve this problem, the authors propose to use back-gradient optimization. Then, the gradient with respect to the outer optimization problem can be computed while only running a limited number of iterations of the inner problem; see the paper for details. In experiments on spam/malware detection and digit classification, the approach is shown to increase the test error of the trained model with only few training examples poisoned.
MagNet: A Two-Pronged Defense against Adversarial Examples
Meng, Dongyu and Chen, Hao
ACM ACM Conference on Computer and Communications Security - 2017 via Local Bibsonomy
Meng and Chen propose MagNet, a combination of adversarial example detection and removal. At test time, given a clean or adversarial test image, the proposed defense works as follows: First, the input is passed through one or multiple detectors. If one of these detectors fires, the input is rejected. To this end, the authors consider detection based on the reconstruction error of an auto-encoder, or detection based on the divergence between the probability predictions on the input and on its reconstruction. Second, if not rejected, the input is passed through a reformer. The reformer reconstructs the input, e.g., through an auto-encoder, to remove potentially undetected adversarial noise.
UPSET and ANGRI : Breaking High Performance Image Classifiers
Sarkar, Sayantan and Bansal, Ankan and Mahbub, Upal and Chellappa, Rama
Sarkar et al. propose two "learned" adversarial example attacks, UPSET and ANGRI. The former, UPSET, learns to predict universal, targeted adversarial perturbations; the latter, ANGRI, learns to predict (non-universal, i.e., image-specific) targeted adversarial perturbations. For UPSET, a network takes the target label as input and learns to predict a perturbation which, added to the original image, results in mis-classification; for ANGRI, a network takes both the target label and the original image as input to predict a perturbation. These networks are then trained using a mis-classification loss while also minimizing the norm of the perturbation. To this end, the target classifier needs to be differentiable, i.e., UPSET and ANGRI require white-box access.
On the importance of single directions for generalization
Ari S. Morcos and David G. T. Barrett and Neil C. Rabinowitz and Matthew Botvinick
Abstract: Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activation of a single unit or some linear combination of units in response to some input) have been highlighted, but their importance has not been evaluated. Here, we connect these lines of inquiry to demonstrate that a network's reliance on single directions is a good predictor of its generalization performance, across networks trained on datasets with different fractions of corrupted labels, across ensembles of networks trained on datasets with unmodified labels, across different hyperparameters, and over the course of training. While dropout only regularizes this quantity up to a point, batch normalization implicitly discourages single direction reliance, in part by decreasing the class selectivity of individual units. Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.
Morcos et al. study the influence of ablating single units as a proxy for generalization performance. On Cifar10, for example, an 11-layer convolutional network is trained on the clean dataset as well as on versions of Cifar10 where a fraction $p$ of the samples have corrupted labels. In the latter case, the network is forced to memorize examples, as there is no inherent structure in the label assignment. Then, it is experimentally shown that these memorizing networks are less robust to setting whole feature maps to zero, i.e., ablating them. This is shown in Figure 1. Based on this result, the authors argue that the area under this ablation curve (AUC) can be used as a proxy for generalization performance. For example, early stopping or hyper-parameter selection can be done based on this AUC value. Furthermore, they show that batch normalization discourages networks from relying on these so-called single directions, i.e., single units or feature maps. Specifically, batch normalization seems to favor units holding information about multiple classes/concepts.
https://i.imgur.com/h2JwLUF.png
Figure 1: Classification accuracy (y-axis) over the number of units that are ablated (x-axis) for networks trained on Cifar10 with various degrees of corrupted labels. The same experiments (left and right) for MNIST and ImageNet.
Improving Transferability of Adversarial Examples with Input Diversity
Cihang Xie and Zhishuai Zhang and Yuyin Zhou and Song Bai and Jianyu Wang and Zhou Ren and Alan Yuille
Keywords: cs.CV, cs.LG, stat.ML
Abstract: Though CNNs have achieved the state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples --- crafted by adding human-imperceptible perturbations to clean images. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
Xie et al. propose to improve the transferability of adversarial examples by computing them on transformed input images. In particular, they adapt I-FGSM such that, in each iteration, the update is computed on a randomly transformed version of the current image with probability $p$. When additionally attacking an ensemble of networks, this is shown to improve transferability.
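A sketch of one such iteration; `grad_fn` (the loss gradient with respect to the input) and `random_transform` are hypothetical callables, and the projection onto the allowed perturbation set is omitted:

```python
import numpy as np

def diverse_input_step(x_adv, grad_fn, random_transform, alpha,
                       p=0.5, rng=None):
    """With probability p, compute the gradient on a randomly
    transformed copy of the current image; the sign of that gradient
    is then used to update the original image."""
    rng = rng or np.random.default_rng()
    inp = random_transform(x_adv) if rng.random() < p else x_adv
    return x_adv + alpha * np.sign(grad_fn(inp))
```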
Improving Network Robustness against Adversarial Attacks with Compact Convolution
Ranjan, Rajeev and Sankaranarayanan, Swami and Castillo, Carlos D. and Chellappa, Rama
Ranjan et al. propose to constrain deep features to lie on hyperspheres in order to improve robustness against adversarial examples. For the last fully-connected layer, this is achieved by the L2-softmax, which forces the features to lie on a hypersphere. For intermediate convolutional or fully-connected layers, the same effect is achieved analogously, i.e., by normalizing the inputs, scaling them, and then applying the convolution or weight multiplication. In experiments, the authors argue that this improves robustness against simple attacks such as FGSM and DeepFool.
Regularizing Neural Networks by Penalizing Confident Output Distributions
Gabriel Pereyra and George Tucker and Jan Chorowski and Łukasz Kaiser and Geoffrey Hinton
Keywords: cs.NE, cs.LG
Abstract: We systematically explore regularizing neural networks by penalizing low entropy output distributions. We show that penalizing low entropy output distributions, which has been shown to improve exploration in reinforcement learning, acts as a strong regularizer in supervised learning. Furthermore, we connect a maximum entropy based confidence penalty to label smoothing through the direction of the KL divergence. We exhaustively evaluate the proposed confidence penalty and label smoothing on 6 common benchmarks: image classification (MNIST and Cifar-10), language modeling (Penn Treebank), machine translation (WMT'14 English-to-German), and speech recognition (TIMIT and WSJ). We find that both label smoothing and the confidence penalty improve state-of-the-art models across benchmarks without modifying existing hyperparameters, suggesting the wide applicability of these regularizers.
Pereyra et al. propose an entropy regularizer for penalizing over-confident predictions of deep neural networks. Specifically, given the predicted distribution $p_\theta(y_i|x)$ for labels $y_i$ and network parameters $\theta$, a regularizer
$-\beta \max(0, \Gamma - H(p_\theta(y|x)))$
is added to the learning objective. Here, $H$ denotes the entropy and $\beta$, $\Gamma$ are hyper-parameters that weight and limit the regularizer's influence. In experiments, this regularizer shows slightly improved performance on MNIST and Cifar-10.
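One way to implement the thresholded penalty is as a term added to a minimized loss (equivalently, its negation is added to a maximized objective); a minimal sketch:

```python
import numpy as np

def confidence_penalty(probs, beta, gamma):
    """beta * max(0, Gamma - H(p)): only penalize predictions whose
    entropy H(p) falls below the threshold Gamma."""
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return beta * max(0.0, gamma - entropy)
```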
Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer
Hsueh-Ti Derek Liu and Michael Tao and Chun-Liang Li and Derek Nowrouzezahrai and Alec Jacobson
Keywords: cs.LG, cs.CV, cs.GR, stat.ML
Abstract: Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification. Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm. This evaluation, however, has limited practical utility since perturbations in the pixel space do not correspond to underlying real-world phenomena of image formation that lead to them and has no security motivation attached. Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene. As such, we propose the direct perturbation of physical parameters that underly image formation: lighting and geometry. As such, we propose a novel evaluation measure, parametric norm-balls, by directly perturbing physical parameters that underly image formation. One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry. Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow.
Liu et al. propose adversarial attacks on the physical parameters underlying an image, which can be manipulated efficiently through a differentiable renderer. In particular, they propose adversarial lighting and adversarial geometry; in both cases, an image is assumed to be a function of lighting and geometry, generated by a differentiable renderer. By directly manipulating these latent variables, more realistic-looking adversarial examples can be generated for synthetic images, as shown in Figure 1.
https://i.imgur.com/uh2pj9w.png
Figure 1: Comparison of the proposed attack with known attacks applied to large perturbations, $L_\infty \approx 0.82$.
Enhanced Attacks on Defensively Distilled Deep Neural Networks
Liu, Yujia and Zhang, Weiming and Li, Shaohua and Yu, Nenghai
Liu et al. propose a white-box attack against defensive distillation. In particular, the proposed attack combines the objective of the Carlini and Wagner attack [1] with a slightly different reparameterization to enforce an $L_\infty$ constraint on the perturbation. In experiments, defensive distillation is shown not to be robust against this attack.
[1] Nicholas Carlini, David A. Wagner: Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy 2017: 39-57
Zhou, Yan and Kantarcioglu, Murat and Xi, Bowei
Zhou et al. study transferability of adversarial examples against ensembles of randomly perturbed networks. Specifically, they consider randomly perturbing the weights using Gaussian additive noise. Using an ensemble of these perturbed networks, the authors show that transferability of adversarial examples decreases significantly. However, the authors do not consider adapting their attack to this defense scenario.
Cost-Sensitive Robustness against Adversarial Examples
Zhang, Xiao and Evans, David
Zhang and Evans propose cost-sensitive certified robustness, where different adversarial examples can be weighted based on their actual impact for the application. Specifically, they consider the certified robustness formulation (and the corresponding training scheme) by Wong and Kolter. This formulation is extended by acknowledging that different adversarial examples have different impact for specific applications; this is formalized through a cost matrix which quantifies which source-target label combinations of adversarial examples are actually harmful. Based on this cost matrix, cost-sensitive certified robustness as well as the corresponding training scheme is proposed and evaluated in experiments.
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim and Martin Wattenberg and Justin Gilmer and Carrie Cai and James Wexler and Fernanda Viegas and Rory Sayres
Keywords: stat.ML
Abstract: The interpretation of deep learning models is a challenge due to their size, complexity, and often opaque internal state. In addition, many systems, such as image classifiers, operate on low-level features rather than high-level concepts. To address these challenges, we introduce Concept Activation Vectors (CAVs), which provide an interpretation of a neural net's internal state in terms of human-friendly concepts. The key idea is to view the high-dimensional internal state of a neural net as an aid, not an obstacle. We show how to use CAVs as part of a technique, Testing with CAVs (TCAV), that uses directional derivatives to quantify the degree to which a user-defined concept is important to a classification result--for example, how sensitive a prediction of "zebra" is to the presence of stripes. Using the domain of image classification as a testing ground, we describe how CAVs may be used to explore hypotheses and generate insights for a standard image classification network as well as a medical application.
Kim et al. propose Concept Activation Vectors (CAVs) that represent the direction of features corresponding to specific human-interpretable concepts. In particular, given a network for a classification task, a concept is defined by a set of images exhibiting that concept. A linear classifier is then trained to distinguish images with the concept from random images without the concept, based on a chosen feature layer. The normal of the obtained linear decision boundary corresponds to the learned Concept Activation Vector (CAV). Considering the directional derivative along this direction for a given input then allows quantifying how well the input aligns with the chosen concept. This way, images can be ranked and the model's sensitivity to particular concepts can be quantified. The idea is also illustrated in Figure 1.
https://i.imgur.com/KOqPeag.png
Figure 1: Process of constructing Concept Activation Vectors (CAVs).
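A sketch of the two steps, assuming the layer activations and the gradient of the class logit with respect to that layer are precomputed:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """Fit a linear classifier separating concept activations from
    random activations; the normalized normal of its decision
    boundary is the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)),
                        np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_.ravel()
    return v / np.linalg.norm(v)

def concept_sensitivity(logit_grad, cav):
    """Directional derivative of the class logit along the CAV."""
    return float(np.dot(logit_grad, cav))
```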
Black-box Adversarial Attacks with Limited Queries and Information
Andrew Ilyas and Logan Engstrom and Anish Athalye and Jessy Lin
Keywords: cs.CV, cs.CR, stat.ML
Abstract: Current neural network-based classifiers are susceptible to adversarial examples even in the black-box setting, where the attacker only has query access to the model. In practice, the threat model for real-world systems is often more restrictive than the typical black-box model where the adversary can observe the full output of the network on arbitrarily many chosen inputs. We define three realistic threat models that more accurately characterize many real-world classifiers: the query-limited setting, the partial-information setting, and the label-only setting. We develop new attacks that fool classifiers under these more restrictive threat models, where previous methods would be impractical or ineffective. We demonstrate that our methods are effective against an ImageNet classifier under our proposed threat models. We also demonstrate a targeted black-box attack against a commercial classifier, overcoming the challenges of limited query access, partial information, and other practical issues to break the Google Cloud Vision API.
Ilyas et al. propose three query-efficient black-box adversarial example attacks using distribution-based gradient estimation. In particular, their simplest attack involves estimating the gradient locally using a search distribution:
$ \nabla_x \mathbb{E}_{\pi(\theta|x)} [F(\theta)] = \mathbb{E}_{\pi(\theta|x)} [F(\theta) \nabla_x \log(\pi(\theta|x))]$
where $F(\cdot)$ is a loss function, e.g., the cross-entropy loss, which is maximized to obtain an adversarial example. Using a Gaussian search distribution, the above equation leads to a simple estimator of the gradient:
$\nabla \mathbb{E}[F(\theta)] \approx \frac{1}{\sigma n} \sum_{i = 1}^n \delta_i F(\theta + \sigma \delta_i)$
where $\sigma$ is the search variance and the $\delta_i$ are sampled from a unit Gaussian. This scheme can then be applied as part of a projected gradient descent attack, as in the white-box setting, to obtain adversarial examples.
The above attack assumes that the black-box network provides probability outputs in order to compute the loss $F$. In the remainder of the paper, the authors also generalize this approach to the label-only case, where the network only provides the top $k$ labels for each input. In experiments, the attack is shown to be effective while rarely requiring more than $50$k queries on ImageNet.
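A sketch of the basic estimator above; `loss_fn` is a hypothetical query-only function returning, e.g., the cross-entropy of the target model:

```python
import numpy as np

def estimate_gradient(loss_fn, x, sigma=0.01, n=50, rng=None):
    """Search-distribution gradient estimate:
    (1 / (sigma * n)) * sum_i delta_i * F(x + sigma * delta_i)."""
    rng = rng or np.random.default_rng()
    grad = np.zeros(x.shape)
    for _ in range(n):
        delta = rng.standard_normal(x.shape)
        grad += delta * loss_fn(x + sigma * delta)
    return grad / (sigma * n)
```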
On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks
Ambra Demontis and Marco Melis and Maura Pintor and Matthew Jagielski and Battista Biggio and Alina Oprea and Cristina Nita-Rotaru and Fabio Roli
Keywords: cs.LG, cs.CR, stat.ML, 68T10, 68T45
Abstract: Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model. Studying transferability of attacks has gained interest in the last years due to the deployment of cyber-attack detection services based on machine learning. For these applications of machine learning, service providers avoid disclosing information about their machine-learning algorithms. As a result, attackers trying to bypass detection are forced to craft their attacks against a surrogate model instead of the actual target model used by the service. While previous work has shown that finding test-time transferable attack samples is possible, it is not well understood how an attacker may construct adversarial examples that are likely to transfer against different models, in particular in the case of training-time poisoning attacks. In this paper, we present the first empirical analysis aimed to investigate the transferability of both test-time evasion and training-time poisoning attacks. We provide a unifying, formal definition of transferability of such attacks and show how it relates to the input gradients of the surrogate and of the target classification models. We assess to which extent some of the most well-known machine-learning systems are vulnerable to transfer attacks, and explain why such attacks succeed (or not) across different models. To this end, we leverage some interesting connections highlighted in this work among the adversarial vulnerability of machine-learning models, their regularization hyperparameters and input gradients.
Demontis et al. study the transferability of adversarial examples and data poisoning attacks in light of the targeted model's gradients. In particular, they experimentally validate the following hypotheses: First, susceptibility to these attacks depends on the size of the model's gradients; the larger the gradient, the smaller the perturbation needed to increase the loss. Second, the size of the gradient depends on regularization. And third, the cosine similarity between the target model's gradients and the surrogate model's gradients (the surrogate being used to compute transferable attacks) influences vulnerability. These insights hold for both evasion and poisoning attacks and are motivated by a simple linear Taylor expansion of the target model's loss.
Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples
Tao, Guanhong and Ma, Shiqing and Liu, Yingqi and Zhang, Xiangyu
Tao et al. propose Attacks Meet Interpretability (AmI), an adversarial example detection scheme based on the interpretability of individual neurons. In the context of face recognition, in a first step, the authors identify neurons that correspond to specific face attributes. This is achieved by constructing sets of images where only specific attributes change and then investigating which neurons fire. In a second step, all other neurons, i.e., neurons not corresponding to any meaningful face attribute, are weakened in order to improve robustness against adversarial examples. The idea is that adversarial examples make use of these non-interpretable neurons to fool the network. Unfortunately, this defense has been shown not to be effective in [1].
[1] Nicholas Carlini. Is AmI (Attacks Meet Interpretability) Robust to Adversarial Examples? ArXiv.org, abs/1902.02322, 2019.
Adversarial Dropout for Supervised and Semi-Supervised Learning
Park, Sungrae and Park, Jun-Keon and Shin, Su-Jin and Moon, Il-Chul
Park et al. introduce adversarial dropout, a variant of adversarial training based on adversarially computing dropout masks. Specifically, instead of training on adversarial examples, the authors propose an efficient method to compute adversarial dropout masks during training. In experiments, this approach seems to improve generalization performance in semi-supervised settings.
Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
Liu, Kang and Dolan-Gavitt, Brendan and Garg, Siddharth
Springer RAID - 2018 via Local Bibsonomy
Liu et al. propose fine-pruning, a combination of weight pruning and fine-tuning to defend against backdoor attacks on neural networks. Specifically, they consider a setting where training is outsourced to a machine learning service; the attacker has access to the network and the training set, but any change in network architecture would be easily detected. Thus, the attacker tries to inject backdoors through data poisoning. As a defense against such attacks, the authors propose to identify and prune weights that are not used for the actual task but only for backdoor inputs. This defense can then be combined with fine-tuning and, as shown in experiments, makes backdoor attacks less effective, even when considering an attacker aware of this defense.
On the Geometry of Adversarial Examples
Khoury, Marc and Hadfield-Menell, Dylan
Khoury and Hadfield-Menell provide two important theoretical insights regarding adversarial robustness: it is impossible to be robust in terms of all norms, and adversarial training is sample inefficient. Specifically, they study robustness in relation to the problem's codimension, i.e., the difference between the dimensionality of the embedding space (e.g., image space) and the dimensionality of the manifold (where the data is assumed to actually live on). Then, adversarial training is shown to be sample inefficient in high codimensions.
The Limitations of Model Uncertainty in Adversarial Settings
Grosse, Kathrin and Pfaff, David and Smith, Michael T. and Backes, Michael
Grosse et al. show that Gaussian Processes allow to reject some adversarial examples based on their confidence and uncertainty; however, attacks maximizing confidence and minimizing uncertainty are still successful. While some state-of-the-art adversarial examples seem to result in significantly different confidence and uncertainty estimates compared to benign examples, Gaussian Processes can still be fooled through particularly crafted adversarial examples. To this end, the confidence is explicitly maximized and, additionally, the uncertainty is constrained to not be larger than the uncertainty of the corresponding benign test example. In experiments, this attack is shown to successfully fool Gaussian Processes while resulting in imperceptible perturbations.
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples
Dong, Yinpeng and Bao, Fan and Su, Hang and Zhu, Jun
Dong et al. study interpretability in the context of adversarial examples and propose a variant of adversarial training to improve interpretability. First the authors argue that neurons do not preserve their interpretability on adversarial examples; e.g., neurons corresponding to high-level concepts such as "bird" or "dog" do not fire consistently on adversarial examples. This result is also validated experimentally, by considering deep representations at different layers. To improve interpretability, the authors propose adversarial training with an additional regularizer enforcing similar features on true and adversarial training examples.
The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets
Carlini, Nicholas and Liu, Chang and Kos, Jernej and Erlingsson, Úlfar and Song, Dawn
Carlini et al. propose several attacks to extract secrets from trained black-box models. Additionally, they show that state-of-the-art neural networks memorize secrets early during training. Particularly on the Penn Treebank, after inserting a secret of a specific format, the authors validate that the secret can be identified based on the model's output probabilities (i.e., black-box access). Several metrics based on the log-perplexity of the secret show that secrets are memorized early during training and that memorization happens for all popular architectures and training strategies; additionally, memorization also works for multiple secrets. Furthermore, the authors propose several attacks to extract secrets, most notably through shortest-path search. Here, starting with an empty secret, the characters of the secret are identified sequentially in order to minimize log-perplexity. Using this attack, secrets such as credit card numbers are extractable from popular mail datasets.
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
Xiaoyu Cao and Neil Zhenqiang Gong
Proceedings of the 33rd Annual Computer Security Applications Conference on - ACSAC 2017 - 2017 via Local CrossRef
Cao and Gong introduce region-based classification as a defense against adversarial examples. In particular, given an input (benign test input or adversarial example), the method samples random points in the neighborhood of the input and classifies the test sample according to the majority vote of the obtained labels.
Curriculum Adversarial Training
Qi-Zhi Cai and Chang Liu and Dawn Song
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence - 2018 via Local CrossRef
Cai et al. propose so-called curriculum adversarial training where adversarial training is applied to increasingly strong attacks. Specifically, considering a gradient-based, iterative attack such as projected gradient descent, a common proxy for the strength of the attack is the number of iterations. To avoid issues with forgetting old adversarial examples and reduced accuracy, the authors propose to apply adversarial training with different numbers of iterations. In each turn (called lesson in the paper), the network is trained adversarially for a given number of iterations until the network has high accuracy against this attack; then, the number of iterations is increased and another "lesson" is started. In experiments, this method is shown to outperform standard adversarial training.
AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation
Gehr, Timon and Mirman, Matthew and Drachsler-Cohen, Dana and Tsankov, Petar and Chaudhuri, Swarat and Vechev, Martin T.
IEEE Computer Society IEEE Symposium on Security and Privacy - 2018 via Local Bibsonomy
Gehr et al. propose a method based on abstract interpretation to verify robustness guarantees of neural networks. First of all, I want to note that (in contrast to most work in adversarial robustness) the proposed method is not intended to improve robustness, but to obtain robustness certificates. Without going into details, abstract interpretation allows verifying conditions (e.g., robustness) of a function (e.g., a neural network) based on abstractions of the input. In particular, a norm ball around a test sample (as typically considered in adversarial robustness) is abstracted using box constraints or polyhedra, leading to an over-approximation of the norm ball; by transforming these abstractions according to the layers of the network, the network's output can be checked against robustness conditions without running the network on all individual points in the norm ball. As a result, if the proposed method certifies robustness for a given input sample and an area around it, the network is indeed robust in this area (soundness). If not, the network might indeed not be robust, or robustness could not be certified due to the method's over-approximation. For details, I refer to the paper as well as the follow-up works [1] and [2].
[1] Matthew Mirman, Timon Gehr, Martin T. Vechev: Differentiable Abstract Interpretation for Provably Robust Neural Networks. ICML 2018: 3575-3583
[2] Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin T. Vechev: Fast and Effective Robustness Certification. NeurIPS 2018: 10825-10836
Towards Robust Interpretability with Self-Explaining Neural Networks
Alvarez-Melis, David and Jaakkola, Tommi S.
Alvarez-Melis and Jaakkola propose three requirements for self-explainable models (explicitness, faithfulness and stability) and construct a self-explainable, generalized linear model optimizing for these properties. In particular, the proposed model has the form
$f(x) = \theta(x)^T h(x)$
where $\theta(x)$ are input-dependent coefficients (e.g., computed by a deep network) and $h(x)$ are interpretable features/concepts. In practice, these concepts are learned using an auto-encoder from the raw input, while the latent code, which represents $h(x)$, is regularized to learn concepts under weak supervision. Additionally, the classifier is regularized to be locally difference-bounded by the concept function $h(x)$. This means that for each point $x_0$ it holds that
$\|f(x) - f(x_0)\| \leq L \|h(x) - h(x_0)\|$ for all $x$ with $\|x - x_0\| \leq \delta$
for some $\delta$ and $L$. This condition leads to some stability of interpretations with respect to the concepts $h(x)$. In practice, this is enforced through a regularizer.
In experiments, the authors argue that this class of models has advantages regarding the following three properties of self-explainable models: explicitness, i.e., whether explanations are actually understandable, faithfulness, i.e. whether estimated importance of features reflects true relevance, and stability, i.e., robustness of interpretations against small perturbations. For some of these conditions, the authors propose quantitative metrics; robustness, for example, can be evaluated using
$\arg\max_{\|x' - x\|\leq\epsilon} \frac{\|f(x) - f(x')\|}{\|h(x) - h(x')\|}$
which is very similar to practically evaluating adversarial robustness.
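A hypothetical way to estimate this quantity is sketched below, assuming `model(x)` returns the pair (prediction $f(x)$, concepts $h(x)$); in practice a gradient-based search would replace the Monte-Carlo sampling used here.

```python
import torch

@torch.no_grad()
def interpretation_instability(model, x, eps=0.1, n_samples=256):
    """Monte-Carlo estimate of max ||f(x) - f(x')|| / ||h(x) - h(x')|| over the
    L_inf ball of radius eps around x."""
    f0, h0 = model(x)
    worst = 0.0
    for _ in range(n_samples):
        x_prime = x + eps * (2 * torch.rand_like(x) - 1)   # uniform sample in the ball
        f1, h1 = model(x_prime)
        ratio = torch.norm(f1 - f0) / (torch.norm(h1 - h0) + 1e-8)
        worst = max(worst, ratio.item())
    return worst
```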
Efficient Repair of Polluted Machine Learning Systems via Causal Unlearning
Yinzhi Cao and Alexander Fangxiao Yu and Andrew Aday and Eric Stahl and Jon Merwine and Junfeng Yang
Proceedings of the 2018 on Asia Conference on Computer and Communications Security - ASIACCS '18 - 2018 via Local CrossRef
Cao et al. propose KARMA, a method to defend against data poisoning in an online learning system where training examples are obtained through crowdsourcing. The setting, however, is somewhat constrained and can be described as human-in-the-loop. In particular, there is the system, which is maintained by an administrator, and there are users, among whom there might be users with malicious intent, i.e., attackers. KARMA consists of two steps: identifying (possibly polluted) training examples that cause mis-classification of samples within a small oracle set, and then correcting these problems by removing clusters of polluted samples.
SoK: Science, Security and the Elusive Goal of Security as a Scientific Pursuit
Herley, Cormac and van Oorschot, Paul C.
Herley and van Oorschot explore how to make security research more scientific. In particular, they discuss different historic notions of what "scientific" means and relate these insights to current practices in security research. I want to discuss only two points that I found very insightful. First, there seems to be a misalignment between formal methods and empirical methods. While some researchers argue for more mathematically verifiable security methods, others claim that attackers do not care about mathematical proofs, and even provably secure systems can be implemented insecurely. And second, security is often based on unfalsifiable claims. This is problematic, as research findings that cannot be refuted by any observable event are generally considered "unscientific". In security, however, it can easily be shown that a system/method is insecure, while there is no possible observation that would establish its security.
Model-Reuse Attacks on Deep Learning Systems
Yujie Ji and Xinyang Zhang and Shouling Ji and Xiapu Luo and Ting Wang
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security - CCS '18 - 2018 via Local CrossRef
Ji et al. propose a model-reuse, or trojaning, attack against neural networks that deliberately manipulates specific weights. In particular, given a specific input, the attacker intends to manipulate the model into mis-classifying this input. This is achieved by first generating semantic neighbors of the input, e.g., through transformations or noise, and then identifying salient features for these inputs. These features are correlated with the classifier's output, i.e., some of them have a positive impact on classification and some a negative one. The model is fine-tuned by actively adapting the identified features until the target input is mis-classified.
Playing the Game of Universal Adversarial Perturbations
Julien Perolat and Mateusz Malinowski and Bilal Piot and Olivier Pietquin
Keywords: cs.LG, cs.CV, stat.ML
Abstract: We study the problem of learning classifiers robust to universal adversarial perturbations. While prior work approaches this problem via robust optimization, adversarial training, or input transformation, we instead phrase it as a two-player zero-sum game. In this new formulation, both players simultaneously play the same game, where one player chooses a classifier that minimizes a classification loss whilst the other player creates an adversarial perturbation that increases the same loss when applied to every sample in the training set. By observing that performing a classification (respectively creating adversarial samples) is the best response to the other player, we propose a novel extension of a game-theoretic algorithm, namely fictitious play, to the domain of training robust classifiers. Finally, we empirically show the robustness and versatility of our approach in two defence scenarios where universal attacks are performed on several image classification datasets -- CIFAR10, CIFAR100 and ImageNet.
Pérolat et al. propose a game-theoretic variant of adversarial training against universal adversarial perturbations. In particular, in each training round, the model is trained for a specific number of iterations on the current training set. Afterwards, a universal perturbation that fools the network is computed, and the correspondingly perturbed images are added to the training set as adversarial examples. In the next round, the network is trained on the new training set, which includes these adversarial examples. Overall, this leads to a network being trained on a sequence of universal adversarial perturbations corresponding to earlier versions of that network.
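A rough sketch of this alternating scheme is given below; `find_universal_perturbation(model, loader)` is an assumed helper (for instance, the universal-perturbation algorithm of Moosavi-Dezfooli et al.), and the replay strategy is simplified compared to the paper's fictitious-play formulation.

```python
import torch
import torch.nn.functional as F

def fictitious_play_training(model, optimizer, loader, find_universal_perturbation,
                             rounds=10, device="cpu"):
    """Alternate between training the classifier and computing a universal
    perturbation for the current model; perturbed batches are kept and replayed
    in later rounds."""
    replay = []                                   # adversarial batches from earlier rounds
    for _ in range(rounds):
        for i, (x, y) in enumerate(loader):
            x, y = x.to(device), y.to(device)
            if replay:                            # mix in one stored adversarial batch
                xa, ya = replay[i % len(replay)]
                x, y = torch.cat([x, xa.to(device)]), torch.cat([y, ya.to(device)])
            optimizer.zero_grad()
            F.cross_entropy(model(x), y).backward()
            optimizer.step()
        delta = find_universal_perturbation(model, loader).cpu()
        x0, y0 = next(iter(loader))               # store one perturbed batch for replay
        replay.append(((x0 + delta).clamp(0, 1).detach(), y0))
    return model
```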
Secure Kernel Machines against Evasion Attacks
Paolo Russu and Ambra Demontis and Battista Biggio and Giorgio Fumera and Fabio Roli
Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security - ALSec '16 - 2016 via Local CrossRef
Russu et al. discuss the robustness of linear and non-linear kernel machines obtained through regularization. In particular, they show that linear classifiers can easily be regularized to be robust. In fact, robustness against $L_\infty$-bounded adversarial examples can be achieved through $L_1$ regularization of the weights. More generally, robustness against $L_p$ attacks is countered by $L_q$ regularization of the weights, with $\frac{1}{p} + \frac{1}{q} = 1$. These insights are generalized to the case of non-linear kernel machines; I refer to the paper for details.
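The $L_\infty$/$L_1$ pairing can be made concrete for a linear model: by Hölder's inequality, the worst-case score change inside an $L_\infty$ ball of radius $\epsilon$ is $\epsilon\|w\|_1$, so penalizing $\|w\|_1$ directly bounds the adversarial loss. A small illustrative helper (not from the paper) is shown below.

```python
import numpy as np

def worst_case_margin_linear(w, b, x, y, eps):
    """For a linear score f(x) = w.x + b and label y in {-1, +1}, the minimum of
    y * f(x + delta) over ||delta||_inf <= eps equals y * f(x) - eps * ||w||_1
    (Hoelder's inequality), so an L1 penalty on w controls L_inf robustness."""
    return y * (w @ x + b) - eps * np.abs(w).sum()
```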
Progressive Neural Networks
Andrei A. Rusu and Neil C. Rabinowitz and Guillaume Desjardins and Hubert Soyer and James Kirkpatrick and Koray Kavukcuoglu and Razvan Pascanu and Raia Hadsell
Keywords: cs.LG
Abstract: Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
Rusu et al. propose progressive networks, sets of networks allowing transfer learning over multiple tasks without forgetting. The key idea of progressive networks is very simple. Instead of fine-tuning a model (for transfer learning), the pre-trained model is taken and its weights are fixed. Another network is then trained from scratch while receiving features from the pre-trained network as additional input.
Specifically, the authors consider a sequence of tasks. For the first task, a deep neural network (e.g. multi-layer perceptron) is trained. Assuming $L$ layers with hidden activations $h_i^{(1)}$ for $i \leq L$, each layer computes
$h_i^{(1)} = f(W_i^{(1)} h_{i-1}^{(1)})$
where $f$ is an activation function and, for $i = 1$, the network input is used. After training until convergence, a second network is trained, now on a different task. The parameters of the first network are fixed, but the second network can use the features of the first one:
$h_i^{(2)} = f(W_i^{(2)} h_{i-1}^{(2)} + U_i^{(2:1)}h_{i-1}^{(1)})$.
This idea can be generalized to the $k$-th network, which can use the activations from all the previous networks:
$h_i^{(k)} = f(W_i^{(k)} h_{i-1}^{(k)} + \sum_{j < k} U_i^{(k:j)} h_{i-1}^{(j)})$.
For three networks, this is illustrated in Figure 1.
https://i.imgur.com/ndyymxY.png
Figure 1: An illustration of the feature transfer between networks.
In practice, however, this approach results in an explosion of parameters and computation. Therefore, the authors apply a dimensionality reduction to the $h_{i-1}^{(j)}$ for $j < k$. Additionally, an individual scaling factor is used to account for different ranges used in the different networks (also depending on the input data). Then, the above equation can be rewritten as
$h_i^{(k)} = f\big(W_i^{(k)} h_{i-1}^{(k)} + U_i^{(k)} f(V_i^{(k)} \alpha_i^{(:k)} h_{i-1}^{(:k)})\big)$.
(Note that the notation has been adapted slightly, as I found the original notation misleading.) Here, $h_{i-1}^{(:k)}$ denotes the concatenated features from all networks $j < k$. Similarly, for each network, one $\alpha_i^{(j)}$ is learned to scale the features (note that the notation above implies an element-wise multiplication with the $\alpha_i^{(j)}$'s repeated in a vector, or equivalently a matrix-vector product with a diagonal matrix). $V_i^{(k)}$ then describes a dimensionality reduction; overall, a one-layer perceptron is used to "transfer" features from the networks $j < k$ to the current network. The same approach can also be applied to convolutional layers (e.g., a $1 \times 1$ convolution can be used for dimensionality reduction).
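A hypothetical PyTorch sketch of one such column with a single hidden layer is shown below; layer sizes, the adapter, and the scaling vector are illustrative stand-ins for the $V_i^{(k)}$, $U_i^{(k)}$, and $\alpha_i^{(:k)}$ of the equations above.

```python
import torch
import torch.nn as nn

class ProgressiveColumn(nn.Module):
    """One column (task k) of a progressive network with an adapter-based lateral
    connection; `prev_h1` holds the detached, concatenated hidden activations of
    the frozen earlier columns."""
    def __init__(self, in_dim, hidden, out_dim, prev_dim):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        self.alpha = nn.Parameter(torch.ones(prev_dim))          # learned per-feature scale
        self.adapter = nn.Linear(prev_dim, hidden, bias=False)   # dimensionality reduction V
        self.lateral = nn.Linear(hidden, out_dim, bias=False)    # lateral connection U

    def forward(self, x, prev_h1):
        h1 = torch.relu(self.fc1(x))
        lateral = self.lateral(torch.relu(self.adapter(self.alpha * prev_h1)))
        return torch.relu(self.fc2(h1) + lateral)
```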
In experiments, the authors show that progressive networks allow efficient transfer learning (efficient in terms of faster training). Additionally, they study which features are actually transferred.
Are adversarial examples inevitable?
Shafahi, Ali and Huang, W. Ronny and Studer, Christoph and Feizi, Soheil and Goldstein, Tom
Shafahi et al. discuss fundamental limits of adversarial robustness, showing that adversarial examples are, to some extent, inevitable. Specifically, for the unit sphere, the unit cube, as well as for different attacks (e.g., sparse attacks and dense attacks), the authors show that adversarial examples likely exist. The provided theoretical arguments also give some insight into which problems are more (or less) robust. For example, more concentrated class distributions seem to be more robust by construction. Overall, these insights lead the authors to several interesting conclusions: First, the results are likely to extend to datasets which actually live on low-dimensional manifolds of the unit sphere/cube. Second, one needs to differentiate between the existence of adversarial examples and our ability to compute them efficiently; making it harder to compute adversarial examples might, thus, be a valid defense mechanism. And third, the results suggest that lower-dimensional data might be less susceptible to adversarial examples.
Universal Adversarial Training
Shafahi, Ali and Najibi, Mahyar and Xu, Zheng and Dickerson, John P. and Davis, Larry S. and Goldstein, Tom
Shafahi et al. propose universal adversarial training, i.e., training on universal adversarial examples. In contrast to regular adversarial examples, universal ones represent perturbations that cause a network to mis-classify many test images at once. In contrast to regular adversarial training, where several additional iterations are required on each batch of images, universal adversarial training only needs one additional forward/backward pass per batch. The perturbations obtained on each batch are accumulated into a single universal adversarial perturbation. This makes adversarial training more efficient; however, it reduces robustness significantly.
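A rough sketch of such a loop is given below, assuming images in $[0,1]$ and a single shared perturbation updated by one signed-gradient step per batch; step sizes and clipping are illustrative.

```python
import torch
import torch.nn.functional as F

def universal_adversarial_training(model, loader, optimizer, eps=0.03, step=0.01,
                                   epochs=10, device="cpu"):
    """Maintain one universal perturbation delta and update it with a single extra
    forward/backward pass per batch, instead of a multi-step attack per batch."""
    delta = torch.zeros_like(next(iter(loader))[0][0], device=device)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            # ascent step on the shared perturbation
            d = delta.clone().requires_grad_(True)
            F.cross_entropy(model((x + d).clamp(0, 1)), y).backward()
            delta = (delta + step * d.grad.sign()).clamp(-eps, eps).detach()
            # descent step on the model, using the updated perturbation
            optimizer.zero_grad()
            F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
            optimizer.step()
    return model, delta
```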
On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations
Cheney, Nicholas and Schrimpf, Martin and Kreiman, Gabriel
Cheney et al. study the robustness of deep neural networks, especially AlexNet, with regard to randomly dropping or perturbing weights. In particular, the authors consider three types of perturbations: synapse knockouts set random weights to zero, node knockouts set all weights corresponding to a set of neurons to zero, and weight perturbations add random Gaussian noise to the weights of a specific layer. These perturbations are studied on AlexNet, considering the top-5 accuracy on ImageNet; perturbations are applied per layer. For example, Figure 1 (left) shows the influence on accuracy when knocking out synapses. As can be seen, the lower layers, especially the first convolutional layer, are impacted significantly by these perturbations. Similar observations, Figure 1 (right), are made for random perturbations of weights, although the impact is less significant. Especially high-level features, i.e., the corresponding layers, seem to be robust to this kind of perturbation. The authors also provide evidence that these results extend to the top-1 accuracy, as well as to other architectures. For VGG, however, the impact is significantly less pronounced, which may also be due to the employed dropout layers.
https://i.imgur.com/78T6Gg2.png
Figure 1: Left: Influence of setting weights in the corresponding layers to zero. Right: Influence of randomly perturbing weights of specific layers. Experiments are on ImageNet using AlexNet.
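The three perturbation types can be reproduced with a few lines of PyTorch; the following sketch (not the authors' code) applies them in place to a given layer.

```python
import torch

@torch.no_grad()
def perturb_layer(layer, mode, fraction=0.1, sigma=0.01):
    """Apply one perturbation type to a linear/conv layer in place:
    'synapse' knockout, 'node' knockout, or Gaussian weight noise."""
    W = layer.weight
    if mode == "synapse":                  # zero a random fraction of individual weights
        mask = torch.rand_like(W) < fraction
        W[mask] = 0.0
    elif mode == "node":                   # zero all weights of a random fraction of output units
        units = torch.rand(W.shape[0], device=W.device) < fraction
        W[units] = 0.0
    elif mode == "noise":                  # add Gaussian noise to every weight
        W.add_(sigma * torch.randn_like(W))
```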
Adversarial Initialization - when your network performs the way I want
Grosse, Kathrin and Trost, Thomas Alexander and Mosbach, Marius and Backes, Michael and Klakow, Dietrich
Grosse et al. propose an adversarial attack on a deep neural network's weight initialization in order to damage accuracy or convergence. An attacker with access to the used deep learning library is assumed. The attacker has no knowledge about the training data or the addressed task; however, the attacker has knowledge (through the library's API) about the network architecture and its initialization. The goal of the attacker is to permute the initialized weights, without being detected, in order to hinder training. In particular, as illustrated in Figure 1 for two fully connected layers described by
$y(x) = \text{ReLU}(B \text{ReLU}(Ax + a) + b)$,
the attack tries to force a large part of the neurons to have zero activation from the very beginning. The attack assumes non-negative input, e.g., images in $[0,1]$, as well as ReLU activations in order to zero out the selected neurons. In Figure 1, this is achieved by permuting the weights in order to concentrate the negative values in a specific part of the weight matrix. Consecutive application of both weight matrices results in most activations being zero. This hinders training significantly, as no gradients are available, while keeping the statistics of the weights (e.g., mean and variance) unchanged. A similar strategy can be applied to consecutive convolutional layers, as discussed in detail in the paper. Additionally, slightly shifting the weights in each weight matrix allows the attacker to control the approximate number of neurons that receive zero activations; this gives control over the "degree" of damage, i.e., whether the network should diverge or just achieve lower accuracy. In experiments, the authors show that the proposed attacks on weight initialization can force training to diverge or to reach lower accuracy. However, in the majority of cases, training diverges, which makes the attack less stealthy, i.e., easier to detect by the attacked user.
https://i.imgur.com/wqwhYFL.png
https://i.imgur.com/2zZMOYW.png
Figure 1: Illustration of the idea of the proposed attacks on two fully connected layers as described in the text. The color coding illustrates large, usually positive, weight values in black and small, often negative, weight values in light gray.
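The following simplified numpy sketch (not the paper's exact construction, which manipulates two consecutive layers and uses shifting to tune the damage) illustrates the core idea: permuting the entries of an initialized weight matrix preserves its mean and variance but, by concentrating the negative values in chosen rows, silences those ReLU units for non-negative inputs.

```python
import numpy as np

def adversarial_permutation(W, damaged_rows=0.5, rng=None):
    """Permute the entries of an initialized weight matrix so that the most negative
    values are concentrated in a block of rows; the multiset of weights (hence mean
    and variance) is unchanged, but those rows tend to produce negative
    pre-activations for non-negative inputs, which ReLU maps to zero."""
    rng = np.random.default_rng() if rng is None else rng
    out_dim, in_dim = W.shape
    n_dead = int(damaged_rows * out_dim)
    values = np.sort(W.flatten())                         # most negative entries first
    W_adv = np.empty_like(values)
    W_adv[: n_dead * in_dim] = values[: n_dead * in_dim]  # negatives go to the damaged rows
    W_adv[n_dead * in_dim:] = rng.permutation(values[n_dead * in_dim:])
    return W_adv.reshape(out_dim, in_dim)

# sanity check: statistics preserved, many dead ReLU units on a non-negative input
W = 0.1 * np.random.randn(100, 50)
W_adv = adversarial_permutation(W)
x = np.random.rand(50)                                    # non-negative input, e.g. an image
assert np.isclose(W.mean(), W_adv.mean()) and np.isclose(W.std(), W_adv.std())
dead_fraction = (np.maximum(W_adv @ x, 0) == 0).mean()
```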
Fault injection attack on deep neural network
Yannan Liu and Lingxiao Wei and Bo Luo and Qiang Xu
2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD) - 2017 via Local CrossRef
Liu et al. propose slight perturbations of a deep neural network's weights in order to cause mis-classification of a specific input. Specifically, the authors propose two attacks: the single bias attack, where a single bias value is manipulated in order to cause mis-classification, and the gradient descent attack, where the network's weights in a particular layer are manipulated through gradient descent to cause mis-classification. In both cases, a specific input example is considered fixed. The attack is intended to change the label of this input while being "stealthy", i.e., not changing the overall accuracy too much. In experiments on MNIST and CIFAR10, it is shown that these attacks are effective in changing the input's label; however, they also reduce the overall accuracy of the model.
Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks
Sascha Saralajew and Lars Holdijk and Maike Rees and Thomas Villmann
Keywords: cs.LG, cs.AI, cs.CV, stat.ML
Abstract: Adversarial attacks and the development of (deep) neural networks robust against them are currently two widely researched topics. The robustness of Learning Vector Quantization (LVQ) models against adversarial attacks has however not yet been studied to the same extent. We therefore present an extensive evaluation of three LVQ models: Generalized LVQ, Generalized Matrix LVQ and Generalized Tangent LVQ. The evaluation suggests that both Generalized LVQ and Generalized Tangent LVQ have a high base robustness, on par with the current state-of-the-art in robust neural network methods. In contrast to this, Generalized Matrix LVQ shows a high susceptibility to adversarial attacks, scoring consistently behind all other models. Additionally, our numerical evaluation indicates that increasing the number of prototypes per class improves the robustness of the models.
Saralajew et al. evaluate learning vector quantization (LVQ) approaches regarding their robustness against adversarial examples. In particular, they consider generalized LVQ where examples are classified based on their distance to the closest prototype of the same class and the closest prototype of another class. The prototypes are learned during training; I refer to the paper for details. Robustness is compared to adversarial training and evaluated against several attacks, including FGSM, DeepFool and Boundary – both white-box and black-box attacks. Regarding $L_\infty$, LVQ usually demonstrates poorer performance than adversarial training. Still, robustness seems to be higher than normally trained deep neural networks. One of the main explanations of the authors is that LVQ follows a max-margin approach; this max-margin idea seems to favor robust models.
Protecting Intellectual Property of Deep Neural Networks with Watermarking
Zhang, Jialong and Gu, Zhongshu and Jang, Jiyong and Wu, Hui and Stoecklin, Marc Ph. and Huang, Heqing and Molloy, Ian
ACM AsiaCCS - 2018 via Local Bibsonomy
Zhang et al. propose a watermarking approach to protect the intellectual property of deep neural network models. Here, the watermarking concept is generalized from multimedia; specifically, the purpose of a watermark is to uniquely identify a neural network model as the original owner's property in order to avoid plagiarism. The problem is illustrated in Figure 1. As watermarks, the authors consider perturbed input images. During training, the model is trained to produce very specific outputs for these perturbed images, as illustrated in Figure 2. For example, random pixels are added, or text is added to the images. After training, the model can be uniquely identified by these perturbed watermark images, which are unrelated to the actual task.
https://i.imgur.com/TydqBwo.png
Figure 1: Illustration of the problem setting for watermarking.
https://i.imgur.com/5Zlei0z.png
Figure 2: Example watermarks.
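A minimal sketch of how such a watermark set could be embedded is given below; the stamping function, target label, and dataset names are hypothetical, not the authors' implementation.

```python
import torch
from torch.utils.data import TensorDataset

def build_watermark_set(images, target_label, pattern):
    """Stamp a fixed pattern (e.g. a text logo or random pixels, a tensor in [0, 1]
    with the images' spatial shape) onto ordinary images and assign a chosen label."""
    stamped = (images + pattern).clamp(0, 1)
    labels = torch.full((len(images),), target_label, dtype=torch.long)
    return TensorDataset(stamped, labels)

# the owner trains on the union of the task data and the watermark set
# (e.g. via torch.utils.data.ConcatDataset); later, predicting `target_label`
# on the stamped images serves as evidence of ownership
```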
Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations
Lamb, Alex and Binas, Jonathan and Goyal, Anirudh and Serdyuk, Dmitriy and Subramanian, Sandeep and Mitliagkas, Ioannis and Bengio, Yoshua
Lamb et al. introduce fortified networks with denoising auto encoders as hidden layers. These denoising auto encoders are meant to learn the manifold of hidden representations, project adversarial input back to the manifold and improve robustness. The main idea is illustrated in Figure 1. The denoising auto encoders can be added at any layer and are trained jointly with the classification network – either on the original input, or on adversarial examples as done in adversarial training.
https://i.imgur.com/5vaZrVk.png
Figure 1: Illustration of a fortified layer, i.e., a hidden layer that is reconstructed through a denoising auto encoder as defense mechanism. The denoising auto encoders are trained jointly with the network.
In experiments, they show that the proposed defense mechanism improves robustness on MNIST and CIFAR compared to adversarial training and other baselines. The improvements are, however, very marginal, especially as the proposed method imposes an additional overhead (on top of adversarial training).
Towards the first adversarially robust neural network model on MNIST
Lukas Schott and Jonas Rauber and Matthias Bethge and Wieland Brendel
Abstract: Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans. We show that even the widely recognized and by far most successful defense by Madry et al. (1) overfits on the L-infinity metric (it's highly susceptible to L2 and L0 perturbations), (2) classifies unrecognizable images with high certainty, (3) performs not much better than simple input binarization and (4) features adversarial perturbations that make little sense to humans. These results suggest that MNIST is far from being solved in terms of adversarial robustness. We present a novel robust classification model that performs analysis by synthesis using learned class-conditional data distributions. We derive bounds on the robustness and go to great length to empirically evaluate our model using maximally effective adversarial attacks by (a) applying decision-based, score-based, gradient-based and transfer-based attacks for several different Lp norms, (b) by designing a new attack that exploits the structure of our defended model and (c) by devising a novel decision-based attack that seeks to minimize the number of perturbed pixel
The optimal solution to a principal-agent problem with unknown agent ability
Chong Lai 1, Lishan Liu 1,2 and Rui Li 3,*
School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, Kent Street, Bentley, Perth, Western Australia 6102
School of Mathematical Sciences, Qufu Normal University, Qufu 273165, Shandong, China
School of Management and Economics, University of Electronic Science and Technology of China, No.2006, Xiyuan Avenue, West Hi-Tech Zone, Chengdu 611731, China
* Corresponding author: Rui Li
Received September 2019 Revised February 2020 Published April 2020
Fund Project: This work is supported by the National Natural Science Foundation of China (No.11871302) and the Australian Research Council for the research
We investigate a principal-agent model featuring unknown agent ability. Under exponential utilities, necessary and sufficient conditions for the incentive contract are derived by utilizing martingale and variational methods, and the optimal contracts are obtained by using the stochastic maximum principle. Uncertainty about ability reduces the principal's capacity for incentive provision. It is shown that, as time goes by, information about the ability accumulates, giving the agent less room for belief manipulation, and incentive provision becomes easier. Namely, as the contractual time tends to infinity (long term), the agent's ability is revealed completely, the ability uncertainty disappears, and the optimal contracts under known and unknown ability become identical.
Keywords: Principal-agent problem, Optimal contracts, Belief manipulation, Learning process, Agent ability.
Mathematics Subject Classification: Primary: 91B70, 91B40; Secondary: 91A26.
Citation: Chong Lai, Lishan Liu, Rui Li. The optimal solution to a principal-agent problem with unknown agent ability. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2020084
T. Adrian and M. M. Westerfield, Disagreement and learning in a dynamic contracting model, The Review of Financial Studies, 22 (2009), 3873-3906. Google Scholar
D. Bergemann and U. Hege, Venture capital financing, moral hazard, and learning, Journal of Banking and Finance, 22 (1998), 703-735. doi: 10.1016/S0378-4266(98)00017-X. Google Scholar
J.-M. Bismut, Conjugate convex functions in optimal stochastic control, Journal of Mathematical Analysis and Applications, 44 (1973), 384-404. doi: 10.1016/0022-247X(73)90066-8. Google Scholar
J.-M. Bismut, Duality methods in the control of densities, SIAM Journal on Control and Optimization, 16 (1978), 771-777. doi: 10.1137/0316052. Google Scholar
K. Chen, X. Wang, M. Huang and W.-K. Ching, Salesforce contract design, joint pricing and production planning with asymmetric overconfidence sales agent, Journal of Industrial and Management Optimization, 13 (2017), 873-899. doi: 10.3934/jimo.2016051. Google Scholar
J. Cvitanić, X. Wan and J. Zhang, Optimal compensation with hidden action and lump-sum payment in a continuous-time model, Applied Mathematics and Optimization, 59 (2009), 99-146. doi: 10.1007/s00245-008-9050-0. Google Scholar
D. Fudenberg and L. Rayo, Training and effort dynamics in apprenticeship, American Economic Review, 109 (2019), 3780-3812. Google Scholar
M. Fujisaki, G. Kallianpur and H. Kunita, Stochastic differential equations for the non linear filtering problem, Osaka Journal of Mathematics, 9 (1972), 19-40. Google Scholar
Y. Giat, S. T. Hackman and A. Subramanian, Investment under uncertainty, heterogeneous beliefs, and agency conflicts, The Review of Financial Studies, 23 (2009), 1360-1404. Google Scholar
Z. He, B. Wei, J. Yu and F. Gao, Optimal long-term contracting with learning, The Review of Financial Studies, 30 (2017), 2006-2065. Google Scholar
B. Holmstrom and P. Milgrom, Aggregation and linearity in the provision of intertemporal incentives, Econometrica, 55 (1987), 303-328. doi: 10.2307/1913238. Google Scholar
H. A. Hopenhayn and A. Jarque, Moral hazard and persistence, Ssrn Electronic Journal, 7 (2007), 1-32. doi: 10.2139/ssrn.2186649. Google Scholar
J. Hörner and L. Samuelson, Incentives for experimenting agents, The RAND Journal of Economics, 44 (2013), 632-663. Google Scholar
J. Mirlees, The optimal structure of incentives and authority within an organization, Bell Journal of Economics, 7 (1976), 105-131. doi: 10.2307/3003192. Google Scholar
M. Mitchell and Y. Zhang, Unemployment insurance with hidden savings, Journal of Economic Theory, 145 (2010), 2078-2107. doi: 10.1016/j.jet.2010.03.016. Google Scholar
J. Prat and B. Jovanovic, Dynamic contracts when the agent's quality is unknown, Theoretical Economics, 9 (2014), 865-914. doi: 10.3982/TE1439. Google Scholar
Y. Sannikov, A continuous-time version of the principal-agent problem, The Review of Economic Studies, 75 (2008), 957-984. doi: 10.1111/j.1467-937X.2008.00486.x. Google Scholar
H. Schättler and J. Sung, The first-order approach to the continuous-time principal–agent problem with exponential utility, Journal of Economic Theory, 61 (1993), 331-371. doi: 10.1006/jeth.1993.1072. Google Scholar
K. Uğurlu, Dynamic optimal contract under parameter uncertainty with risk-averse agent and principal, Turkish Journal of Mathematics, 42 (2018), 977-992. doi: 10.3906/mat-1703-102. Google Scholar
C. Wang and Y. Yang, Optimal self-enforcement and termination, Journal of Economic Dynamics and Control, 101 (2019), 161-186. doi: 10.1016/j.jedc.2018.12.010. Google Scholar
X. Wang, Y. Lan and W. Tang, An uncertain wage contract model for risk-averse worker under bilateral moral hazard, Journal of Industrial and Management Optimization, 13 (2017), 1815-1840. doi: 10.3934/jimo.2017020. Google Scholar
N. Williams, On dynamic principal-agent problems in continuous time, working paper, University of Wisconsin, Madison, (2009). Google Scholar
N. Williams, A solvable continuous time dynamic principal–agent model, Journal of Economic Theory, 159 (2015), 989-1015. doi: 10.1016/j.jet.2015.07.006. Google Scholar
T.-Y. Wong, Dynamic agency and endogenous risk-taking, Management Science, 65 (2019), 4032-4048. Google Scholar
J. Yong and X. Y. Zhou, Stochastic controls: Hamiltonian systems and HJB equations, vol. 43, Springer Science and Business Media, 1999. doi: 10.1007/978-1-4612-1466-3. Google Scholar
Figure 1. (a) The evolution of the agent's consumption over time $ t $ (b) Reduction in the principal's dividend over time $ t $
Table 1. Comparison of the optimal consumption and dividend under known and unknown ability
Consumption: known ability $ c^N=\mu M-\frac{1}{\lambda}\left[\ln k+\ln(-q)\right] $; unknown ability $ c^{un}=\mu M-\frac{1}{\lambda}\big[\ln {k^T(t)}+\ln(-q)\big] $
Dividend: known ability $ d^N=ry-\frac{1}{\lambda}\big[K(t)+\ln r -\ln(-q)\big] $; unknown ability $ d^{un}=ry-\frac{1}{\lambda}\big[K_1(t)+\ln r -\ln(-q)\big] $
Is there such a thing as "Action at a distance"?
Asked 10 years, 1 month ago
Whatever happened to "action at a distance" in entangled quantum states, i.e. the Einstein-Podolsky-Rosen (EPR) paradox? I thought they argued that in principle one could communicate faster than the speed of light with entangled, separated states when one wave function gets collapsed. I imagine this paradox has been resolved, but I don't know the reference.
quantum-mechanics quantum-information quantum-entanglement faster-than-light causality
$\begingroup$ I believe most of these correlations are related to symmetries, or per Emmy Noether, conservation. Is this correct? Also, we are assuming 3 dimensions in assessing our distances. I'm sorry if I only added more questions. $\endgroup$ – user16050 Nov 17 '12 at 16:24
It's not possible to communicate faster than light using entangled states. All you get out of entanglement is a correlation between the values of two measurements; the entanglement doesn't allow you to influence the value measured at another location in a non-causal way. In other words, the correlation only becomes evident after combining the results from the measurements afterwards, for which you need classical information transfer.
For example, consider the thought experiment described on the Wikipedia page for the EPR paradox: a neutral pion decays into an electron and a positron, emitting them in opposite directions and with opposite spins. However, the actual value of the spin is undetermined, so with respect to a spin measurement along a chosen axis, the electron and positron are in the state
$$\frac{1}{\sqrt{2}}\left(|e^+ \uparrow\rangle|e^- \downarrow\rangle + |e^+ \downarrow\rangle|e^- \uparrow\rangle\right)$$
Suppose you measure the spin of the positron along this chosen axis. If you measure $\uparrow$, then the state will collapse to $|e^+ \uparrow\rangle|e^- \downarrow\rangle$, which determines that the spin of the electron must be $\downarrow$; and vice versa. So if you and the other person (who is measuring the electron spin) get together and compare measurements afterwards, you'll always find that you've made opposite measurements for the spins. But there is no way to control which value you measure for the spin of the positron, which is what you'd need to do to send information. As long as the other person doesn't know what the result of your measurement is, he can't attach any informational value to either result for his measurement.
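A small numerical check of this point: whatever axis is chosen for the positron measurement, the outcome statistics on the electron side remain 50/50, so no information is transmitted by the measurement choice alone. The sketch below assumes the two-particle state written above and a fixed z-axis measurement on the second particle.

```python
import numpy as np

# two-particle state (|ud> + |du>)/sqrt(2) in the basis {uu, ud, du, dd}
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

def second_particle_up_probability(first_axis_angle):
    """Probability that the second particle is measured 'up' along z,
    for any spin axis chosen for the first particle."""
    c, s = np.cos(first_axis_angle / 2), np.sin(first_axis_angle / 2)
    up, down = np.array([c, s]), np.array([-s, c])        # rotated basis, first particle
    p = 0.0
    for a in (up, down):                                   # sum over first-particle outcomes
        proj = np.kron(np.outer(a, a), np.diag([1.0, 0.0]))
        p += np.trace(proj @ rho)
    return p

print([round(second_particle_up_probability(t), 3) for t in (0.0, 0.7, 1.5, 3.14)])  # always 0.5
```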
$\begingroup$ Excellent answer, but I think there is something still missing. You can still naively argue that the results of all the measurements were predetermined when the electron and the positron were emitted, and therefor there is nothing "special" about the correlation. To answer this claim you need Bell's theorem, as explained by this great answer. $\endgroup$ – Joe May 25 '11 at 9:00
Well, the problem in that paradox is that yes, one of the parties will measure the entangled particle and the wave function will collapse, and yes, it will collapse for the other party too. However, the other party still has to measure the thing to learn what it is, or has to wait for the initial party to send them a message telling what the wave function has collapsed to. The first method will result in 50% +x and 50% -x (if it is spin you are measuring), as the collapsed wave function can be in either one of these states. So the fact that the wave function collapsed does not really transfer any usable information to the other side. The second method is capped by the speed of light anyway.
Cem
$\begingroup$ Of course they can't transfer information faster than the speed of light because of the no-signalling theorem. However, we should be careful of how we state the first scenario! In some games, for instance CHSH, you can win the game 75% of the time classically, whereas you can do ~80% using quantum theory! $\endgroup$ – iii Dec 24 '10 at 2:07
Let's be more rigorous. No-signalling has been safely proven and shouldn't be worried about. Nevertheless, you'd notice that the point of the EPR paper was to show that if quantum mechanics is considered to be a description of "reality", then it is "incomplete". There is an approach, such as in operationalism, to say quantum mechanics isn't meant to be a description of reality; it's a description of our knowledge of reality, due to Asher Peres. Another approach is to say we can give an ontological model of quantum theory using contextual hidden variables, such as the one in the de Broglie-Bohm model. So, in conclusion: the EPR argument hasn't been resolved if you mean it's gone! Because in fact orthodox quantum mechanics isn't a complete description of reality. However, it doesn't mean one can signal faster than light!
Some interesting extra information: there is an interesting paper which analyses Einstein's argument. It brings up historical facts showing that Einstein didn't like the EPR paper and wrote another paper with the same title in correspondence with Schrödinger, and that the one with Rosen and Podolsky was never reviewed by him.
This quotation is from a letter of Einstein to Schrödinger, dated June 19, 1935:
"For reasons of language this [paper] was written by Podolsky after many discussions. But still it has not come out as well as I really wanted; on the contrary, the main point was, so to speak, buried by the erudition."
Update: A source of confusion in my answer has been pointed out by Marek. I'll try to clarify here: scientific realism assumes there is an underlying objective reality which has attributes regardless of whether they are measured by an observer. One can suggest a model which ignores such reality and say "...there is no logical necessity for a realistic worldview to always be obtainable" (Fuchs and Peres, Physics Today 53 (3), 70-71). On the other hand, one can offer an ontological model, which in this case can be located in 3 different categories as in the figure below:
I believe Einstein had in mind to show that quantum mechanics can't give a picture of type (a), which was successful, because even if there is an underlying reality, quantum states can't sharply specify it by any means.
$\begingroup$ that's a great piece of historical trivia. Einstein's most highly cited paper (by a huge margin!) wasn't even written by him. $\endgroup$ – Jeremy Dec 24 '10 at 3:09
$\begingroup$ @Sine: nice overview but one thing has to be made more precise: "quantum mechanics is not complete description of reality" has to be specified as "quantum mechanics doesn't allow us to know everything we want to know". But that's because (as far as we know) reality is inherently quantum in nature (in particular, there can be no local hidden parameter theory) and so quantum theory certainly is complete description of reality in this sense. $\endgroup$ – Marek Dec 24 '10 at 10:05
$\begingroup$ @Marek: The update is my attempt to clarify what I meant by completeness. $\endgroup$ – iii Dec 24 '10 at 11:02
$\begingroup$ @Sina: uh, I have to say I am confused now. I think the picture a) is correct as long as one works in good interpretation of quantum theory (in particular, one has to dispose of the notion of wave-function collapse). Complete state of system indeed is specified by a ray in Hilbert space and there is no more information to be found anywhere. On the other hand, picture b) is surely incorrect if it means to be a hidden parameter theory. As for picture c), I am uncertain about what it means precisely. But again it feels like hidden parameter theory and so is ruled out. $\endgroup$ – Marek Dec 24 '10 at 11:10
$\begingroup$ Hmmm... I understand it's not entirely clear from the figure only! It's derived from the paper I linked in the body of my answer. I was intending to point to where one should look at, in that paper to find an adequate answer. It is explained in page 5. Nevertheless, I guess all this is just complimentary information. I think I made the point by just saying No-signalling theorem shows that we can't signal faster than light using non-local features of quantum mechanics. :-) $\endgroup$ – iii Dec 24 '10 at 13:38
Everybody misses the point Einstein was trying to make, which makes it all the more remarkable that it's been 80 years since he was working on spooky action at a distance. The no-signalling theorems mean nothing, and it's a shame that most answers simply cite: no signalling, nothing spooky about it. Bell emphasized that his theorem could be quickly summarized as: there is non-locality. Guess what? Bell was very well aware of the no-signalling theorems. The point is not that we can send signals; the point is that there is a signal sent by the photons themselves -- they have to be communicating. How else could they always coordinate their spins? Einstein's whole critique of quantum mechanics was that it needed to be like Bertlmann's socks -- e.g. that the spins were already determined before the experiment, or else there would have to be non-local communication to coordinate the spins. Einstein called it telepathy, and it's been proven by Bell.
If you don't think there's spooky action, then how do you explain that the spins are always coordinated? If you gave Alice and Bob each a quarter and separated them by a large distance and tasked them with choosing heads or tails, and they always came back to you with one choosing heads and the other choosing tails, what might you think? Maybe they talked to each other on a phone and coordinated their results?
Being silly, in the vein of the season: if you agreed beforehand that in David's example your colleague could open their Christmas present on the other side of the world if their electron was spin up, and at the same time they measured their electron you measured your positron as spin down, you would now know they have opened their present, faster than the speed of light
Instantaneous data transfer!
SoulmanZ
$\begingroup$ haha very nice indeed! (unfortunately i won't be giving a +1 because it's not really a correct answer) $\endgroup$ – Cem Dec 23 '10 at 22:31
$\begingroup$ -1 I guess this is a joke, but it's not instantaneous data transfer, and some people could find it confusing. $\endgroup$ – Mark Eichenlaub Dec 23 '10 at 22:33
$\begingroup$ sorry mark! was gonna qualify it as false, but i thought it was pretty self-explanatory. For those that don't understand - you have no 'proof' the present is open, only faith that those at the other end did what they were told, and weren't drunk on egg nog, kissing the undergrad $\endgroup$ – SoulmanZ Dec 23 '10 at 23:20
$\begingroup$ @Soulman: you don't need quantum present for this, classical present would do just fine. If you put blue teddy bear in one box and red teddy in the other and then ask your friend to take one of them with them then as soon as you open your present and find blue teddy, BAM you instanteously know that your friend has the red one. I think people often don't realize, that classical physics can sometimes contain same paradoxes quantum physics can. And also that problem with entanglement and collapse lies in something completely different (cont). $\endgroup$ – Marek Dec 24 '10 at 10:10
$\begingroup$ This is Bertlmann's always-mismatched socks, used by Bell to demonstrate nonlocal correlation. When you see one is red, you instantly know the other isn't. This isn't how quantum entanglement works, because this type of thing is determined by local hidden variables (the color of the socks) and so can't violate Bell's inequality. $\endgroup$ – Ron Maimon Dec 18 '11 at 7:07
Structural Chemistry
On the relations between aromaticity and substituent effect
Halina Szatylowicz
Anna Jezuita
Tadeusz M. Krygowski
Aromaticity/aromatic and substituent/substituent effects belong to the most commonly used terms in organic chemistry and related fields. The quantitative description of aromaticity is based on energetic, geometric (e.g., HOMA), magnetic (e.g., NICS), and reactivity criteria, as well as on properties of the electronic structure (e.g., FLU). The substituent effect can be described using either traditional Hammett-type substituent constants or characteristics based on quantum chemistry. For this purpose, the energies of properly designed homodesmotic reactions and the electron density distribution are used. In the first case, a descriptor named SESE (substituent effect stabilization energy) is obtained, while in the second case cSAR (charge of the substituent active region), which is the sum of the charge of the ipso carbon atom and the charge of the substituent. The use of the above-mentioned characteristics of aromaticity and the substituent effect allows revealing the relationship between them for mono-, di-, and polysubstituted π-electron systems, including substituted heterocyclic rings as well as quasi-aromatic ones. It has been shown that the less aromatic the system, the stronger the substituent influence on its π-electron structure. In all cases, when the substituent changes the number of π-electrons in the ring in the direction of 4N+2, its aromaticity increases. Intramolecular charge transfer (a resonance effect) is favored in cases where the number of bonds between the electron-attracting and electron-donating atoms is even. Quasi-aromatic rings, when attached to a truly aromatic hydrocarbon, simulate well the "original" aromatic rings such as benzene. For larger systems, a long-distance substituent effect has been found.
Molecular modeling Substituent effect Electronic structure Substituent effect stabilization energy Charge of the substituent active region
Dedicated to Professor Zbigniew Galus of the Department of Chemistry of the Warsaw University our friend and outstanding physical chemist on the occasion of his 85th anniversary.
Aromaticity and substituent effects are among the most important and useful terms in organic chemistry and related fields. Taking into account the last decade (2008–2017), the entries aromatic/aromaticity, substituent(s), and substituent effect(s) appear in titles, abstracts, or keywords on average 35, 12, and 4 times per day, respectively [1]. Both the aromaticity and substituent effect concepts are an old story, but still alive, fascinating, and inspiring.
For the first time, the chemical idea of aromaticity appeared as a structural concept: Kekulé applied the term to compounds containing the benzene ring [2]. A year later, Erlenmeyer [3] named as aromatic those compounds having properties similar to benzene derivatives. The most important aspects of the development of the concept of aromaticity are presented in Table 2 of the review paper by Schleyer and coworkers [4].
There has been some kind of dichotomy since then: how to understand the aromatic character, using a chemical structure or chemical properties? To date, most of the works on aromaticity have been devoted to relationships between the structure and the properties of so-called aromatic compounds. It was found very early that the most significant chemical properties that differentiate aromatic compounds from their unsaturated analogs are that "they are inclined to substitution and disinclined to addition reactions and are thermally stable" as Robinson concluded [5].
The first quantitative approach to determining aromaticity is based on the concept of resonance energy (RE) [6], defined as the difference between the energy of a given molecule and the energy of its reference model, the "unsaturated" analog. It was widely applied to π-electron compounds, including those that contain heteroatoms such as nitrogen, sulfur, and oxygen [7]. RE was also associated with delocalization energy (DE), defined as the calculated additional bonding energy which "results from delocalization of electrons originally constrained to isolated double bonds" [8]. The greater the RE/DE value, the more stable the molecule and the higher its aromatic character. The RE concept has undergone many modifications concerning both the models of reference molecules and the level of theory used to estimate energies. RE values can be estimated from experimental thermochemical data [9, 10] or by the use of quantum chemistry computation. Dewar et al., using the Pariser-Parr-Pople π-electron method, found that bond energies of acyclic polyenes are additive [11, 12, 13], and then the so-called Dewar resonance energy (DRE) was introduced [14]. Based on the same rule of bond energy additivity, Hess and Schaad used the simple Hückel Molecular Orbital (HMO) approach for a large number of π-electron hydrocarbons [15, 16, 17] and hetero-π-electron compounds [18], and a new term was introduced, the Hess-Schaad stabilization energy (HSSE); for a review see [19]. The use of resonance energy per π-electron (REPE) allows comparison of the aromaticity of molecules of different sizes. Stabilization energies can be determined using different reference systems; a wide and instructive review by Cyrański presents all these problems in detail [14]. The HMO approach was also applied to a quantitative definition of aromaticity, denoted by the acronym KK. It is based mainly on chemical intuition and defined as "an amount of π-electron energy that the molecule loses as a result of an addition reaction at positions r and s, i.e. when in those positions a change of hybridization state from sp2 to sp3 occurs" [20]. Schematically, the idea of the KK index is presented in Fig. 1.
Scheme of the reaction path for substitution and addition in terms of π-electron energies. Reprinted (adapted) from Tetrahedron Lett 11:320 (1970) [20]. Copyright (1970), with permission from Elsevier
The higher the KK value, i.e., the greater the energy loss due to the addition reaction, the more difficult it is for the molecule to undergo this reaction, and the more aromatic the molecule is. This definition of aromaticity is evidently related to the old chemical observation that aromatic molecules prefer substitution rather than addition reactions [21]. Thanks to this approach, π-electron systems can be classified as shown in Fig. 2: annulenes with 4N+2 and 4N π-electrons form two curves, and between them and below there are other cyclic and acyclic π-electron systems [22]. A similar graph, but only for annulenes, was presented earlier by Dewar [23] and Figeys [24]; for a review and generalization see [25].
Dependence of KK index on the number of π-electrons in molecules. Reprinted (adapted) from Tetrahedron Lett 11: 1311 (1970) [22]. Copyright (1970), with permission from Elsevier
The first quantitative characteristic of aromaticity based on molecular geometry was introduced by Julg and François [26]. It was defined as a function of the normalized variance of the perimeter bond lengths. The greater the deviation from the mean bond length, the less delocalized the π-electrons and the less aromatic the molecule. The next year, the bond lengths were replaced by HMO bond orders, and the differences between the mean bond order and the bond orders of all bonds of a molecule, taken as absolute values and normalized, gave a numerical descriptor of aromaticity [27]. In the next step, the average value of bond lengths was replaced by the empirical concept of an optimal bond length [28, 29]. Then, the differences between the bond lengths, di, in a given molecule and the optimal bond length, dopt, were used as the basis for estimating the aromaticity index named HOMA (Harmonic Oscillator Model of Aromaticity):
$$ \mathrm{HOMA}=1-\frac{1}{n}\sum \limits_{i=1}^n\alpha {\left({d}_{opt}-{d}_i\right)}^2 $$
where n is the number of CC bonds taken into consideration, α = 257.7 is an empirical normalization constant chosen to give HOMA = 0 for a non-aromatic system and HOMA = 1 for a system where all bonds are equal to dopt = 1.388 Å, and di are the bond lengths.
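As a worked example, the defining formula can be evaluated directly from a list of bond lengths; the sketch below uses the CC parameterization quoted above (heteroatom bonds require their own dopt and α).

```python
def homa(bond_lengths, d_opt=1.388, alpha=257.7):
    """HOMA index of a ring (or any fragment) from its CC bond lengths in Angstroms."""
    n = len(bond_lengths)
    return 1.0 - (alpha / n) * sum((d_opt - d) ** 2 for d in bond_lengths)

# six equal benzene-like bonds of 1.39 A give a value close to 1,
# while strongly alternating bonds give a much lower value
print(homa([1.39] * 6))          # ~0.999
print(homa([1.33, 1.47] * 3))    # strong bond alternation, about -0.30
```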
For π-electron systems with heteroatoms, the parameters dopt and α are given in a collection of papers [29, 30, 31, 32, 33, 34]. An important advantage of the HOMA approach is that it can be used for the estimation of π-electron delocalization in any π-electron fragment of a molecule. The approach has been modified many times [34, 35, 36], but the basic idea has not changed. A few years later, Bird introduced the aromaticity index I6 for six-membered rings [37] and I5 for five-membered ones [38], using bond orders calculated directly from bond lengths via the formula suggested by Gordy [39].
Some help in understanding the aromatic character can come from the harmonic oscillator stabilization energy (HOSE) [40, 41]. This approach is related to the well-known way in organic chemistry of presenting the chemical properties of molecules using their resonance structures [42]. HOSE is based on the estimation of the stabilization energy and the contribution of particular Kekulé (canonical) structures, obtained from experimental bond lengths, in the description of a π-electron system. The physical meaning of HOSEi can be interpreted as follows: it is the energy by which the real molecule is more stable than its ith Kekulé structure. Among the many applications of the HOSE model, two of them show its advantages. The obtained HOSE values [41] were found to be in good correlation with the RE values obtained by Hess and Schaad for alternant hydrocarbons [15] and non-alternant species [17], with correlation coefficients, cc, of 0.991 (for n = 22 data points) and 0.937 (for n = 12), respectively. There was also a very good correlation between the HOSE contributions [41] of the resonance structures and those proposed by Randic [43] (cc = 0.985 for n = 65 data points). Recently, it has been found that HOSE contributions of resonance structures correlate very well with canonical structure contributions estimated using a topological approach, cc = 0.997 for 150 data points [44].
Another approach to determining aromaticity has come from magnetic studies of π-electron systems. One of the first descriptors of this type is the diamagnetic susceptibility exaltation. Already in 1968, it was proposed as a criterion of aromaticity [46], since it was accepted as documentation of the presence of π-electron delocalization in a molecule [47, 48]. It is important to mention that magnetic susceptibility is a property of the whole molecule and can be obtained both experimentally and by quantum chemistry computations. A kind of revolution was the introduction in 1996 by Schleyer of the concept of the nucleus-independent chemical shift (NICS) [49]. It was defined as the negative value of the absolute shielding calculated at the geometric center of the ring system. Now it is also calculated at other points inside [50] or around molecules [4]. Due to the many possible choices of the point and the way NICS is estimated, Schleyer recommended the component corresponding to the principal axis perpendicular to the ring plane, NICSzz, as the preferred measure for characterizing the π system [51]. Another possibility is to estimate the NICS value 1 Å above the molecular plane, named NICS(1) [4]. It should be mentioned that all NICS values describe only local aromaticity, i.e., that of a particular ring; moreover, they depend not only on the size of the ring but also on the parts neighboring the ring in question.
In recent decades, characteristics of aromaticity based explicitly on the electronic structure and electron delocalization have appeared. For this purpose, electron structure descriptors based on AIM theory [52, 53, 54] were used: charges, the Laplacian, the energy and its components, such as kinetic and potential energies, estimated at the bond or ring critical points [55, 56].
Other approaches based on the electronic structure are associated with characteristics of electron delocalization. In the case of six-membered rings, a delocalization index for atoms in para positions, PDI, was defined [57], whereas for all atoms in the ring, as well as for any π-electron fragment, a multicenter bond index, MCI, was introduced [58]. In 2005, Solà et al. [59] introduced the aromatic fluctuation index, FLU, which describes the fluctuation of electronic charge between adjacent atoms in a given ring. It has been documented that the above-mentioned indices correlate well with HOMA and NICS for benzenoid as well as non-benzenoid hydrocarbons, and even for nitrogen analogs and some unsaturated cyclic systems. For a review see [60].
Faced with so many possible criteria of aromaticity, an important question arises: to what extent do such different approaches lead to equivalent conclusions? This problem has been the subject of many papers [61, 62, 63, 64, 65]. The answer, at least to the extent to which the problem relates to the traditional definition of aromaticity, was presented by Cyrański et al. [66]. In general, the overall trend is broadly met and there are correlations between the aromaticity indices, but in many specific situations they may lead to inconsistent results. However, the use of any of the well-accepted aromaticity descriptors for structurally similar molecular systems should lead to reliable conclusions [67]. This condition is met for substituted derivatives of a given molecule.
One more descriptor of the electron structure of aromatic compounds comes from the pEDA/sEDA approach [68]. The pEDA and sEDA descriptors are defined as the populations of the π- and σ-orbital electrons, respectively, in a given planar molecule or its planar part.
Recently, an approach based on quantum chemistry has been introduced, named the electron density of delocalized bonds (EDDB) [69]; it has been successfully applied as an aromaticity criterion [70] as well as for the description of the aromaticity of acenes [71]. The EDDB method revealed that the local aromaticity of a particular ring in a polycyclic benzenoid hydrocarbon may be significantly affected by long-range exchange corrections in the description of electron delocalization [72].
The substituent effect (SE) is another term in the title that requires a substantial comment. It is well recognized that, while benzene is a toxic and dangerous carcinogen, its substituted derivative, benzoic acid, is used in the preparation of commonly applied preservatives, in the form of its sodium or calcium salts. Further substitution with an acetoxy group leads to a drug well known under the name of aspirin [73]. This qualitative picture illustrates the broad spectrum of changes in chemical, physicochemical, and even biochemical properties caused by substitution. A significant problem, however, remains: how to describe the substituent effect quantitatively. The first quantitative approach to describing the substituent effect was proposed by Louis Plack Hammett [74, 75]. He introduced, as the quantitative characteristic of the substituent effect, the substituent constant σ, defined by Eq. (2):
$$ \sigma \left(\mathrm{X}\right)=\lg K\left(\mathrm{X}\right)-\lg K\left(\mathrm{H}\right) $$
where K(X) and K(H) are equilibrium constants for substituted and unsubstituted benzoic acids in water under normal conditions.
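As a purely numerical illustration of Eq. (2), the short sketch below computes σ(X) from a pair of dissociation constants; the K values used here are placeholders chosen only to show the arithmetic, not measured data.

```python
import math

# Placeholder equilibrium (dissociation) constants; in practice these are measured
# for the unsubstituted and the meta- or para-substituted benzoic acid in water.
K_H = 6.3e-5   # unsubstituted benzoic acid (illustrative)
K_X = 3.9e-4   # X-substituted benzoic acid (illustrative, electron-withdrawing X)

# Eq. (2): sigma(X) = lg K(X) - lg K(H)  (equivalently pKa(H) - pKa(X))
sigma_X = math.log10(K_X) - math.log10(K_H)
print(f"sigma(X) = {sigma_X:.2f}")  # positive value -> electron-withdrawing substituent
```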
For chemical processes characterized by rate or equilibrium constants (k or K, respectively), the use of the substituent constants leads to the Hammett equation, Eq. (3):
$$ \lg \left(K\left(\mathrm{X}\right)\ \mathrm{or}\ k\left(\mathrm{X}\right)\right)=\rho \sigma {\left(\mathrm{X}\right)}_{\mathrm{p},\mathrm{m}}+\mathrm{const} $$
where ρ is the so-called reaction constant, which describes the sensitivity of the process to the impact of the substituent X.
The value of const in Eq. (3) should be close to lg K(H) or lg k(H), i.e., the value for the unsubstituted system. In principle, the Hammett equation is a typical similarity model [76]; changes in various physicochemical properties P(X) follow the general equation:
$$ \mathrm{P}\left(\mathrm{X}\right)=\rho \sigma {\left(\mathrm{X}\right)}_{\mathrm{p},\mathrm{m}}+\mathrm{const} $$
The above equations postulate that changes in various chemical/physicochemical properties observed at the "reaction site" Y in X-R-Y systems depend on the substituents X in the same way as the acidity of m- and p-substituted benzoic acids. The Hammett equation and its modifications have found countless applications. It has been widely documented that substituent constants may well serve to describe the impact of the substituent on most physicochemical and even biochemical properties of molecules [77, 78, 79, 80, 81, 82, 83, 84]. Already in the first three decades after the original idea was introduced, over 20 different modifications of the original Hammett substituent constants appeared [85]. They were designed for various specific types of intramolecular interactions. In general, however, they have caused some disappointment in the understanding of how such empirical modeling of the substituent effect can really work. Thus, some clarification of the topic is important for the advantageous use of the terms substituent effect and substituent constant.
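In practice, ρ and const in the Hammett equation are obtained by a least-squares fit of the measured property against tabulated σ values. A minimal sketch follows; the σp values are standard tabulated constants, while the lg K values are invented for illustration only.

```python
import numpy as np

# Tabulated Hammett sigma_p constants for a small series of para substituents
# (NH2, OMe, H, Cl, CF3, NO2) and invented lg K values of a hypothetical reaction series.
sigma_p = np.array([-0.66, -0.27, 0.00, 0.23, 0.54, 0.78])
lgK     = np.array([-4.95, -4.55, -4.20, -4.00, -3.65, -3.40])

# Least-squares fit of Eq. (3): lg K(X) = rho * sigma(X) + const
rho, const = np.polyfit(sigma_p, lgK, 1)
cc = np.corrcoef(sigma_p, lgK)[0, 1]
print(f"rho = {rho:.2f}, const = {const:.2f}, cc = {cc:.3f}")
```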
Substituent constants have also been used to parametrize the HMO-based approach. The parameters for the resonance and Coulomb integrals in HMO theory (for a review see Streitwieser [86]) were related to the Hammett substituent constants σ, leading to the concept of an Effective Inductive Parameter (EIP) [87]. The application of the HMO EIP model allowed an interpretation of the polarographic E1/2 potentials of dichloroanthraquinone derivatives for the first and second electroreduction steps [27], of substituent effects on the polarographic properties of some aromatic nitro and azo compounds [88], and of substituent effects on the PMR chemical shifts of monosubstituted thiophene derivatives [89].
A very interesting description of the electronic properties of the substituent results from the statistical analysis of the geometry patterns of monosubstituted benzene rings [90, 91]. The benzene ring deformations are associated with the old concept of group electronegativity [92, 93] and with its recent modification by the Domenicano research group [94, 95].
The dynamic development of quantum chemistry methods and computer-aided applications [96] has created a very convenient atmosphere for research in the field of SE. To define one of the first SE descriptors based on quantum chemistry, a homodesmotic reaction [97, 98] was used:
$$ \mathrm{X{-}R{-}Y}+\mathrm{R}\to \mathrm{X{-}R}+\mathrm{R{-}Y} $$
Then, the energy of this reaction, according to Eq. (5):
$$ \mathrm{SESE}=E\left(\mathrm{R{-}X}\right)+E\left(\mathrm{R{-}Y}\right)-\left[E\left(\mathrm{X{-}R{-}Y}\right)+E\left(\mathrm{R}\right)\right] $$
describes the overall energy of the process; this quantity was named the Substituent Effect Stabilization Energy (SESE). In most cases, its values correlate well with the Hammett constants [99]. A positive SESE value means that the intramolecular interactions between the substituents X and Y in X-R-Y stabilize the system.
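A minimal sketch of Eq. (5), assuming that the total electronic energies of the four species in the homodesmotic reaction are already available from separate quantum chemical calculations; the energies below are placeholders, not computed values.

```python
HARTREE_TO_KCAL = 627.509  # 1 hartree in kcal/mol

# Placeholder total energies (in hartree) for the species of the homodesmotic reaction
# X-R-Y + R -> X-R + R-Y; in practice they come from calculations at one consistent level.
E_RX  = -287.100   # X-R
E_RY  = -437.250   # R-Y
E_XRY = -492.325   # X-R-Y
E_R   = -232.030   # R

# Eq. (5): SESE = E(R-X) + E(R-Y) - [E(X-R-Y) + E(R)]
sese = (E_RX + E_RY - (E_XRY + E_R)) * HARTREE_TO_KCAL
print(f"SESE = {sese:.1f} kcal/mol")  # positive -> the X...Y interaction stabilizes X-R-Y
```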
Another successful approach based on quantum chemistry refers to the application of the molecular electrostatic potential (MESP) topography, documented for monosubstituted benzene derivatives by a good correlation with the substituent constants (SCs) [100]. The use of the MESP at the ring carbon atoms or at the atoms of the reaction site also revealed good correlations with SCs [101, 102, 103]. In addition, the MESP approach allowed the through-bond and through-space interactions to be appraised [104]. The molecular electrostatic potential has also been used for a quantitative assessment of the inductive effect [105] and, finally, for a classification of the substituent effect [106].
The first electronic interpretation of the substituent effect was proposed by Hammett [75]. However, the atomic charge of the substituent, q(X), does not correlate directly with the Hammett substituent constants. Such a correlation works well when, instead of q(X), the charge of the substituent active region, abbreviated cSAR(X) and introduced by Sadlej-Sosnowska [107, 108], is applied. It is defined as the sum of the atomic charges of all atoms of the substituent X and of the ipso carbon atom:
$$ \mathrm{cSAR}\left(\mathrm{X}\right)=q\left(\mathrm{X}\right)+q\left({\mathrm{C}}_{ipso}\right) $$
In addition, in disubstituted benzene derivatives X-Ph-Y, the cSAR values allow the magnitude of the charge transferred from X to Y, or vice versa, to be estimated [109].
The success of cSAR(X) compared with q(X) is due to the fact that the CC bonds cut in the cSAR(X) approach are only very weakly polar, in contrast to the C-X bonds. The latter can be strongly polar and are therefore sensitive to the method of atomic charge assessment, as shown in Scheme 1.
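A minimal bookkeeping sketch of the cSAR definition, assuming atomic charges from any population analysis are already available; the atom labels and charge values below are invented solely for illustration.

```python
# Invented atomic charges (in e) for a disubstituted benzene X-Ph-NO2; in practice they come
# from a Hirshfeld, VDD, NBO, Mulliken, or AIM analysis of a quantum chemical calculation.
charges = {
    "N": 0.45, "O1": -0.35, "O2": -0.35,      # atoms of the NO2 group
    "C1": 0.10,                                # carbon ipso to NO2
    "C4": 0.05,                                # carbon ipso to X
    "X1": -0.30, "X_H1": 0.12, "X_H2": 0.12,   # atoms of the substituent X
}

def cSAR(substituent_atoms, ipso_carbon):
    """cSAR(X) = q(X) + q(C_ipso), with q(X) summed over all atoms of the substituent."""
    return sum(charges[a] for a in substituent_atoms) + charges[ipso_carbon]

cSAR_NO2 = cSAR(["N", "O1", "O2"], "C1")
cSAR_X   = cSAR(["X1", "X_H1", "X_H2"], "C4")
print(f"cSAR(NO2) = {cSAR_NO2:+.3f} e, cSAR(X) = {cSAR_X:+.3f} e")
```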
Graphical presentation of q(X) (a) and cSAR(X) (b) definitions.
As mentioned above, in contrast to the atomic charge of the substituent, q(X), the cSAR(X) values correlate well with SCs [110] and, moreover, do so independently of the type of atomic charge assessment (Mulliken [111], AIM [112], Voronoi [113], Hirshfeld [114], and NBO [115]). This has been documented for 12 para-substituted derivatives of nitrobenzene. Figure 3 presents linear regressions between cSAR(X) values calculated using different methods of atomic charge assessment. Even though the correlation for the AIM data is somewhat weaker, when the cSAR(NO2) values are estimated using these different methods all mutual correlations are excellent, as presented in Fig. 4.
Linear correlations between cSAR(X) values calculated by the VDD method and data from the Hirshfeld, Mulliken, Bader, and Weinhold approaches for p-nitrobenzene X derivatives with X = NO2, CN, CHO, COOMe, COMe, Cl, H, Me, OMe, NH2, and NHMe (cc = 0.996, 0.981, 0.923, and 0.982, respectively). Reused from [110], this work is licensed under the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/)
Correlation between cSAR(NO2) calculated from VDD charges and data from Hirshfeld, Mulliken, Bader and Weinhold methods for p-nitrobenzene X derivatives with X = NO2, CN, CHO, COOMe, COMe, Cl, H, Me, OMe, NH2 and NHMe (cc = 0.999, 0.998, 0.986 and 0.986, respectively). Reused from [110], this work is licensed under the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/)
Recent studies on disubstituted benzene and cyclohexa-1,3-diene derivatives have provided support for the use of quantum chemistry–based substituent characteristics. The substituent effect estimated by cSAR(X) and SESE proved as effective as the traditional substituent constants. Molecular systems of the X-R-Y series have been investigated for 16 substituents and seven "reaction sites", Y = NO2 [116, 117], OH [118], COOH [119], NH2 [120, 121], as well as the anionic COO− [119] and O− [118] moieties, with substituents in the 3- and 4-positions of R = benzene or cyclohexa-1,3-diene. In addition, the use of both traditional and quantum chemistry–based descriptors of the SE allows us to answer the question of how far the substituent effect in disubstituted cyclohexa-1,3-diene derivatives differs from that in bicyclo[2.2.2]octane and benzene derivatives [122]. The use of quantum chemistry–based descriptors has also made it possible to study the solvent dependence of the SE [123] and has provided a physical interpretation of the inductive and resonance effects [124].
There are two types of studies related to the substituent effect. Either they deal with a specific exchange of one substituent for another, or they are devoted to changes of some physicochemical or biochemical property across a set of substituents, a so-called "reaction series." In the first case, the influence of the substituent change on some chemical, physicochemical, or biochemical property is examined, while in the second a generalization over the collected data is sought. In this report, we review the second type of approach.
Classification of the substituent effect
There are a few possible ways of classifying how the substituent effect can be taken into account. The most general model is presented in Scheme 2. The most frequently considered type of interaction is the classical or traditional SE, where the properties of the "reaction site" Y (the fixed group in the series) in the disubstituted X-R-Y system are related to the properties of the substituent. The other type of SE is when the properties of the substituent X are related to the nature of the "reaction site" Y; these interactions are known as the reverse SE [110]. One more aspect of the SE is observed when the properties of the transmitting moiety R are subject to the influence of the substituent X (or of both X and Y), and, finally, when various properties of the Y fragment are mutually interrelated.
Model approach to the substituent effect. Graphical abstract reprinted from Phys Chem Chem Phys 18:11711–11721 (2016) [120] with permission from the PCCP Owner Societies.
Another classification can be made when the SE is considered for mono-, di-, tri-, and multi-substituted species. The di- and multi-substituted systems are much more complex, and problems with the additivity or non-additivity of the SE appear [125, 126]. Hence, related papers are rarely found in the literature. Finally, some types of SE may be considered for polycyclic systems, where sometimes no simple rules work.
It should be emphasized that the use of SE descriptors based on quantum chemistry enables the quantitative characterization of the reverse SE, which describes how much a given substituent can change its electron-donating/attracting properties depending on the position and the type of the molecular system to which it is attached. This type of effect was already observed by Hammett [75], who showed that the substituent constants for the nitro group in para-nitrophenol and in para-nitrobenzoic acid differ significantly: 1.27 and 0.78, respectively. In addition, the quantum chemistry–based SE descriptors, such as cSAR(X) or SESE, allow the electron-donating/attracting ability of any substituent to be estimated in almost all possible cases (systems).
Monosubstituted π-electron systems
The first paper on the quantitative dependence of aromaticity on the SE appeared in 1970 [127]. Aromaticity was characterized by the index Dq, defined as the modulus of the normalized sum of differences between the HMO-calculated average atomic π-electron charge and the charge at position r. Therefore, Dq is a measure of the SE on the π-electron structure of the benzene ring or, in other words, on the differentiation of atomic charges in the ring. When the Dq values are plotted against the modulus of the substituent constants σp, the changes in aromaticity due to the impact of the substituent X are described by the equation:
$$ {D}_{\mathrm{q}}=-0.915\left|{\sigma}_{\mathrm{p}}\right|+0.084 $$
with cc = −0.946 (for n = 10 data points).
This means that the stronger the SE, the greater the diversity of π-electron charges observed in the ring. This can be compared with other studies of the SE on aromaticity in monosubstituted benzene derivatives. For 19 systems [128], different descriptors of aromaticity were used, such as the aromatic stabilization energy (ASE) [129], HOMA [29], NICS indices [4], and the electron delocalization PDI index [57], whereas the substituents were characterized by substituent constants. The results revealed that, with the exception of ASE, all indices change to a small extent, indicating the high resistance of the π-electron structure of the benzene ring to the SE [128]. In all cases, the correlation coefficients confirmed good linear regressions. This is somewhat analogous to the well-known tendency of benzene-like systems to preserve their initial π-electron structure during the course of a reaction, which leads to aromatic substitution [21]. Therefore, it is not surprising that the SE can be observed much more clearly in less aromatic π-electron systems.
Very symptomatic is the comparison of the SE on π-electron delocalization in monosubstituted cyclohexadiene (olefinic) and benzene (aromatic) systems. The relationships between the obtained HOMA values and the substituent constants for 16 substituted derivatives (see Scheme 3) are presented in Figs. 5 and 6, respectively [130]. They show that π-electron delocalization in the olefinic system increases with an increase of the electron-accepting/donating strength of the SE described by the substituent constants, whereas in the aromatic system the trend is opposite and less pronounced.
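For reference, HOMA values such as those in Figs. 5 and 6 are obtained directly from the ring bond lengths; the sketch below uses the commonly quoted parametrization for CC bonds (α = 257.7, Ropt = 1.388 Å), and the two sets of bond lengths are purely illustrative geometries, not data from ref. [130].

```python
# HOMA = 1 - (alpha / n) * sum_i (R_opt - R_i)**2  over the n bonds of the ring.
ALPHA_CC = 257.7   # normalization constant for CC bonds
R_OPT_CC = 1.388   # optimal CC bond length (angstrom)

def homa(bond_lengths, alpha=ALPHA_CC, r_opt=R_OPT_CC):
    n = len(bond_lengths)
    return 1.0 - (alpha / n) * sum((r_opt - r) ** 2 for r in bond_lengths)

benzene_like   = [1.392, 1.390, 1.393, 1.391, 1.392, 1.390]  # nearly equalized bonds
localized_ring = [1.34, 1.46, 1.35, 1.47, 1.50, 1.51]        # strongly alternating bonds

print(f"HOMA (benzene-like ring)   = {homa(benzene_like):.3f}")    # close to 1
print(f"HOMA (localized CHD-like ring) = {homa(localized_ring):.3f}")  # far below 1
```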
Substituted derivatives of cyclohexa-1,3-diene (CHD): 1-X-CHD and 2-X-CHD (a) and benzene: X-Ph (b); X = NMe2, NH2, OH, OMe, CH3, H, F, Cl, CF3, CN, CHO, COMe, CONH2, COOH, NO2, NO
Dependence of HOMA on substituent constants, σp, for 1-X–cyclohexa-1,3-dienes. Reprinted from RSC Adv 6: 96528 (2016) [130]. Copyright 2016 with permission from The Royal Society of Chemistry
Dependence of HOMA on substituent constants, σp, for mono-substituted benzene derivatives. Reprinted from RSC Adv 6: 96528 (2016) [130]. Copyright 2016 with permission from The Royal Society of Chemistry
Differences in the impact of the SE on π-electron delocalization in olefinic and aromatic systems have also been expressed by comparing linear regressions of cSAR(X) on substituent constants for positions 1 and 2 in cyclohexadiene and in benzene (see Scheme 3), as presented in Table 1. The regressions of cSAR(X) in 1- and 2-substituted cyclohexadiene (CHD) differ from that observed in monosubstituted benzene. It has been shown that position 1 in CHD is significantly more sensitive to the SE than position 2, while the sensitivity of benzene lies in between. Undoubtedly, the obtained slopes (Table 1) describe the ability of the π-electron systems to transmit the SE.
Regressions of cSAR(X) on the σ constant, cSAR(X) = a·σ + b, for the 1-X-CHD, 2-X-CHD, and X-Ph series (from ref. [130])
(Only fragments of the table entries are recoverable here; among the surviving values is a slope of −0.263.)
Pentafulvene and heptafulvene (Scheme 4) are considered classical non- or weakly aromatic cyclic π-electron systems. For exocyclically substituted fulvene derivatives, changes of aromaticity due to the SE were studied by means of the HOMA index, estimated from experimental bond lengths [131]. The HOMA values of pentafulvene derivatives were characterized by a large variability range: between −0.106 for 6-(4-dimethylaminophenyl)fulvene and 0.702 for 6-dimethylamino-piperidinofulvene. In addition, the greatest HOMA value, equal to 0.986, was found for a salt, di-cyclopentadienyl calcium, in which the five-π-electron ring of pentafulvene accepts a sixth electron from the calcium atom, changing the latter into a cation. Consequently, this allows the five-membered ring to follow the Hückel rule, promoting it to a ring of the 4N+2 type.
Pentafulvene (a) and heptafulvene (b)
A similar conclusion was drawn from the results of a study of ring currents in complexes of pentafulvene with Li atoms [132]. A wider study [133] of the aromaticity (using the NICS, HOMA, and pEDA indices) of pentafulvene complexes with alkali metals (Li, Na, K, Rb, and Cs) showed that HOMA for the free pentafulvene molecule was −0.297, whereas for all salts the values were ~0.560, in good agreement with the other aromaticity descriptors.
Substituent effects on π-electron delocalization were also investigated for a set of 29 exocyclically substituted fulvene derivatives [134]. Changes of aromatic character were followed through ring currents and by means of the pEDA and HOMA descriptors. An excellent correlation (R2 = 0.988) between pEDA and the HOMA aromaticity index was found. Depending on the electron-donating/accepting power of the substituents, the range of HOMA values was very large, between ~−0.5 and ~0.7.
An application of the natural bond orbital (NBO) [115] approach to SE transmission through the fulvene and benzene ring systems [135] gives insight into the transmission properties of these systems, undoubtedly related to changes in π-electron delocalization. When the pEDA values of fulvene are plotted against the data for benzene, the regression has a slope of 1.44 with cc = 0.949, indicating the strongly π-electron-accepting character of the fulvene ring, which contains five π-electrons and tends to follow the Hückel rule; in the case of benzene, containing six π-electrons, no such effect takes place. Therefore, from the point of view of the π-electron structure, fulvene is significantly more sensitive to the SE than benzene. In the case of exocyclically substituted fulvene systems, a good linear regression between HOMA and the exocyclic CC bond length (with a slope of 10.4 and cc = 0.970) is observed for electron-donating substituents, whereas no correlation is found for the other substituents. This is due to the strong electron attraction of the five-membered ring, with its tendency to acquire six π-electrons.
Another weakly or non-aromatic non-alternant π-electron system is heptafulvene (Scheme 4b). The aromaticity of its complexes with halogen atoms has been studied using the HOMA, pEDA, and NICS indicators [136]. There are seven π-electrons in the heptafulvene ring, and thus its interaction with halogen atoms (Scheme 5) leads to a charge transfer to the halogen and, as a consequence, halogen anions are formed. This process results in a change of HOMA from 0.165 for the free molecule up to 0.640 for the fluorine salt. The smaller, i.e., the more electronegative, the halogen atom, the greater the observed change. The dependence of HOMA on the charge at the halogen atom has a correlation coefficient as high as cc = −0.999! The correlation between HOMA and pEDA is also excellent, with cc = −0.999, as is that between the binding energy and NICS (cc = −0.995).
Structure of heptafulvene-halogen atom complex (X = F, Cl, Br, I, At).
A wide overview of the factors affecting the aromaticity of monosubstituted derivatives of pentafulvene, benzene, and heptafulvene can be found in ref. [137].
An important group of aromatic systems is the azoles, five-membered heterocyclic compounds containing at least one nitrogen atom as part of the ring. The simplest, pyrrole, despite its five-membered ring, is to some extent an analog of benzene because it contains six π-electrons. This is achieved due to the presence of the 2pz electron pair of the NH group in the ring, which in consequence leads to a dramatic change in the SE on the aromaticity of the ring, as shown in Table 2 (data taken from ref. [138]). In the case of benzene derivatives, the substituent affects the aromaticity of the ring considerably less.
The lowest and the highest aromaticity indices (substituents are in parentheses) for monosubstituted benzenes (Ph-X) and pyrroles (Pyr-X) (from ref. [138])
(The table layout is only partially recoverable here; its columns are the aromaticity index, Ph-X, Pyr-X, and Δa, and the indices compared include NICS(0) and NICS(1)zz.)
aRanges of aromaticity index values between the most and the least aromatic molecules
Azoles containing various numbers of nitrogen atoms are further analogs of six-π-electron rings that have some aromatic properties. The difference between the SE in benzene and in pyrazole and imidazole, all of them containing six π-electrons, is excellently shown in Fig. 7 [139]. For electron-donating substituents (σ < 0), the HOMA values are above 0.8, whereas for electron-withdrawing ones they are below 0.8. In the latter case, the substituents attract π-electrons from the ring, leading to the formation of systems that do not fulfill the 4N+2 rule.
Correlations between HOMA aromaticity index and resonance substituent constant (σR) for substituted benzene (Bz), pyrazole (Pz) and imidazole (Im) derivatives. Reprinted (adapted) from J Phys Chem A 115:8575 (2011) [139]. Copyright 2011 with permission from the American Chemical Society
Five-membered tetrazole contains four nitrogen atoms and, similarly to benzene, six π-electrons. Tetrazole exists in two tautomeric forms, denoted 1H and 2H, as shown in Scheme 6. A comparison of the substituent effect on the π-electron structure of monosubstituted derivatives of both tetrazole tautomers and of benzene leads to interesting but diversified results [140]. The π-electron structure of the ring was characterized by the pEDA index; 16 substituents were considered, with different π- and σ-donor/acceptor properties. In all three cases, the pEDA index, describing the π-electron transfer from the substituent to the ring or vice versa, correlates well with the σp+ constants. A more detailed analysis revealed that the dependence of the 2pz orbital occupancies at the benzene carbon atoms in the ortho and para positions on pEDA follows a linear trend, with cc = 0.971 and 0.968, respectively. However, the same correlation for the carbon atom in the meta position is worse (cc = −0.791) and has a small opposite slope. This again confirms that the meta position differs in its interaction with substituents, and hence the Hammett substituent constants for the meta and para positions are different.
C5-substituted 1H- (a) and 2H- tetrazoles (b).
Similar correlations were found for both the 1H- and 2H-tetrazole derivatives. The occupations of the 2pz orbitals of all nitrogen atoms, except N3, correlate nicely with the pEDA values (cc ≥ 0.95). The lack of correlation with the 2pz occupation at the N3 atom may suggest that this position in 1H- and 2H-tetrazoles resembles, to some extent, the meta position in the benzene series.
A similar study comparing the SE in C- and N-monosubstituted pyrroles revealed that the dependence of cSAR(NX) on cSAR(C3X) has a slope of 0.88 (R2 = 0.90), indicating that position C3 is more sensitive to the SE [141]. It has also been shown that the electron-donating/attracting properties of substituents attached at the C3 position are practically identical to those observed in the monosubstituted benzene derivatives.
Very interesting and not typical is the case of a doubly bonded substituent attached to a five-membered ring [142], i.e., mono-double-bond-substituted cyclopenta-1,3-dienes (cyclopenta-2,4-dienone analogs, CPDA). The resulting dependence of HOMA on NICS is non-linear but undoubtedly acceptable, as shown in Fig. 8.
Non-linear correlation between HOMA(5) and NICS(1)(5) indices for the CPDA systems. Reprinted from Org Biomol Chem 11:3008 (2013) [142]. Copyright 2013 with permission from The Royal Society of Chemistry
Disubstituted π-electron systems
Most of the typical applications of the Hammett rules concern disubstituted π-electron systems of the X-R-Y type, where Y is a so-called "reaction site" or fixed chemical group in the series, X denotes the varying substituents, and R is a transmitter. The latter is the subject of our interest: how far is R affected by the substituent effect?
The first comprehensive analysis of the substituent effect on aromaticity in disubstituted benzene derivatives was presented by Cyrański and Krygowski [143, 144]. For this purpose, regression and factor analyses [145] of the experimental molecular geometry [146, 147] of meta- and para-substituted benzene derivatives (with Y = NO2, CN, COOH, Cl, OH, and NH2) were performed. The application of these methods to five geometry-based aromaticity indices (HOMA, BAC [63], BE [63], E(n) [63], and I6 [37]) for six reaction series of para-disubstituted X-Ph-Y revealed [143, 144] that: (i) the geometric indices of aromaticity follow the Hammett rule with σp, (ii) if X and Y with similar electron properties (both donating or both withdrawing) are excluded, the observed correlation becomes much stronger, and (iii) two orthogonal factors are sufficient to explain more than 95% of the total variance.
Systematic studies of the SE on π-electron delocalization (estimated by HOMA) and of the transmission properties of 3- and 4-disubstituted derivatives of benzene and cyclohexa-1,3-diene (CHD) were carried out using the quantum chemistry–based descriptors cSAR and SESE as well as traditional Hammett-like substituent constants. The results for the 4-substituted 1-nitro and 1-hydroxy derivatives of CHD [122] are shown in Fig. 9. In both series, it was found that when the electron properties of the substituents are opposite to those of the fixed group, the HOMA values correlate well with SESE, with a significant slope. This effect is not observed if X and Y have similar electron properties.
Dependences of HOMA on SESE for 4-X-CHD-NO2 and 4-X-CHD-OH series. Reused from [122], this work is licensed under the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/)
The transmission properties of the substituent from a given position to the reaction center can be described by the charge flow index (CFI) [122], defined as:
$$ \mathrm{CFI}=\mathrm{cSAR}\left(\mathrm{Y}\right)-\mathrm{cSAR}\left(\mathrm{X}\right) $$
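Given the cSAR values of the fixed group Y and of the substituent X, the CFI is a single subtraction; a one-line sketch with invented values, only to fix the bookkeeping.

```python
def charge_flow_index(cSAR_Y, cSAR_X):
    """CFI = cSAR(Y) - cSAR(X), the quantity used to compare SE transmission between positions."""
    return cSAR_Y - cSAR_X

# Invented cSAR values (in e) for an X-R-Y system, purely for illustration.
print(f"CFI = {charge_flow_index(cSAR_Y=-0.15, cSAR_X=0.08):.2f}")  # -> CFI = -0.23
```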
The comparison of the transmission of the substituent effect from positions 3 and 4 in the disubstituted CHD and benzene series is shown in Table 3. The slope values of the linear equations indicate weaker transmission from the meta (3-) position than from the para (4-) position, and this effect is much stronger in the CHD series than in the benzene series. When the HOMA values of the substituted nitrobenzene derivatives for the meta position are plotted against those for the para position [117], the slope is 0.56 (with a high R2 = 0.97).
Values of the slope, a, and determination coefficient, R2, for correlation between CFI for 1–3 and 1–4 (meta and para) interactions in CHD and BEN derivatives (from ref. [122])
(Only the table headings are recoverable here: CFI1–3 X-CHD-Y vs CFI1–4 X-CHD-Y and CFI1–3 X-BEN-Y vs CFI1–4 X-BEN-Y, with rows for the Y reaction sites including COO− and O−.)
Some information regarding the discrepancy between the SE transmission in meta- and para-substituted derivatives can be drawn from the dependence of HOMA on cSAR(X) and SESE (as substituent effect descriptors); for substituted phenolates this is presented in Fig. 10 [118].
Dependences of HOMA on a cSAR(X) and b SESE in meta- and para-substituted phenolates. Reused from [118], this work is licensed under the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/)
Again, as shown in these scatter plots, the energetic effect of SE is much smaller for meta-substituted species than for the para ones. The same is confirmed by changes in geometry (measured via HOMA) and cSAR(X) values.
Some light can be shed on this issue by the scatter plot of pEDA(Ring) vs sEDA(Ring) [117], presented in Fig. 11. As can be seen, there is no general correlation between these two contributions to the description of the electronic structure of the transmitting ring.
Dependence of pEDA(Ring) on sEDA(Ring) for meta- and para-substituted nitrobenzene derivatives. For red points the sequence is Me, CN, CF3, CONH2, COOH, COMe, COCl, CHO. Reprinted with permission from J Phys Chem A 121:5196 (2017) [117]. Copyright 2017 American Chemical Society
The sEDA values depend on the π-electron properties of the substituent (compare the sEDA of the NO2 and NH2 groups), but another strong factor is the electronegativity of the linking atom (nitrogen in both cases). Here, the Huheey concept of group electronegativity may be to some extent helpful [92, 93].
How much the substituent can change the properties of the ring is shown in Fig. 12. The role of the intramolecular charge transfer is nicely documented when we look at the HOMA change due to the rotation of the nitro group in para-nitroanilines [148].
Dependences of HOMA values on rotation angle φ of NO2 group in para-nitroaniline complexes (for equilibrium structures, except for HNH···F− interactions). Reused from Crystals 6:29 (2016) [148], this work is licensed under the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/)
A specific interaction is observed in doubly bonded 1,4-disubstituted benzene derivatives [142]. As in the case of systems with a double-bonded substituent attached to a five-membered ring (see above), the dependence of NICS on HOMA is non-linear but, excluding a few cases, well acceptable (see Fig. 13). Interestingly, the AIM parameters at the ring critical points, such as the electron density or its Laplacian, correlate excellently with HOMA (cc = 0.985 and 0.988, respectively). The same applies to the series of five-membered rings.
Non-linear correlations between the NICS(1)(6) and HOMA(6) indices for the BQA systems, i.e., di-double-bond-substituted cyclohexa-1,4-dienes ([1,4]benzoquinone analogs). Reprinted from Org Biomol Chem 11:3006 (2013) [142]. Copyright 2013 with permission from The Royal Society of Chemistry
Polysubstituted π-electron systems
Polysubstituted π-electron systems, in addition to Y and R, contain several substituents X1, X2, …, Xn located at different positions of R.
It is well known that substituted phenol derivatives exhibit substantial changes in their acidity [149]. It is therefore interesting how the changes in the OH properties due to the influence of substituents affect the aromaticity of the ring. The answer is given by the relationship between the HOMA values and the CO bond lengths for 664 complexes of variously polysubstituted phenols interacting with various bases in the crystalline state [150], with all data retrieved from the CSD database [146]. The dependence of HOMA on dCO is shown in Fig. 14.
Dependence of HOMA on the C−O bond length, dC-O, for variously substituted phenols interacting with bases. Reprinted from J Chem Inf Comput Sci 44:2077 (2004) [150]. Copyright (2004) with permission from American Chemical Society
It is obvious that the stronger the interaction of the OH group with the base, the shorter the CO bond becomes, the more localized the π-electron structure of the ring and, in consequence, the less aromatic its character. Simulation of this kind of interaction by quantum chemistry calculations, using a simple model of phenol and para-X-nitrophenols interacting with a fluoride anion at a variable distance from the hydrogen atom of the hydroxyl group, leads to similar conclusions [151]. The HOMA and NICS values plotted against the dCO distance present a similar picture: the higher the dCO value, the more aromatic the ring. The same was observed for the substituent effect on proton transfer in para-substituted phenol complexes with fluoride anions [152].
Studies on exocyclically substituted derivatives of benzylic cations reveal significant changes in the aromaticity of the ring, which depend clearly on the varying charge at the exo-carbon atom [153]. The HOMA values plotted against the exo-CC bond length and the charge of the exo-carbon atom lead to acceptable correlations (cc = 0.845 and 0.88, respectively). An application of the HOSE model [40, 41] allowed it to be shown that the contribution of resonance structures also correlates with the charge on the exo-carbon atom. When the problem is considered in relation to polycyclic benzenoid systems [154], an important conclusion appears: "If a single substituent able to form double bond is attached to the benzoid hydrocarbon in a position which permits the formation of the quinoidal structure along a larger part of the π-electron moiety, then it acts as dearomatizing factor for this fragment and in consequence for the whole system. Moreover, this effect is associated with a long-range intramolecular charge transfer from CH2+ group to the position(s) being the terminal(s) of the quinoidal structure in the molecule." It was also shown that the charge at the CH2+ group, as well as the aromaticity, correlates well with the Hammett-Streitwieser position constants [155, 156].
Among the substituted derivatives of benzenoid hydrocarbons, the most localized π-electron systems are encountered in quinones. Hence, the study of the aromaticity and of the routes of π-electron delocalization in 4-substituted 1,2-benzoquinones (Scheme 7) is very interesting. The use of the HOMA, MCI, DI, and FLU aromaticity indices and 11 substituents in position 4 (Scheme 7) gave insight into the nature of the SE in these systems [157]. All the above-mentioned measures of π-electron delocalization revealed a very important feature of the studied systems: the substituents in position 4 affect the C2O bond length about nine times more strongly than the C1O one, as described by the slopes of the regression lines (dCO vs σ), −0.0046 and −0.0005, respectively.
4-substituted-1,2-benzoquinones, X = NO, NO2, CN, CHO, H, Me, OMe, OH, NH2, NHMe, and NMe2
This picture is in line with the much larger changes of the HOMA and MCI descriptors of π-electron delocalization along OC2C3C4 than along OC1C6C5C4. This observation was taken as the basis for a general statement: "if the number of bonds between an electron accepting and electron donating atoms is even, then the intramolecular charge transfer is possible i.e. the resonance effect works" [157]. This is also in line with previous results in which the traditional Hammett approach was used for meta- and para-substituted systems [82]. In addition to the important conclusions presented above, when the HOMA values of the ring of the studied systems are plotted against the substituent constants, the linear regression shows a good correlation (cc = −0.930): the stronger the electron-donating substituent, the higher the HOMA value.
Monosubstituted 1,2- and 2,3-naphthoquinone derivatives have been the subject of studies on the conjugated paths between the CO groups and the substituents (X = NO, NO2, CN, CHO, Me, OMe, OH, NH2, NHMe, and NMe2) [158]. The application of π-electron delocalization characteristics, such as FLU, DI, and HOMA, as well as of changes in the CO bond lengths and SESE calculations, allowed a better recognition of the problem. The results revealed regression lines of these parameter values plotted against substituent constants, shown in Tables 4 and 5. In almost 50% of the cases, the correlation coefficients (in modulus) were better than 0.9. It should be noted, however, that several conjugated paths may be realized for the same substituent, as illustrated in Fig. 15; to characterize each of them, the HOMA index was used.
Statistics of the regressions (y = a × σ + b) of the bond lengths and DI values of both carbonyl groups, and of the SESE, HOMA, and FLU values of the rings, on the substituent constants for 2,3-naphthoquinone derivatives; correlation coefficients (R) are given as modulus values (from ref. [158])
(Only scattered entries of this table are recoverable here, e.g., CO (2): R = 0.99; DI (2): R = 0.97; Ring (A): R = 0.86; R = 0.93; Ring (B): R = 0.30; DI (2): R = 0.032; slopes of about −0.0015 and −0.00007.)
Statistics of the regressions (y = a × σ + b) of the bond lengths and DI values of both carbonyl groups, and of the SESE, HOMA, and FLU values of the rings, on the substituent constants for 1,2-naphthoquinone derivatives; correlation coefficients (R) are given as modulus values (from ref. [158])
Dependences of HOMA for conjugation paths on substituent constants for 6-substituted 2,3-naphthoquinone derivatives (N: number of bonds between X and oxygen atoms). Reprinted from J Phys Chem A 115:12691 (2011) [158]. Copyright 2011 with permission from American Chemical Society
It can be concluded that in both series of 1,2- and 2,3-naphthoquinone derivatives only one of the two carbonyl groups exhibits a well-defined substituent effect, characterized by both a higher correlation coefficient and a more substantial slope of the dC=O vs σ regression (Tables 4 and 5). These are mostly the cases where the number of bonds between the donating atom of the substituent and the oxygen atom of the carbonyl group is even. For odd numbers, no clear relations are observed.
Recent SE studies on aromaticity in variously substituted 1-, 2-, and 9-anthrols (X = NO2, CN, H, OH, or NH2) have revealed some interesting observations [159]. First, the variability of HOMA estimated for the perimeter is very low, never greater than 0.023, indicating a low sensitivity of the aromaticity (estimated in this way) to the SE. This is in line with an earlier finding that the perimeter bond lengths are little sensitive to any internal perturbations; moreover, the HOMA index estimated for the perimeter leads to higher values than when all bonds, i.e., perimeter and internal ones, are taken into account [160]. Second, the HOMA values for individual rings are always lower than those for the perimeter, and usually the rings bearing the substituent show a reduced HOMA value. Third, the HOMA values for the perimeter and individual rings in monosubstituted anthracene resemble those observed in analogously substituted anthrols. In addition, for the substituted 2-anthrol series a long-distance substituent effect has been documented: the OH group is in the first ring, the substituent is attached to the middle ring, and the ring most sensitive in terms of π-electron delocalization is the last one.
Hydrogen-bonded complexes of exocyclically substituted derivatives of 2-methylene-2H-indene, shown in Fig. 16, can also be regarded as polysubstituted π-electron systems. A systematic study of the relationship between substituent effects and the aromaticity of the six-membered ring has recently been published [161]. To characterize π-electron delocalization, the HOMA, FLU, SA (Shannon aromaticity) [162], and NICS(1)zz aromaticity indices were used. Both for the isolated monomers and for the H-bonded complexes, excellent linear correlations (R2 ≥ 0.97) were found between the aromaticity indices and the substituent constants. The aromaticity of the six-membered ring increases with an increase in the electron-donating character of the X substituents. In addition, the strength of the resulting π-hydrogen bond (energy in the range of 4.0 to 7.0 kcal/mol) depends on the aromaticity of the six-membered ring and increases with increasing aromaticity. It can therefore be said that a long-distance substituent effect also operates in this case.
Mutual effects of substituents and H-bonding strength on aromaticity of a six-membered ring for exocyclic substituted derivatives of 2-methylene-2H-indene; graphical abstract reprinted from Phys Chem Chem Phys 21:623–630 [161] with permission from the PCCP Owner Societies
Substituent effects in quasi-aromatic systems
The term quasi-aromatic compounds was introduced by Lloyd and Marshall [163] and then supported by studies of metal complexes of acetylacetone, which are characterized by the ease with which they undergo electrophilic substitution at the β carbon [164, 165]. Quasi-aromatic rings are best pictured by the structures of the enol forms of malonaldehyde, shown in Scheme 8. Their properties can be changed by substituents [166] (Scheme 9), or the hydrogen atom of the quasi-ring can be replaced by a metal atom or group, e.g., Li or BeH [167].
Structures of enol form of malonaldehyde.
Structural scheme of studied malonaldehyde derivatives for two conformations: a bridged, b open; X1, X2, and X3 denote H or F or Cl.
An application of the HOMA approach to the covalent bonds of the quasi-aromatic rings, and of NICS to the center of the ring, leads to the conclusion that NICS is insensitive to π-electron delocalization in the quasi-ring. In contrast, the HOMA values for variously substituted malonaldehydes span the range between 0.472 and 0.870 [166]. However, if the hydrogen atom in the quasi-ring of malonaldehyde is replaced by Li, the changes in delocalization in the spacer (OCCCO) are insignificant; the HOMA index lies between 0.927 and 0.971 [167]. Thus, in this case, the quasi-aromatic ring of malonaldehyde resembles the truly aromatic one, benzene, which is known to be weakly sensitive to the substituent effect [168]. The energy difference between the bridged and open conformations is 12.96 kcal/mol, while the differences in the lengths of the single C-O and double C=O bonds are 0.043 Å and 0.119 Å, respectively [169]. A detailed discussion of the resonance structures of 1(3)- and 2-X-substituted malonaldehydes (X = NO, NO2, CN, CHO, F, H, CH3, OCH3, OH, and NH2) was presented by Palusiak et al. [170]. The direction of the resonance effect along the quasi-aromatic ring and its influence on the H-bonding strength is well illustrated by the scheme in Fig. 17 [171].
The direction of resonance effect along the quasi-aromatic ring and its influence on H-bonding strength: the strengthening (a) and weakening (b) of the H-bond. Reprinted from Tetrahedron 71:4899 (2015) [171]. Copyright (2015), with permission from Elsevier
In a more quantitative, energetic way [172], this problem is presented in Table 6. The energy relations between the various mesomeric structures of quasi-aromatic H-bonded rings for malonaldehyde and related analogs reveal a dependence on the structural features of these systems.
Relative energies (in kcal/mol) of several structures of different isomers of quasi-aromatic H-bonded rings (from ref. [172])
The problem of the relation between π-electron delocalization in the quasi-ring and the strength of the H-bonding, as well as of Li-bonding, is clearly presented for salicylaldehyde, o-hydroxy Schiff base, o-nitrosophenol, and their lithium analogs [173]. In addition, detailed studies on the role of quasi-aromatic rings attached to benzenoid hydrocarbons reveal that they can also simulate real aromatic rings [169, 174]. It is well known that the central ring of triphenylene is, in line with the Clar rules, "empty" of π-electrons or, in other words, not aromatic; its HOMA value is 0.17. When triphenylene is simulated by its analog in which three benzene rings are replaced by three quasi-rings (see Fig. 18) and their hydrogen atoms are replaced by Li, we find that an increase in the number of Li atoms (replacing hydrogen) is associated with a dramatic decrease in the aromaticity of the central ring. This is documented by both aromaticity indices, HOMA and NICS, as shown in Fig. 18. In other words, the more quasi-aromatic rings with lithium bonds are attached to the benzene ring in the triphenylene analog, the lower the aromaticity of the central benzene ring.
Dependences of HOMA and NICS on the number of Li replacing H atoms in the quasi-ring. Reprinted from J Org Chem 116:7681 (2006) [174]. Copyright 2006 with permission from American Chemical Society
The extension of this approach to 33 phenolic rings and a set of 20 quasi-rings (formed by intramolecular hydrogen and lithium bonds) has revealed that the charge and the Laplacian, as well as the energy and its components (kinetic and potential energies), estimated at the ring critical points are well correlated with the HOMA and NICS (NICS, NICS(1), and NICS(1)zz) values [56]. The study was carried out by comparing the above-mentioned aromaticity indices of benzene, naphthalene, anthracene, phenanthrene, and triphenylene with those of their analogs in which one benzene ring was replaced with a quasi-aromatic ring. The results strongly confirmed the statement that the attached quasi-aromatic rings really do simulate aromatic ones. However, it should be noted again that, unlike HOMA, the NICS values do not describe electron delocalization in quasi-aromatic rings.
The problem of the interrelation between π-electron delocalization in the quasi-ring and in the benzene ring was also investigated for the ortho-hydroxy Schiff base and its derivatives in which the H atom of the quasi-aromatic ring is replaced by Li or BeH (Scheme 10) [167]. For this purpose, calculations at two levels of theory (B3LYP/6-311+G** and MP2/aug-cc-pVDZ) were used. Detailed information on the relation between the quasi-ring in the open and closed conformations and its influence on the benzene ring [167] is gathered in Table 7.
Tautomeric and canonical forms of ortho-hydroxy Schiff base (a) and its studied derivatives (b).
Calculated HOMA and NICS(1)zz values, and delocalization energies (Edel, in kcal mol−1) for ortho-hydroxy Schiff base and its derivatives (Scheme 10); B3LYP/6-311+G(d,p) results (from ref. [167])
(The table entries are only partially recoverable here; columns: Edel, Ph-ring, and quasi-ring; rows: H-enol-imine, H-enol-imine (open), H-enol-enamine, the Li-enol-imine derivative, the BeH-enol-imine derivative, and the Schiff anion; one surviving numerical entry is −27.15.)
The results obtained can be summarized as follows: (i) despite the different calculation methods and levels of quantum chemistry applied, the results are in good qualitative agreement; (ii) the π-electron delocalization of the benzene ring is weakly sensitive to whether the H-enol-imine form is in the open or closed conformation, but dramatically sensitive when the H-keto-enamine is formed; and (iii) the π-electron delocalization in the closed quasi-ring increases in the sequence H, Li, BeH, which is associated with an irregular decrease of the delocalization in the benzene ring, as estimated by HOMA and NICS.
A similar problem, for tautomeric interconversions (Scheme 11) and rotational isomerism in o-nitrosophenol [175], is illustrated in Fig. 19. The tautomeric forms of o-nitrosophenol differ dramatically in their π-electron delocalization. For the most stable isomers of the studied tautomers (shown in Scheme 11), low HOMA values characterize both the benzene ring and the quasi-ring in the ketoxime form (0.25 and 0.40, respectively), while for the nitrosoenol form they are 0.91 and 0.69, respectively.
Tautomeric equilibria in o-nitrosophenol: a ketoxime and b nitrosoenol forms
Scatter plots of HOMA for phenyl ring vs. R(C-O) and R(C-N) for various forms of o-nitrosophenol. Taken from J Phys Org Chem 18:892 (2005) [175]. Copyright (2005) Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission
The phenomena known as aromaticity and the substituent effect are among the most important issues in chemistry, biochemistry, and related fields. Indices based on energetic, geometric (e.g., HOMA), magnetic (e.g., NICS), and electronic structure properties (e.g., FLU) are the most commonly used for the quantitative description of aromaticity. The substituent effect is most often characterized by substituent constants (e.g., Hammett substituent constants). However, the development of computational methods has led to the use of substituent effect descriptors based on quantum chemistry; for this purpose, the energies of properly designed homodesmotic reactions, the electron density distribution, or the electrostatic potential are used. In these cases, their use is verified by comparing the "new" descriptors with those obtained using the classical approach (i.e., Hammett-like constants). Among the new physicochemical concepts of the substituent effect, the most promising is the cSAR approach, which allows both the classical and the reverse substituent effect to be studied.
The mutual relations between aromaticity and the substituent effect can be summarized as follows:
(i) Strongly aromatic molecules are resistant to substituent effects; the less aromatic the system, the more sensitive it is to the SE.
(ii) Quasi-aromatic rings, when attached to a truly aromatic hydrocarbon, simulate well the "original" aromatic rings such as benzene.
(iii) NICS as an aromaticity descriptor does not work for the detection of π-electron delocalization in quasi-aromatic rings.
(iv) HOMA and FLU describe well the π-electron delocalization in π-fragments of any π-electron system.
(v) HOMA values estimated from perimeter bond lengths are only very weakly sensitive to the substituent effect.
(vi) When the number of bonds between the electron-attracting and electron-donating atoms or groups in a π-electron system is even, the intramolecular charge transfer is much more effective than in any other case.
(vii) Almost all SE descriptors indicate a much smaller interaction effect in substituted systems for the 1,3- (meta-like) positions than for the 1,4- (para-like) ones.
(viii) For larger systems, a long-distance substituent effect has been found.
HS thanks the Warsaw University of Technology for supporting this work.
ISI Web of Science, retrieved in December 2018
Kekulé A (1865) Bull Soc Chim Fr 3:98–110
Erlenmeyer E (1866) 137:327–359
Chen Z, Wannere CS, Corminboeuf C, Puchta R, Schleyer PR (2005) Chem Rev 105:3842–3888
Robinson R (1958) Tetrahedron 3:323–324
Pauling L, Sherman J (1933) J Chem Phys 1:606–617
Pauling L (1960) The nature of the chemical bond. Cornell Univ Press, Ithaca, p 195
Streitwieser Jr A (1961) Molecular orbital theory for organic chemists. Wiley, New York, p 237ff
Cohen N, Benson SW (1993) Chem Rev 93:2419–2438
Slayden SW, Liebman JF (2001) Chem Rev 101:1541–1566
Dewar MJS, Gleicher GJ (1965) J Am Chem Soc 87:692–696
Dewar MJS, de Llano C (1969) J Am Chem Soc 91:789–795
Dewar MJS, Harget A, Trinajstić N (1969) J Am Chem Soc 91:6321–6325
Cyrański MK (2005) Chem Rev 105:3773–3811
Hess Jr BA, Schaad LJ (1971) J Am Chem Soc 93:305–310
Hess Jr BA, Schaad LJ (1971) J Am Chem Soc 93:2413–2416
Hess Jr BA, Schaad LJ (1971) J Org Chem 36:3418–3423
Schaad LJ, Hess Jr BA (2001) Chem Rev 101:1465–1476
Kruszewski J, Krygowski TM (1970) Tetrahedron Lett 11:319–324
Smith MB, March J (2001) March's advanced organic chemistry, 5th edn. Wiley, New York, p 681
Krygowski TM (1970) Tetrahedron Lett 11:1311–1312
Figeys HP (1970) Tetrahedron 26:5225–5234
Krygowski TM, Kruszewski J (1972) Bull Acad Polon Sci Ser Sci Chim 20:993–1000
Julg A, Francoise P (1967) Theor Chim Acta 7:249–259
Kemula W, Krygowski TM (1968) Tetrahedron Lett 9:5135–5140
Kruszewski J, Krygowski TM (1972) Tetrahedron Lett 13:3839–3842
Krygowski TM (1993) J Chem Inf Comput Sci 33:70–78
Madura ID, Krygowski TM, Cyrański MK (1998) Tetrahedron 54:14913–14918
Zborowski KK, Proniewicz LM (2009) Pol J Chem 83:477–484
Zborowski KK, Alkorta I, Elguero J, Proniewicz LM (2012) Struct Chem 23:595–600
Andrzejak M, Kubisiak P, Zborowski KK (2013) Struct Chem 24:1171–1184
Raczyńska ED, Hallman M, Kolczyńska K, Stępniewski T (2010) Symmetry 2:1485–1509
Frizzo CP, Martins MAP (2012) Struct Chem 23:375–380
Bird CW (1985) Tetrahedron 41:1409–1414
Bird CW (1992) Tetrahedron 48:335–340
Gordy W (1947) J Chem Phys 15:305–310
Wieckowski T, Krygowski TM (1981) Can J Chem 59:1622–1629
Krygowski TM, Anulewicz R, Kruszewski J (1983) Acta Cryst B39:732–739
Hendrickson JB, Cram DJ, Hammond GS (1980) Organic chemistry, 4th edn. McGraw-Hill, New York, p 148
Randić M (1977) Tetrahedron 33:1905–1920
Ciesielski A, Krygowski TM, Cyrański MK, Balaban AT (2011) Phys Chem Chem Phys 13:3737–3747
Krygowski TM, Szatyłowicz H, Stasyuk OA, Dominikowska J, Palusiak M (2014) Chem Rev 114:6383–6422
Dauben HJ, Wilson JD, Laity JL (1968) J Am Chem Soc 90:811–813
Benson RC, Flygare WH (1970) J Am Chem Soc 92:7523–7529
Flygare WH (1974) Chem Rev 74:653–687
Schleyer PR, Maerker C, Dransfeld A, Jiao H, van Eikema Hommes NJR (1996) J Am Chem Soc 118:6317–6318
Cyrański MK, Krygowski TM, Wisiorowski M, van Eikema Hommes NJR, Schleyer PR (1998) Angew Chem Int Ed 37:177–180
Corminboeuf C, Heine T, Seifert G, Schleyer PR, Weber J (2004) Phys Chem Chem Phys 6:273–276
Bader RFW (1992) Atoms in molecules. A quantum theory. Oxford University Press, Oxford
Bader RFW (1991) Chem Rev 91:893–928
Popelier P (2000) Atoms in molecules, an introduction. Prentice Hall
Howard ST, Krygowski TM (1997) Can J Chem 75:1174–1181
Palusiak M, Krygowski TM (2007) Chem Eur J 13:7996–8006
Poater J, Fradera M, Duran M, Sola M (2003) Chem Eur J 9:400–406
Bultinck P, Rafat M, Ponec R, Van Gheluwe B, Carbó-Dorca R, Popelier P (2006) J Phys Chem A 110:7642–7648
Matito E, Duran M, Sola M (2005) J Chem Phys 122:14109
Feixas F, Matito E, Poater J, Sola M (2015) Chem Soc Rev 44:6434–6451
Katritzky AR, Barczyński P, Musumarra G, Pisano D, Szafran M (1989) J Am Chem Soc 111:7–15
Schleyer PR, Freeman PK, Jiao H, Goldfuss B (1995) Angew Chem Int Ed 34:337–340
Krygowski TM, Ciesielski A, Bird CW, Kotschy A (1995) J Chem Inf Comput Sci 35:203–210
Katritzky AR, Karelson M, Sild S, Krygowski TM, Jug K (1998) J Org Chem 63:5228–5231
Sadlej-Sosnowska N (2001) J Org Chem 66:8737–8743
Cyrański MK, Krygowski TM, Katritzky AR, Schleyer PR (2002) J Org Chem 67:1333–1338
Feixas F, Matito E, Poater J, Sola M (2008) J Comput Chem 29:1543–1554
Oziminski WP, Dobrowolski JCZ (2009) J Phys Org Chem 22:769–778
Szczepanik DW, Zak E, Dyduch K, Mrozek J (2014) Chem Phys Lett 583:154–159
Szczepanik DW, Andrzejak M, Dominikowska J, Pawełek B, Krygowski TM, Szatyłowicz H, Sola M (2017) Phys Chem Chem Phys 19:28970–28981
Szczepanik DW, Sola M, Krygowski TM, Szatyłowicz H, Andrzejak M, Pawełek B, Dominikowska J, Kukułka M, Dyduch K (2018) Phys Chem Chem Phys 20:13430–13436
Szczepanik DW, Sola M, Andrzejak M, Pawełek B, Dominikowska J, Kukułka M, Dyduch K (2017) J Comput Chem 38:1640–1656
Szatylowicz H, Krygowski TM (2017) Wiadomości Chemiczne 71:497–516
Hammett LP (1937) J Am Chem Soc 59:96–103
Hammett LP (1940) Physical organic chemistry, 1st edn. McGraw-Hill, New York, p 196
Krygowski TM, Wozniak K (1991) In: Zalewski RI, Krygowski TM, Shorter J (eds) Similarity models in organic chemistry, biochemistry and related fields. Similarity models: statistical tools and problems in using them, Chpt. 1. Elsevier, Amsterdam, pp 3–75
Jaffe HH (1953) Chem Rev 53:191–261
Hansch C (1969) Acc Chem Res 2:232–239
Exner O (1972) In: Chapman NB, Shorter J (eds) Advances in linear free energy relationships. The Hammett equation - the present position, Chpt. 1. Plenum Press, London, p 1
Johnson CD (1973) The Hammett equation. Cambridge University Press, Cambridge
Shorter J (1991) In: Zalewski RI, Krygowski TM, Shorter J (eds) Similarity models in organic chemistry, biochemistry and related fields. Substituent effect parameters and models applied in organic chemistry, Chpt. 2. Elsevier, Amsterdam, p 77
Hansch C, Leo A, Taft RW (1991) Chem Rev 91:165–195
Krygowski TM, Stępień BT (2005) Chem Rev 105:3482–3512
Exner O, Bohm S (2006) Curr Org Chem 10:763–778
Swain CG, Lupton Jr EC (1968) J Am Chem Soc 90:4328–4337
Streitwieser Jr A (1961) Molecular orbital theory for organic chemists. Wiley, New York
Kemula W, Krygowski TM (1967) Bull Acad Polon Sci Ser Sci Chim 15:479–484
Krygowski TM, Tomasik P (1970) Bull Acad Polon Sci Ser Sci Chim 18:303–308
Kamieński B, Krygowski TM (1971) Tetrahedron Lett 12:103–104
Domenicano A, Mazzeo P, Vaciago A (1976) Tetrahedron Lett 17:1029–1032
Domenicano A, Murray-Rust P (1979) Tetrahedron Lett 24:2283–2286
Huheey JE (1965) J Phys Chem 69:3284–3291
Campanelli AR, Domenicano A, Ramondo F (2003) J Phys Chem A 107:6429–6440
Campanelli AR, Domenicano A, Ramondo F, Hargittai I (2004) J Phys Chem A 108:4940–4948
Bachrach SM (2014) Computational organic chemistry. Wiley, New Jersey
George P, Trachtman M, Bock CW, Brett AM (1976) J Chem Soc Perkin Trans 2:1222–1227
Pross A, Radom L, Taft RW (1980) J Org Chem 45:818–826
Siodla T, Oziminski WP, Hoffmann M, Koroniak H, Krygowski TM (2014) J Org Chem 79:7321–7331
Gadre SR, Suresh CH (1997) J Org Chem 62:2625–2627
Galabov B, Ilieva S, Schaefer III HF (2006) J Org Chem 71:6382–6387
Sadlej-Sosnowska N (2007) J Phys Chem A 111:11134–11140
Galabov B, Ilieva S, Hadjieva B, Atanasov Y, Schaefer III HF (2008) J Phys Chem A 112:6700–6707
Sayyed FB, Suresh CH, Gadre SR (2010) J Phys Chem 114:12330–12333
Suresh CH, Gadre SR (2008) Phys Chem Chem Phys 10:6492–6499
Remya GS, Suresh CH (2016) Phys Chem Chem Phys 18:20615–20626
Sadlej-Sosnowska N (2007) Pol J Chem 81:1123–1134
Sadlej-Sosnowska N (2007). Chem Phys Lett 447:192–196CrossRefGoogle Scholar
Krygowski TM, Sadlej-Sosnowska N (2011). Struct Chem 22:17–22CrossRefGoogle Scholar
Stasyuk OA, Szatylowicz H, Fonseca Guerra C, Krygowski TM (2015). Struct Chem 26:905–913CrossRefGoogle Scholar
Mulliken RS (1955). J Chem Phys 23:1833–1840 1841–1846, 2338–2342, 2343–2346CrossRefGoogle Scholar
Bader RWM (1990) Atoms in molecules: a quantum theory. Clarendon Press, OxfordGoogle Scholar
Bickelhaupt FM, van der Eikemma NIR, Fonseca Guerra C, Baerends EJ (1996). Organometallics 15:2923–2931CrossRefGoogle Scholar
Hirshfeld FL (1977). Theor Chim Acta 44:129–138CrossRefGoogle Scholar
Weinhold F, Landis CR (2005) Valency and bonding, a natural bond orbital donor-acceptor perspective. Cambridge University Press, CambridgeCrossRefGoogle Scholar
Szatylowicz H, Jezuita A, Ejsmont K, Krygowski TM (2017). Struct Chem 28:1125–1132CrossRefGoogle Scholar
Szatylowicz H, Jezuita A, Ejsmont K, Krygowski TM (2017). J Phys Chem A 121:5196–5203CrossRefPubMedGoogle Scholar
Shahamirian M, Szatylowicz H, Krygowski TM (2017). Struct Chem 28:1563–1572CrossRefGoogle Scholar
Varaksin KS, Szatylowicz H, Krygowski TM (2017). J Mol Struct 1137:581–588CrossRefGoogle Scholar
Szatylowicz H, Siodla T, Stasyuk OA, Krygowski TM (2016). Phys Chem Chem Phys 18:11711–11,721CrossRefPubMedGoogle Scholar
Szatylowicz H, Siodla T, Krygowski TM (2017). J Phys Org Chem 30:e3694CrossRefGoogle Scholar
Szatylowicz H, Jezuita A, Siodla T, Varaksin KS, Ejsmont K, Shahamirian M, Krygowski TM (2018). Struct Chem 29:1201–1212CrossRefGoogle Scholar
Szatylowicz H, Jezuita A, Siodla T, Varaksin KS, Ejsmont K, Madura ID, Krygowski TM (2018). J Phys Chem A 122:1896–1904CrossRefPubMedGoogle Scholar
Szatylowicz H, Jezuita A, Siodla T, Varaksin KS, Domanski MA, Ejsmont K, Krygowski TM (2017). ACS Omega 2:7163–7171CrossRefGoogle Scholar
Hęclik K, Dębska B, Dobrowolski JC (2014). RSC Adv 4:17337–17346CrossRefGoogle Scholar
Hęclik K, Dobrowolski JC (2017). J Phys Org Chem 30:e3656CrossRefGoogle Scholar
Krygowski TM (1970). Bull Acad Polon Sci Ser Sci Chim 18:463–468Google Scholar
Krygowski TM, Ejsmont K, Stepien MK, Poater J, Sola M (2004). J Organomet Chem 69:6634–6640CrossRefGoogle Scholar
Minkin VI, Glukhovtsev MN, BYa S (1994) Aromaticity and antiaromaticity, electronic and structural aspect. Wiley, New YorkGoogle Scholar
Siodla T, Szatylowicz H, Varaksin KS, Krygowski TM (2016). RSC Adv 6:96527–96,530CrossRefGoogle Scholar
Krygowski TM, Ciesielski A, Cyranski M (1995). Chem Pap 49:128–132Google Scholar
Oziminski WP, Krygowski TM, Fowler PW, Soncini A (2010). Org Lett 12:4880–4883CrossRefPubMedGoogle Scholar
Oziminski WP, Krygowski TM, Noorizadeh S (2012). Struct Chem 23:931–938CrossRefGoogle Scholar
Krygowski TM, Oziminski WP, Palusiak M, Fowler PW, McKenzie AD (2010). Phys Chem Chem Phys 12:10740–10745CrossRefPubMedGoogle Scholar
Oziminski WP, Krygowski TM (2011). J Mol Model 17:565–572CrossRefPubMedGoogle Scholar
Krygowski TM, Oziminski WP, Cyranski MK (2012). J Mol Model 18:2453–2460CrossRefPubMedGoogle Scholar
Cysewski P, Jelinski T, Krygowski TM, Oziminski WP (2012). Curr Org Chem 16:1920–1933CrossRefGoogle Scholar
Zborowski K, Alkorta I, Elguero J (2007). Struct Chem 18:797–805CrossRefGoogle Scholar
Curutcher C, Poater J, Sola M, Elguero J (2011). J Phys Chem A 115:8571–8577CrossRefGoogle Scholar
Oziminski WP, Krygowski TM (2011). Tetrahedron 67:6316–6321CrossRefGoogle Scholar
Zborowski KK, Szatyłowicz H, Stasyuk OA, Krygowski TM (2017). Struct Chem 28:1223–1227CrossRefGoogle Scholar
Mazurek A, JCz D (2013). Org Biomol Chem 11:2997–3013CrossRefPubMedGoogle Scholar
Cyranski MK, Krygowski TM (1995). Pol J Chem 69:1080–1087Google Scholar
Morrison DF (1976) Multivariate statistical methods. McGraw-Hill Inc, New YorkGoogle Scholar
Allen FH (2002). Acta Crystallogr Sect B Struct Sci 58:380–388CrossRefGoogle Scholar
(1989) Cambridge Structural Data Base, User's manuel, part I, II, III, CCDC CambridgeGoogle Scholar
Szatylowicz H, Stasyuk OA, Guerra CF, Krygowski TM (2016). Crystals 6:29CrossRefGoogle Scholar
Rapaport Z (ed) (2003) The chemistry of phenols. Wiley, New YorkGoogle Scholar
Krygowski TM, Szatyłowicz H, Zachara JE (2004). J Chem Inf Comput Sci 44:2077–2082CrossRefPubMedGoogle Scholar
Krygowski TM, Zachara JE, Szatyłowicz H (2004). J Org Chem 69:7038–7043CrossRefPubMedGoogle Scholar
Krygowski TM, Szatyłowicz H, Zachara JE (2005). J Chem Inf Model 45:652–456CrossRefPubMedGoogle Scholar
Krygowski TM, Wisiorowski M, Nakata K, Fujio M, Tsuno Y (1996). Bull Chem Soc Jpn 69:2275–2279CrossRefGoogle Scholar
Krygowski TM, Cyrański M, Nakata K, Fujio M, Tsuno Y (1997). Tetrahedron 53:11383–11,398CrossRefGoogle Scholar
Krygowski TM (1971). Bull Acad Polon Sci Ser Sci Chim 19:49–59Google Scholar
Krygowski TM (1972). Tetrahedron 28:4981–4987CrossRefGoogle Scholar
Szatyłowicz H, Krygowski TM, Palusiak M, Poater J, Sola M (2011). J Organomet Chem 76:550–556CrossRefGoogle Scholar
Shahamirian M, Cyrański MK, Krygowski TM (2011). J Phys Chem A 115:12688–12,694CrossRefPubMedGoogle Scholar
Szatylowicz H, Domanski MA, Krygowski TM (2019). ChemistryOpen 8:64–73CrossRefPubMedPubMedCentralGoogle Scholar
Zborowski KK, Krygowski TM (2014). Tetrahedron Lett 55:6359–6361CrossRefGoogle Scholar
Nekoei AR, Vatanparast M (2019). Phys Chem Chem Phys 21:623–630CrossRefPubMedGoogle Scholar
Noorizadeh S, Shakerzadeh E (2010). Phys Chem Chem Phys 12:4742–4749CrossRefPubMedGoogle Scholar
Lloyd D, Marshall DR (1964). Chem Ind (London):1760–1761Google Scholar
Collman JP, Moss RA, Goldby SD, Trahanowsky WS (1960). Chem Ind (London):1213–1214Google Scholar
Lloyd D, Marshall DR (1971) In: Bergmann ED, Pullman B (eds) Aromaticity, pseudoaromaticity, antiaromaticity. Proceedings of an international symposium held in Jerusalem 1970. Israel Academy of Science and Humanities, Jerusalem, p 85Google Scholar
Krygowski TM, Zachara JE (2005). Theor Chem Accounts 114:229–234CrossRefGoogle Scholar
Krygowski TM, Zachara JE, Moszyński R (2005). J Chem Inf Model 45:1837–1841CrossRefPubMedGoogle Scholar
Krygowski TM, Stepień BT (2004). Pol J Chem 68:2213–2217Google Scholar
Palusiak M, Simon S, Sola M (2006). J Organomet Chem 71:5241–5248CrossRefGoogle Scholar
Palusiak M, Simon S, Sola M (2007). Chem Phys 342:43–54CrossRefGoogle Scholar
Krygowski TM, Bankiewicz B, Czarnecki Z, Palusiak M (2015). Tetrahedron 71:4895–4908CrossRefGoogle Scholar
Lenain P, Mandado M, Mosquera MA, Bultinck P (2009). J Phys Chem A 112:10689–10696CrossRefGoogle Scholar
Krygowski TM, Zachara-Horeglad J, Palusiak M (2010). J Organomet Chem 75:4944–4949CrossRefGoogle Scholar
Krygowski TM, Zachara JE, Osmiałowski B, Gawinecki R (2006). J Organomet Chem 116:7678–7682CrossRefGoogle Scholar
Raczyńska ED, Krygowski TM, Zachara JE, Ośmiałowski B, Gawinecki R (2005). J Phys Org Chem 18:892–897CrossRefGoogle Scholar
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Email authorView author's OrcID profile
1.Faculty of ChemistryWarsaw University of TechnologyWarsawPoland
2.Faculty of ChemistryOpole UniversityOpolePoland
3.Department of ChemistryWarsaw UniversityWarsawPoland
Szatylowicz, H., Jezuita, A. & Krygowski, T.M. Struct Chem (2019). https://doi.org/10.1007/s11224-019-01360-7
Received 11 March 2019
Publisher Name Springer US | CommonCrawl |
In vivo estimation of elastic heterogeneity in an infarcted human heart
Gabriel Balaban ORCID: orcid.org/0000-0002-6794-96111,
Henrik Finsberg2,3,
Simon Funke2,
Trine F. Håland6,
Einar Hopp4,
Joakim Sundnes2,3,
Samuel Wall2,5 &
Marie E. Rognes2
Biomechanics and Modeling in Mechanobiology, volume 17, pages 1317–1329 (2018)
In myocardial infarction, muscle tissue of the heart is damaged as a result of ceased or severely impaired blood flow. Survivors have an increased risk of further complications, possibly leading to heart failure. Material properties play an important role in determining post-infarction outcome. Due to spatial variation in scarring, material properties can be expected to vary throughout the tissue of a heart after an infarction. In this study we propose a data assimilation technique that can efficiently estimate heterogeneous elastic material properties in a personalized model of cardiac mechanics. The proposed data assimilation is tested on a clinical dataset consisting of regional left ventricular strains and in vivo pressures during atrial systole from a human with a myocardial infarction. Good matches to regional strains are obtained, and simulated equi-biaxial tests are carried out to demonstrate regional heterogeneities in stress–strain relationships. A synthetic data test shows a good match of estimated versus ground truth material parameter fields in the presence of no to low levels of noise. This study is the first to apply adjoint-based data assimilation to the important problem of estimating cardiac elastic heterogeneities in 3-D from medical images.
1 Introduction
Myocardial infarction (MI) is a condition in which muscle tissue in the heart is damaged due to a loss of blood supply. After an infarction, there is an increased risk of further complications, such as rupture, infarct expansion, ventricular remodelling, hypertrophy, and heart failure (Holmes et al. 2005). Post-MI, the elastic properties of the myocardium have been shown to play a large role in determining the outcome (Morita et al. 2011; Fomovsky et al. 2012).
A promising way to study the elastic properties of in vivo myocardium is by mathematical modelling and computer simulation. With simulation it is possible to create an in silico representation of a patient's heart after an infarction. This opens up new possibilities for quantification of elasticity, beyond what is available in medical imaging today. Additionally, an in silico model that is personalized to a patient can potentially simulate the effects of treatments or therapies on the patient, thereby improving the outcome and reducing risks after MI.
Previous studies have estimated elasticity as a global value in models of infarcted hearts (Chabiniok et al. 2009; Gao et al. 2014; Fan et al. 2016). This resulted in simulated pressure–volume relations that matched well to those observed in vivo. Additionally, estimated elasticity values were shown to be significantly higher in patients with infarction compared to healthy controls (Fan et al. 2016). While these results are intriguing, the use of global parameters neglects the fact that infarction is a local phenomenon.
A more detailed approach has been to identify infarcted and healthy regions a priori and then define separate parameters for these regions in a model (Walker et al. 2005; Mojsejenko et al. 2015; McGarvey et al. 2015). The resulting regional parameters were shown to be higher in the infarcted region as compared to the healthy remote myocardium. This demonstrates the potential of modelling to quantify differences in tissue stiffness within the same heart. However, the infarctions that caused the stiffness differences were induced in otherwise healthy animals, leading to clearly demarcated regions of myocardial infarction. In the general clinical setting, however, patients may suffer from multiple infarctions, possibly occurring at different times and locations, and/or may be suffering from other cardiac pathologies. Such conditions may lead to substantial heterogeneity in elastic properties, not known a priori.
To address the issue of spatial heterogeneity in cardiac elasticity, we here present a novel 3-D data assimilation procedure. This procedure employs an adjoint gradient-based optimization method which can efficiently handle high-dimensional parameter sets. In turn, this allows for the spatial resolution of heterogeneous elastic parameters throughout the myocardium. Previous studies on the topic of soft tissue elastography have proposed the adjoint gradient approach with 2-D models and synthetic data (Oberai et al. 2003) for both compressible (Gokhale et al. 2008) and incompressible mechanics (Goenezen et al. 2011). Furthermore, we applied adjoint gradient optimization to the problem of estimating local cardiac contraction (Balaban et al. 2017; Finsberg et al. 2017), but did not consider spatially resolved elastic parameters, which we now address.
We demonstrate the utility of our method by personalizing an in silico model of cardiac elasticity to data collected from a patient in heart failure with a previous myocardial infarction and a heterogeneous distribution of fibrotic tissue. Input data consist of regional strains, which are computed by speckle tracking echocardiography, and a pressure transient obtained from a catheter. Additionally, we quantify the patient's cardiac scar burden from late gadolinium enhanced magnetic resonance images (LG-MRI) to provide a context for the modelling results.
2 Methods and materials
2.1 Clinical data
Clinical data were obtained with the permission of the Oslo University Hospital in the context of the Impact study (Hospital 2016). Specifically, we consider the case of a 64-year-old man in systolic heart failure, with left bundle branch block, coronary artery disease, and chronic infarction predominantly in the inferior and inferolateral sections of the left ventricular wall.
Prior to treatment, the patient had echocardiography, LG-MRI, and left ventricular (LV) pressure measurements taken, which are the basis for the clinical data used in this study. Pressure recordings were carried out with an intra-vascular pressure sensor catheter (Millar micro catheter: precision 1 mmHg, accuracy 1.5 mmHg; Millar 2017) positioned in the LV via the right femoral artery. Pressure data were obtained automatically and digitized (Powerlab system, AD Instruments) before offline analyses were performed with a low-pass filter of 10 Hz.
Fig. 1 Top row: Example short- and long-axis slices taken from 3-D echocardiography with tracked segments in green. Bottom row: Model LV geometry derived from the 3-D echo data. From left to right are the computational mesh, rule-based fibre orientations, and the standard AHA zones shown in separate colours
A 4-D echocardiography examination of the patient's LV was performed using a GE Vingmed E9 machine. Speckle tracking motion analysis was carried out with GE's software package EchoPac. Data from 6 heartbeats were combined in order to obtain a single sequence of images for a single heartbeat. Example short- and long-axis slices taken from the image sequence are shown in Fig. 1. Seven separate measurement points of left ventricular strain during atrial systole were obtained from the echo images. The strains were given as regional averages defined for a standard 17 segment AHA representation (Cerqueira et al. 2002) and measured in the local longitudinal, radial, and circumferential directions, without any off-diagonal shear components.
Strain and pressure data were synchronized using the beginning of atrial systole (BAS) as the first point of registration. In the pressure data, BAS was located by a deflection in a simultaneously acquired left atrial electrogram. In the strain data, BAS was identified by the onset of longitudinal stretching following diastasis. Pressures corresponding to strains were registered using the image acquisition times up until just before ventricular systole, which was identified in the strain data by the onset of longitudinal contraction.
Pressure increases in late diastole are generally very small in magnitude, and for our patient strain points 2 and 3 shared the same pressure measurement. In order to give each strain point a unique pressure, an additional cubic polynomial smoothing was carried out. Both smoothed and original pressure data are illustrated in Fig. 2.
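As an illustration of this step, the sketch below fits a cubic polynomial to catheter pressure samples and re-evaluates it at the image acquisition times, so that each strain point receives a unique smoothed pressure. The times and pressure values here are placeholders, not the patient data.

```python
import numpy as np

# Placeholder acquisition times (s) and catheter pressures (kPa) for the seven
# strain points in atrial systole; points 2 and 3 share a pressure value.
t = np.array([0.00, 0.04, 0.08, 0.12, 0.16, 0.20, 0.24])
p = np.array([0.60, 0.72, 0.72, 0.80, 0.95, 1.10, 1.20])

# Fit a cubic polynomial and re-evaluate it at the acquisition times so that
# each strain point is assigned a unique, smoothed pressure.
coeffs = np.polyfit(t, p, deg=3)
p_smooth = np.polyval(coeffs, t)
print(np.round(p_smooth, 3))
```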
Cardiac magnetic resonance imaging was performed with a 3.0 Tesla scanner (Skyra, Siemens, Erlangen, Germany). We quantified the amount of myocardial fibrosis on a per region basis from short-axis late gadolinium enhancement images acquired 10–20 min after intravenous injection of 0.2 mmol/kg of gadoterate meglumine (Guerbet, Villepinte, France). This resulted in an estimated volume ratio of fibrotic to healthy tissue for each myocardial segment (scar burden). In this analysis the apex region was merged into the neighbouring apical regions, giving a 16 segment division. Example LG-MRI images and the scar burden data are displayed in Fig. 3.
Fig. 2 Left ventricular pressure trace synchronized to echo-derived strain measurements taken in atrial systole. The original catheter data are shown in dotted black, whereas the cubic polynomial smoothed data are shown in solid green
Fig. 3 Top row: two example short-axis late gadolinium enhancement MRI images used for regional scar quantification. Fibrotic sections of the myocardium appear in white. Bottom row: regional quantification of myocardial scar burden based on LG-MRI. The inner, middle, and outer rings represent apical, midwall, and basal sections, respectively. The RV insertion points are marked by two horizontal lines extending to the left of the bullseye
2.2 Mesh and fibre generation
We created a computational mesh based on a 3-D ultrasound image to capture the details of the patient ventricular geometry in an in silico model. The image was taken at the start of atrial systole, when the pressure was at a minimum. Using GE's EchoPac software, we extracted triangulated data points for the left ventricular endocardial and epicardial surfaces. These surfaces were cut by a plane fitted to the basal points of the surfaces, and adjusted so that the ventricular volume of the computational mesh was within 1 mL of the volume measured in the image. Using the epicardial, endocardial, and basal surfaces as boundaries, we created a volumetric mesh using Gmsh (Geuzaine and Remacle 2009). This mesh contained 741 vertices and 2214 tetrahedra. AHA zones were delineated on this volumetric mesh based on data provided by EchoPac, so that our AHA zones were consistent with those used to calculate image-based strains.
Local myocardial fibre orientations were assigned with a helix angle of 40 degrees on the endocardium rotated clockwise throughout the ventricular wall to \(-50\) degrees on the epicardium using a rule-based method (Bayer et al. 2012). Snapshots of the image-based geometry, along with AHA segments and fibres, are shown in the bottom row of Fig. 1.
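A minimal sketch of the linear transmural helix-angle rule is given below, assuming a normalized transmural coordinate and placeholder local circumferential and longitudinal basis vectors. The actual assignment follows the rule-based method of Bayer et al. (2012), which also constructs these local directions from Laplace solutions; the sketch only illustrates the 40 to −50 degree rotation.

```python
import numpy as np

def helix_angle(d):
    """Helix angle (radians) at normalized transmural depth d
    (d = 0 at the endocardium, d = 1 at the epicardium)."""
    return np.deg2rad(40.0 - 90.0 * d)        # 40 deg at endo, -50 deg at epi

def fibre_direction(d, e_c, e_l):
    """Unit fibre vector at depth d, rotated from the local circumferential
    direction e_c towards the local longitudinal direction e_l."""
    alpha = helix_angle(d)
    f = np.cos(alpha) * e_c + np.sin(alpha) * e_l
    return f / np.linalg.norm(f)

# Example with placeholder local basis vectors at mid-wall (d = 0.5):
e_c, e_l = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
print(fibre_direction(0.5, e_c, e_l))
```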
2.3 Elastic wall motion model
We adopt a quasi-static continuum mechanics framework to simulate the motion of the left ventricle throughout atrial systole. As primary variables, we consider a vector field \(\mathbf {u}\) giving the displacement map between a reference configuration \(\varOmega \) and a deformed configuration undergoing a pressure load. Furthermore, we define the deformation gradient \(\mathbf {F}= {{\mathrm{Grad}}}\mathbf {u}+ \mathbf{I}\).
In our wall motion model the myocardium is considered to be a hyperelastic material with strain energy given by a transversely isotropic simplification of the Holzapfel–Ogden law (Holzapfel and Ogden 2009),
$$\begin{aligned} \psi ( \mathbf C ) = \frac{a}{2 b} \left( e^{b (I_1( \mathbf C ) - 3)} -1 \right) + \frac{a_f}{2b_f} \left( e^{ b_f (I_{4f}( \mathbf C ) - 1)_+^2} -1 \right) . \end{aligned}$$
The energy density \(\psi \) in (1) defines the amount of elastic energy stored per unit volume myocardium, given the values of the right Cauchy–Green tensor \( \mathbf C = \mathbf {F}^{T} \mathbf {F}\). The notation \((\cdot )_{+}\) refers to \(\max \{\cdot , 0\}\), and the mechanical invariants \(I_1\) and \(I_{4f}\) are defined as
$$\begin{aligned} I_1( \mathbf C ) = {{\mathrm{tr}}} \mathbf C , \quad \quad I_{4f} = \mathbf {e_f}\cdot \mathbf C \, \mathbf {e_f}, \end{aligned}$$
with \( \mathbf {e_f}\) indicating the local myocardial fibre direction field.
The material parameters \(a, a_f, b, b_f\) are scalar-valued quantities which influence the stiffness of the material. We allow these material parameters to vary spatially with a piecewise linear representation, so that each material parameter has a separate value at each vertex of the mesh. For the sake of improved numerical stability (Land et al. 2015), we employ a modified strain energy density \(\tilde{\psi }\) in place of \(\psi \) with
$$\begin{aligned} \tilde{\psi }( \mathbf C ) = \psi \left( J^{-\frac{2}{3}} \mathbf C \right) , \end{aligned}$$
where \(J = \det \mathbf {F}\) is the determinant of the deformation gradient. The elastic energy (3) is embedded into a standard pressure–displacement variational formulation of incompressible hyperelasticity [Chapter 8.5 of Holzapfel (2000)]. Displacements are set to 0 in the longitudinal direction at the base of the ventricular geometry by a Dirichlet boundary condition. Movement in the other directions at the base is restricted by a linear spring with constant \(k = 1.0 \text { kPa}\), as in our previous study (Balaban et al. 2017).
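To make the constitutive relation concrete, the following NumPy sketch evaluates the energy (1) and its modification (3) for a given deformation gradient. The parameter values and the test deformation are placeholders, not fitted values.

```python
import numpy as np

def psi(C, e_f, a, b, a_f, b_f):
    """Strain energy density of the transversely isotropic Holzapfel-Ogden
    law (1), given the right Cauchy-Green tensor C and fibre direction e_f."""
    I1 = np.trace(C)
    I4f = e_f @ C @ e_f
    term_iso = a / (2.0 * b) * (np.exp(b * (I1 - 3.0)) - 1.0)
    term_fib = a_f / (2.0 * b_f) * (np.exp(b_f * max(I4f - 1.0, 0.0) ** 2) - 1.0)
    return term_iso + term_fib

def psi_tilde(F, e_f, a, b, a_f, b_f):
    """Modified energy (3): psi evaluated on the isochoric part of C."""
    C = F.T @ F
    J = np.linalg.det(F)
    return psi(J ** (-2.0 / 3.0) * C, e_f, a, b, a_f, b_f)

# Example: 5% uniaxial stretch along the fibre direction (placeholder parameters).
F = np.diag([1.05, 1.0 / np.sqrt(1.05), 1.0 / np.sqrt(1.05)])
e_f = np.array([1.0, 0.0, 0.0])
print(psi_tilde(F, e_f, a=1.291, b=5.0, a_f=2.582, b_f=5.0))
```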
The total variational equation, including the effects of blood pressure, \(p_{\text {blood}}\), and the basal spring, is given by: find the displacement \(\mathbf {u}\) and the hydrostatic pressure p such that
$$\begin{aligned} \begin{aligned}&\int _{\varOmega } \left( \mathbf {P}+ p J \mathbf {F}^{-T} \right) : {{\mathrm{Grad}}}\delta \mathbf {u}\, \mathrm {d}V+ \int _{\varOmega } (J - 1) \delta p \, \mathrm {d}V\\&\quad + \int _{\partial \varOmega _{\text {base}}} k \, \mathbf {u}\cdot \delta \mathbf {u}\, \mathrm {d}S+ p_{\text {blood}}\int _{\partial \varOmega _{\text {endo}}} J \mathbf {F}^{-T} \mathbf{N}\cdot \delta \mathbf {u}\, \mathrm {d}S= 0, \end{aligned} \end{aligned}$$
for all admissible variations \(\delta \mathbf {u}, \delta p\) in the displacement and pressure respectively. In (4), \(\mathbf {P}\) is the first Piola–Kirchhoff tensor: \(\mathbf {P}= \frac{\partial \tilde{\psi }}{\partial \mathbf {F}}\), \(\partial \varOmega _{\text {endo}}\) represents the endocardium and \(\partial \varOmega _{\text {base}}\) the ventricular base, and \(\mathbf{N}\) is the unit outward facing boundary normal. We discretize (4) by a mixed finite element method with Taylor–Hood interpolation (Hood and Taylor 1974); that is, a piecewise quadratic representation of the displacement field and a piecewise linear representation of the pressure.
The software implementation of the finite element vector and matrix assembly code is based on the software package FEniCS (Logg et al. 2011). Nonlinear systems are solved using the PETSc SNES implementation of a Newton line search algorithm (Balay et al. 2015), while the inner linear solves are handled by a distributed memory parallel LU solver (Li and Demmel 2003).
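The sketch below sets up a pressure–displacement problem of the same structure as (4) with the legacy FEniCS (dolfin) interface, on a unit cube with Taylor–Hood elements. The mesh, fibre field, parameter values, and the application of the spring and pressure terms on the whole boundary are simplifications for illustration, not the patient-specific model; larger pressure loads would be applied incrementally as in the continuation scheme described below.

```python
from dolfin import *

mesh = UnitCubeMesh(4, 4, 4)                          # placeholder geometry
V = VectorElement("Lagrange", mesh.ufl_cell(), 2)     # P2 displacement
Q = FiniteElement("Lagrange", mesh.ufl_cell(), 1)     # P1 hydrostatic pressure
W = FunctionSpace(mesh, MixedElement([V, Q]))

w = Function(W)
u, p = split(w)
du, dp = TestFunctions(W)

# Placeholder fibre field, material parameters, spring constant, cavity pressure.
e_f = Constant((1.0, 0.0, 0.0))
a, b, a_f, b_f = Constant(1.291), Constant(5.0), Constant(2.582), Constant(5.0)
k, p_blood = Constant(1.0), Constant(0.1)

I = Identity(3)
F = variable(I + grad(u))
J = det(F)
Cbar = J ** (-2.0 / 3.0) * (F.T * F)
I1, I4f = tr(Cbar), inner(e_f, Cbar * e_f)
psi = a / (2 * b) * (exp(b * (I1 - 3)) - 1) \
    + a_f / (2 * b_f) * (exp(b_f * conditional(gt(I4f, 1), (I4f - 1) ** 2, 0)) - 1)
P = diff(psi, F)                                      # first Piola-Kirchhoff stress

N = FacetNormal(mesh)
# For brevity, the spring and pressure terms act on the whole boundary here;
# in the ventricular model they are restricted to the base and endocardium.
R = inner(P + p * J * inv(F).T, grad(du)) * dx + (J - 1) * dp * dx \
    + k * inner(u, du) * ds + p_blood * inner(J * inv(F).T * N, du) * ds

bc = DirichletBC(W.sub(0), Constant((0.0, 0.0, 0.0)), "near(x[0], 0.0)")
solve(R == 0, w, bc)                                  # Newton solve of the mixed problem
```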
2.4 Elastic parameter estimation via constrained minimization and adjoint gradient calculations
We consider a least squares minimization of the mismatch between model derived and measured strains, to personalize the elastic material properties of our computational mechanics model.
We compute both model and measured strains in terms of the deformation gradient tensor \(\mathbf {F}\) and multiply the model strains by \(\mathbf {F}_0^{-1}\), which is the strain at the smallest measured in vivo pressure. This allows for the simulated strains to be calculated from a reference that is at the same pressure as that used for the image-based strains (Nikou et al. 2016). For a given echo image number i, AHA region \(\varOmega _j\), and strain direction k, we compute the model strain as
$$\begin{aligned} \mathbf {F}_\mathrm{model}^{i,j,k} = \frac{1}{|\varOmega _j|} \int _{\varOmega _j} \mathbf{e}_k \cdot \mathbf {F}_i \mathbf {F}_0^{-1} \mathbf{e}_k \, \mathrm {d}V \end{aligned}$$
where \(|\varOmega _j|\) is the AHA segment volume, and \(\mathbf{e}_k\) the unit vector pointing in the direction k.
The image-based strain measurements are given as regional engineering strains, which we relate to a diagonal component of the deformation gradient by
$$\begin{aligned} \mathbf {F}_\mathrm{measured}^{i,j,k} = \varepsilon ^{i,j,k} + 1 \end{aligned}$$
where \(\varepsilon \) is the engineering strain and \(\mathbf {F}_\mathrm{measured}\) the corresponding measured deformation gradient diagonal component. We note that this implies the linear approximation \(\varepsilon _k \approx \nabla \mathbf {u}\cdot \mathbf{e}_k\).
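The regional model strain (5) is a volume average over an AHA segment. The sketch below shows one way such an average could be evaluated with FEniCS; the mesh, the displacement fields, and the two-region marking standing in for the AHA zones are all placeholders.

```python
from dolfin import *

mesh = UnitCubeMesh(4, 4, 4)                       # placeholder for the LV mesh
V = VectorFunctionSpace(mesh, "Lagrange", 2)

# Placeholder displacements standing in for the model solutions at the lowest
# measured pressure (u_0) and at echo image i (u_i).
u_0 = Function(V)
u_i = interpolate(Expression(("0.02*x[0]", "0.0", "0.0"), degree=1), V)

# Placeholder "AHA" regions: cells marked 1 or 2 by their x-coordinate.
regions = MeshFunction("size_t", mesh, mesh.topology().dim(), 0)
for cell in cells(mesh):
    regions[cell] = 1 if cell.midpoint().x() < 0.5 else 2
dx_r = Measure("dx", domain=mesh, subdomain_data=regions)

def model_strain(u_i, u_0, e_k, j):
    """Volume average of e_k . F_i F_0^{-1} e_k over region j, cf. (5)."""
    I = Identity(3)
    F_i, F_0 = I + grad(u_i), I + grad(u_0)
    vol = assemble(Constant(1.0) * dx_r(j))
    return assemble(inner(e_k, dot(F_i * inv(F_0), e_k)) * dx_r(j)) / vol

e_c = Constant((1.0, 0.0, 0.0))                    # placeholder strain direction
print(model_strain(u_i, u_0, e_c, 1))              # ~1.02 for this test field
```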
We quantify the mismatch between model and measured strains with the following functional
$$\begin{aligned} I_{\text {data}}= \sum _{i = 1}^{N_{m}} \sum _{j = 1}^{N_{r}} \sum _{k \in \{c,r,l\}} \left( \mathbf {F}_\mathrm{model}^{i,j,k} - \mathbf {F}_\mathrm{measured}^{i,j,k} \right) ^2. \end{aligned}$$
Here \(N_{m}=7\) is the number of strain measurements available in atrial systole and \(N_{r}=16\) the number of AHA regions, with the apex segment excluded for compatibility with the LG-MRI data. Finally, the direction set of index k refers to the circumferential (c), radial (r), or longitudinal (l) directions.
We allow each of the four elastic material parameters \(a, b, a_f, b_f\) in (1) to vary in space, and more precisely, to vary as a continuous piecewise linear function defined relative to the computational mesh. This allows us to resolve spatially heterogeneous material parameters, at the cost of greatly increasing their dimensionality. To constrain the minimization problem at hand, we introduce a first-order Tikhonov regularization functional favouring more smooth material parameter sets. This regularization functional is defined as:
$$\begin{aligned} I_{\text {smooth}}= \frac{1}{|\varOmega |} \sum _{z \in \{a, a_f, b, b_f \}} \int _{\varOmega } | {{\mathrm{Grad}}}z |^2 \, \mathrm {d}V, \end{aligned}$$
where \(|\varOmega |\) is the volume of the simulated myocardium.
In total, we consider the optimization problem of minimizing a combined data and smoothness functional over the admissible material parameter fields \(a, b, a_f, b_f\):
$$\begin{aligned} \min _{a, b, a_f, b_f} I = \min _{a, b, a_f, b_f} \left( I_{\text {data}}+ \lambda I_{\text {smooth}}\right) \end{aligned}$$
with regularization parameter \(\lambda \).
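The following sketch assembles the smoothness term (8) and combines it with a data misfit into the total objective (9). The mesh, the spatially constant parameter fields, and the zero data misfit are placeholders used only to make the sketch runnable.

```python
from dolfin import *

mesh = UnitCubeMesh(4, 4, 4)                      # placeholder for the LV mesh
S = FunctionSpace(mesh, "Lagrange", 1)            # piecewise linear parameter space

# Placeholder material parameter fields (spatially constant here).
a, b = interpolate(Constant(1.291), S), interpolate(Constant(5.0), S)
a_f, b_f = interpolate(Constant(2.582), S), interpolate(Constant(5.0), S)

volume = assemble(Constant(1.0) * dx(domain=mesh))
# Smoothness functional (8): mean squared gradient of the four parameter fields.
I_smooth = sum(assemble(inner(grad(z), grad(z)) * dx) for z in (a, b, a_f, b_f)) / volume

I_data = 0.0      # placeholder for the strain mismatch (7)
lmbda = 5.0       # regularization weight
I_total = I_data + lmbda * I_smooth               # total objective (9)
print(I_total)
```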
The total functional (9) is minimized by simultaneously optimizing all of the degrees of freedom of the 4 elastic parameters. This optimization is carried out by a sequential quadratic programming (SQP) algorithm (Kraft 1988). Each iteration of the SQP algorithm requires one or more evaluations of the functional (9), and the gradient of the functional with respect to all of the material parameter variables. This gradient is calculated efficiently by the adjoint gradient method [Eq. 13 of Balaban et al. (2016)] symbolically derived by the software package dolfin-adjoint (Farrell et al. 2013). In particular, the computational cost of the adjoint gradient does not significantly depend on the number of optimization parameters, of which there are 2964 in our study. This compares favourably with a one-sided finite difference approach to functional gradient calculation, which would require 2964 model realizations, one for each optimization parameter.
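The sketch below illustrates this optimization machinery on a deliberately simple stand-in forward problem, assuming the pyadjoint-based dolfin-adjoint interface: a Control for a spatially varying coefficient, a ReducedFunctional whose gradient is obtained by the adjoint method, and an SLSQP (SQP) minimization with pointwise bounds, with the options passed through to SciPy. It demonstrates the workflow only and is not the cardiac model itself.

```python
from dolfin import *
from dolfin_adjoint import *

mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, "Lagrange", 1)

a = interpolate(Constant(1.5), V)      # parameter field to be estimated (initial guess)

u = Function(V, name="state")
v = TestFunction(V)
bc = DirichletBC(V, 0.0, "on_boundary")
R = a * inner(grad(u), grad(v)) * dx - Constant(1.0) * v * dx
solve(R == 0, u, bc)                   # annotated forward solve

# Placeholder "measured" data; in the cardiac application this role is played
# by the regional strains.
u_obs = interpolate(Expression("x[0]*(1 - x[0])*x[1]*(1 - x[1])", degree=2), V)

lmbda = Constant(5.0)
J = assemble((u - u_obs) ** 2 * dx + lmbda * inner(grad(a), grad(a)) * dx)

rf = ReducedFunctional(J, Control(a))
a_opt = minimize(rf, method="SLSQP", bounds=(0.4, 10.0),
                 options={"maxiter": 50, "ftol": 1e-8})
```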
We employ a continuity scheme (Gokhale et al. 2008) to reduce the number of nonlinear solves needed to evaluate the functional (9). In this scheme the first time the functional is evaluated the cavity pressure is applied in small increments, and the displacement–pressure solutions are saved at the seven recorded in vivo pressures. For further functional evaluations, the hyperelastic equation is solved directly at the seven pressure levels, with the previously stored displacement-pressure solution as the initialization point. If convergence in the Newton solver is not achieved, then the difference between the previous and current material parameter vector is divided into smaller increments, which are then applied. In our implementation the number of divisions is doubled every time that convergence is not achieved. Using such divisions we obtained convergence in all cases in our study.
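A sketch of this continuation logic is given below; `solve_at` is a hypothetical routine that runs the Newton solver for a given material parameter vector and raises RuntimeError on non-convergence, and the doubling of the number of increments mirrors the strategy described above.

```python
import numpy as np

def continuation_solve(solve_at, params_old, params_new, max_divisions=64):
    """Apply the change from params_old to params_new in n equal increments,
    doubling n whenever the Newton solver fails to converge."""
    params_old = np.asarray(params_old, dtype=float)
    params_new = np.asarray(params_new, dtype=float)
    n = 1
    while n <= max_divisions:
        try:
            for step in range(1, n + 1):
                solve_at(params_old + (params_new - params_old) * step / n)
            return params_new
        except RuntimeError:
            n *= 2          # not converged: use smaller increments and retry
    raise RuntimeError("continuation failed after %d subdivisions" % max_divisions)
```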
3 Numerical results
The main results of this study are heterogeneous elastic material parameters optimized to match clinical data, presented in Section 3.2. We also present simulated equi-biaxial extension tests based on regional averages of the estimated elastic parameters. Before the main results, Section 3.1 reports a synthetic data test carried out for verification and for inspection of algorithm performance.
Optimizations were carried out until the norm of the projected gradient was less than \(1.0 \times 10^{-4}\), or 500 iterations of the SQP algorithm had been reached. A lower bound of 0.4 was applied pointwise to all material parameter fields during optimization.
3.1 Parameter estimation and evaluation using synthetic data
For the purpose of verification of the model and the optimization procedure, we consider initial trials using synthetically generated data over the ventricular mesh. In these trials, the ground truth elastic parameters were defined as:
$$\begin{aligned} \begin{aligned} a^{0} = 2 - \frac{y}{y_\mathrm{max}}, \quad a_f^{0} = 2 + \frac{y}{y_\mathrm{max}}, \\ b^{0} = 2 - \frac{z}{z_\mathrm{max}}, \quad b_f^{0} = 2 + \frac{z}{z_\mathrm{max}}, \end{aligned} \end{aligned}$$
where \(y_\mathrm{max}\) and \(z_\mathrm{max}\) are the maximum absolute coordinate values in the y and z directions of the computational mesh (and where the yz-plane was defined by the basal plane). Using these ground truth parameters, average regional strains were generated by solving (4) for 6 LV blood pressures: \(p_\mathrm{blood} \in \{0.1, 0.2, 0.3, 0.4, 0.5, 0.6\}\) (kPa). Four sets of strains were generated: one noise-free case and three noisy cases. For the noisy cases, realizations of Gaussian noise with standard deviations of 0.1, 0.2, and 0.3 mm were applied to the displacements from which strains were calculated. We quantified the effect of this noise on the value of the synthetic strains in the second column of Table 1. We note that though the average effect of the noise is small, individual strains have relative errors as high as 24, 25, and 12 per cent for the 0.1, 0.2, and 0.3 mm noise levels, respectively.
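The sketch below shows how ground-truth fields of the form (10) and Gaussian displacement noise of a given standard deviation could be generated with FEniCS; the unit-cube mesh stands in for the ventricular mesh, and the noise seed and amplitude are placeholders.

```python
from dolfin import *
import numpy as np

mesh = UnitCubeMesh(4, 4, 4)                 # placeholder for the ventricular mesh
S = FunctionSpace(mesh, "Lagrange", 1)

coords = mesh.coordinates()
y_max = np.abs(coords[:, 1]).max()
z_max = np.abs(coords[:, 2]).max()

# Ground-truth parameter fields of the form (10).
a0  = interpolate(Expression("2.0 - x[1]/ymax", ymax=y_max, degree=1), S)
af0 = interpolate(Expression("2.0 + x[1]/ymax", ymax=y_max, degree=1), S)
b0  = interpolate(Expression("2.0 - x[2]/zmax", zmax=z_max, degree=1), S)
bf0 = interpolate(Expression("2.0 + x[2]/zmax", zmax=z_max, degree=1), S)

# Gaussian noise of standard deviation sigma (mm) added to a displacement
# field u before the noisy synthetic strains are computed from it.
V = VectorFunctionSpace(mesh, "Lagrange", 2)
u = Function(V)
sigma = 0.1
noise = np.random.normal(0.0, sigma, u.vector().local_size())
u.vector().set_local(u.vector().get_local() + noise)
u.vector().apply("insert")
```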
Optimizations were carried out using the synthetic strains as target data in the total functional (9). All material parameters were initialized to a spatially constant value of 1.5. For each level of noise, the values \(\lambda ~\in ~\{1, 10, 100, 1000, 10000\}\) of the regularization parameter were tested, and the case with the lowest relative \(L^2\)-error, averaged across the 4 parameters, was selected. The \(\lambda \) values that were selected are listed in Table 1. As expected, the regularization value increases with the noise level. In the target functional (7), \(\mathbf {F}_0^{-1}\) was calculated from the model strains at 0.1 kPa.
We remark that, in order to represent non-trivial material parameters, the synthetic material parameter fields were chosen with a nonzero spatial gradient. In turn, this gave a nonzero contribution from the regularization functional, cf. (8). Thus, even in the case of an exact optimization, we did not expect to obtain an optimal functional value of 0 and did not expect to recover the exact material parameter fields in this test case.
The ground truth and estimated material parameter fields for this noise-free case are presented in Fig. 4. Moreover, the differences between the ground truth and estimated parameters are given in terms of the relative \(L^2\)-errors in Table 1, along with the optimal data and smoothness functional values.
We note that for the noise-free and 0.1 mm noise cases, the parameters are accurately reproduced, with all relative errors less than 6%. For the 0.2 and 0.3 mm noise cases, the errors in the first three material parameters a, b, \(a_f\) are also less than 6%, but the error in the parameter \(b_f\) is 19% and 17% for 0.2 mm and 0.3 mm noise respectively. The accuracy of the reproductions is also visible in Fig. 4, where we can see that the linear gradients are reproduced for \(a, b, a_f\) in all cases and for \(b_f\) in the noise-free and 0.1 mm cases.
Fig. 4 Top view of the ground truth and estimated parameter fields of the synthetic data test
Table 1 Optimal functional values and relative errors in reconstructed material parameter fields obtained in the synthetic data test
3.2 Parameter estimation using patient-specific strain data
As a first step towards creating a patient-specific model of the infarcted left ventricle in atrial systole, we identified suitable values for the regularization parameter \(\lambda \).
We tested a series of trial material parameter optimizations with the patient strains as target data using \(\lambda \in \{1, 5, 10, 50, 100, 500, 1000\}\). Before optimization, material parameters were initialized with global values \(a = 1.291\) kPa, \(b = 5.0, a_f = 2.582\) kPa, and \(b_f = 5.0\) cf. (Asner et al. 2015, Table 5, case P2). Optimal data and regularization functional values \(I_{\text {data}}\) and \(I_{\text {smooth}}\) were obtained for each of the \(\lambda \) values tested. These are shown in Fig. 5. For the subsequent experiments, we selected \(\lambda = 5\) as the corresponding optimal functional values lay in a corner of the trade-off curve, and therefore represented a good compromise between smoothness and data fit. This choice of \(\lambda \) is inspired by the so called L-criterion (Hansen and O'Leary 1993).
Fig. 5 Optimal data functional value versus optimal smoothness functional value for a series of optimization experiments with clinical data over a range of regularization parameter values \(\lambda \). The regularization parameter values are stated next to the corresponding data point
Table 2 Results of parameter estimations with patient data starting from 20 points drawn from a Latin hypercube design
With the value of the regularization parameter \(\lambda \) fixed, we carried out a series of optimizations using various initializations for the elastic parameters. These initializations consisted of 20 global parameter sets whose values were taken from a Latin hypercube design (McKay et al. 1979) with minimum and maximum limits of 0 and 10 respectively for each variable. This design created parameters which spanned the parameter space with low redundancy. Optimal functional values for the optimizations are shown in Table 2 along with the spatial mean and standard deviation for each elastic parameter. We note that there is great variability in the optimal parameter sets calculated, and that there is a clear best fitting parameter set. Furthermore, the values of the smoothness functional are similar among all parameter sets and small in comparison to the total functional values. This indicates that the parameter sets differ in their ability to fit the model to the data, but are similar in their smoothness.
The best fitting parameter fields (corresponding to the first row of Table 2) are visualized in the top row of Fig. 6. We note that these fields are fairly smooth, yet show large variation across the ventricle. We also compare strain curves generated by the optimized model to the patient strains in Fig. 7.
Fig. 6 View of optimal material parameters at two different mesh resolutions estimated from patient strain data. The first and third rows show the inferior view and the second and fourth rows the anterior view
3.3 Stability of optimized material parameters under mesh refinement
In order to test the effect of mesh refinement on the estimated material parameters, we have carried out a parameter estimation with a slightly finer mesh (1117 vertices, 3373 elements). This estimation was initialized with the same constant values that were used previously in the best fitting optimization to clinical data. The target data were the clinical strains. The resulting optimal material parameter fields are shown in the bottom two rows of Fig. 6. We note that the corresponding original and higher resolution parameters appear to be very similar. We note that the higher resolution parameters came with an increased computational cost, as the time required for an average evaluation of the total functional increased from 30 s to 46 s as compared to the original resolution.
3.4 Regional stress–strain relationships
The personalization of the mechanics model to the patient data resulted in four material parameters fields that were resolved in space over the ventricular geometry. We combined these four parameters into a more intuitive visualization of stiffness by considering regional stress–strain relationships. This allows for regional comparisons to be made for a given level of strain, as stiffer materials give higher stresses given the same strain.
Regional stress–strain curves were calculated with in silico equi-biaxial extension tests, using analytical values for the stresses based on [Eqs. 17, 18 of Holzapfel and Ogden (2008)]. A test was conducted per AHA region using the average of the material parameter fields over the corresponding region. The resulting stress–strain relations along the fibre and cross-fibre directions are presented in Fig. 8.
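The sketch below evaluates fibre and cross-fibre Cauchy stresses for an incompressible equi-biaxial extension of the transversely isotropic law (1), using the standard plane-stress elimination of the hydrostatic pressure. This is a rederivation for the reduced law rather than a verbatim transcription of the cited equations, and the parameter values are placeholders rather than the estimated regional averages.

```python
import numpy as np

def equibiaxial_stress(stretch, a, b, a_f, b_f):
    """Cauchy stresses in the fibre and cross-fibre directions for an
    incompressible equi-biaxial extension of the transversely isotropic
    law (1), with the through-thickness stress set to zero."""
    lam = np.asarray(stretch, dtype=float)
    I1 = 2.0 * lam ** 2 + lam ** -4
    I4f = lam ** 2
    psi1 = 0.5 * a * np.exp(b * (I1 - 3.0))
    psi4f = a_f * np.maximum(I4f - 1.0, 0.0) * np.exp(b_f * np.maximum(I4f - 1.0, 0.0) ** 2)
    sigma_cross = 2.0 * psi1 * (lam ** 2 - lam ** -4)
    sigma_fibre = sigma_cross + 2.0 * psi4f * lam ** 2
    return sigma_fibre, sigma_cross

# Example: stresses (kPa) up to 10% equi-biaxial strain with placeholder parameters.
stretch = np.linspace(1.0, 1.10, 11)
s_f, s_c = equibiaxial_stress(stretch, a=1.291, b=5.0, a_f=2.582, b_f=5.0)
print(np.round(s_f, 3))
print(np.round(s_c, 3))
```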
4 Discussion
By applying an adjoint gradient-based data assimilation method, we were able to estimate spatially heterogeneous material properties in an infarcted left ventricle with a good match of simulated to measured strains. This has important implications for the use of computational mechanics models in planning and optimizing therapies in silico. Conditions such as myocardial infarction are local and lead to elastic heterogeneities which should be accounted for in a personalized model. This study presents a general and flexible method to account for these elastic heterogeneities.
Our experiments with synthetic data indicate that fairly accurate reproductions of spatially varying parameters are possible in the absence of noise, as the relative \(L^2\) errors were less than 5% for all parameters in this case. In the presence of Gaussian noise the relative errors in the a-type parameters increased slightly, but were still below 6%. The effect of the noise on the reproduced b-type parameters was more pronounced, and in particular, errors in the \(b_f\) parameter for the 0.2 and 0.3 mm noise cases were large enough that the spatial gradient present in the ground truth \(b_f\) parameter could no longer be reproduced. These results suggest that spatial heterogeneities can be more robustly estimated with a-type parameters rather than b-type exponential parameters. Indeed, several recent data assimilation studies have limited parameter estimation to the a-type parameters only (Hadjicharalambous et al. 2015; Asner et al. 2017).
We note that the optimal \(I_{\text {data}}\) values were two orders of magnitude higher in the clinical case than in the synthetic case. This could be due to higher noise in the clinical data and/or modelling error in the representation of in vivo cardiac motion (1). Similarly, the optimal \(I_{\text {smooth}}\) values were several orders of magnitude higher in the clinical case than in the synthetic case. This is due to relatively higher gradients in the optimal material parameters fitted to the clinical data.
In both simulated and measured patient data, we noticed that the heavily infarcted region encompassing inferior to inferolateral segments at the base and the mid-posterior segment differed in several ways from healthy segments. In these infarcted segments strains were smaller, and the simulated equi-biaxial stress–strain relationships showed greater fibre stresses. Additionally, the optimal a and \(a_f\) material parameters are larger in the infarcted anterolateral segment, and there is a band of high a parameter value running through the infarcted inferior segments. These observations indicate increased myocardial stiffness. This is consistent with the increased stiffness observed in healing infarcts during an ex vivo tissue experiment (Gupta et al. 1994) and in previous computational modelling with in vivo data (Mojsejenko et al. 2015).
Fig. 7 Optimized model (solid line) versus measured (black dot) strain components averaged over the volume of each AHA zone. The reference geometries for the strain measurements are derived from the echo image at 1.2 kPa in the case of the measured strain, and the model at 1.2 kPa in the case of the model derived strain. The line colouring indicates the relative amount of scar in a segment as given by Fig. 3
Fig. 8 Regional fibre and cross-fibre stress–strain curves generated from simulated equi-biaxial extension tests. In each AHA region the spatial average of the optimal material parameters is used in the simulated extension experiment. The colour of each line indicates the corresponding regional scar burden value
We also observed that the mid-anterolateral segment was identified as free from scar in the late enhancement MRI analysis, yet showed signs of stiffness similar to the heavily infarcted segments described above, that is both low strain and high simulated stress. Such apparent stiffness in a healthy segment is consistent with an infarction impairing the mechanics of neighbouring healthy tissue (Holmes et al. 2005) or could be an effect of myocardial border zone tissue.
Ideally, model material parameters should be uniquely identifiable from in vivo data in order to produce potentially useful biomarkers for clinical practice. Recently, it has been shown that the linear parameters a and \(a_f\) of a reduced Holzapfel–Ogden law (1), are structurally identifiable (Hadjicharalambous et al. 2015). Structural identifiability means that there exist sets of model loaded states such that only one set of parameters produces them, making it theoretically possible to uniquely identify the parameters. Our in vivo data are corrupted by noise, which makes the question of the unique identifiability of parameters more complex. Additionally, we have optimized the exponential b and \(b_f\) parameters in our in vivo experiment, for which possible structural identifiability is still unknown. Last but far from least, we have spatially resolved all of the parameters, thereby greatly increasing their dimensionality. Under such circumstances the theoretical identifiability of material parameters is an open question.
To improve the identifiability of material parameters in our estimations, we have added regularization to the optimized functional. Indeed, Fig. 5 confirms the existence of several material parameter sets that fit the model to the data very similarly, but differ in their smoothness. By choosing a corner point in the space of optimized data and smoothness functionals our aim was to pick the smoothest set of elastic parameters that still fit the data well. However, even with the regularization, our parameter estimation still showed a dependency on the choice of initial parameters, and a variety of results were obtained (Table 2). Nevertheless only one parameter set fit the best, allowing us to choose it from among the others.
5 Limitations
The identifiability of material parameters was limited in our study, and all optimizations depended upon their initial guess. This dependence is demonstrated in Table 2 by the variety of minima. In the future it would be of interest to further examine constraints on spatially resolved material parameters, ideally producing an optimization procedure that yields the same parameters regardless of the initialization. One such possible constraint is the left ventricular chamber volume, which has previously been matched together with strain data (Balaban et al. 2017; Mojsejenko et al. 2015; Sun et al. 2009). Further possible constraints are aggregated geometry measures such as LV twist, and long- and short-axis motion. These have been shown to improve identifiability of elastic parameters in experiments with mouse ventricles (Nordbø et al. 2014).
Further limitations were related to the rule-based fibres, mechanics modelling, computational efficiency, and strain and pressure synchronization.
5.1 Rule-based fibres
The fibre orientations in our model were generic and not patient specific. As a result, healthy fibre angles were used in infarcted areas. Previous studies have shown that fibre orientations of infarcted areas can be significantly different from healthy tissue (Mojsejenko et al. 2015; Fomovsky et al. 2012). If this effect were incorporated in our parameter estimation, we would expect a change in the optimal material parameters in the infarcted areas, especially in the \(a_f\) and \(b_f\) parameters, which control the amount of anisotropy in the model along the fibre direction. In the future, further improvements to diffusion-tensor MRI technology may allow for in vivo identification of local myocardial fibre directions, which would allow for the fibre directions to be directly incorporated into the optimized model without needing to be estimated.
5.2 Mechanics modelling
The image-based reference geometry contained a pressure load that was not accounted for in the current study as 0 pressure was assumed for the reference geometry. Using recently developed techniques, it is possible to calculate a pressure-free reference geometry simultaneously with material parameter estimation (Nikou et al. 2016). Applying this technique in our study was unfortunately not possible as the unloaded mesh self-intersected partway into the calculation when we attempted it.
Active tension was assumed to vanish in our model. Typically this tension has decayed to 0 in the diastasis phase of a healthy heart, but may extend into atrial systole under pathological conditions. If active tension were present in the diastasis phase of our patient, then it could add additional stiffness to the myocardium. At the same time, the release of active tension could contribute to strain in atrial systole. Missing these effects would lead to potential overestimation of passive tissue stiffness in the first case and an underestimation in the second.
The computational model lacked several relevant physical effects, notably inertia, viscoelasticity, residual stresses in the unloaded geometry, mechanical coupling of the LV to the right ventricle and atria, the effect of sheet microstructure, and tissue compressibility due to blood entering and exiting the ventricle via coronary vessels. The spring constant at the base was a rough approximation and could be replaced by displacement data at the basal boundary if it were available. The apex of the computational model was free, while longitudinal motion at the base was fixed. The in vivo situation is the opposite: the base moves longitudinally, and the apex is mostly stationary.
5.3 Computational efficiency
The spatial discretizations of the material parameters were not optimized. Instead, the computational mesh used to solve the variational equation of motion (4) was also used for the representation of the spatial parameters due to ease of implementation. It is possible that a coarser representation of the material parameters could have also produced good model-data fits. Using fewer parameters could potentially improve the identifiability of parameters and reduce the number of SQP iterations needed to find a minimum.
For the sake of computational efficiency, the resolution of the mesh was not increased to the point of obtaining a numerically convergent solution. Errors in the discretization of the hyperelastic variational equation (4) may have affected the optimized elastic parameter values. However, the results of our test optimization with a finer mesh indicate that any errors due to insufficient mesh resolution did not substantially affect the overall pattern of the optimized parameters.
5.4 Strain and pressure synchronization
LV pressure and strain measurements were not taken simultaneously and had to be synchronized in our study. Though both strain and pressure measurements were taken when the patient was relaxed and prone, there could have been slight differences in heart rate which would confound the strain–pressure synchronization.
6 Conclusion
Adjoint-based data assimilation has been used to personalize a mechanics model to reflect the heterogeneity in material properties throughout an infarcted human left ventricle. Further trials with more datasets and more methodological development are warranted in order to evaluate the applicability of the technique.
Asner L, Hadjicharalambous M, Chabiniok R, Peressutti D, Sammut E, Wong J, Carr-White G, Razavi R, King A, Smith N et al (2017) Patient-specific modeling for left ventricular mechanics using data-driven boundary energies. Comput Methods Appl Mech Eng 314:269–295
Asner L, Hadjicharalambous M, Chabiniok R, Peresutti D, Sammut E, Wong J, Carr-White G, Chowienczyk P, Lee J, King A et al (2015) Estimation of passive and active properties in the human heart using 3D tagged MRI. Biomech Model Mechanobiol 15:1–19
Balaban G, Alnæs MS, Sundnes J, Rognes ME (2016) Adjoint multi-start-based estimation of cardiac hyperelastic material parameters using shear data. Biomech Model Mechanobiol 15:1–13
Balaban G, Finsberg H, Odland HH, Rognes ME, Ross S, Sundnes J, Wall S (2017) High-resolution data assimilation of cardiac mechanics applied to a dyssynchronous ventricle. Int J Numer Methods Biomed Eng
Balay S, Brown J, Buschelman K, Gropp W, Kaushik D, Knepley M, McInnes LC, Smith B, Zhang H (2015) PETSc web page. http://www.mcs.anl.gov/petsc. Accessed 7 April 2018
Bayer J, Blake R, Plank G, Trayanova N (2012) A novel rule-based algorithm for assigning myocardial fiber orientation to computational heart models. Ann Biomed Eng 40(10):2243–2254
Cerqueira MD, Weissman NJ, Dilsizian V, Jacobs AK, Kaul S, Laskey WK, Pennell DJ, Rumberger JA, Ryan T, Verani MS et al (2002) Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart: a statement for healthcare professionals from the cardiac imaging committee of the Council on Clinical Cardiology of the American Heart Association. Circulation 105(4):539–542
Chabiniok R, Chapelle D, Lesault PF, Rahmouni A, Deux JF (2009) Validation of a biomechanical heart model using animal data with acute myocardial infarction. In: CI2BM09—MICCAI workshop on cardiovascular interventional imaging and biophysical modelling. London, United Kingdom, p. 9. https://hal.inria.fr/inria-00418373
Fan L, Yao J, Yang C, Wu Z, Xu D, Tang D (2016) Material stiffness parameters as potential predictors of presence of left ventricle myocardial infarction: 3D echo-based computational modeling study. Biomed Eng Online 15(1):1
Farrell PE, Ham DA, Funke SW, Rognes ME (2013) Automated derivation of the adjoint of high-level transient finite element programs. SIAM J Sci Comput 35(4):C369–C393
Finsberg H, Balaban G, Ross S, Håland TF, Odland HH, Sundnes J, Wall S (2017) Estimating cardiac contraction through high resolution data assimilation of a personalized mechanical model. J Comput Sci. https://doi.org/10.1016/j.jocs.2017.07.013
Fomovsky GM, Rouillard AD, Holmes JW (2012) Regional mechanics determine collagen fiber structure in healing myocardial infarcts. J Mol Cell Cardiol 52(5):1083–1090
Gao H, Carrick D, Berry C, Griffith BE, Luo X (2014) Dynamic finite-strain modelling of the human left ventricle in health and disease using an immersed boundary-finite element method. IMA J Appl Math 79(5):978–1010
Geuzaine C, Remacle JF (2009) Gmsh: a 3-D finite element mesh generator with built-in pre-and post-processing facilities. Int J Numer Meth Eng 79(11):1309–1331
Goenezen S, Barbone P, Oberai AA (2011) Solution of the nonlinear elasticity imaging inverse problem: the incompressible case. Comput Methods Appl Mech Eng 200(13):1406–1420
Gokhale NH, Barbone PE, Oberai AA (2008) Solution of the nonlinear elasticity imaging inverse problem: the compressible case. Inverse Probl 24(4):045010
Gupta KB, Ratcliffe MB, Fallert MA, Edmunds L, Bogen DK (1994) Changes in passive mechanical stiffness of myocardial tissue with aneurysm formation. Circulation 89(5):2315–2326
Hadjicharalambous M, Chabiniok R, Asner L, Sammut E, Wong J, Carr-White G, Lee J, Razavi R, Smith N, Nordsletten D (2015) Analysis of passive cardiac constitutive laws for parameter estimation using 3D tagged MRI. Biomech Model Mechanobiol 14(4):807–828
Hansen PC, O'Leary DP (1993) The use of the L-curve in the regularization of discrete ill-posed problems. SIAM J Sci Comput 14(6):1487–1503
Holmes JW, Borg TK, Covell JW (2005) Structure and mechanics of healing myocardial infarcts. Annu Rev Biomed Eng 7:223–253
Holzapfel GA, Ogden RW (2009) Constitutive modelling of passive myocardium: a structurally based framework for material characterization. Philos Trans Ser A Math Phys Eng Sci 367(1902):3445–3475. https://doi.org/10.1098/rsta.2009.0091. http://www.ncbi.nlm.nih.gov/pubmed/19657007
Holzapfel GA (2000) Nonlinear solid mechanics. Wiley, Chichester
Holzapfel GA, Ogden RW (2008) On planar biaxial tests for anisotropic nonlinearly elastic solids. A continuum mechanical framework. Math Mech Solids 14(5):474–489
Hood P, Taylor C (1974) Navier-Stokes equations using mixed interpolation. In: Oden JT, Zienkiewicz OC, Gallagher RH, Taylor C (eds) Proceedings of the international symposium on finite element methods in flow problems held at University College of Swansea, Wales, January 1974. University of Alabama Press, Huntsville, pp 121–132.
Hospital OU (2016) Acute feedback on left ventricular lead implantation location for cardiac resynchronization therapy (CCI impact). https://clinicaltrials.gov. Accessed 7 April 2018
Kraft D et al (1988) A software package for sequential quadratic programming. DFVLR Obersfaffeuhofen, Germany
Land S, Niederer S, Lamata P, Smith NP et al (2015) Improving the stability of cardiac mechanical simulations. IEEE Trans Biomed Eng 62(3):939–947
Li XS, Demmel JW (2003) SuperLUDIST: a scalable distributed-memory sparse direct solver for unsymmetric linear systems. ACM Trans Math Softw 29(2):110–140
Logg A, Mardal KA, Wells GN et al (2011) Automated solution of differential equations by the finite element method. Springer, Berlin
McGarvey JR, Mojsejenko D, Dorsey SM, Nikou A, Burdick JA, Gorman JH, Jackson BM, Pilla JJ, Gorman RC, Wenk JF (2015) Temporal changes in infarct material properties: an in vivo assessment using magnetic resonance imaging and finite element simulations. Ann Thorac Surg 100(2):582–589
McKay M, Beckman R, Conover W (1979) A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21:239–245
MathSciNet MATH Google Scholar
Millar (2017) Mikro-tip catheter pressure transducer owner's guide. http://m-cdn.adinstruments.com/owners-guides/Pressure-Catheter-Owners-Guide.pdf. Accessed 15 Dec 2017
Mojsejenko D, McGarvey JR, Dorsey SM, Gorman JH III, Burdick JA, Pilla JJ, Gorman RC, Wenk JF (2015) Estimating passive mechanical properties in a myocardial infarction using MRI and finite element simulations. Biomech Model Mechanobiol 14(3):633–647
Morita M, Eckert CE, Matsuzaki K, Noma M, Ryan LP, Burdick JA, Jackson BM, Gorman JH, Sacks MS, Gorman RC (2011) Modification of infarct material properties limits adverse ventricular remodeling. Ann Thorac Surg 92(2):617–624
Nikou A, Dorsey SM, McGarvey JR, Gorman JH III, Burdick JA, Pilla JJ, Gorman RC, Wenk JF (2016) Effects of using the unloaded configuration in predicting the in vivo diastolic properties of the heart. Comput Methods Biomech Biomed Eng 19(16):1714–1720
Nordbø Ø, Lamata P, Land S, Niederer S, Aronsen JM, Louch WE, Sjaastad I, Martens H, Gjuvsland AB, Tøndel K et al (2014) A computational pipeline for quantification of mouse myocardial stiffness parameters. Comput Biol Med 53:65–75
Oberai AA, Gokhale NH, Feijóo GR (2003) Solution of inverse problems in elasticity imaging using the adjoint method. Inverse Prob 19(2):297
Sun K, Stander N, Jhun CS, Zhang Z, Suzuki T, Wang GY, Saeed M, Wallace AW, Tseng EE, Baker AJ et al (2009) A computationally efficient formal optimization of regional myocardial contractility in a sheep with left ventricular aneurysm. J Biomech Eng 131(11):111001
Walker JC, Ratcliffe MB, Zhang P, Wallace AW, Fata B, Hsu EW, Saloner D, Guccione JM (2005) MRI-based finite-element analysis of left ventricular aneurysm. Am J Physiol Heart Circ Physiol 289(2):H692–H700
Quality of service and fairness for electric vehicle charging as a service
Dominik Danner & Hermann de Meer
Due to the increasing battery capacity of electric vehicles, standard European household socket-outlets are no longer sufficient for a full charge cycle overnight. Hence, people tend to install (semi-)fast charging wall-boxes (up to 22 kW), which can cause critical peak loads and voltage issues whenever many electric vehicles charge simultaneously in the same area. This paper proposes a centralized charging capacity allocation mechanism based on queuing systems that takes care of grid limitations and the charging requirements of electric vehicles, including legacy charging control protocol restrictions. The proposed allocation mechanism dynamically updates the weights of the charging services in discrete time steps, such that electric vehicles with shorter remaining charging time and higher energy requirement are preferred over others. Furthermore, a set of metrics that determine the service quality of charging as a service is introduced. Among others, these metrics cover the ratio of charged energy to required energy, the charging power variation during the charging process, and whether the upcoming trip is feasible or not. The proposed algorithm outperforms simpler scheduling policies in terms of achieved mean quality of service metrics and fairness index in a co-simulation of the IEEE European low voltage grid configured with charging service requirements extracted from a mobility survey.
Electric Vehicles (EVs) are seen as one of the key means to reduce global greenhouse gas emissions and air pollution in the transportation sector, especially with the growing use of renewable energy. According to the European Transport Roadmap (European Commission 2011), the European Union encourages the use of EVs to reduce emissions by 80% to 95% below 1990 levels by 2050. The trend towards battery-electric transportation and the continuously increasing battery storage capacity and driving range of EVs will likely put high pressure on the present power supply infrastructure in the future. The power distribution system may be affected in particular, since many EVs will be charged at home for convenience and economical reasons (IEA 2020). In order to manage the increasing number of EV charging processes, either the grid must be reinforced to cope with the new peak loads, or an intelligent charging capacity distribution mechanism needs to be established. Because grid expansion is economically and ecologically not always reasonable (Brinkel et al. 2020), intelligent charging control seems to be a promising solution to orchestrate EV charging in the low voltage grid. However, charging control algorithms need to achieve a high Quality of Service (QoS) and Quality of Experience (QoE) in times of grid congestion while ensuring fairness between parallel charging services to retain customers' confidence. In this context, electricity (in the form of available grid capacity) can be seen as a limited resource that has to be shared by several end consumers in the power grid. A similar problem exists in the communication networks domain, where several connections share the same physical link with limited bandwidth. We propose to offer EV customers a charging service that is inspired by computer networking, where only up to a certain bandwidth is provided. The actually received charging current is dynamically adjusted to the grid state and is balanced among charging services to ensure QoS, QoE and fairness.
Solutions in the literature that consider (real-time) EV charging allocation (Ardakanian et al. 2013; Ardakanian et al. 2014; Kong et al. 2016; Rudnik et al. 2020; Shi and Liu 2015) aim for proportional fairness of the real-time demand, which is also discussed more generally with regard to congestion management (Hekkelman and Poutré 2020) and demand-supply matching (Haslak 2020). Schlund et al. (2020) use the laxity of charging processes to enable the bidirectional flexibility potential of distributed EV charging processes. Other authors propose price-based solutions (Gan et al. 2011; Hu et al. 2014; Wang et al. 2015). QoS aspects are mainly discussed in combination with charging station sizing (Bayram et al. 2011; Islam et al. 2018; Ul-Haq et al. 2013). A few papers do investigate QoS and fairness based on other charging parameters in their allocation mechanisms (Frendo et al. 2019; Rezaei et al. 2014; Al Zishan et al. 2020; Zhou et al. 2013; Zhou et al. 2014), but they either ignore the impact on the low voltage grid or do not consider the controllability limitations of existing EV communication protocols.
The contributions of this paper can be summarized as follows:
We first define a set of QoS and QoE metrics in the "Requirements for fair charging service allocation" section that consider the ratio of charged energy to required energy, the continuity of the charging rate, the battery State of Charge (SoC) at departure and the ability to reach the upcoming destination.
Second, we propose an efficient and hierarchically scalable packet queuing allocation mechanism in the "Queuing approach for electric vehicle charging" and "Queuing policies" sections that takes the residual charging time and the current SoC into account and ensures fairness between charging services. The proposed model not only performs temporal charging slot allocation (with fixed charging rates), but also distributes the charging capacity within each time slot while respecting charging hardware limitations and control protocol capabilities.
In the "Evaluation" section, the proposed solution is evaluated on the IEEE European low voltage test feeder with real user driving profiles extracted from a mobility survey. In contrast to simpler queuing policies, the proposed dynamically weighted fair queuing approach achieves both high QoS results and good fairness indices throughout the whole charging service.
There are many papers in the literature that deal with coordinated smart charging of EVs targeting the mitigation of power grid issues as their main objective (Alyousef and de Meer 2019; Alyousef et al. 2018; Chung et al. 2014; Lopes et al. 2009; Deilami et al. 2011; Cortés and Martínez 2016; Rivera et al. 2015; Alonso et al. 2014; Kong et al. 2016; Martinenas et al. 2017; Álvarez et al. 2016). In this paper, however, we view EV charging as a service to the user; hence, the main objective is to satisfy EV drivers under the given grid constraints. Therefore, we focus on QoS, QoE and fairness aspects of the charging services.
Nevertheless, there are approaches in the literature which try to achieve a certain QoS level for EV charging processes. Some solutions consider public (fast) charging stations as their main focus area and define QoS with respect to the probability that an EV is blocked at a charging station (Bayram et al. 2011; Bayram et al. 2015; Zenginis et al. 2016). In Ul-Haq et al. (2013), all EVs can supply energy back to the charging station and QoS also includes the continuity of power supply and the overall charging time, whereas the QoS definition in Erol-Kantarci et al. (2012) relates only to the overall charging time, where finishing charging faster implies a higher QoS. Similarly, the QoS in Fan (2012); Haack et al. (2013) is extracted from the charging time, but as a binary variable: an EV is considered to meet the QoS only if it finishes charging within the required time, no matter what the charging power profile looks like and which final SoC is reached. Our QoS definitions include all metrics from the literature plus additional QoE metrics that capture the circumstances of the charging service in order to retain customers' satisfaction.
None of the above papers consider QoS fairness by design, because most of them are price-based or focus only on local charging station sizing (hence only on the total QoS of sequential charging processes). In Islam et al. (2016; 2018), photovoltaic and battery sizing are optimized at parking lots for the specific use case of business charging. Their optimization considers QoS as the ratio of charging energy delivered to charging energy demanded. Furthermore, the probabilistic charging model used in Islam et al. (2018) introduces a fairness factor, which influences the charging rate of each single EV based on its SoC. In Ucer et al. (2019), quality of power service is defined such that the voltage drop must be kept stable, while proportionally fair charging rates must be provided to all EVs, regardless of their location in the grid. As a result, quality of service to charging EVs is defined on the instantaneous power delivery. Alyousef and de Meer (2019) also focus on the power quality of the grid and implicitly define QoS in terms of the energy delivered to the EV without differentiating the actual energy requirement of the user. Their Transmission Control Protocol (TCP) inspired approach results in a fairer distribution than the purely power quality aware algorithm from Alyousef et al. (2018). Ardakanian et al. (2013; 2014) describe the proportionally fair distribution of available power capacity as an optimization problem that can be solved in a distributed manner via decomposition of its dual problem. The satisfaction of an EV user with a charging process is defined based on the current charging rate, while the charging service situation, such as the SoC and the residual charging duration, is neglected.
The approach in Zhou et al. (2013) shows how available power capacity can be shared fairly using weighted fair queuing scheduling. Their approach is based on packetization of the charging process of EVs, where each packet represents the permission to charge for the next time slot. These packets are queued by the individual charging processes to the proposed scheduler, which computes the packet assignment based on their statically determined weight at arrival time. In their solution, the demand and supply mismatch defines the available power, whereas we also take the underlying power grid topology into account. Furthermore, we consider both the required energy and the remaining time until departure for decision making. Additionally, instead of switching the EVs on and off, EVs receive dynamic charging rates. In Zhou et al. (2014), the same authors compare different scheduling algorithms from the networking context, e.g. round-robin, first come first serve or first depart first serve, for distributing the available power to charging processes. A similar temporal packet-based mechanism, which uses a probabilistic automaton to limit the transformer utilization by allowing or denying charging requests from distributed EVs, is proposed in Rezaei et al. (2014). According to Chen et al. (2013), pulse charging of EV batteries does not degrade battery lifetime, but switching large loads of 22 kW can cause undesired voltage fluctuations in the grid.
One recent publication on fair charging capacity allocation considers the laxity of a charging process as weights in an optimization problem, which is solved in a decentralized manner (Al Zishan et al. 2020). In order to reduce the impact of users cheating with their departure time, the authors propose to integrate the reputation of the user into the weights, such that users who estimate their departure time with similar accuracy obtain the same fair share of power capacity. Apart from the fact that (distributed) optimization problems tend to be computationally expensive, their output does not reflect actual hardware control capabilities, such as limitations of the control protocols or adjustable power limits. Furthermore, fairness is measured on the actual charging power, neglecting other quality aspects of the charging service.
In a different application area, Chen et al. (2012) propose a fair power allocation for air conditioners in the smart grid, where the power consumption is indirectly controlled by allocating thermostat settings in each time slot. In this way, ambient temperature and the amount of power required for the same temperature reduction is decoupled and fairness is defined on the QoS level of air conditioning. Similarly in our paper, QoS aspects for a charging service are not necessarily coupled to the actual charging power but measured by charging as a service related metrics.
We first state the requirements for a fair charging resource allocation and define relevant metrics to measure QoS, QoE and fairness of a charging service. Afterwards we describe the proposed queuing architecture and discuss various queuing policies.
Requirements for fair charging service allocation
The most intuitive notion of fairness is sharing a limited resource proportionally between all participants (EVs), like flow-based fairness in networking (e.g. the fairness achieved with TCP). However, some charging processes might require more energy while their available charging time is relatively short. Other EVs with a longer available charging time require less energy and hence benefit more from proportional fairness. Therefore, we consider service-based fairness, where a charging service is defined by: (1) the time of arrival \(t_{arr}\) and departure \(t_{dep}\), which together form the available charging time, and (2) the required energy to be charged \(E_{req}>0\). We assume that EV drivers provide their expected departure time and required energy with high confidence, whereby the latter can be extracted from the last driven trip. Furthermore, we use a discrete time model with a constant time slot size of \(\Delta\), where \(t_{sta}\) denotes the first time slot within \([t_{arr},t_{dep}]\) in which the EV is charging and \(t_{fin}\) the time slot in which either its battery SoC reaches 100% or the EV needs to leave at the (planned) departure time. Additionally, the charging power profile \(P(t)\) is \(\geq 0\), because vehicle-to-grid is not considered in this paper. Figure 1 depicts all relevant charging service parameters.
Overview of a charging service. Note that \(t_{fin}\) can be equal to \(t_{dep}\)
QoS is the measurement of the overall performance of a service and was initially introduced for telecommunication services by the ITU in 1994 (ITU 2008). The QoS definition implies that characteristics of a service, which can be measured quantitatively or qualitatively, need to match the user's requirements towards the service, and hence involve the user. QoE is defined as "The degree of delight or annoyance of the user of an application or service" (Brunnström et al. 2013). The QoE relates (not necessarily linearly) to QoS parameters and additionally integrates the personality and the current state of the user. QoE models are often derived from user surveys, but can also be metrics on the QoS parameters. For example, an EV that can fully recharge its battery during the available charging time receives a high QoS, since the charging service succeeds. If the charging process cannot finish before departure, the QoS is obviously lower. However, if the EV has recharged enough energy to reach the next destination (independent of whether the charging process has finished), the QoE is obviously higher than if an additional charging stop is required during the next trip.
\(QoS_{1}\) The main goal of a charging service is to deliver the required energy \(E_{req}\) to an EV. Obviously, a finished charging process receives maximal QoS. If the energy target is not met, the QoS degrades proportionally to the energy charged. The QoS metric in Eq. 1 can be evaluated at each point in time during the charging process, and \(QoS_{1}(t_{dep})\) denotes the final metric score of the charging service.
$$\begin{array}{*{20}l} QoS_{1}(t) = \frac{E(t)}{E_{req}} \end{array} $$
\(QoS_{2}\) We consider the waiting time of a charging process as a second QoS criterion. It follows the logic that a charging process that starts earlier has a higher chance of finishing in time. Furthermore, waiting charging processes do not receive any service until the charging process actually starts. Therefore, waiting charging processes receive a lower QoS score, as defined in Eq. 2.
$$\begin{array}{*{20}l} QoS_{2} = \left\{\begin{array}{ll} 1 - \frac{t_{sta} - t_{arr}}{t_{dep} - t_{arr}} & \text{if}\ \exists P(t) > 0 \text{,}\\ 0 & \text{else} \end{array}\right. \end{array} $$
In case the charging process does not start at all, the charging power profile \(P(t)\) is identically zero and hence no \(P(t)>0\) exists, which results in \(QoS_{2}=0\). An immediately starting charging process (\(t_{sta}=t_{arr}\)) receives the highest QoS score.
\(QoS_{3}\) A third QoS criterion is the variation of the charging power \(P(t)\) over time. In communication networks, this would be referred to as packet jitter, which measures the variation of packet delays. For an EV charging service, we focus on the charging power variation between time slots, where a high charging power variation leads to poor residual charging time estimates, which in turn reduces the QoS of the feedback towards the user. The respective metric is defined in Eq. 3, where \(s(X)\) is the sample standard deviation of a set \(X=\{x_{1},\ldots,x_{n}\}\).
$$\begin{array}{*{20}l} QoS_{3} = 1 - \frac{2 \cdot s(P(t))}{P_{max}} \end{array} $$
The sample standard deviation \(s(P(t))\) returns a value from the interval \(\left[0, \frac{1}{2}P_{max}\right]\), because the charging power profile is limited between 0 and \(P_{max}\). Note that for calculating \(QoS_{3}\), values of \(P(t)\) are only taken from the interval \(t\in[t_{sta},t_{fin}]\), because only the variation during the actual charging matters.
\(QoE_{1}\) The first QoE metric refers to the battery SoC instead of the actually charged energy of the charging service. Especially with different battery sizes but the same energy requirement, the user ultimately does not see the actual energy charged; only the battery SoC is displayed in the car. Following this user perception, a high SoC (close to \(SoC_{target}\)) corresponds to a high QoE and vice versa. Similar to \(QoS_{1}\), the QoE metric in Eq. 4 can be evaluated at each point in time during a charging process, and \(QoE_{1}(t_{dep})\) is the final metric score.
$$\begin{array}{*{20}l} QoE_{1}(t) = \frac{SoC(t)}{SoC_{target}} \end{array} $$
\(QoE_{2}\) A second QoE criterion is whether the EV driver will reach the next destination, which can be expressed as a binary metric as in Eq. 5. In this work, we define the next trip to be feasible if the battery holds enough energy to reach the next destination with a SoC greater than 10% at arrival. With the remaining SoC it should be possible to reach the next charging facility. This again relates to user perception and range anxiety, which make drivers recharge the battery before running out of energy.
$$\begin{array}{*{20}l} QoE_{2} = \left\{\begin{array}{ll} 1 & \text{ if next trip is feasible,} \\ 0 & \text{ else} \end{array}\right. \end{array} $$
Besides high quality of service and experience scores, fairness among different users matters. A very unfair allocation means that the QoS and QoE metrics differ significantly among the different charging services, whereas very similar metric scores can be considered fair. Note that in order to analyze fairness separately from the QoS and QoE values, the fairness index must be independent of the metric values. Therefore, we use the fairness index from Hoßfeld et al. (2018) in Eq. 6, where \(H\) defines the maximum and \(L\) the minimum possible metric value. The index calculation uses the sample standard deviation \(s(S)\) of the metric scores \(S=\{s_{1},\ldots,s_{m}\}\) of \(m\) different charging services.
$$\begin{array}{*{20}l} F(S) = 1 - \frac{2 \cdot s(S)}{H - L} \end{array} $$
Since the QoS and QoE metrics from Eqs. 1–5 are defined within \([0,1]\), the fairness index simplifies to \(F(S)=1-2\cdot s(S)\).
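To make the metric definitions concrete, the following Python sketch (our own illustration; function and variable names are not taken from the paper) evaluates Eqs. 1–6 for one finished charging service whose power profile is sampled in slots of length \(\Delta\).

import statistics

def charging_service_metrics(P, delta_h, E_req, t_arr, t_sta, t_dep,
                             soc_dep, soc_target, trip_energy, capacity, P_max):
    """Evaluate QoS1-QoS3 and QoE1-QoE2 (Eqs. 1-5) for one finished service.
    P: charging power per slot (kW) over [t_sta, t_fin]; delta_h: slot length (h)."""
    E = sum(P) * delta_h                                     # charged energy (kWh)
    qos1 = E / E_req                                         # Eq. 1 at departure
    qos2 = (1.0 - (t_sta - t_arr) / (t_dep - t_arr)
            if any(p > 0 for p in P) else 0.0)               # Eq. 2
    qos3 = (1.0 - 2.0 * statistics.stdev(P) / P_max
            if len(P) > 1 else 1.0)                          # Eq. 3
    qoe1 = soc_dep / soc_target                              # Eq. 4
    # Eq. 5: trip feasible if more than 10% SoC remains at the next destination
    qoe2 = 1.0 if soc_dep - trip_energy / capacity > 0.10 else 0.0
    return [qos1, qos2, qos3, qoe1, qoe2]

def fairness_index(scores):
    """Eq. 6 with H = 1 and L = 0: F(S) = 1 - 2*s(S), for two or more services."""
    return 1.0 - 2.0 * statistics.stdev(scores)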
Queuing approach for electric vehicle charging
In communication networks, multiple information flows are sent simultaneously through the same shared physical link, e.g. using time-division or frequency-division multiplexing. For packet switching, queuing models are often used to send packets over a network of nodes. Each node holds a queue with packets awaiting transmission to another node. Whenever the communication link is free, a scheduler selects the next packet in the queue, normally based on a first-in-first-out policy. In order to establish a certain QoS, other policies can be applied, such as earliest deadline first, least laxity first, weighted fair queuing or packet prioritization.
In this paper, each EV is represented by a flow, and the EV can request charging current by scheduling packets to the power grid in discrete time slots, where the packet size \(p_{size}\) equals the EV's minimal adjustable charging current. This guarantees that EV battery constraints and communication protocol limitations can be considered. For example, if the battery of an EV limits the charging power to 6.9 kW (10 A on three phases) and the charging current can vary in discrete 3 A steps (1 A per phase), as defined in IEC 61851-1, the EV charging service needs to queue 10 packets for each phase. Without loss of generality, we describe the packet allocation for a balanced three-phase power system, hence only a single phase is considered.
The shared network is the underlying power distribution grid, whose bandwidth is limited by the available capacities. In order not to overload grid assets, a Scheduling Unit (SU), which contains the queuing logic, is placed at each limiting cable or transformer. Because power distribution grids are typically operated as radial networks, the individual SUs span a tree. Each EV requests charging current packets from the nearest connected SU, typically at the supplying cable or transformer. The requested packets pass the network tree towards the root node as depicted in Fig. 2a. Thereby, each SU only forwards as many packets to the next SU as the local capacity limit allows. Finally, the root node, e.g. responsible for the transformer, assigns its available capacity by returning the packets top-down to the EVs, as depicted in Fig. 2b. Note that voltage violations are treated by a feedback Q(V)-controller as described in the "Simulation setup" section.
Hierarchical composition of EVs and SUs for the queuing approach
In order to determine the available capacity of a shared link, we propose to measure parts of the distribution grid, and from that infer the available charging capacity of the next time slot using short-term forecasts. The bandwidth calculation can also include an approximation of grid losses, which in turn reduces the actual available charging capacity at the end of the radial network. We assume that applying the allocation algorithm in time slots of one minute is sufficient for charging service allocation and fast enough for reasonable load management in the distribution grid. In addition to the pure network related capacity limitations, the root node can participate in demand response programs or act as a market agent, which artificially limits the aggregated charging power based on market signals.
The charging capacity \(C_{i}(t)\) of an EV \(i\) in time slot \(t\in\mathbb{N}\) is calculated by \(C_{i}(t)=\sum_{p \in A_{i}(t)} p_{size}\), where \(A_{i}(t)\) is the set of packets that is assigned to the EV by the SU. Because EVs that still have packets in the queue might leave before the next iteration, all queues are flushed afterwards. Finally, the allocation algorithm is executed again for the next time slot, starting with packet requests from the EVs.
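A minimal sketch of one allocation round over the SU tree (our own illustrative Python; for brevity every SU here forwards its queued packets in plain FIFO order up to its capacity limit, while the queuing policies of the next section refine this order):

PACKET_A = 1.0    # minimal adjustable charging current per phase (A)

class EV:
    def __init__(self, name, i_max_a):
        self.name, self.i_max_a = name, i_max_a

    def request_packets(self):
        # one packet per adjustable 1 A step, limited by the EV/wall-box maximum
        return [(self, PACKET_A) for _ in range(int(self.i_max_a // PACKET_A))]

class SU:
    def __init__(self, limit_a, children):
        self.limit_a = limit_a      # current-carrying capacity of the asset (A)
        self.children = children    # connected EVs or downstream SUs

    def collect(self):
        """Bottom-up phase: gather packet requests, capped by the local limit."""
        packets = []
        for c in self.children:
            packets += c.request_packets() if isinstance(c, EV) else c.collect()
        return packets[:int(self.limit_a // PACKET_A)]

def allocate(root_su):
    """Top-down phase: return C_i(t) in ampere for every EV the root can serve."""
    assigned = {}
    for ev, size in root_su.collect():
        assigned[ev.name] = assigned.get(ev.name, 0.0) + size
    return assigned

# two EVs behind a 40 A cable, which is fed by a transformer limited to 25 A
ev1, ev2 = EV("ev1", 16), EV("ev2", 32)
print(allocate(SU(25, [SU(40, [ev1, ev2])])))   # {'ev1': 16.0, 'ev2': 9.0}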
A reliable and fault-tolerant Information and Communications Technology (ICT) architecture is required for the aforementioned procedure. In case of communication loss between EVs and SUs, the EVs move to a fail-safe mode and pause ongoing charging processes to avoid damaging the power grid. Because the SUs are logical units that do not store state information beyond a single time slot, SU instances can be executed in a cloud environment, which allows fast fault recovery. The communication effort between entities scales linearly with the number of involved EVs and SUs. In each time slot, EVs (leaf nodes in the tree topology) need to send the requested packets along the path of SUs (inner nodes) to the root node, which finally returns them to their origins. Because each EV/SU sends its packet requests only once back and forth in each time slot, the total communication cost can be approximated with \(\mathcal{O}(n+m)\), where \(n\) is the number of inner nodes and \(m\) the number of leaf nodes. Note that this paper does not target privacy or security issues, such as packet injection, that may arise with the operation of ICT infrastructure.
Queuing policies
In the following, we will discuss different simple scheduling policies and, finally, explain the proposed dynamically weighted fair queuing approach.
First Come First Serve (FCFS) The typical implementation of a queue is the first-in-first-out strategy, which means the first element that reaches the queue will be the first element that will be processed. In the case of EV charging, we consider a first come first serve policy. The EV that arrives earlier will be served first with the maximum possible charging current. If there is available capacity left, the EV that arrived next will be served and so forth. Even though considering only the arrival time of EVs for the charging service allocation is simple to realize and secure against malicious user inputs, the flexibility (required energy and residual charging time) of the EV is not considered at all with this scheduling policy.
Earliest Departure First (EDF) Contrary to FCFS scheduling, the earliest deadline first policy executes tasks in the order of the nearest deadline. The idea behind this method is to process the more critical tasks first, under the assumption that on average each task takes a similar execution time. For EV scheduling, earliest deadline first turns into earliest departure first, considering only the departure time of the EV during scheduling. Similar to FCFS, this policy does not consider the actual required energy, and all packets of the same EV are scheduled with the same priority, which results in maximum charging rates for only a few EVs.
Least Laxity First (LLF) The priority of a task is inversely assigned based on its slack time, which is equal to the remaining extra time after job execution until its deadline. Note that the slack time can even be negative in case the job cannot finish in time, which however does not change the execution order. The slack time \(s(t)\) at any time \(t\) is calculated by \(s(t)=d-r(t)-c(t)\), where \(d\) is the deadline, \(r(t)\) the release time since start and \(c(t)\) the residual computation time at time \(t\). In EV charging, the departure time corresponds to the deadline (\(d=t_{dep}\)), the time spent within the available charging time equals the release time (\(r(t)=t-t_{arr}\)) and the required charging time at assumed maximum charging power equals the residual computation time (\(c(t) = \frac{E_{req} - E(t)}{P_{max}}\)). Note that constant current charging with maximum charging power is assumed when calculating the residual charging time.
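The three time-division policies above differ only in the key by which waiting EVs are ordered before packets are granted at the maximum rate. The following sketch (our own illustration, with the slack written in the common absolute-deadline form \(t_{dep}-t-c(t)\)) summarizes these ordering keys.

from dataclasses import dataclass

@dataclass
class ChargingService:
    t_arr: float    # arrival time (h)
    t_dep: float    # planned departure time (h)
    E_req: float    # required energy (kWh)
    E: float        # energy charged so far (kWh)
    P_max: float    # maximum charging power (kW)

def slack(ev, t):
    # time left until departure minus the residual charging time at maximum power
    return (ev.t_dep - t) - (ev.E_req - ev.E) / ev.P_max

def order_evs(evs, policy, t):
    """Order in which a time-division policy serves the waiting EVs at time t."""
    keys = {
        "FCFS": lambda ev: ev.t_arr,      # first come first serve
        "EDF":  lambda ev: ev.t_dep,      # earliest departure first
        "LLF":  lambda ev: slack(ev, t),  # least laxity first
    }
    return sorted(evs, key=keys[policy])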
Proportional (PROP) The proportional fair scheduling policy guarantees that every participant receives a fair share of a limited resource proportional to its anticipated resource consumption. Proportional fairness is discussed many times in the literature with regard to the expected charging power (Ardakanian et al. 2013; Ardakanian et al. 2014; Kong et al. 2016; Shi and Liu 2015; Rudnik et al. 2020); hence, we also define proportional allocation based on the charging power requests of the EVs. Note that in our solution proportional fairness is a local property between EVs connected to the same SU. Capacity limitations along the supply grid can prevent global proportional fairness.
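Ignoring the 1 A packet discretization for brevity, the proportional policy at a single SU could be sketched as follows (our own illustration).

def proportional_assign(requests_a, limit_a):
    """Share the SU capacity proportionally to the requested charging currents."""
    total = sum(requests_a.values())
    if total <= limit_a:
        return dict(requests_a)                      # no congestion, serve all
    return {i: r / total * limit_a for i, r in requests_a.items()}

print(proportional_assign({"ev1": 16.0, "ev2": 32.0}, 24.0))   # {'ev1': 8.0, 'ev2': 16.0}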
Weighted Fair Queuing (WFQ) Generalized processor sharing can be approximated in packet networks by weighted fair queuing (Demers et al. 1989), which is used to share a resource's capacity fairly between flows, where the weight determines the fraction of capacity that each flow receives. Using the WFQ approach in network scheduling, each of \(N\) packet flows that passes a shared link is managed by one separate queue \(i\) with a specified weight \(w_{i}\geq 0\), which is determined by the priority of that flow. Every time a new packet \(p\) is received, its virtual finish time is computed by
$$\begin{array}{*{20}l} p_{virtFinish} = virtStart_{i} + \frac{p_{size}}{b_{i}} \quad, \end{array} $$
where \(virtStart_{i}\) is the last \(virtFinish\) time of the same queue \(i\) (or the current time if the queue is empty) and \(b_{i}\) is the bandwidth assigned to that queue. The bandwidth is calculated by \(b_{i} = \frac{w_{i}}{\sum_{j=1}^{N} w_{j}} \cdot R\), using the weights of all individual queues \(w_{j}\) and the maximum bandwidth \(R\) of the shared link. Whenever the scheduler is able to send a packet over the shared link, it selects the queue that contains the packet with the smallest \(virtFinish\) time and sends the first packet from that queue. Note that WFQ allocates the resource proportionally to the weight of each queue (flow), independent of the packet sizes. The pseudo code for requesting packets at an SU is given in Algorithm 1 and the packet assignment is shown in Algorithm 2. In both cases, \(Q_{i}\) denotes the packet queue of EV/SU \(i=1..N\), getQueue(p) determines the queue of packet \(p\), getNextQueue() returns the queue with the smallest \(virtFinish\) time and nextPacket(Q) returns the packet with the smallest \(virtFinish\) time of queue \(Q\).
In networking, the packet size \(p_{size}\) denotes the number of bits of the packet, and the bandwidth defines how many bits can be transmitted per second (bit/s). For EV charging, the packet size is given by the minimum adjustable charging current. Because the requests are only valid for a discrete time \(\Delta\), the actual packet size can be seen as the electrical charge (Ah) that needs to be transmitted by the grid. Analogously, the bandwidth is the current-carrying capacity of the grid in amperes.
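Since Algorithms 1 and 2 are not reproduced here, the following simplified sketch (our own Python; a real SU would additionally forward unserved packets upwards as described before) illustrates the virtual-finish-time bookkeeping of Eq. 7 for a single SU and one time slot.

def wfq_assign(requests, weights, link_capacity_a, packet_a=1.0):
    """Assign charging-current packets of one time slot by weighted fair queuing.
    requests: {ev_id: number of requested packets}; weights: {ev_id: w_i > 0}."""
    w_sum = sum(weights[i] for i in requests)
    bandwidth = {i: weights[i] / w_sum * link_capacity_a for i in requests}   # b_i
    # Eq. 7: stamp every packet of queue i with its virtual finish time
    tagged, virt_start = [], {i: 0.0 for i in requests}
    for i, n in requests.items():
        for _ in range(n):
            virt_start[i] += packet_a / bandwidth[i]
            tagged.append((virt_start[i], i))
    # serve packets in order of increasing virtual finish time until capacity is used
    tagged.sort()
    assigned = {i: 0.0 for i in requests}
    for _, i in tagged[:int(link_capacity_a // packet_a)]:
        assigned[i] += packet_a               # contributes to C_i(t) of this slot
    return assigned

# two EVs requesting 16 A each behind a 20 A limit, with weights 3:1
print(wfq_assign({"ev1": 16, "ev2": 16}, {"ev1": 3.0, "ev2": 1.0}, 20.0))
# -> {'ev1': 15.0, 'ev2': 5.0}, i.e. a 3:1 split of the 20 A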
In statically weighted fair queuing, the weight of a charging process is determined once at the beginning of the individual charging process, when the EV arrives at the charging station. Similar to Zhou et al. (2013), the weight is based on the comparison of the required SoC with the current SoC of the EV. The weight of EV \(i\) is calculated by \(w_{i}= \max(SoC_{target}-SoC(t_{arr}),0)\cdot 10+1\). EVs that require a full charge receive a weight of 11, and EVs that arrive at home with the required SoC already in the battery obtain a weight of 1 and hence will only charge with minimum priority.
Dynamically Weighted Fair Queuing (DWFQ) By dynamically changing the weights of the flows, WFQ can be utilized to control the QoS of each flow. In contrast to the statically weighted fair queuing in Zhou et al. (2013), the dynamically weighted fair queuing approach considers both aspects of the charging service, namely the available charging time and the required energy. Because the WFQ weights \(w_{i}(t)\) must be \(\geq 0\), we cannot use the slack time to dynamically estimate the weight as in the LLF policy. Therefore, we divide the remaining charging time \(c(t)\) by the remaining time until departure as in Eq. 8. Charging services that can theoretically finish in time receive a weight \(\leq 1\), whereas others have a weight greater than 1.
$$\begin{array}{*{20}l} w_{i}(t) = \frac{\left(\frac{E_{req} - E(t)}{P_{max}}\right)}{t_{dep} - t} \end{array} $$
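Both weighting schemes can then be written down directly (an illustrative sketch reusing the ChargingService fields from above; the static weight is computed once at arrival, while the weight of Eq. 8 is recomputed in every time slot and assumes \(t < t_{dep}\)).

def static_weight(soc_target, soc_arr):
    # statically weighted fair queuing: fixed at arrival, between 1 and 11
    return max(soc_target - soc_arr, 0.0) * 10.0 + 1.0

def dynamic_weight(ev, t):
    # Eq. 8: residual charging time at maximum power divided by the time left;
    # a value > 1 means the service can no longer finish before departure
    return ((ev.E_req - ev.E) / ev.P_max) / (ev.t_dep - t)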
We first explain the underlying experiment setup and assumptions before analyzing the obtained results with regard to QoS and fairness.
Simulation setup
To evaluate the proposed algorithm in a realistic environment, we extract charging patterns from a mobility survey, state our grid and EV assumptions and define future charging penetration and grid limitation scenarios.
EV charging pattern and battery model
Nowadays, the driving behavior of EV owners differs from that of combustion engine drivers, e.g., due to the smaller range of the vehicles, the limited availability of charging facilities, or because EVs are typically used as a second car. Nevertheless, we assume that most people will not (want to) change their driving behavior drastically when switching from combustion engine vehicles to electric vehicles in the future. Similar to Danner et al. (2021), we take data from a mobility survey as the basis for EV charging behavior, which provides one week of travel behavior in Germany (Eisenmann et al. 2017). This data record contains 1757 surveyed households and more than 2000 individual trips in which the car is the main means of transportation. Each registered trip consists of trip sections with destination, time of departure and arrival, and distance covered.
Because we focus on EV home charging in residential sub-urban areas, we filter the survey data to fit the regional type and aggregate all trip sections such that each trip starts and ends at home. As a result, we obtain the arrival time at home \(t_{arr}\), the departure time from home \(t_{dep}\) and the distance \(d\) of the last trip before arriving at home. For convenience reasons of the EV driver, we assume that the vehicle will not be charged if the stay between two trips is less than 1 hour; hence, the corresponding distance is added to the next trip. In order to obtain the required energy for the charging service, we assume that all EV drivers want to recharge the energy consumed on the last trip during their stay at home. An exemplary driving pattern that leads to the charging service requirements is depicted in Fig. 3. Assuming a battery storage capacity of 40 kWh and an average energy consumption of 17 kWh per 100 km, the required energy of the charging service, which is upper limited by the battery capacity, can be estimated using Eq. 9.
$$\begin{array}{*{20}l} E_{req} = \min\left({40}\text{kWh},\; d \cdot \frac{{17}\text{kWh}}{{100}\text{km}} + E_{m}\right) \end{array} $$
Exemplary EV charging pattern. During the highlighted charging service, the energy for the driven 42 km needs to be charged
In case the battery capacity of an EV is not big enough to cover the whole trip distance, we assume that the driver visits a public charging station during the trip, where only the required additional energy is charged. Hence, with this worst-case assumption, the EV will arrive at home with an empty battery and require a full charge cycle. The departure times are assumed to be strict deadlines, meaning that an EV which is not fully charged at departure time and has missed energy of \(E_{m}\) kWh requires correspondingly more energy in its next charging service.
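Under these assumptions, the charging requirement of a single service can be derived from the survey trips roughly as follows (a sketch with our own parameter names; Eq. 9 caps the requirement at the 40 kWh battery capacity).

BATTERY_KWH = 40.0                        # assumed battery storage capacity
CONSUMPTION_KWH_PER_KM = 17.0 / 100.0     # assumed average consumption

def required_energy(trip_km, missed_kwh=0.0):
    """Eq. 9: energy to recharge at home, limited by the battery capacity.
    missed_kwh is the energy E_m not charged during the previous service."""
    return min(BATTERY_KWH, trip_km * CONSUMPTION_KWH_PER_KM + missed_kwh)

print(round(required_energy(42), 2))    # the 42 km trip of Fig. 3 -> 7.14 kWh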
The number of charging services per EV ranges between 0 and 15 per week. The mean parking duration is approximately 16.6 h and the mean driving distance is 39.1 km. As can be expected, many commuting EVs reach home between 17:00 and 18:30 and need to leave between 6:30 and 8:00 on the next day. In addition to commuters, the data set also contains 13.2% shorter charging stops of less than 3 h, which arrive approximately normally distributed around noon.
One of the most common charging models is constant-current-constant-voltage charging, in which an increasing battery SoC leads to a reduced charging current in the constant voltage phase, also known as the battery saturation phase. This effect is typically observed at charging rates above 50 kW, which are not possible for EV home charging. Therefore, we model the battery charging as a constant current load. Furthermore, the charging efficiency is set to 95% and the minimum adjustable charging current is given by 3 A, similar to the control capabilities in IEC 61851-1.
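With the constant current model and 95% charging efficiency, the per-slot battery update can be approximated as follows (our own sketch; the conversion from allocated current to power assumes a balanced three-phase 230 V connection, consistent with 6.9 kW corresponding to 10 A on three phases).

PHASE_VOLTAGE_V = 230.0    # nominal European phase voltage (assumption)
EFFICIENCY = 0.95          # charging efficiency assumed above

def next_soc(soc, current_a, capacity_kwh, delta_h=1.0 / 60.0):
    """Constant current battery update for one time slot of length delta_h (h)."""
    p_kw = 3 * PHASE_VOLTAGE_V * current_a / 1000.0    # balanced three-phase power
    return min(1.0, soc + p_kw * EFFICIENCY * delta_h / capacity_kwh)

print(round(next_soc(0.50, 10, 40), 4))    # one minute at 10 A (6.9 kW) -> 0.5027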
Power grid and simulation scenarios
The evaluation is carried out on the simulated IEEE 906 low voltage test feeder. This typical European low voltage grid, shown in Fig. 4, connects 55 households on a three-phase system. In our power flow simulation in PyPower within the mosaik co-simulation environment, all households are connected balanced to all three phases. We assume that each household owns two EVs that can charge in parallel at two 11/22 kW wall-boxes, and the aforementioned charging patterns are randomly assigned to the EVs. Because the confidence intervals over 10 independent simulation runs are very small, they are omitted in the "Analysis" section.
IEEE 906 low voltage test feeder, with the transformer located in the top left side (red bigger circle) and 55 connected households (blue smaller circles)
In the baseline scenario without EV charging, the low voltage grid has a peak loading of 60.5 kVA, which is substantially smaller than the total maximum installed charging capacity of 2.4 MVA. Nevertheless, uncontrolled charging with 22 kW and the aforementioned charging patterns results in a peak load of approximately 312 kVA due to the simultaneity factor. Because this would increase the peak load by more than 5 times, which applied to many low voltage grids can cause critical peak loads in the upstream power grid infrastructure, and additionally causes voltage issues (details in Table 1), we artificially limit the maximum loading at the transformer's SU to 100% of the baseline peak load.
Table 1 Charging statistics and impact on the low voltage grid of the different queuing policies
The proposed queuing mechanism acts only as load management. Voltage violations are counteracted by Q(V) and P(V) droop curves as shown in Fig. 5. The decentralized voltage controller changes the reactive power behavior of the rectifier and, in critical situations, even reduces the real power demand of the EV. Note that in our simulation the power factor is configured to always be greater than 0.9 to avoid losses and keep the reactive power ratio in the low voltage grid in a reasonable range. Therefore, the real power demand is slightly reduced between 0.93 and 0.97 p.u. (1.03 and 1.07 p.u., respectively) in order not to exceed the allocated current capacity. Furthermore, the reactive power decreases with the real power demand below 0.93 p.u. in order to maintain the defined minimum power factor. Although this voltage controller might reduce the actually obtained charging capacity for EVs at critical locations in the grid, LLF and DWFQ restore fairness between different charging services by dynamically recalculating the weights. In order to avoid oscillations, we apply a first-order lag filter to the control signal changes.
$$\begin{array}{*{20}l} P(t) &= k \cdot \hat{P}(t) + (1 - k) \cdot P(t - 1) \\ Q(t) &= k \cdot \hat{Q}(t) + (1 - k) \cdot Q(t - 1) \end{array} $$
Voltage controller according to VDE-AR-N 4100
\(\hat{P}(t)\) and \(\hat{Q}(t)\) are the target signal values from the voltage controller, limited by the charging current assigned by the queuing mechanism. The factor \(k\) must be configured to avoid oscillation but still reach the target signal value within the desired time. Our co-simulation steps with \(\Delta\) equal to 1 min, and the target value is nearly reached after 5 steps using \(k = 1 - \frac{1}{e} \approx 0.632\), which provides a fast enough reaction.
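A direct implementation of the first-order lag filter is sketched below (our own illustration; the limiting of \(\hat{P}\) and \(\hat{Q}\) by the voltage controller and the allocated charging current happens before this step).

import math

K = 1 - 1 / math.e    # ~0.632; about 99% of the target is reached after 5 steps

def lag_filter(target, previous, k=K):
    """First-order lag applied to the real and reactive power set-points."""
    return k * target + (1 - k) * previous

# stepping a 0 -> 10 kW set-point change through five 1-minute slots
p = 0.0
for _ in range(5):
    p = lag_filter(10.0, p)
print(round(p, 2))    # 9.93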
The control flow of the co-simulation, visualized in Fig. 6, first executes the queuing mechanism to obtain the assigned charging current \(I\). Secondly, using the locally measured voltage \(U\), the voltage controller calculates the real and reactive power values of the charging service, \(\hat{P}(t)\) and \(\hat{Q}(t)\). Next, the first-order lag filter is applied before the parameters are sent to the EV model, which passes the values to the power flow simulator. The calculated SoC of the EV, the available current capacity and the node voltage from the grid close the control loop, which is executed every 1 min.
Control flow in the co-simulation, where dashed lines are time delayed
All of the following results are obtained from a simulation of one 7-day week. During this simulated week, the individual EVs require between 1 and 13 charging services, on average 5.06. The mean energy demand of a charging service is 6.68 kWh, which is approximately 16.7% of the assumed battery capacity. The total energy demand of all 557 charging services is 13.2% greater than the total demand of the households.
QoS, QoE and fairness index of the different queuing policies
First, we analyze the obtained metric values for the quality of service and experience of the different queuing policies, where Fig. 7 shows box plots of all 557 charging services. All policies achieve high \(QoS_{1}\), \(QoE_{1}\) and \(QoE_{2}\) values that are close to one for most charging services; however, the number and variation of outliers vary significantly among the queuing policies. In the charging service metric \(QoS_{1}\), FCFS, PROP and WFQ have slightly lower averages due to many outliers, and there are even some charging services that do not receive any service at all. That is the case when charging services are blocked by other charging services that would actually provide more flexibility to be moved to a later time. In all three metrics (\(QoS_{1}\), \(QoE_{1}\) and \(QoE_{2}\)), EDF and DWFQ achieve maximum quality, with the exception of \(QoE_{2}\), where some charging services are rated with poor QoE because even a fully charged battery is not enough to reach the next destination. The incorporation of departure times plays an important role for charging service allocation, as indicated by the better performance of EDF, LLF and DWFQ in these three metrics.
QoS and QoE metrics of the different queuing policies for all 557 charging services. The box plots show the result distribution among the charging services and the circle denotes the average value. Below the box plots, the achieved fairness index F is given. 22 kW wall-boxes with 100% transformer limitation. Note that for \(QoS_{1}\), \(QoE_{1}\) and \(QoE_{2}\) most of the charging services have very high service quality; hence, the boxes are very small and close to 1
The two metrics \(QoS_{2}\) and \(QoS_{3}\), which target QoS during the charging time (starting time and power variation), show a different picture. For both, the average value of the first three policies (FCFS, EDF and LLF) is noticeably lower than that of the last three policies. This is due to the fact that the latter three also enable variable charging currents, whereas the first three policies operate as pure time-division multiplexing. Blocked by other services, the start times of charging services are delayed, which affects \(QoS_{2}\), and charging interruptions caused by newly arriving EVs increase the variation in charging power, which is reflected in \(QoS_{3}\).
Below the box plots in Fig. 7, the fairness index \(F\) is depicted for each metric and policy. As can be seen, only the proposed DWFQ policy is among the best three for each metric, whereas EDF and LLF achieve a high fairness index for most metrics except \(QoS_{2}\). Again, this traces back to the time-division behavior of both policies. Figure 8 compares the obtained mean quality of service and experience with the fairness index. The spider plots clearly show that PROP, WFQ and DWFQ outperform the other three policies in terms of the mean value of all quality metrics. Additionally, DWFQ achieves good fairness indices in most metrics, which makes it a good candidate for fair charging service allocation.
Mean QoS and QoE metric values and fairness index of different queuing policies. 22 kW wall-boxes with 100% transformer limitation
Note that similar results with smaller differences between the queuing policies are obtained with a transformer limitation of 120% and 140% or with 11 kW wall-boxes, respectively. It can even be expected that without transformer (and cable) limitations, all policies work similarly, since all charging requests can be served directly. Nevertheless, the best queuing policy should be chosen in order to ensure a high quality of charging service and fairness even in highly limited scenarios.
QoS, QoE and fairness index during the charging service
As already identified in the last section, the quality of service and experience during the charging services differs considerably between the queuing policies. Figure 9 depicts the evolution of the \(QoE_{1}\) mean value and fairness index during charging. Note that for the x-axis all charging services are normalized to the range between arrival \(t_{arr}\) and departure time \(t_{dep}\). This makes them comparable on the same time scale, even though the charging services have different durations and do not take place at the same time. From Fig. 9a it can be seen that the mean \(QoE_{1}\) value of all policies evolves quite similarly during charging. Compared to the other policies, FCFS has a slightly lower value during the first half of the charging services, because newly arrived EVs are blocked until all previous charging services are fully served. Only EDF, LLF and DWFQ finally reach the maximum at departure time. Because EDF and LLF schedule maximum charging power only to the most critical charging processes with regard to time and remaining available charging time, both reach the maximum metric value earlier than DWFQ. In contrast, DWFQ focuses on a fair allocation throughout the whole charging process, which results in a higher mean quality metric in the first half of the charging service. Despite the fact that DWFQ has a slightly lower mean \(QoE_{1}\) value in the last third of the charging service, this policy dominates the fairness index shown in Fig. 9b most of the time, except at the very end. With a higher fairness index during the first half of the charging service, EVs are served more fairly in case they need to leave earlier than the planned departure time. Furthermore, it can be expected that DWFQ (and also WFQ) are more robust against malicious user inputs (e.g. incorrect departure times), because, in contrast to EDF, all charging services always obtain a fair portion of the available charging capacity according to their weight.
Mean \(QoE_{1}\) metric value and fairness index during the charging services using different queuing policies. All charging processes are normalized to the range \([t_{arr},t_{dep}]\). 22 kW wall-boxes with 100% transformer limitation
Impact of the charging service on the low voltage grid
Regardless of which queuing policy is used to distribute the available charging capacity, the load at the transformer is smoothly limited to the configured threshold, except for a few short violations, as shown in Fig. 10. Table 1 summarizes the achieved mean and minimum SoC at departure time (\(QoE_{1}\)) of the different queuing policies and also provides grid statistics extracted from the power flow simulation. We take the minimum of the 10-minute average voltage values (as defined in EN 50160) at all buses as an indicator of the voltage impact of the different queuing policies. The grid losses are calculated by comparing the charged energy of all charging services with the additional energy that passes the transformer. All queuing policies reach an acceptable voltage level, but the three policies with variable charging rates improve the voltage level by more than 2 V compared to the other policies. Similarly, grid losses are reduced by approximately 1% (approximately 37 kWh). This is due to the fact that the total charging capacity is shared among more charging services, with each receiving a smaller share, thereby reducing the voltage drop and grid losses. Note that this study does not consider that EV charging equipment might have lower efficiency when not utilized at the rated power. Values from the baseline scenario without charging (Baseline), uncontrolled charging (Uncontrolled) and only using the aforementioned local voltage controller (U-control) are given in Table 1 for comparison.
Loading at the transformer during one day, limited to 100% of the baseline peak load. EV charging with 22 kW using the DWFQ policy
Conclusion and future work
This paper presented a set of QoS and QoE metrics that can be used to evaluate EV charging services. Among others, the charged energy, charging power variations and whether the next destination can be reached with the charged energy are considered. Secondly, we have proposed a hierarchically scalable charging allocation mechanism that uses queuing systems and can apply various queuing policies, e.g. first come first serve, earliest departure first or least laxity first. The proposed charging solution can capture charging restrictions coming from the battery or from legacy communication protocols between the wall-boxes and the EV. Three of the analyzed queuing policies provide decent QoS and QoE in all five metrics while achieving a better overall fairness compared to the other policies in our co-simulation. Due to the variable charging rates and the dynamic recalculation of the weight (using the remaining available charging time and the remaining required energy), the proposed DWFQ is among the best policies and additionally has only a small negative effect on voltage levels and grid losses. Finally, all charging services in our simulation, extracted from a mobility survey, are sufficiently served with a transformer power limitation of 100% of its normal baseline load without EV charging. Therefore, we have demonstrated that with an advanced charging service allocation the demand of a high EV penetration can be met with the same peak load as in the baseline scenario without EVs; however, QoS and fairness highly depend on the chosen allocation policy.
In the future, we want to perform a sensitivity analysis on how malicious user inputs impact QoS and fairness and evaluate how incentive mechanisms can mitigate the impact of wrong inputs. Additionally, we plan to compare the discussed hierarchical charging mechanism with a fully decentralized probabilistic allocation protocol that uses multiple access control mechanisms from the networking domain.
The co-simulation environment with all connected simulators and results are available from the corresponding author on reasonable request.
This project has partially received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 957845: project "Community-empowered Sustainable Multi-Vector Energy Islands — RENergetic" and is partially supported by the Bavarian Ministry of Economic Affairs, Regional Development and Energy and by the Zentrum Digitalisierung.Bayern within the project "Energy Management System for Integrated Business Models" (EMSIG). Publication funding was provided by the German Federal Ministry for Economic Affairs and Energy.
University of Passau, Innstraße 41, Passau, 94032, Germany
Dominik Danner & Hermann de Meer
DD contributed the idea, service quality metrics, algorithms as well as the evaluation setup and analysis. He also wrote the first draft of the paper. HdM provided research direction, supervision, and helped to write the final version of the paper. Both authors read and approved the final manuscript.
Correspondence to Dominik Danner.
Danner, D., de Meer, H. Quality of service and fairness for electric vehicle charging as a service. Energy Inform 4, 16 (2021). https://doi.org/10.1186/s42162-021-00175-3
Dynamically weighted fair queuing
Fair charging service allocation
Queuing model | CommonCrawl |
eigenvalues of constant times a matrix
The matrix has two eigenvalues (1 and 1), but they are obviously not distinct. So as long as I keep working with that one matrix A. Thus the number of positive singular values in your problem is also n − 2. Let's say that A is equal to the matrix [1 2; 4 3]. Av = λv. This is a final exam problem of linear algebra at the Ohio State University. Then, for some scalar λ ∈ Λ(B), we have [B11 B12; 0 B22][x1; x2] = λ[x1; x2]. Almost all vectors change direction when they are multiplied by A. Recall that the eigenvectors are only defined up to a constant: even when the length is specified, they are still only defined up to a scalar of modulus one (the sign, for real matrices). Those eigenvalues (here they are λ = 1 and 1/2) are a new way to see into the heart of a matrix. • If we multiply A by v, the result will be equal to v times a constant. Let A be a square matrix of order n. If λ is an eigenvalue of A, then λ^m is an eigenvalue of A^m. For those numbers, the matrix A − λI becomes singular (zero determinant). Example: the matrix also has non-distinct eigenvalues of 1 and 1. The MS Excel spreadsheet used to solve this problem, seen above, can be downloaded from this link: Media:ExcelSolveEigenvalue.xls. Example: Find the eigenvalues and eigenvectors of a 2x2 matrix. Two proofs are given. Introduction to Eigenvalues: To explain eigenvalues, we first explain eigenvectors.
eigenvalues also stems from an attack on estimating the Schatten norms of a matrix. Now, let's see if we can actually use this in any kind of concrete way to figure out eigenvalues. For instance, initial guesses of 1, 5, and 13 will lead to Eigenvalues of 0, 6, and 9, respectively. Taking powers, adding multiples of the identity, later taking exponentials, whatever I do I keep the same eigenvectors and everything is easy. Two by two eigenvalues are the easiest to do, easiest to understand. Let's verify these facts with some random matrices: Let's verify these facts with some random matrices: The eigenvalues and eigenvectors of a matrix may be complex, even when the matrix is real. The code block diagonalizes the Hamiltonian into constant total-spin sectors and furthermore into blocks of definite momentum. If A is a real constant row-sum or a real constant column sum matrix, then a way to obtain an inclusion region for its eigenvalues is described in [7]. Since A is the identity matrix, Av=v for any vector v, i.e. •The first author was supported by NSF Grant DCR 8507573 and by M.P.I. The coefficient update correlation matrix R M has been calculated using Monte Carlo simulations for N = 3, M = 1, σ ν 2 = 1 and a ranging from − 0.9 to − 0.1 in steps of 0.1. 4, pp. And of course, let me remember the basic dogma of eigenvalues and eigenvectors. Linear and Multilinear Algebra: Vol. Fact eigenvalues also stems from an attack on estimating the Schatten norms of a matrix. So let's do a simple 2 by 2, let's do an R2. If you look at my find_eigenvalues() function below you will see it does a brute force loop over a range of values of dt,dx,and dy. Theorem ERMCP can be a time-saver for computing eigenvalues and eigenvectors of real matrices with complex eigenvalues, since the conjugate eigenvalue and eigenspace can be inferred from the theorem rather than computed. Gershgorin's circle theorem is also a simple way to get information about the eigenvalues of a square (complex) matrix A = (a ij). Specify the eigenvalues The eigenvalues of matrix $ \mathbf{A} $ are thus $ \lambda = 6 $, $ \lambda = 3 $, and $ \lambda = 7$. For any idempotent matrix trace(A) = rank(A) that is equal to the nonzero eigenvalue namely 1 of A. [V,D,W] = eig(A,B) also returns full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. In general, if an eigenvalue λ of a matrix is known, then a corresponding eigen-vector x can be determined by solving for any particular solution of the singular system (A −λI)x = … $\endgroup$ – Brian Borchers Sep 13 '19 at 13:51 I do not wish to write the whole code for it because I know it is a long job, so I searched for some adhoc code for that but just found 1 or 2 libraries and at first I prefer not to include libraries and I don't want to move to matlab. Note that if we took the second row we would get . On bounding the eigenvalues of matrices with constant row-sums. On this front, we note that, in independent work, Li and Woodruff obtained lower bounds that are polynomial in n [LW12]. 5. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.. Visit Stack Exchange The eigenvalues of a symmetric matrix are always real and the eigenvectors are always orthogonal! A100 was found by using the eigenvalues of A, not by multiplying 100 matrices. The values of λ that satisfy the equation are the generalized eigenvalues. 
1/ 2: I factored the quadratic into 1 times 1 2, to see the two eigenvalues D 1 and D 1 2. Or if we could rewrite this as saying lambda is an eigenvalue of A if and only if-- I'll write it as if-- the determinant of lambda times the identity matrix minus A is equal to 0. You should be looking for ways to make the higher level computation deal with this eventuality. then the characteristic equation is . To find eigenvalues of a matrix all we need to do is solve a polynomial. Good to separate out the two by two case from the later n by n eigenvalue problem. Eigenvector equations We rewrite the characteristic equation in matrix form to a system of three linear equations. If x 2 6= 0, then B 22x 2 = x 2, and 2 (B 22). I'm writing an algorithm with a lot of steps (PCA), and two of them are finding eigenvalues and eigenvectors of a given matrix. (2019). 3. Although we obtained more precise information above, it is useful to observe that we could have deduced this so easily. 6.1. $\begingroup$ If your matrices are positive semidefinite but singular, then any floating-point computation of the eigenvalues is likely to produce small negative eigenvalues that are effectively 0. 4. A is not invertible if and only if is an eigenvalue of A. The generalized eigenvalue problem is to determine the solution to the equation Av = λBv, where A and B are n-by-n matrices, v is a column vector of length n, and λ is a scalar. I have a large $2^N \times 2^N$ matrix. Let's find the eigenvector, v 1, associated with the eigenvalue, λ 1 =-1, first. welcome to pf! and the two eigenvalues are . SOLUTION: • In such problems, we first find the eigenvalues of the matrix. 67, No. The Eigenvalues for matrix A were determined to be 0, 6, and 9. The vectors are normalized to unit length. The eigenvalues values for a triangular matrix are equal to the entries in the given triangular matrix. 3. In particular, Schatten norm 1 of a matrix, also called the nuclear norm, is the sum of the absolute values of the eigenvalues/singular values. Thus, the eigenvalues of T are in the interval −2 < λ < 2. Given eigenvalues and eigenvectors of a matrix A, compute A^10 v. One of the final exam problem in Linear Algebra Math 2568 at the Ohio State University. Banded Toeplitz matrices, block matrices, eigenvalues, computational complexity, matrix difference equation, cyclic reduction. Math. All that's left is to find the two eigenvectors. It is the exact Hamiltonian of a spin chain model which I have generated with code I wrote in Fortran. Excel calculates the Eigenvalue nearest to the value of the initial guess. so clearly from the top row of the equations we get. If I add 5 times the identity to any matrix, the eigenvalues of that matrix go up by 5. Soc, v. 8, no. If is any number, then is an eigenvalue of . Example 1 The matrix A has two eigenvalues D1 and 1=2. • The constant is called the eigenvalue corresponding to 푣. I wish to diagonalize it (find the eigenvalues), however when I import it into Mathematica and apply any vector is an eigenvector of A. 40% funds, and the second author was supported by NSF Grant DCR 8507573. The eigenvalues and eigenvectors of a matrix are scalars and vectors such that .If is a diagonal matrix with the eigenvalues on the diagonal, and is a matrix with the eigenvectors as its columns, then .The matrix is almost always invertible, in which case we have .This is called the eigendecomposition. 
The resulting eigenvalue spread for R and R_M is plotted in Figure 2.15 for zero-mean white Gaussian ν(k) and for binary ν(k) taking on values ±1 with equal probability. The vectors are normalized to unit length. Adding a constant times the unit matrix and eigenvalues. The eigenvalue 3 is defective, the eigenvalue 2 is nondefective, and the matrix A is defective.
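Two claims that recur in the snippets above are easy to check numerically: the eigenvalues of A = [[.8, .3], [.2, .7]] are 1 and 1/2, and adding c times the identity shifts every eigenvalue by c while leaving the eigenvectors unchanged. The short NumPy check below uses the matrix and the shift value quoted above.

```python
import numpy as np

A = np.array([[0.8, 0.3],
              [0.2, 0.7]])
vals, vecs = np.linalg.eig(A)
print(np.sort(vals))            # [0.5 1. ]

c = 5.0
vals_shifted, _ = np.linalg.eig(A + c * np.eye(2))
print(np.sort(vals_shifted))    # [5.5 6. ]  -> each eigenvalue moved up by 5

# Same eigenvectors, since (A + cI)v = (lambda + c)v whenever Av = lambda v.
for v in vecs.T:
    w = (A + c * np.eye(2)) @ v
    lam = (w @ v) / (v @ v)     # Rayleigh quotient recovers the shifted eigenvalue
    print(np.allclose(w, lam * v))
```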
A new, easy-to-make pectin-honey hydrogel enhances wound healing in rats
Gessica Giusto1,
Cristina Vercelli1,
Francesco Comino1,
Vittorio Caramello1,
Massimiliano Tursi1 &
Marco Gandini1
Honey, alone or in combination, has been used for wound healing since ancient times and has reemerged as a topic of interest in the last decade. Pectin has recently been investigated for its use in various biomedical applications such as drug delivery, skin protection, and scaffolding for cells. The aim of the present study was to develop and evaluate a pectin-honey hydrogel (PHH) as a wound healing membrane and to compare this dressing to liquid honey.
Thirty-six adult male Sprague-Dawley rats were anesthetized, and a 2 × 2 cm excisional wound was created on the dorsum. Animals were randomly assigned to four groups (PHH, LH, Pec, and C): in the PHH group, the pectin-honey hydrogel was applied under a bandage on the wound; in the LH group, liquid Manuka honey was applied; in the Pec group, pectin hydrogel was applied; and in the C group, only a bandage was applied to the wound. Images of the wound were taken at defined time points, and the wound area reduction rate was calculated and compared between groups.
The wound area reduction rate was faster in the PHH, LH, and Pec groups compared to the control group and was significantly faster in the PHH group. Surprisingly, the Pec group exhibited faster wound healing than the LH group, but this effect was not statistically significant.
This is the first study using pectin in combination with honey to produce biomedical hydrogels for wound treatment. The results indicate that the use of PHH is effective for promoting and accelerating wound healing.
Wound healing is a complex process that involves a plethora of factors that significantly influence the reestablishment of the skin barrier. Currently, several compounds, including honey, are used to positively influence the wound healing process [1]. The use of honey in wound healing, alone or in combination with other compounds, is ancient and has become a topic of interest in several investigations in the last decade [2, 3]. Honey contains high levels of glycine, methionine, arginine, and proline, which are all necessary for collagen formation and fibroblast deposition, the essential factors needed for healing [4]. Manuka honey has been demonstrated to have positive effects on wound healing [5].
During the wound healing process, the epithelium cells must be allowed to migrate, but this is only possible if the environment is moist. Hence, some of the most widespread dressing methods involve the use of hydrogels. Hydrogels aid in maintaining a moist environment, therefore facilitating wound healing by preventing dehydration, necrosis, and apoptosis [6]. Hydrogels have high water content and can absorb a large amount of body fluid, contributing to the maintenance of a moist environment and encouraging granulation tissue formation. Moreover, the tridimensional structure of hydrogels works as a scaffold, permitting cell adhesion, proliferation, and neoangiogenesis [6].
Pectin has recently been investigated for use in various biomedical applications including drug delivery, skin protection, and scaffolding [7]. Pectin is a heterosaccharide found in the terrestrial plant cell wall. It is a polyuronate, and when subjected to calcium-induced gelation, it forms an egg-box-like structure that can hold cells inside the gel [8]. However, it is typically used in conjunction with other polymers because of its poor intrinsic mechanical properties [8]. Pectin is inexpensive, can be extracted from renewable sources, is not cytotoxic, acts as a gelling agent, and is suitable for many biomedical applications [9].
Based on the positive wound-healing properties of both honey and pectin, we hypothesized that a novel, hybrid wound dressing could be used to further enhance the regenerative process. The aim of the present study was to develop and evaluate a pectin-honey hydrogel (PHH) wound membrane and compare its effectiveness to pectin hydrogel (PH) and liquid honey.
Honey (Medihoney 440) was purchased from Manuka Health (66 Weona Court, Te Awamutu 3800, New Zealand), and citrus pectin was purchased from Ardet s.r.l. (Villanova Mondovì, Cuneo, Italy).
Preparation of pectin-honey hydrogels (PHH) and pectin hydrogels (pec)
The preparation method used has been previously described, with some modifications [10, 11]. Briefly, the pectin-honey hydrogels were prepared starting from a 1:1 (v/v) solution of liquid honey (Manuka Health, New Zealand) and sterile deionized water. The same volume of pectin powder (Footnote 1) was then added little by little with continuous stirring until the mixture was homogeneous. The resulting foam was spread into 2 mm-thick films and hot-air dried at 40 ± 0.5 °C; the films were cut into 5 × 5 cm squares and further conditioned in an air drier at 25 ± 1 °C for 5 days. The films were then collected and hand packed in polyethylene under vacuum. The pectin hydrogel (Pec) was made using the same method but substituting honey with the same volume of deionized water.
All films were sterilized by gamma irradiation at 25 kGy (Sterigenics International LLC, Bologna, Italy) [11, 12].
All procedures were approved by the Bioethical Committee of the University of Turin and by the Italian Ministry of Health (approval issued in 2015).
Thirty-six adult male Sprague Dawley rats, weighing 225–250 g, were purchased from Charles-Rivers (Italy).
All rats were housed in single cages for 7 days prior to the beginning of the experiment. They were fed commercial food, and water was given ad libitum. The room temperature was set to 23 °C for the duration of the experiment, and cages were cleaned daily.
Experimental wound model and wounding procedure
A full-thickness excisional model was used to create the wounds [4]. Anesthesia was administered intramuscularly using 5 mg/kg of xylazine (Footnote 2) and 50 mg/kg of tiletamine and zolazepam (Footnote 3). Animals were anesthetized for approximately 1 h. Under anesthesia, the dorsal hair was shaved (Footnote 4) and the skin was cleaned with a three-step iodopovidone–chlorhexidine scrub. Using a dermatological pencil (Footnote 5), a 2 × 2 cm square was drawn on the back skin, distal to the shoulder blades, and the skin was cut using a scalpel and scissors. This location was chosen because this area is seldom deformed by animal movements, preventing auto-traumatism. Immediately after the surgery, all animals were dressed using a bandage without glue, covered by Vetrap (Footnote 6).
Animals were randomly divided into four groups of 9 animals each, using an online random integer generator (https://www.random.org/integers/):
Group C: negative control group. No treatment was applied.
Group LH: liquid Manuka honey (from the same lot used for PHH production) was applied to the wounds before bandaging.
Group Pec: animals treated with a pectin hydrogel under the dressing.
Group PHH: animals treated with PHH under the dressing.
Determination of the wound healing rate
On days 0, 2, 4, 6, 8, 11, 13, 15, 18, 21, and 23 after surgery, the bandages were removed and digital pictures of the wounds were taken. Then, a new bandage with (groups Pec, PHH, and LH) or without (group C) treatment was applied. The animals were sedated with xylazine (Footnote 2) in order to perform the procedure. Photographs were taken under standardized conditions. Rats were gently held in the same position by an operator, and a distance of 10 cm between the camera and the dorsum of the rat was maintained. The wound surface area was then measured using ImageJ software (Footnote 7). The comparison between the area at day 0 and the area at the set time points was used to calculate the wound area reduction rate using the following formula:
$$ \text{Wound area reduction rate} = \left(\frac{A_{0} - A_{t}}{A_{0}}\right) \times 100 $$
Where A0 and At are the initial area and the wound area at time t, respectively [4, 13].
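The following is a small worked example of the formula above; the area values are made-up illustrative numbers, not data from the study.

```python
# Percent reduction of the wound area relative to day 0 (negative if the wound enlarged).

def wound_area_reduction_rate(a0_cm2, at_cm2):
    return (a0_cm2 - at_cm2) / a0_cm2 * 100

print(wound_area_reduction_rate(4.0, 4.4))   # -10.0 -> wound enlarged (as seen in the first days)
print(wound_area_reduction_rate(4.0, 1.0))   #  75.0 -> wound is 75% smaller than at day 0
```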
Histological analysis
After euthanasia, the area around the scar or residual wound was harvested, fixed in 4% buffered formalin, dehydrated, and embedded in paraffin. Five-micron sections were then stained with hematoxylin and eosin and evaluated by a blinded pathologist.
Data were analyzed with the Shapiro–Wilk test to evaluate their distribution, and statistical differences were assessed using one-way ANOVA for parametric values. Statistical significance was defined as p < 0.05. All tests were run using commercial software (Footnote 8). The results are expressed as mean values ± standard deviation (SD).
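A minimal sketch of the analysis pipeline described above (normality check, then one-way ANOVA across the four groups) is shown below; the arrays contain placeholder numbers, not the study's measurements.

```python
from scipy import stats

phh = [82.1, 85.4, 79.9, 88.2]   # wound area reduction rate (%), hypothetical values
lh  = [75.3, 78.8, 72.1, 80.5]
pec = [77.0, 79.5, 74.2, 81.3]
ctl = [60.2, 65.7, 58.9, 63.4]

for name, group in [("PHH", phh), ("LH", lh), ("Pec", pec), ("C", ctl)]:
    w, p = stats.shapiro(group)            # normality check
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

f, p = stats.f_oneway(phh, lh, pec, ctl)   # one-way ANOVA across the four groups
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}  (significant if p < 0.05)")
```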
All data passed the Shapiro-Wilk test and were normally distributed.
Wound area reduction rate
The wound area reduction rates of the control and treatment groups are shown in Table 1 and Fig. 1. As shown, the wound area reduction rate (WARR) was negative for all groups in the first 3 days and then became positive. Total closure of the wound was achieved in all groups except the controls by day 23.
Table 1 Wound area reduction rate at each time point (±SD)
Picture of wound healing of different groups at each time point
On the 23rd day, the entire surface of the lesion treated with the dressing was covered with new epithelium. All the wounds treated with PHH and pectin dressing had well-developed dermis. Mature fibrous tissue proliferation was observed in the dermis. In the PHH and Pec groups, effective healing of the wounds was indicated by the presence of hair follicles and matured fibrous tissue (Fig. 2). In the control group, there was a significantly larger number of inflammatory cells compared with the treatment groups (PHH/Pec/LH) (Figs. 3, 4, and 5).
Histology image of a completed healed wound with organized mature fibrous tissue (small arrows in the box) and hair follicles (group PHH)
Histology image of the healed wound with severe dermal fibrosis (F and large arrows) and interstitial lymphocytic infiltration (small arrows in the box) (Control group)
Histology image of the healed wound with moderate interstitial lymphocytic infiltration (small arrows in the box) and dermal fibrosis (group PHH)
Histology image of the healed wound with severe interstitial lymphocytic infiltration (small arrows in the box) and dermal fibrosis (F and large arrows) (group LH)
Incisional and excisional wounds are the two main models that allow the phases of wound healing to be determined [4, 8]. Full-thickness excisional wounds were used in this study to macroscopically evaluate the wound area reduction rate in animals treated with PHH, liquid honey, and pectin hydrogels. The results demonstrate that topical administration of pectin and pectin-honey hydrogels accelerates wound healing in rats. As reported in Table 1, we found that the wound area increased during the first 3 days. In our opinion, this was due to the dimensions of the wound, which initially enlarged from the midline toward the abaxial edges under the effect of gravity. After 6 days, the WARR became positive for all groups. From this time on, the WARR was higher in all treated groups than in the control group, although not significantly. The difference became significant only from day 18. This could imply a positive effect of all treatments, in particular of LH and PHH, in the proliferative phase of wound healing. Furthermore, we found that healing in the control group was slower than that reported in previous studies [4, 13]; while the cause of this delay is unclear, it could have contributed to the differences found.
The belief that keeping a wound dry promotes healing has been negated over the last several years [14]. A moist dressing provides a better environment for wound healing, which involves different steps such as cell migration, cell differentiation, angiogenesis, matrix formation, granulation tissue formation, and re-epithelialization. Epithelialization occurs faster in a wet environment, which can be created by an occlusive or semi-occlusive wound dressing [15]. The ideal dressing should be able to absorb the exudates on the wound surface [1]. For effective wound healing, this process should be promoted and not inhibited [14].
In a previous study, we demonstrated that a pectin-honey hydrogel has optimal characteristics for wound healing with regard to the water vapor transmission rate (WVTR) and fluid uptake [10]. Honey, with its high concentration of sugar, is a hyperosmotic substance with high hygroscopic capacity [16,17,18]. Honey can increase its weight under physiological conditions by up to 150%, so it is likely able to absorb excessive wound exudates [11]. Furthermore, hydrogels have been proven to have good fluid absorbance as a result of their hydrophilic nature, and this property is very important for the quick absorption of exudates during the wound healing phases [1, 16]. Pectin can act as a scaffold for cell migration and differentiation [8], while honey acts as an anti-inflammatory, antibacterial, and stimulatory agent [4]. The acceleration of wound healing could be due to intrinsic characteristics of honey, such as the production of hydrogen peroxide and its nutritional, hygroscopic, antioxidant, and antibacterial properties, which provide wounds with a suitable healing environment [4]. Surprisingly, the pectin hydrogel performed better than bulk honey. This finding could be attributed to the natural properties of this substance, such as hydrophilicity, which creates a barrier against bacteria. Pectin also acts as a binding agent for growth factors [9]. During pectin solubilization, the wound environment becomes acidic, which may help to control bacterial growth [9]. Another important advantage could be the direct and continuous contact of the hydrogel with the wound during the healing phase, compared with bulk honey.
Wound contraction is an essential process in healing that leads to wound closure, and honey can increase contraction and enhance the deposition of fibroblasts and collagen, which are essential for healing [4]. It has been demonstrated that the greater the wound contraction, the less the scar deposition [19]. We believe that the increase in the wound reduction rate in the treated groups was caused by an increase in wound contraction induced by honey and by the use of pectin as a scaffolding material. The use of both materials allowed an ideal environment for healing to be established, as demonstrated in previous studies [8, 9, 18, 20, 21]. The same factors may have caused the reduction in inflammation (compared with controls) found in the treatment groups.
The difference between the PHH and LH groups could be attributed to the sustained contact of honey with the wound as a result of the use of the pectin hydrogel and regenerative factors from the pectin itself.
Further studies on the effects of pectin hydrogel on wound healing are warranted to clarify this aspect.
The main advantages of PHH compared with other hydrogels or honey-based devices are that it is very inexpensive, easy to produce, and is easily applied to the wound. This should allow for the use of honey membrane wound dressings in economically disadvantaged regions. The most expensive material in the membrane composition is Manuka honey, responsible for its antimicrobial activity and for the improvement in the wound healing process [13, 22]. Several investigators worldwide are studying the different characteristics of honeys, and it will be possible to utilize less expensive honeys in the pectin-honey hydrogel in the future [18, 23,24,25].
This is the first study to use pectin in combination with honey to produce biomedical hydrogels for wound treatment. Our results clearly indicate a synergistic effect of the materials used to prepare the films; each material provides several healing-promoting activities that are otherwise found separately in pharmaceutical products. These combined materials could be used in wound healing applications. Based on the results obtained in the present study, the use of PHH is effective for promoting and accelerating wound healing.
ARDET s.r.l., Cuneo, Italy
Bayer Animal Health, Milano, Italy
Virbac, Milano, Italy
DermaSciences, Princeton, NY
Reckitt Benckiser, Milano, Italy
Conmed, Utica, NY, USA
3 M, Italy
National Institutes of Health, USA
GraphPad Software Inc., La Jolla, USA
Archana D, Dutta J, Dutta PK. Evaluation of chitosan nano dressing for wound healing: characterization, in vitro and in vivo studies. Int J Biol Macromol. 2013;57:193–203.
Aljady AM, Kamaruddin MY, Jamal AM, Mohd Yassim MY. Biochemical study on the efficacy of Malaysian honey on inflicted wounds: an animal model. Med J of Islam Academy of Sciences. 2000;13(3):125–32.
Davis SC, Perez R. Cosmeceuticals and natural products: wound healing. Clin Dermatol. 2009;27(5):502–6.
Tan MK, Hasan Adli DS, Tumiran MA, Abdulla MA, Yusoff KM. The efficacy of gelam honey dressing towards excisional wound healing. Evid Based Complement Alternat Med. 2012;805932
Steward JA, McGrane OL, Wedmore IS. Wound care in the wilderness: is there evidence for honey? Wilderness Environ Med. 2014;25:103–10.
Huang X, Zhang Y, Zhang X, Xu L, Chen X, Wei S. Influence of radiation crosslinked carboxymethyl-chitosan/gelatin hydrogel on cutaneous wound healing. Mater Sci and Eng C Mater Biol Appl. 2013;33(8):4816–24.
Lin HY, Chen HH, Chang SH, Ni TS. Pectin-chitosan-PVA nanofibrous scaffold made by electrospinning and its potential use as a skin tissue scaffold. J Biomater Sci Polym Ed. 2013;24(4):470–84.
Ninan N, Muthiah M, Park IK, Elain A, Thomas S, Grohens Y. Pectin/carboxymethyl cellulose/microfibrillated cellulose composite scaffolds for tissue engineering. Carbohydr Polym. 2013;98:877–85.
Munarin F, Tanzi MC, Petrini P. Advances in biomedical applications of pectin gels. Int J of Biol Macromol. 2012;51:681–9.
Walker JE. Method of preparing homogeneous honey pectin composition. Patented. 1942 Sept;8
Giusto G, Beretta G, Vercelli C, Valle E, Iussich S, Borghi R, Odetti P, Monacelli F, Tramuta C, Grego E, Nebbia P, Robino P, Odore R, Gandini M. A simple method to produce pectin-honey hydrogels and its characterization as new biomaterial for surgical use. Biomed Mat Eng; Under review.
Carnwath R, Graham EM, Reynolds K, Pollock PJ. The antimicrobial activity of honey against common equine wound bacterial isolates. The Vet J. 2014;199(1):110–4.
Khoo YT, Halim AS, Singh KK, Mohamad NA. Wound contraction effects and antibacterial properties of Tualang honey on full-thickness burn wounds in rats in comparison to hydrofibre. BMC Complement Altern Med. 2010;3(10):48.
Maeda H, Kobayashi H, Miyahara T, Hashimoto Y, Akiyoshi K, Kasugai S. Effects of a polysaccharide nanogel-crosslinked membrane on wound healing. J Biomed Mater Res B Appl Biomater. 2015;10(1002)
Rahmanian-Schwarz A, Ndhlovu M, Held M, Knoeller T, Ebrahimi B, Schaller HE, Stahl S. Evaluation of two commonly used temporary skin dressing for the treatment of acute partial-thickness wounds in rats. Dermatol Surg. 2012;38(6):898–904.
Arslan A, Simşek M, Aldemir SD, Kazaroğlu NM, Gümüşderelioğlu M. Honey-based PET or PET/chitosan fibrous wound dressings: effect of honey on electrospinning process. J Biomater Sci Polym Ed. 2014;25(10):999–1012.
Aysan E, Ayar E, Aren A, Cifter C. The role of intra-peritoneal honey administration in preventing post-operative peritoneal adhesions. Eur J Obstet Gynecol Reprod Biol. 2002;104(2):152–5.
Molan P, Rhodes T. Honey. A biological wound dressing. Wounds. 2015;27(6):141–51.
Medhi B, Puri A, Upadhyay S, Kaman L. Topical application of honey in the treatment of wound healing: a metaanalysis. JK Scie. 2008;10(4):166–9.
Osuagwu FC, Oladejo OW, Imosemi IO, Aiku A, Ekpos OE, Salami AA, Oyedele OO, Akang EU. Enhanced wound contraction in fresh wounds dressed with honey in Wistar rats (Rattus norvegicus). West Afr J Med. 2004;23(2):114–8.
Osuegbu OI, Yama OE, Edibamode EI, Awolola NA, Clement AB, Amah CI. Honey improves healing of circumscribed excision injury to the paniculus adiposus in albino rats. Nig Q J Hosp Med. 2012;22(4):268–73.
Yusof N, AinulHafiza AH, Zohdi RM, Bak-ar MZA. Development of honey hydrogel dressing for enhanced wound healing. Radiat Phys Chem. 2007;76:1767–70.
Grego E, Robino P, Tramuta C, Giusto G, Boi M, Colombo R, Serra G, Chiado-Cutin S, Gandini M, Nebbia P. Evaluation of antimicrobial activity of Italian honey for wound healing application in veterinary medicine. Schweiz Arch Tierheilkd. 2016;158(7):521–7.
Molan P. Honey: antimicrobial actions and role in disease management. In: Ahmad I, Aqil F, editors. New Strategies Combating Bacterial Infection. Weinheim, Germany: Wiley-VCH; 2009. p. 229–53.
George NM, Cutting KF. Antibacterial honey: in vitro activity against clinical isolates of MRSA, VRE, and other multiresistant gram-negative organisms including Pseudomonas Aeruginosa. Wounds. 2007;19(9):231–6.
Sterigenics International LLC (Minerbio, Bologna, Italy).
No supporting funding was received.
The datasets supporting the conclusions of this article are included within the manuscript (Table 1).
GG and MG designed and performed the study, acquired, analyzed and interpreted the data, wrote and reviewed the paper. CV performed the study, acquired and analyzed the data, wrote and reviewed the paper. FC and MT performed the histological analysis, interpreted the data and reviewed the paper. VC assisted during the surgical procedure and reviewed the paper. All authors read and approved the final manuscript.
Consent for publication
All procedures were approved by the Bioethical Committee of the University of Turin and by the Italian Ministry of Health.
The authors declare that they have no competing interests.
Department of Veterinary Sciences, University of Torino, Largo P. Braccini 2-5, Grugliasco, (TO), Italy
Gessica Giusto, Cristina Vercelli, Francesco Comino, Vittorio Caramello, Massimiliano Tursi & Marco Gandini
Correspondence to Gessica Giusto.
Giusto, G., Vercelli, C., Comino, F. et al. A new, easy-to-make pectin-honey hydrogel enhances wound healing in rats. BMC Complement Altern Med 17, 266 (2017). https://doi.org/10.1186/s12906-017-1769-1 | CommonCrawl |
Number Needed to Treat
The number needed to treat (NNT) is the number of patients that need to be treated to prevent 1 additional adverse outcome (e.g., stroke, death). For example, if a drug has an NNT of 10, it means 10 people must be treated with the drug to prevent 1 additional adverse outcome. The NNT is the inverse of the absolute risk reduction (ARR), which is equal to the rate of adverse outcomes occurring in the control group minus the rate of adverse outcomes in the experimental group.
Characteristics and Interpretation
Number Needed to Harm
Calculating NNT and NNH
In order to comprehend the concept of number needed to treat, some previous knowledge about descriptive and inferential statistics is recommended.
Null Hypothesis – Statistics Basics
p-Value – Statistics Basics
Errors in Hypothesis Tests: Type I Error versus Type II Error
Relative Risks (Measures of Association): Introduction
Relative Risk – Relative Risks (Measures of Association)
Attributable Risk – Attributable Risk and Odds Ratio (Measures of Association)
The number needed to treat (NNT), also called the number needed to benefit (NNTB), and its analog, the number needed to harm (NNH), are simply other measures of effect size, like Cohen's d, and help relate an effect-size difference back to real-world clinical relevance.
The NNT signifies how many patients would need to be treated to get 1 additional patient better, who would not have otherwise gotten better without that particular treatment.
The inverse of the absolute risk reduction (ARR) = 1/ARR
NNT is a number between 1 and infinity:
A lower number indicates more effective treatment.
Fractions are rounded up to the next whole number.
A perfect NNT would be 1, meaning that for every patient treated, 1 got better in the trial, who would not have otherwise without that specific treatment.
Absolute risk reduction (ARR), absolute risk difference (ARD), and absolute risk excess (ARE)
All terms represent the absolute value of the difference between the proportion (expressed as a percent, fraction, or incidence) of patients in the control group (Pc) who had the outcome of interest and the proportion of patients in the experimental group (Pe) with the outcome of interest:
$$ ARR = ARD = ARE = \left| P_{c} - P_{e} \right| $$
Must be interpreted in context: An isolated NNT point estimate has little value, although approximately 50% of clinical studies do not provide the necessary contextual information.
NNT uses the ARR and not the relative risk reduction (RRR), which tends to overemphasize the benefit.
RRR = (Pc − Pe)/Pc.
For example, if the initial risk were 0.2% and drug X lowered this risk to 0.1%, the RRR would still be 50%, but the ARR would be only 0.1%, which is not much of a difference from the baseline (the short sketch below works through these numbers).
As the RRR is directly correlated with the ARR, the NNT is also inversely correlated with the RRR.
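The sketch below illustrates why the RRR can look impressive while the ARR (and hence the NNT) tells a more sobering story, using the 0.2% to 0.1% example above.

```python
import math

p_control, p_treated = 0.002, 0.001      # 0.2% baseline risk lowered to 0.1%

arr = p_control - p_treated              # absolute risk reduction
rrr = (p_control - p_treated) / p_control
nnt = math.ceil(1 / arr)

print(f"RRR = {rrr:.0%}")                # 50%    -> sounds large
print(f"ARR = {arr:.3%}")                # 0.100% -> tiny absolute difference
print(f"NNT = {nnt}")                    # 1000 patients treated to prevent 1 outcome
```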
The NNT tells you how many patients would benefit, but does not tell you how much they may benefit. The answers to the following questions should be provided with the NNT in order to fully interpret it:
What is the baseline risk of patients in the study?
What is the comparator? (e.g., no treatment? placebo? another therapy?)
What is the outcome? (e.g., complete cure? 30% improvement?)
How long does the study last? (must be included with the NNT)
What is the confidence interval?
The lower the NNT, the better; the larger the NNT, the fewer people will be helped.
Treatment interventions that have an NNT in the single or low double digits are generally considered effective for treating symptomatic conditions.
For outcomes with high clinical significance, such as preventing death, an NNT in the lower 100s may also be considered useful.
For preventive therapies, NNTs can also be high.
The NNH is the additional number of individuals who need to be exposed to risk (harmful exposure or treatment) to have 1 extra person develop the disease compared to that in the unexposed group.
NNH is the inverse of ARE (1/ARE).
The relationship between NNH and NNT: A negative NNT indicates that the treatment has a harmful effect. For example, an NNT of −10 indicates that if 10 patients are treated with the new treatment, one additional person would be harmed compared to patients receiving the standard treatment, i.e., the NNH = 10.
Like NNT, the NNH must be interpreted in context.
The basis for calculating NNT and NNH
A 2 x 2 contingency table uses a binary outcome and 2 groups of subjects to show the basis for calculating NNT and NNH. Each result must be expressed as a proportion, percent, or incidence, and not as the actual number of subjects.
                     Treated group    Control group
Positive outcome     a                b
Negative outcome     c                d
Total                a + c            b + d
NNT: number needed to treat
NNH: number needed to harm
With the following definitions, the difference in proportions is P treated − P control:
P treated = the proportion of subjects with a positive outcome in the treated group
P treated = a/(a + c)
P control = the proportion of subjects with a positive outcome in the control group
P control = b/(b + d)
The absolute risk difference (ARD) is equal to the ARR, which is calculated as the absolute value of the difference between P treated and P control.
$$ ARD = ARR = \left| P_{treated} - P_{control} \right| $$
So, the NNT can be calculated as:
$$ NNT = \frac{1}{\left| P_{treated} - P_{control} \right|} $$
If the treated or exposed group has a worse outcome than the control, then the ARR is called ARE. In that case, the NNT is called the number needed to harm (NNH). In both cases, the calculation is the same (NNH = 1/ARD).
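A small helper that computes the absolute risk difference and the NNT/NNH from the 2 x 2 table above is sketched below (a, b are the outcome-positive counts in the treated and control columns; c, d are the outcome-negative counts); it is an illustrative sketch, not a clinical calculator.

```python
from fractions import Fraction
import math

def number_needed(a, b, c, d):
    """NNT (called NNH when the treated/exposed group fares worse), rounded up to a whole number."""
    p_treated = Fraction(a, a + c)
    p_control = Fraction(b, b + d)
    ard = abs(p_treated - p_control)   # absolute risk difference (ARR or ARE)
    return math.ceil(1 / ard)          # exact arithmetic, then round up

# Hypothetical trial: 30/100 treated patients and 20/100 controls reach the desired outcome.
print(number_needed(a=30, b=20, c=70, d=80))   # ARD = 0.10 -> NNT = 10
```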
A randomized clinical trial studied the effect of childhood exposure to 2nd-hand smoke on the incidence of bronchogenic adenocarcinoma (BA). The study included 100 subjects (50 exposed to childhood 2nd-hand smoke and 50 healthy controls with no childhood exposure) and involved monitoring the lifetime incidence of BA. Data from the study are shown in the table below:
                    Exposed group    Unexposed group
BA present          18               7
BA not present      32               43
Total               50               50
BA: bronchogenic adenocarcinoma
What is the NNH?
Answer: NNH = 1/absolute risk difference (called "ARE" when NNH is involved). ARE = Pe − Pc = 18/50 − 7/50 = 0.22. NNH = 1/0.22 = 4.55 ⇾ 5, which means that 5 individuals need to be exposed to childhood 2nd-hand smoke to have 1 extra person develop BA compared to the unexposed group.
What is the relative risk increase in the study cited in Question 1?
Answer: The relative risk increase = (Pe − Pc)/Pc = (18/50 − 7/50)/(7/50) = 1.57, which means that individuals exposed to childhood 2nd-hand smoke have a 157% higher risk (2.57 times the risk) of developing BA than those who were not exposed.
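A quick numerical check of the two answers above, using the counts from the example table:

```python
import math

p_exposed   = 18 / 50
p_unexposed = 7 / 50

are = p_exposed - p_unexposed                   # absolute risk excess
nnh = math.ceil(1 / are)                        # 1/0.22 = 4.55 -> round up
rri = (p_exposed - p_unexposed) / p_unexposed   # relative risk increase
rr  = p_exposed / p_unexposed                   # relative risk

print(f"ARE = {are:.2f}, NNH = {nnh}, RRI = {rri:.2f}, RR = {rr:.2f}")
# ARE = 0.22, NNH = 5, RRI = 1.57, RR = 2.57
```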
Metabolic syndrome and myocardium steatosis in subclinical type 2 diabetes mellitus: a 1H-magnetic resonance spectroscopy study
Yue Gao1,2 na1,
Yan Ren3 na1,
Ying-kun Guo2 na1,
Xi Liu1,
Lin-jun Xie2,
Li Jiang1,
Meng-ting Shen1,
Ming-yan Deng3 &
Zhi-gang Yang ORCID: orcid.org/0000-0001-9341-76971
Cardiovascular Diabetology volume 19, Article number: 70 (2020)
Metabolic syndrome (MetS) is a cluster of metabolic abnormalities that collectively cause an increased risk of type 2 diabetes mellitus (T2DM) and atherosclerotic cardiovascular disease. This study aimed to evaluate the role of myocardial steatosis in T2DM patients with or without MetS, as well as the relationship between subclinical left ventricular (LV) myocardial dysfunction and myocardial steatosis.
Methods and materials
We recruited 53 T2DM patients and 20 healthy controls, all of whom underwent cardiac magnetic resonance examination. The T2DM patients were subdivided into two groups: the MetS group and the non-MetS group. LV deformation, perfusion parameters and myocardial triglyceride (TG) content were measured and compared among the three groups. Pearson's and Spearman's correlation analyses were performed to investigate the correlation between LV cardiac parameters and myocardial steatosis. Receiver operating characteristic (ROC) curve analysis was performed to illustrate the relationship between myocardial steatosis and LV subclinical myocardial dysfunction.
An increase in myocardial TG content was found in the MetS group compared with that in the other groups (MetS vs. non-MetS: 1.54 ± 0.63% vs. 1.16 ± 0.45%; MetS vs. normal: 1.54 ± 0.63% vs. 0.61 ± 0.22%; all p < 0.001). Furthermore, reduced LV deformation [reduced longitudinal and radial peak strain (PS); all p < 0.017] and microvascular dysfunction [increased time to maximum signal intensity (TTM) and reduced upslope; all p < 0.017] were found in the MetS group. Myocardial TG content was positively associated with MetS (r = 0.314, p < 0.001), and it was independently associated with TTM (β = 0.441, p < 0.001) and LV longitudinal PS (β = 0.323, p = 0.021). ROC analysis showed that myocardial TG content might predict the risk of decreased LV longitudinal myocardial deformation (AUC = 0.74) and perfusion function (AUC = 0.71).
Myocardial TG content increased in T2DM patients with concurrent MetS. Myocardial steatosis was positively associated with decreased myocardial deformation and perfusion dysfunction, which may be an indicator for predicting diabetic cardiomyopathy.
Current literature outlines that the excessive accumulation of lipid in cardiomyocytes (myocardial steatosis) facilitates myocardial lipotoxic injury, which plays an important role in the development of diabetic cardiomyopathy [1,2,3]. On the other hand, metabolic syndrome (MetS) is a cluster of risk factors such as central obesity, hyperglycemia, dyslipidemia and hypertension that collectively increase the risk of type 2 diabetes mellitus (T2DM) and cardiovascular disease [4]. Central obesity is one of the most evident clinical features of MetS; therefore, its development has a prominent role in MetS diagnosis [5]. Chronic inflammation caused by central obesity has been described as an essential factor in the occurrence and development of MetS, and in the transition from MetS to cardiovascular disease [6]. Moreover, ectopic fat accumulates around the viscera and regularly enters tissues that normally contain only a minor amount of adipose tissue, such as the heart [7]. At present, few studies have investigated myocardial steatosis in T2DM patients with concurrent MetS and its influence on subclinical cardiac dysfunction.
In recent decades, cardiac magnetic resonance (CMR) imaging has been commonly used in clinical practice and can provide various characteristics of cardiac structure and myocardial tissue [8,9,10,11,12,13]. More specifically, feature tracking and first-pass perfusion CMR imaging have been used to measure myocardial deformation and to detect microvascular dysfunction. In addition, proton magnetic resonance spectroscopy (1H-MRS) can quantitatively detect triglyceride (TG) content in the myocardium. Therefore, this study aimed to evaluate myocardial steatosis using CMR in T2DM patients with or without concurrent MetS and to investigate the association between left ventricular (LV) subclinical myocardial dysfunction and myocardial steatosis.
Initially, we prospectively enrolled 92 patients diagnosed with T2DM according to the World Health Organization standards between June 2017 and May 2019 [14]. Exclusion criteria were as follows: [1] contraindication to CMR; [2] known cardiovascular disease or congenital heart disease; [3] presence of dyspnea, chest pain, palpitation or other cardiovascular disease-related symptoms; and [4] impaired hepatic function or a history of liver disease. Following these criteria, a total of 53 T2DM patients (31 males and 22 females; mean age 54.49 ± 11.16 years) were finally included in this study. In addition, age-, sex-, and body mass index-matched healthy volunteers were recruited as the control group. Exclusion criteria for the control group were as follows: [1] DM or impaired glucose tolerance; [2] known acute or chronic disease such as hypertension; [3] hyperlipidemia; [4] electrocardiogram abnormalities; and [5] cardiovascular abnormalities detected by CMR (perfusion defect, local or diffuse myocardial late-gadolinium enhancement, abnormal ventricular motion, valvular stenosis, etc.). Hence, 20 healthy controls (11 males and 9 females; mean age 50.95 ± 10.185 years) were included in this study. All T2DM patients and controls underwent CMR after providing written informed consent. The study protocol was approved by the West China Hospital of Sichuan University Biomedical Research Ethics Committee.
Clinical characteristics, medication, and serum biochemical indexes of all patients and healthy controls were collected. Blood pressure was measured approximately 20 min before CMR examinations when the subject was in a relaxed state. Blood sampling for serum biochemical indexes was performed within 1 week of the CMR scan without changing the subject's medication regimen.
Adhering to the definition of MetS by the International Diabetes Federation (2005), we divided the T2DM patients into MetS and non-MetS groups [15]. In this definition, central obesity is considered an essential diagnostic element for MetS and is defined as a waist circumference of ≥ 90 cm for males and ≥ 80 cm for females. In addition, the presence of any two of the following factors is sufficient for the diagnosis of MetS: (a) increased plasma TG levels (> 150 mg/dL [1.7 mmol/L]) or specific treatment for this lipid abnormality; (b) reduced high-density lipoprotein (HDL)-cholesterol (< 40 mg/dL [1.0 mmol/L] in males; < 50 mg/dL [1.3 mmol/L] in females) or specific treatment for this lipid abnormality; (c) increased blood pressure (systolic ≥ 130 mm Hg and/or diastolic ≥ 85 mm Hg) or treatment of previously diagnosed hypertension; and (d) increased fasting plasma glucose levels (> 100 mg/dL [5.6 mmol/L]) or previously diagnosed T2DM.
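The grouping rule above is mechanical, so a schematic sketch may help; the following Python function is only an illustration of the IDF (2005) criteria as summarised in this paragraph (the function name, argument names and units are assumptions, not part of the study's workflow):

```python
def meets_idf_mets(waist_cm, male, tg_mmol_l, hdl_mmol_l, sbp, dbp,
                   fasting_glucose_mmol_l, on_lipid_treatment=False,
                   on_bp_treatment=False, has_t2dm=False):
    """Illustrative check of the IDF (2005) MetS definition:
    central obesity plus any two of the four additional factors."""
    central_obesity = waist_cm >= (90 if male else 80)
    factors = [
        tg_mmol_l > 1.7 or on_lipid_treatment,                                   # raised triglycerides
        (hdl_mmol_l < 1.0 if male else hdl_mmol_l < 1.3) or on_lipid_treatment,  # low HDL-cholesterol
        sbp >= 130 or dbp >= 85 or on_bp_treatment,                              # raised blood pressure
        fasting_glucose_mmol_l > 5.6 or has_t2dm,                                # raised fasting glucose
    ]
    return central_obesity and sum(factors) >= 2
```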
CMR scanning protocol
All subjects were examined in the supine position using a 3.0-T whole-body scanner (Skyra; Siemens Medical Solutions, Erlangen, Germany). A dedicated two-element cardiac phased-array coil was used for signal detection. A standard ECG-triggering device was used and end-inspiratory breath holding was performed. Following a survey scan, cine images such as long-axis four-chamber views and short-axis two-chamber views were acquired using a steady-state free-precession sequence (TR/TE 39.34/1.22 ms, flip angle 38°, slice thickness 8 mm, field of view 360 × 300 mm2, matrix size 256 × 166). For first-pass perfusion imaging, gadobenate dimeglumine (MultiHance; Bracco, Milan, Italy) was intravenously injected at a dose of 0.2 ml/kg body weight at an injection rate of 2.5–3.0 mL/s, followed by a 20 ml saline flush at a rate of 3.0 ml/s. First-pass perfusion images were then acquired using an inversion-recovery echo-planar imaging sequence (TR/TE 163.00/0.98 ms, flip angle 10°, slice thickness 8 mm, field of view 360 × 270 mm2, matrix size 256 × 192) with three standard short-axis slices (apical, middle, and basal), with the basal slice positioned so as not to cover the mitral valve level.
1H-MRS was performed to obtain the myocardial TG content using a standard flex coil for signal reception. Voxel positioning was performed on the standard 4-chamber and short-axis 2-chamber views, and a single voxel was placed on the interventricular septum in the middle slice (Fig. 1). Spectroscopic data were acquired with ECG triggering and respiratory navigator echoes to minimize motion artifacts. We performed two scans using the abovementioned sequence. During the first scan, water suppression was used to remove the water contribution from the signal of interest. During the second scan, water suppression was not used and the water signal was obtained. Spectral data collection was performed with the PRESS sequence (TR/TE 560/33 ms, average 4). All 1H-MRS data were analyzed using Java-based software (jMRUI, version 6.0, Leuven, Belgium). TG content was calculated as a percentage relative to water and expressed as follows:
$$\frac{\text{signal amplitude of TG}}{\text{signal amplitude of water}} \times 100$$
Fig. 1. Measurement of myocardial triglyceride content by 1H-MRS. Left: 4-chamber and 2-chamber cardiac images showing the signal voxel positioned at the interventricular septum in the middle slice. Myocardial triglyceride content was calculated as a percentage relative to water.
CMR data analysis
We uploaded all acquired image data to an offline workstation with semi-automated software (Cvi42; Circle Cardiovascular Imaging, Inc., Calgary, Canada). Endocardial and epicardial traces were delineated manually by two experienced radiologists on the serial short-axis slices at the end-diastolic and end-systolic phases. LV functional parameters and LV mass were automatically determined. LV remodeling was characterized by the ratio of LV mass to LVEDV (LVMVR). The LV global function index (LVGFI) was calculated using the following formula:
$$\text{LVGFI} = \frac{\text{LVSV}}{(\text{LVEDV} + \text{LVESV})/2 + \text{LV mass}/1.05} \times 100$$
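As a concrete illustration of the two derived indices defined above (LVMVR and LVGFI), a small helper could look as follows; the function and argument names are hypothetical, and the constant 1.05 g/ml is the one appearing in the formula:

```python
def lv_remodeling_indices(lv_mass_g, lvedv_ml, lvesv_ml):
    """Illustrative computation of LVMVR and LVGFI as defined in the text."""
    lvsv = lvedv_ml - lvesv_ml                    # LV stroke volume
    lvmvr = lv_mass_g / lvedv_ml                  # LV mass-to-volume ratio
    myocardial_volume = lv_mass_g / 1.05          # LV mass converted to volume
    lvgfi = lvsv / ((lvedv_ml + lvesv_ml) / 2 + myocardial_volume) * 100
    return lvmvr, lvgfi
```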
To evaluate LV microvascular perfusion, the blood pool as well as the endocardial and epicardial traces of the middle slice of the first-pass perfusion images were delineated manually (to match the voxel level of the 1H-MRS), and a region of interest was placed over the blood pool to serve as the contrast reference. In addition to myocardial and blood-pool time–signal intensity curves, semi-quantitative perfusion parameters were obtained, namely upslope, maximum signal intensity (MaxSI), and time to maximum signal intensity (TTM).
Variability analysis
To determine intra-observer variability, LV deformation and perfusion parameters in 30 random cases (20 T2DM patients and 10 normal controls) were measured twice, at a 1-week interval, by one radiologist. A second investigator, blinded to the first investigator's results, then reanalyzed the measurements, and interobserver variability was assessed on the basis of the two investigators' results. Both radiologists were blinded to group status (DM vs. control, DM with MetS vs. DM without MetS).
Statistical analyses were performed with commercially available SPSS software (version 21.0 for Windows; SPSS, Inc., Chicago, IL, USA). Results are expressed as the mean ± standard deviation. One-way analysis of variance was performed to evaluate differences among the following groups: T2DM with MetS, T2DM without MetS, and control. Based on Bonferroni's correction for multigroup comparisons, p-values of < 0.017 were considered statistically significant. Spearman's and Pearson's correlation analyses were conducted to identify the relationship between myocardial steatosis and cardiac deformation. Moreover, multivariable stepwise linear regression analysis was employed to identify the relationship between myocardial TG content and subclinical cardiac dysfunction. Receiver operating characteristic (ROC) curve analysis was conducted to assess the ability of myocardial steatosis to predict LV subclinical myocardial dysfunction.
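The analyses were run in SPSS; purely as an illustration of the same steps (group comparison with a Bonferroni-adjusted threshold, Spearman correlation, and ROC analysis), a Python sketch using SciPy and scikit-learn might look like the following. The array names are hypothetical placeholders for the measured values, and the Youden-index cutoff is one common convention rather than necessarily the rule used in the paper:

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score, roc_curve

def compare_three_groups(tg_mets, tg_non_mets, tg_control, alpha=0.05):
    """One-way ANOVA across the three groups; pairwise tests would then be
    judged against a Bonferroni-adjusted threshold (~0.017 for 3 comparisons)."""
    f, p = stats.f_oneway(tg_mets, tg_non_mets, tg_control)
    return f, p, alpha / 3

def correlation_and_roc(tg, longitudinal_ps, impaired_deformation):
    """Spearman correlation plus ROC analysis of TG content as a predictor."""
    rho, p_rho = stats.spearmanr(tg, longitudinal_ps)
    auc = roc_auc_score(impaired_deformation, tg)
    fpr, tpr, thresholds = roc_curve(impaired_deformation, tg)
    cutoff = thresholds[np.argmax(tpr - fpr)]   # Youden index as an example rule
    return rho, p_rho, auc, cutoff
```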
Patient characteristics and metabolic parameters
Of the 53 T2DM patients, 23 were included in the non-MetS group (15 males, mean age 54.85 ± 10.87 years) and 30 in the MetS group (16 males, mean age 54.48 ± 9.61 years). Table 1 presents their baseline characteristics, metabolic parameters, and medication. Weight and BMI were higher in the MetS group than in the non-MetS group and the control group, whereas systolic blood pressure was higher in the MetS group than in the control group.
Table 1 Baseline and metabolic parameters T2DM patients with or without metabolic syndrome and the normal controls
HbA1c was higher in T2DM patients than in normal controls, and serum TG content was higher in the MetS group than in the control group (1.76 ± 1.64 mmol/L vs. 1.03 ± 0.27 mmol/L; p < 0.001). In terms of medication, the MetS group was more likely to receive treatment for lipid abnormalities. The remaining baseline and metabolic characteristics showed no statistically significant differences among the three groups.
CMR 1H-MRS analysis
The results for myocardial TG content are summarized in Table 2. The MetS group had significantly higher myocardial TG content than the non-MetS group (1.54 ± 0.63% vs. 1.16 ± 0.45%, p < 0.001) and the control group (1.54 ± 0.63% vs. 0.61 ± 0.22%, p < 0.001; Fig. 2a). Furthermore, the non-MetS group had significantly higher myocardial TG content than the control group (1.16 ± 0.45% vs. 0.61 ± 0.22%, p < 0.001).
Table 2 CMR parameters for T2DM patients with or without metabolic syndrome and the normal controls
Differences in myocardial triglyceride content (a), LV longitudinal PS (b), LV radial PS (c), upslope (d) and TTM (e) among patients in T2DM with MetS, T2DM without MetS, and normal subjects. *p < 0.017
CMR imaging analysis
Regarding LV function and deformation, LVEDV, LVESV and LVGFI (all p < 0.001) were lower in the MetS group than in the control group, whereas LV mass (92.25 ± 22.81 g/m2 vs. 75.93 ± 14.33 g/m2, p < 0.001) and LVMVR (0.76 ± 0.22 vs. 0.55 ± 0.13, p < 0.001) were higher in the MetS group than in the control group.
The global longitudinal peak strain (PS) (MetS vs. non-MetS: − 12.67 ± 3.46% vs. − 14.78 ± 3.48%; MetS vs. control: − 12.67 ± 3.46% vs. − 15.71 ± 2.10%, all p < 0.001) (Fig. 2b) and global radial PS (MetS vs. non-MetS: 33.28 ± 9.00% vs. 39.98 ± 12.05%; MetS vs. normal: 33.28 ± 9.00% vs. 39.85 ± 7.64%, all p < 0.001) (Fig. 2c) were lower in the MetS group than in the non-MetS and control groups. There was no statistically significant difference in myocardial deformation between the non-MetS and control group.
T2DM patients in the MetS group had a significantly lower perfusion upslope (2.10 ± 1.19 vs. 2.93 ± 0.78, p < 0.001) (Fig. 2d) but higher TTM values (36.09 ± 14.57 s vs. 24.77 ± 11.01 s, p < 0.001) (Fig. 2e) than the control group. However, no difference was observed in these values compared with the non-MetS group. In fact, there was no significant difference in any perfusion parameter between the non-MetS and the control group.
Association between MetS, myocardial steatosis, and myocardial function
Spearman correlation analysis showed that MetS had a positive correlation with myocardial TG content (r = 0.314, p < 0.05). Furthermore, myocardial TG content was positively associated with LV longitudinal PS (r = 0.359, p < 0.05), TTM (r = 0.415, p < 0.05), and negatively associated with upslope (r = − 0.280, p < 0.05) (Fig. 3). There was no significant correlation between MetS and other cardiac-related parameters (all p > 0.05).
Relationship between myocardial triglyceride content and LV longitudinal PS, TTM and upslope
Multivariable stepwise linear regression analysis indicated that myocardial TG content (β = 0.441, p < 0.001) and diastolic blood pressure (β = 0.254, p = 0.041) were independently associated with the TTM (Model.3: R2 = 0.459), and the myocardial TG content (β = 0.323, p = 0.021) was also independently associated with LV longitudinal PS (Model.3: R2 = 0.323) (Table 3).
Table 3 Multivariable associations between cardiac parameters and myocardial triglyceride content
ROC analysis demonstrated that the cutoff value for myocardial TG content that predicted the risk of myocardial microvascular perfusion dysfunction (sensitivity = 57.1%, specificity = 84.0%, and AUC = 0.74) (Fig. 4a) and longitudinal myocardial deformation (sensitivity = 59.2%, specificity = 84.6%, and AUC = 0.71) (Fig. 4b) was 1.56.
Receiver operating characteristic curve (ROC) analysis to predict the relationship between the myocardial triglyceride content and TTM (a), LV longitudinal PS (b)
Inter- and intra-observer variability
Table 4 summarizes the inter- and intra-observer variability for the LV deformation and first-pass perfusion analyses. The ICCs for intra- and interobserver variability were 0.923–0.959 and 0.883–0.955, respectively, for LV deformation, and 0.977–0.991 and 0.982–0.993, respectively, for first-pass perfusion, indicating good intra- and interobserver agreement for both analyses.
Table 4 Inter- and intra-observer variability of first-perfusion and tissue tracking
In this study, the following principal findings were obtained: (1) T2DM patients with MetS may be more likely to present myocardial steatosis; (2) LV deformation and microcirculatory perfusion were decreased in T2DM patients with MetS; and (3) increased myocardial TG content was associated with reduced LV longitudinal deformation and microvascular perfusion, and might be an appropriate predictor of myocardial damage.
As a noninvasive technique, 1H-MRS can investigate cardiac metabolism in vivo, thereby quantitatively detecting metabolites including fatty acids (FA), creatine, etc. Therefore, 1H-MRS can help diagnose myocardial steatosis at an early stage and facilitate the targeted treatment of diabetes mellitus.
The pathological mechanism of diabetic cardiomyopathy is complex and multifactorial. Recent studies have indicated that myocardial lipotoxic injury as a result of lipid oversupply plays an important role in diabetic heart disease [16, 17]. In this study, we identified the progression of myocardial steatosis (increased myocardial TG content) in T2DM patients, particularly in those with concurrent MetS, and it was positively associated with MetS. We suspect that this was due to insulin resistance, central obesity, and increased serum FA content, which lead to increased myocardial FA delivery and uptake in T2DM patients [18]. Furthermore, central obesity is one of the most critical factors facilitating excessive myocardial lipid deposition in MetS patients [19]. Therefore, T2DM patients with concurrent MetS are more prone to developing myocardial steatosis.
In addition, we observed that only the MetS group exhibited reduced LV longitudinal and radial PS, which might indicate a reduction in early myocardial diastolic function. According to the distribution of myocardial fibers, the longitudinal myocardial fibers are predominantly located in the sub-endocardium and are most susceptible to early microvascular ischemia [20]. As the central clinical features of MetS, insulin resistance and central obesity increase inflammation and oxidative stress, thereby inducing endothelial dysfunction and cardiomyocyte apoptosis, reducing the capacity for myocardial deformation and ultimately damaging the myocardium; this results in decreased LV deformation of varying degrees [21]. Moreover, our observations regarding upslope and TTM indicated that microcirculatory function was considerably decreased in T2DM patients with MetS, whereas, despite a downward trend, there was no statistically significant difference between the non-MetS group and the normal control group. This means that when T2DM is accompanied by MetS, myocardial microvascular perfusion is reduced. We can presume that, compared with subcutaneous fat, central obesity may cause more serious myocardial damage because it is associated with adverse remodeling of the intramural coronary arterioles; the impaired vasodilation further reduces myocardial microvascular perfusion [22,23,24,25,26,27,28].
An additional finding of our study was that the T2DM patients in the MetS group exhibited a tendency toward concentric LV remodeling and reduced LVGFI. In contrast, the T2DM patients in the non-MetS group did not present similar myocardial structural changes. Concentric LV remodeling is considered to be an early sign of obesity-related cardiac remodeling before LV hypertrophy occurs [29]. It has been reported that LV wall thickening is associated with radial strain [30]. Therefore, LV concentric remodeling can lead to myocardial hypertrophy, and radial strain can be reduced to varying degrees. In our study, the LV global radial PS was decreased in the MetS group. In addition, we hypothesize that, besides insulin resistance and central obesity, other pathological disorders secondary to metabolic ones, such as hypertension, hyperlipidemia, and hyperglycemia, may continue to cause more serious myocardial lesions in T2DM patients with concurrent MetS than in those without MetS [31].
A previous study identified that myocardial steatosis may play an important mechanistic role in the development of diastolic dysfunction in women with microvascular dysfunction and no obstructive CAD [32]. In our study, we found a similar mechanism in T2DM patients. Our results show an association between myocardial steatosis and longitudinal PS, which also confirms that T2DM patients are prone to early diastolic dysfunction [8]. Moreover, using an electrocardiographically gated gradient-echo sequence with velocity encoding, Rijzewijk et al. found that myocardial steatosis is an independent predictor of early diastolic dysfunction in uncomplicated T2DM [33]. Our present study reached a similar conclusion using CMR, in that myocardial deformation decreased with increasing myocardial TG content in T2DM patients with MetS. Besides myocardial deformation, we also identified that an increase in myocardial TG content is negatively related to myocardial microvascular perfusion function, regardless of patients' age, BMI, heart rate, duration of diabetes, plasma glucose, and blood pressure, and that myocardial TG content had a moderately predictive effect on myocardial microvascular perfusion. Furthermore, Nyman et al. found that MetS was associated with LV diastolic dysfunction [7], and our research indicated that when T2DM is accompanied by MetS, the impairment of LV deformation and microvascular perfusion is aggravated. In order to adapt to the metabolic disorder, the myocardium maintains a high oxygen consumption rate and FA oxidation rate under conditions of insulin resistance, visceral adiposity, and increased serum dietary FA content, thus facilitating the accumulation of intracellular TG in the myocyte cytoplasm [34, 35]. Intracellular TG is relatively inert, but an increase in its content reflects a respective increase in anaerobic oxidation of FA and accumulation of lipotoxic intermediates such as ceramide and diacylglycerol [18, 36,37,38]. These lipotoxic intermediates have been shown to activate signaling pathways that affect ATP production, insulin sensitivity, and apoptosis, and they also trigger replacement fibrosis and myocardial contractile dysfunction [18, 39]. Therefore, we believe that T2DM patients with concurrent MetS are more prone to developing myocardial lipotoxic injury, suggesting that T2DM and MetS have synergistic effects on myocardial degeneration and myocardial injury.
There are several limitations to our study. First, this was a single-center study; hence, a selection bias may have influenced the results. Second, because we did not perform secondary CMR examinations or other follow-up investigations, our results need to be verified by longitudinal studies of T2DM patients. Hence, it is our principal focus to verify these findings in future follow-up studies.
Our study found that, even when cardiac function is preserved in patients with T2DM, concurrent MetS may aggravate the reduction in myocardial deformation and myocardial perfusion, and that these changes are related to the degree of myocardial steatosis. Meanwhile, myocardial triglyceride content might be a useful indicator for predicting diabetic cardiomyopathy. Therefore, more attention should be paid to myocardial steatosis in diabetic patients with metabolic syndrome in clinical practice, and reducing myocardial steatosis may also help prevent the progression of diabetic cardiomyopathy.
HDL: High-density lipoprotein
TG: Plasma triglycerides
CMR: Cardiovascular magnetic resonance
LV: Left ventricular
EDV: End-diastolic volume
ESV: End-systolic volume
SV: Stroke volume
EF: Ejection fraction
TTM: Time to maximum signal intensity
MaxSI: Maximum signal intensity
1H-MRS: Proton magnetic resonance spectroscopy
PS: Peak strain
ROC: Receiver operating characteristic curve
Mauger C, Gilbert K, Lee AM, et al. Right ventricular shape and function: cardiovascular magnetic resonance reference morphology and biventricular risk factor morphometrics in UK Biobank. J Cardiovasc Magn Reson. 2019;21(1):41.
Yoneyama K, Venkatesh BA, Wu CO, Mewton N, Gjesdal O, Kishi S, McClelland RL, Bluemke DA, Lima JA. Diabetes mellitus and insulin resistance associate with left ventricular shape and torsion by cardiovascular magnetic resonance imaging in asymptomatic individuals from the multi-ethnic study of atherosclerosis. J Cardiovasc Magn Reson. 2018;187(4177):652–3.
Korosoglou G, Humpert PM, Ahrens J, et al. Left ventricular diastolic function in type 2 diabetes mellitus is associated with myocardial triglyceride content but not with impaired myocardial perfusion reserve. J Magn Reson Imaging. 2012;35(4):804–11.
Grundy SM, Cleeman JI, Daniels SR, et al. Diagnosis and Management of the Metabolic Syndrome An American Heart Association/National Heart, Lung, and Blood Institute Scientific Statement. Circulation. 2006;112:2735–52.
O'Neill S, O'Driscoll L. Metabolic syndrome: a closer look at the growing epidemic and its associated pathologies. Obes Rev. 2015;16(1):1–12.
Yogita R, Pothineni SK. Metabolic syndrome: pathophysiology, management, and modulation by natural compounds. Ther Adv Vaccines. 2018;8(1):25–32.
Nyman K, Granér M, Pentikäinen MO, et al. Cardiac steatosis and left ventricular function in men with metabolic syndrome. J Cardiovasc Magn Reson. 2013;15(1):1–11.
Liu Xi, Yang Zhi-gang, Gao Yue, et al. Left ventricular subclinical myocardial dysfunction in uncomplicated type 2 diabetes mellitus is associated with impaired myocardial perfusion: a contrast-enhanced cardiovascular magnetic resonance study. Cardiovasc Diabetol. 2018;17(1):1–12.
Romano S, Judd RM, Kim RJ, et al. Left Ventricular Long-Axis Function Assessed with Cardiac Cine MR Imaging Is an Independent Predictor of All-Cause Mortality in Patients with Reduced Ejection Fraction: a Multicenter Study. Radiology. 2018;286(2):452–60.
Karur GR, Robison S, Iwanochko RM, et al. Use of myocardial T1 mapping at 30 T to differentiate anderson-fabry disease from hypertrophic cardiomyopathy. Radiology. 2018;288(2):398–406.
Patscheider H, Lorbeer R, Auweter S, et al. Subclinical changes in MRI-determined right ventricular volumes and function in subjects with prediabetes and diabetes. Eur Radiol. 2018;28(7):3105–13.
Dobrovie M, Barreiro-Perez M, Curione D, et al. Inter-vendor reproducibility and accuracy of segmental left ventricular strain measurements using CMR feature tracking. Eur Radiol. 2019;29(12):6846–57.
Cao JJ, Ngai N, Duncanson L, et al. A comparison of both DENSE and feature tracking techniques with tagging for the cardiovascular magnetic resonance assessment of myocardial strain. J Cardiovasc Magn Reson. 2018;20(1):26.
Alberti KG, Zimmet PZ. Definition, diagnosis and classification of diabetes mellitus and its complications Part 1: diagnosis and classification of diabetes mellitus provisional report of a WHO consultation. Diabet Med. 1998;15(7):539–53.
Kgmm A, Pz Z, Shaw J. The metabolic syndrome—a new worldwide definition from the international diabetesis federation consensus. Lancet. 2005;366:1059–62.
Levelt E, Pavlides M, Banerjee R, et al. Ectopic and visceral fat deposition in lean and obese patients with type 2 diabetes. J Am Coll Cardiol. 2016;68(1):53–63.
Hu L, Zha YF, Wang L, et al. Quantitative evaluation of vertebral microvascular permeability and fat fraction in alloxan-induced diabetic rabbits. Radiology. 2018;287(1):128–36.
Ngdf AC, Delgado V, Bertini M, et al. Myocardial steatosis and biventricular strain and strain rate imaging in patients with type 2 diabetes mellitus. Circulation. 2010;122:2538–44.
Iozzo P. Metabolic toxicity of the heart: insights from molecular imaging. Nutr Metab Cardiovasc Dis. 2010;20(3):147–56.
Vinereanu D, Lim PO, Frenneaux MP, et al. Reduced myocardial velocities of left ventricular long-axis contraction identify both systolic and diastolic heart failure—a comparison with brain natriuretic peptide. Eur J Heart Fail. 2005;7(4):512–9.
Murai J, Nishizawa H, Otsuka A, et al. Low muscle quality in Japanese type 2 diabetic patients with visceral fat accumulation. Cardiovasc Diabetol. 2018;17(1):112.
Hulten EA, Bittencourt MS, Preston R, et al. Obesity, metabolic syndrome and cardiovascular prognosis: from the Partners coronary computed tomography angiography registry. Cardiovasc Diabetol. 2017;16(1):1–11.
Mancusi C, de Simone G, Best LG, et al. Myocardial mechano-energetic efficiency and insulin resistance in non-diabetic members of the Strong Heart Study cohort. Cardiovasc Diabetol. 2019;18(1):56.
Klein C, Brunereau J, Lacroix D, et al. Left atrial epicardial adipose tissue radiodensity is associated with electrophysiological properties of atrial myocardium in patients with atrial fibrillation. Eur Radiol. 2019;29(6):3027–35.
Granér M, Siren R, Nyman K, et al. Cardiac steatosis associates with visceral obesity in nondiabetic obese men. J Clin Endocrinol Metab. 2013;98(3):1189–97.
Levy BI, Schiffrin EL, Mourad JJ, et al. Impaired tissue perfusion a pathology common to hypertension, obesity, and diabetes mellitus. Circulation. 2008;118(9):968–76.
Mandry D, Eschalier R, Kearney-Schwartz A, et al. Comprehensive MRI analysis of early cardiac and vascular remodeling in middle-aged patients with abdominal obesity. J Hypertens. 2012;30(3):567–73.
Christensen RH, von Scholten BJ, Hansen CS, et al. Epicardial adipose tissue predicts incident cardiovascular disease and mortality in patients with type 2 diabetes. Cardiovasc Diabetol. 2019;18(1):1–10.
Wolf P, Winhofer Y, Krssak M, et al. Suppression of plasma free fatty acids reduces myocardial lipid content and systolic function in type 2 diabetes. Nutr Metab Cardiovasc Dis. 2016;26(5):387–92.
Chitiboi T, Axel L. Magnetic resonance imaging of myocardial strain: a review of current approaches. J Magn Reson Imaging. 2017;46(5):1263–80.
Gao Y, Yang Z, Ren Y, et al. Evaluation myocardial fibrosis in diabetes with cardiac magnetic resonance T1-mapping: correlation with the high-level hemoglobin A1c. Diabetes Res Clin Pract. 2019;150:72–80.
Wei J, Nelson MD, Szczepaniak EW, et al. Myocardial steatosis as a possible mechanistic link between diastolic dysfunction and coronary microvascular dysfunction in women. Am J Physiol Hear Circ Physiol. 2016;310(1):H14–9.
Rijzewijk LJ, van der Meer RW, Smit JWA, et al. Myocardial steatosis is an independent predictor of diastolic dysfunction in type 2 diabetes mellitus. J Am Coll Cardiol. 2008;52(22):1793–9.
Dobson R, Burgess MI, Sprung VS, et al. Metabolically healthy and unhealthy obesity: differential effects on myocardial function according to metabolic syndrome, rather than obesity. Int J Obes. 2016;40(1):153–61.
van de Weijer T, Schrauwen-Hinderling VB, Schrauwen P. Lipotoxicity in type 2 diabetic cardiomyopathy. Cardiovasc Res. 2011;92(1):10–8.
Pappachan JM, Varughese GI, Sriraman R, Arunagirinathan G. Diabetic cardiomyopathy: pathophysiology, diagnostic evaluation and management. World J Diabetes. 2017;4(5):177.
Levelt E, Mahmod M, Piechnik SK, et al. Relationship between left ventricular structural and metabolic remodeling in type 2 diabetes. Diabetes. 2016;65(1):44–52.
Mahmod M, Pal N, Rayner J, et al. The interplay between metabolic alterations, diastolic strain rate and exercise capacity in mild heart failure with preserved ejection fraction: a cardiovascular magnetic resonance study. J Cardiovasc Magn Reson. 2018;20(1):88.
Levelt E, Gulsin G, Neubauer S, McCann GP. Diabetic cardiomyopathy: pathophysiology and potential metabolic interventions state of the art review. Eur J Endocrinol. 2018;178(4):R127–39.
This work was supported by the National Natural Science Foundation of China (81771887, 81771897, 81471721, 81471722, 81971586, and 81901712), Program for New Century Excellent Talents in University (No: NCET-13-0386), Program for Young Scholars and Innovative Research Team in Sichuan Province of China (2017TD0005), 1·3·5 project for disciplines of excellence, West China Hospital, Sichuan University (ZYGD18013).
Yue Gao, Yan Ren and Ying-kun Guo contributed equally to this work
Department of Radiology, West China Hospital, Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan, 610041, China
Yue Gao, Xi Liu, Li Jiang, Meng-ting Shen & Zhi-gang Yang
Department of Radiology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of Ministry of Education, West China Second University Hospital, Sichuan University, Chengdu, China
Yue Gao, Ying-kun Guo & Lin-jun Xie
Department of Endocrinology and Metabolism, West China Hospital, Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan, 610041, China
Yan Ren & Ming-yan Deng
Yue Gao
Yan Ren
Ying-kun Guo
Lin-jun Xie
Meng-ting Shen
Ming-yan Deng
Zhi-gang Yang
YG and YR designed the study. YG performed the experiments and wrote the manuscript. GYK participated in the study design, analyzed the data, drafted the manuscript, and contributed to the editing and review of the manuscript. YZG supervised the overall study and contributed to the study design, editing and review of the manuscript. RY and LX performed the experiments and reviewed the manuscript. HBY, LJ, LJX, MTS and DMY performed the experiments and were responsible for collecting, sorting and statistically analyzing the data. ZGY is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. All authors read and approved the final manuscript.
Correspondence to Zhi-gang Yang.
The study complied with the Declaration of Helsinki and was approved by the West-China hospital of Sichuan University biomedical research ethics committee (Chengdu, Sichuan, China; No. 2016-24). Written informed consents were obtained from all the study participants.
Gao, Y., Ren, Y., Guo, Yk. et al. Metabolic syndrome and myocardium steatosis in subclinical type 2 diabetes mellitus: a 1H-magnetic resonance spectroscopy study. Cardiovasc Diabetol 19, 70 (2020). https://doi.org/10.1186/s12933-020-01044-1
Received: 22 November 2019
Accepted: 17 May 2020
Myocardial steatosis
Subclinical myocardial dysfunction
1H-magnetic resonance spectroscopy
by Yu · Published 08/02/2017
Is the Linear Transformation Between the Vector Space of 2 by 2 Matrices an Isomorphism?
Let $V$ denote the vector space of all real $2\times 2$ matrices.
Suppose that the linear transformation from $V$ to $V$ is given as below.
\[T(A)=\begin{bmatrix}
\end{bmatrix}A-A\begin{bmatrix}
\end{bmatrix}.\] Prove or disprove that the linear transformation $T:V\to V$ is an isomorphism.
Unit Vectors and Idempotent Matrices
A square matrix $A$ is called idempotent if $A^2=A$.
(a) Let $\mathbf{u}$ be a vector in $\R^n$ with length $1$.
Define the matrix $P$ to be $P=\mathbf{u}\mathbf{u}^{\trans}$.
Prove that $P$ is an idempotent matrix.
(b) Suppose that $\mathbf{u}$ and $\mathbf{v}$ be unit vectors in $\R^n$ such that $\mathbf{u}$ and $\mathbf{v}$ are orthogonal.
Let $Q=\mathbf{u}\mathbf{u}^{\trans}+\mathbf{v}\mathbf{v}^{\trans}$.
Prove that $Q$ is an idempotent matrix.
(c) Prove that each nonzero vector of the form $a\mathbf{u}+b\mathbf{v}$ for some $a, b\in \R$ is an eigenvector corresponding to the eigenvalue $1$ for the matrix $Q$ in part (b).
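For orientation only (this is a sketch of the key computation, not the posted solution), parts (a) and (b) follow from $\mathbf{u}^{\trans}\mathbf{u}=\mathbf{v}^{\trans}\mathbf{v}=1$ and $\mathbf{u}^{\trans}\mathbf{v}=\mathbf{v}^{\trans}\mathbf{u}=0$:
\begin{align*}
P^2&=\mathbf{u}(\mathbf{u}^{\trans}\mathbf{u})\mathbf{u}^{\trans}=\mathbf{u}\mathbf{u}^{\trans}=P,\\
Q^2&=\mathbf{u}(\mathbf{u}^{\trans}\mathbf{u})\mathbf{u}^{\trans}+\mathbf{u}(\mathbf{u}^{\trans}\mathbf{v})\mathbf{v}^{\trans}+\mathbf{v}(\mathbf{v}^{\trans}\mathbf{u})\mathbf{u}^{\trans}+\mathbf{v}(\mathbf{v}^{\trans}\mathbf{v})\mathbf{v}^{\trans}=\mathbf{u}\mathbf{u}^{\trans}+\mathbf{v}\mathbf{v}^{\trans}=Q.
\end{align*}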
A Positive Definite Matrix Has a Unique Positive Definite Square Root
Prove that a positive definite matrix has a unique positive definite square root.
Find All the Square Roots of a Given 2 by 2 Matrix
Let $A$ be a square matrix. A matrix $B$ satisfying $B^2=A$ is called a square root of $A$.
Find all the square roots of the matrix
\[A=\begin{bmatrix}
No/Infinitely Many Square Roots of 2 by 2 Matrices
(a) Prove that the matrix $A=\begin{bmatrix}
\end{bmatrix}$ does not have a square root.
Namely, show that there is no complex matrix $B$ such that $B^2=A$.
(b) Prove that the $2\times 2$ identity matrix $I$ has infinitely many distinct square root matrices.
How to Prove a Matrix is Nonsingular in 10 Seconds
Using the numbers appearing in
\[\pi=3.1415926535897932384626433832795028841971693993751058209749\dots\] we construct the matrix \[A=\begin{bmatrix}
3 & 14 &1592& 65358\\
97932& 38462643& 38& 32\\
7950& 2& 8841& 9716\\
939937510 & 5820 & 974 & 9
\end{bmatrix}.\]
Prove that the matrix $A$ is nonsingular.
Let $A$ be a square matrix.
Prove that the eigenvalues of the transpose $A^{\trans}$ are the same as the eigenvalues of $A$.
The Inverse Matrix of the Transpose is the Transpose of the Inverse Matrix
Let $A$ be an $n\times n$ invertible matrix. Then prove the transpose $A^{\trans}$ is also invertible and that the inverse matrix of the transpose $A^{\trans}$ is the transpose of the inverse matrix $A^{-1}$.
Namely, show that
\[(A^{\trans})^{-1}=(A^{-1})^{\trans}.\]
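A sketch of the key step (assuming only that $A$ is invertible): since
\[A^{\trans}(A^{-1})^{\trans}=(A^{-1}A)^{\trans}=I^{\trans}=I \quad \text{ and } \quad (A^{-1})^{\trans}A^{\trans}=(AA^{-1})^{\trans}=I^{\trans}=I,\] the matrix $(A^{-1})^{\trans}$ is a two-sided inverse of $A^{\trans}$, which gives the claimed identity.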
The Formula for the Inverse Matrix of $I+A$ for a $2\times 2$ Singular Matrix $A$
Let $A$ be a singular $2\times 2$ matrix such that $\tr(A)\neq -1$ and let $I$ be the $2\times 2$ identity matrix.
Then prove that the inverse matrix of the matrix $I+A$ is given by the following formula:
\[(I+A)^{-1}=I-\frac{1}{1+\tr(A)}A.\]
Using the formula, calculate the inverse matrix of $\begin{bmatrix}
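For orientation, a sketch of why the formula holds (assuming only that $A$ is $2\times 2$, singular, and $\tr(A)\neq -1$): by the Cayley-Hamilton theorem $A^2-\tr(A)A+\det(A)I=O$, and $\det(A)=0$ gives $A^2=\tr(A)A$, hence
\[(I+A)\left(I-\frac{1}{1+\tr(A)}A\right)=I+A-\frac{A+A^2}{1+\tr(A)}=I+A-\frac{(1+\tr(A))A}{1+\tr(A)}=I.\]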
Every Diagonalizable Nilpotent Matrix is the Zero Matrix
Prove that if $A$ is a diagonalizable nilpotent matrix, then $A$ is the zero matrix $O$.
How to Use the Cayley-Hamilton Theorem to Find the Inverse Matrix
Find the inverse matrix of the $3\times 3$ matrix
\[A=\begin{bmatrix}
7 & 2 & -2 \\
-6 &-1 &2 \\
6 & 2 & -1
\end{bmatrix}\] using the Cayley-Hamilton theorem.
10 True of False Problems about Nonsingular / Invertible Matrices
10 questions about nonsingular matrices, invertible matrices, and linearly independent vectors.
The quiz is designed to test your understanding of the basic properties of these topics.
The solutions will be given after completing all the 10 problems.
The Matrix for the Linear Transformation of the Reflection Across a Line in the Plane
Let $T:\R^2 \to \R^2$ be a linear transformation of the $2$-dimensional vector space $\R^2$ (the $x$-$y$-plane) to itself which is the reflection across a line $y=mx$ for some $m\in \R$.
Then find the matrix representation of the linear transformation $T$ with respect to the standard basis $B=\{\mathbf{e}_1, \mathbf{e}_2\}$ of $\R^2$, where
\[\mathbf{e}_1=\begin{bmatrix}
1 \\
0
\end{bmatrix}, \mathbf{e}_2=\begin{bmatrix}
0 \\
1
\end{bmatrix}.\]
A Matrix Commuting With a Diagonal Matrix with Distinct Entries is Diagonal
Let
\[D=\begin{bmatrix}
d_1 & 0 & \dots & 0 \\
0 &d_2 & \dots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \dots & d_n
\end{bmatrix}\] be a diagonal matrix with distinct diagonal entries: $d_i\neq d_j$ if $i\neq j$.
Let $A=(a_{ij})$ be an $n\times n$ matrix such that $A$ commutes with $D$, that is,
\[AD=DA.\] Then prove that $A$ is a diagonal matrix.
Determine Whether There Exists a Nonsingular Matrix Satisfying $A^4=ABA^2+2A^3$
Determine whether there exists a nonsingular matrix $A$ if
\[A^4=ABA^2+2A^3,\] where $B$ is the following matrix.
\[B=\begin{bmatrix}
-1 & 1 & -1 \\
0 &-1 &0 \\
If such a nonsingular matrix $A$ exists, find the inverse matrix $A^{-1}$.
(The Ohio State University, Linear Algebra Final Exam Problem)
Compute $A^{10}\mathbf{v}$ Using Eigenvalues and Eigenvectors of the Matrix $A$
Let
\[A=\begin{bmatrix}
1 & -14 & 4 \\
-1 &6 &-2 \\
-2 & 24 & -7
\end{bmatrix} \quad \text{ and }\quad \mathbf{v}=\begin{bmatrix}
-1 \\
\end{bmatrix}.\] Find $A^{10}\mathbf{v}$.
You may use the following information without proving it.
The eigenvalues of $A$ are $-1, 0, 1$. The eigenspaces are given by
\[E_{-1}=\Span\left\{\, \begin{bmatrix}
\end{bmatrix} \,\right\}, \quad E_{0}=\Span\left\{\, \begin{bmatrix}
\end{bmatrix} \,\right\}.\]
Given the Characteristic Polynomial, Find the Rank of the Matrix
Let $A$ be a square matrix and its characteristic polynomial is given by
\[p(t)=(t-1)^3(t-2)^2(t-3)^4(t-4).\] Find the rank of $A$.
Diagonalize the 3 by 3 Matrix Whose Entries are All One
Diagonalize the matrix
\[A=\begin{bmatrix}
1 & 1 & 1 \\
1 &1 &1 \\
1 & 1 & 1
\end{bmatrix}.\] Namely, find a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$.
Find Values of $a, b, c$ such that the Given Matrix is Diagonalizable
For which values of constants $a, b$ and $c$ is the matrix
7 & a & b \\
0 &2 &c \\
\end{bmatrix}\] diagonalizable?
Find a Basis of the Vector Space of Polynomials of Degree 2 or Less Among Given Polynomials
Let $P_2$ be the vector space of all polynomials with real coefficients of degree $2$ or less.
Let $S=\{p_1(x), p_2(x), p_3(x), p_4(x)\}$, where
\begin{align*}
p_1(x)&=-1+x+2x^2, \quad p_2(x)=x+3x^2,\\
p_3(x)&=1+2x+8x^2, \quad p_4(x)=1+x+x^2.
\end{align*}
(a) Find a basis of $P_2$ among the vectors of $S$. (Explain why it is a basis of $P_2$.)
(b) Let $B'$ be the basis you obtained in part (a).
For each vector of $S$ which is not in $B'$, find the coordinate vector of it with respect to the basis $B'$.
MARS: improving multiple circular sequence alignment using refined sequences
Lorraine A. K. Ayad1 &
Solon P. Pissis1
A fundamental assumption of all widely-used multiple sequence alignment techniques is that the left- and right-most positions of the input sequences are relevant to the alignment. However, the position where a sequence starts or ends can be totally arbitrary due to a number of reasons: arbitrariness in the linearisation (sequencing) of a circular molecular structure; or inconsistencies introduced into sequence databases due to different linearisation standards. These scenarios are relevant, for instance, in the process of multiple sequence alignment of mitochondrial DNA, viroid, viral or other genomes, which have a circular molecular structure. A solution for these inconsistencies would be to identify a suitable rotation (cyclic shift) for each sequence; these refined sequences may in turn lead to improved multiple sequence alignments using the preferred multiple sequence alignment program.
We present MARS, a new heuristic method for improving Multiple circular sequence Alignment using Refined Sequences. MARS was implemented in the C++ programming language as a program to compute the rotations (cyclic shifts) required to best align a set of input sequences. Experimental results, using real and synthetic data, show that MARS improves the alignments, with respect to standard genetic measures and the inferred maximum-likelihood-based phylogenies, and outperforms state-of-the-art methods both in terms of accuracy and efficiency. Our results show, among others, that the average pairwise distance in the multiple sequence alignment of a dataset of widely-studied mitochondrial DNA sequences is reduced by around 5% when MARS is applied before a multiple sequence alignment is performed.
Analysing multiple sequences simultaneously is fundamental in biological research and multiple sequence alignment has been found to be a popular method for this task. Conventional alignment techniques cannot be used effectively when the position where sequences start is arbitrary. We present here a method, which can be used in conjunction with any multiple sequence alignment program, to address this problem effectively and efficiently.
The one-to-one mapping of a DNA molecule to a sequence of letters suggests that sequence comparison is a prerequisite to virtually all comparative genomic analyses. Due to this, sequence comparison has been used to identify regions of similarity which may be a byproduct of evolutionary, structural, or functional relationships between the sequences under study [1]. Sequence comparison is also useful in fields outside of biology, for example, in pattern recognition [2] or music analysis [3]. Several techniques exist for sequence comparison; alignment techniques consist of either global alignment [4, 5] or local alignment [6] techniques. Alignment-free techniques also exist; they are based on measures referring to the composition of sequences in terms of their constituent patterns [7]. Pairwise sequence alignment algorithms analyse a pair of sequences, commonly carried out using dynamic-programming techniques [5]; whereas multiple sequence alignment (MSA) involves the simultaneous comparison of three or more sequences (see [8] for a comprehensive review).
Analysing multiple sequences simultaneously is fundamental in biological research and MSA has been found to be a popular method for this task. One main application of MSA is to find conserved patterns within protein sequences [9] and also to infer homology between specific groups of sequences [10]. MSA may also be used in phylogenetic tree reconstruction [11] as well as in protein structure prediction [12].
Using a generalisation of the dynamic-programming technique for pairwise sequence alignments works efficiently for MSA of only up to a few short sequences. Specifically, MSA with the sum-of-pairs score (SP-score) criterion is known to be NP-hard [13]; and, therefore, heuristic techniques are commonly used [14–16], which may not always lead to optimal alignments. As a result, suboptimal alignments may lead to unreliable tree estimation during phylogenetic inference. To this end, several methods have aimed to show that removing unreliable sites (columns) of an alignment may lead to better results [17].
Several discussions of existing filtering methods provide evidence that the removal of blocks in alignments of sufficient length leads to better phylogenetic trees. These filtering methods take a variety of mathematical and heuristic approaches. Most of the methods are fully automated and they remove entire columns of the alignment. A few of these programs, found in [18, 19], are based on site-wise summary statistics. Several filtering programs, found in [20–24], are based on mathematical models. However, experimental results found in [17] oppose these findings, suggesting that generally, not only do the current alignment filtering methods not lead to better trees, but there also exist many cases where filtering worsened the trees significantly.
Circular molecular structures are present, in abundance, in all domains of life: bacteria, archaea, and eukaryotes; and in viruses. They can be composed of both amino and nucleic acids. Exhaustive reviews can be found in [25] (proteins) and [26] (DNA). The most common example of such structures in eukaryotes is mitochondrial DNA (mtDNA). mtDNA is generally conserved from parent to offspring, and replication of mtDNA occurs frequently in animal cells [27]. This is key in phylogenetic analysis and the study of evolutionary relationships among species [11]. Several other example applications exist, including MSA of viroid or viral genomes [28] and MSA of naturally-occurring circular proteins [29].
A fundamental assumption of all widely-used MSA techniques is that the left- and right-most positions of the input sequences are relevant to the alignment. However, the position where a sequence starts (left-most) or ends (right-most) can be totally arbitrary due to a number of reasons: arbitrariness in the linearisation (sequencing) of a circular molecular structure; or inconsistencies introduced into sequence databases due to different linearisation standards. In these cases, existing MSA programs, such as Clustal Ω [30], MUSCLE [31], or T-Coffee [16], may produce an MSA with a higher average pairwise distance than the expected one for closely-related sequences. A rather surprising such instance is the published human (NC_001807) and chimpanzee (NC_001643) mtDNA sequences, which do not start in the same genetic region [32]. It may be more relevant to align mtDNA based on gene order [33]; however, the tool we present in this paper may be used to align sequences of a broader type. Hence, for a set of input sequences, a solution for these inconsistencies would be to identify a suitable rotation (cyclic shift) for each sequence; the sequences output would in turn produce an MSA with a lower average pairwise distance.
Due to the abundance of circular molecular structures in nature as well as the potential presence of inconsistencies in sequence databases, it becomes evident that multiple circular sequence alignment (MCSA) techniques for analysing such sequences are desirable. Since MCSA is a generalisation of MSA it is easily understood that MCSA with the SP-score criterion is also NP-hard. To this end, a few programs exist which aim to improve MCSA for a set of input sequences. These programs can be used to first obtain the best-aligned rotations, and then realign these rotations by using conventional alignment programs, such as Clustal Ω, MUSCLE, or T-Coffee. Note that unlike other filtering programs, these programs do not remove any information from the sequences or from their alignment: they merely refine the sequences by means of rotation.
The problem of finding the optimal (linear) alignment of two circular sequences of length n and m≤n under the edit distance model can be solved in time O(n m logm) [34]. The same problem can trivially be solved in time O(n m 2) with substitution matrices and affine gap penalty scores [5]. To this end, alignment-free methods have been considered to speed-up the computation [35, 36]. The more general problem of searching for a circular pattern in a text under the edit distance model has also been studied extensively [37], and an average-case optimal algorithm is known [38].
Progressive multiple sequence alignments can be constructed by generalising the pairwise sequence alignment algorithms to profiles, similar to Clustal Ω [30]. This generalisation is implemented in Cyclope [39], a program for improving multiple circular sequence alignment. The cubic runtime of the pairwise alignment stage becomes a bottleneck in practical terms. Other fast heuristic methods were also implemented in Cyclope, but they are only based on some (e.g. the first two) sequences from the input dataset.
Another approach to improve MCSA was implemented in CSA [32]; a program that is based on the generalised circular suffix tree construction [40]. The best-aligned rotations are found based on the largest chain of non-repeated blocks that belong to all sequences. Unfortunately, CSA is no longer maintained; it also has the restriction that there can be only up to 32 sequences in the input dataset, and that there must exist a block that occurs in every sequence only once.
BEAR [41] is another program aimed at improving MCSA computation in terms of the inferred maximum-likelihood-based phylogenies. The authors presented two methods; the first extends an approximate circular string matching algorithm for conducting approximate circular dictionary matching. A matrix M is output by this computation. For a set of d input sequences s_0,…,s_{d−1}, M holds values e and r between circular sequences s_i and s_j, where M[i,j].e holds the edit distance between the two sequences and M[i,j].r holds the rotation of sequence s_i which results in the best alignment of s_i with s_j. Agglomerative hierarchical clustering is then used on all values M[i,j].e to find sufficiently good rotations for each sequence cluster. The second method presented is suitable for more divergent sequences. An algorithm for fixed-length approximate string matching is applied to every pair of sequences to find their most similar factors of fixed length. These factors can then determine suitable rotations for all input sequences via the same method of agglomerative hierarchical clustering.
Our contributions. We design and implement MARS, a new heuristic method for improving Multiple circular sequence Alignment using Refined Sequences. MARS is based on a non-trivial coupling of a state-of-the-art pairwise circular sequence comparison algorithm [35] with the classic progressive alignment paradigm [42]. Experimental results presented here, using real and synthetic data, show that MARS improves the alignments and outperforms state-of-the-art methods both in terms of accuracy and efficiency. Specifically, to support our claims, we analyse these results with respect to standard genetic measures as well as with respect to the inferred maximum-likelihood-based phylogenies. For instance, we show here that the average pairwise distance in the MSA of a dataset of widely-studied mtDNA sequences is reduced by around 5% when MARS is applied before MSA is performed.
Definitions and notation
We begin with a few definitions, following [43], to allow further understanding. We think of a string (or sequence) x of length m as an array x[0.. m−1] where every x[i], 0≤i<m, is a letter drawn from some fixed alphabet Σ of size |Σ|=O(1). String ε denotes the empty string which has length 0. Given string y, a string x is considered a factor of y if there exist two strings u and v, such that y=u x v. Consider the strings x,y,u, and v, such that y=u x v. We call x a prefix of y if u=ε; we call x a suffix of y if v=ε. When x is a factor of y, we say that x occurs in y. Each occurrence of x can be denoted by a position in y. We say that x occurs at the starting position i in y when y[ i.. i+m−1]=x; alternatively we may refer to the ending position i+m−1 of x in y.
A circular string of length m may be informally defined as a standard linear string where the first- and last-occurring letters are wrapped around and positioned next to each other. Considering this definition, the same circular string can be seen as m different linear strings, which would all be considered equivalent. Given a string x of length m, we denote by x^i=x[i.. m−1]x[0.. i−1], 0<i<m, the i-th rotation of x and x^0=x. For example, the string x=x^0=baababac has the following rotations: x^1=aababacb, x^2=ababacba, x^3=babacbaa, etc.
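As an aside, a one-line sketch in Python makes the rotation notation concrete (illustrative only; this is not part of the MARS implementation):

```python
def rotation(x, i):
    """The i-th rotation x^i = x[i..m-1] x[0..i-1] of a string x."""
    return x[i:] + x[:i]

# rotation("baababac", 1) == "aababacb";  rotation("baababac", 2) == "ababacba"
```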
Given a string x of length m and a string y of length n, the edit distance [44], denoted by δ_E(x,y), is defined as the minimum total cost of operations required to transform string x into string y (a direct dynamic-programming illustration of this definition is given after the list below). In general, the allowed edit operations are as follows:
Insertion: insert a letter in y, not present in x; (ε,b), b≠ε
Deletion: delete a letter in y, present in x; (a,ε), a≠ε
Substitution: replace a letter in y with a letter in x; (a,b), a≠b, and a,b≠ε.
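The dynamic-programming illustration referred to above is the standard textbook recurrence for the unit-cost edit distance; it is given here only to make the definition concrete, and is not the Myers bit-vector algorithm that MARS actually uses for unit costs:

```python
def edit_distance(x, y):
    """Unit-cost edit distance (insertions, deletions, substitutions)."""
    m, n = len(x), len(y)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                                # delete all of x[0..i-1]
    for j in range(n + 1):
        d[0][j] = j                                # insert all of y[0..j-1]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if x[i - 1] == y[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[m][n]
```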
A q-gram is defined as any string of length q over alphabet Σ. The set of all such q-grams is denoted by Σ^q. The q-gram profile of a string x of length m is the vector G_q(x), where q>0, and G_q(x)[v] denotes the total number of occurrences of q-gram v∈Σ^q in x.
Given strings x of length m and y of length n≥m and an integer q>0, the q-gram distance D_q(x,y) is defined as:
$$ \sum\limits_{v \in \Sigma^{q}} \left\vert G_{q}(x)[v] - G_{q}(y)[v] \right\vert. $$
For a given integer parameter β≥1, a generalisation of the q-gram distance can be defined by partitioning x and y into β blocks as evenly as possible, and computing the q-gram distance between each pair of blocks, one from x and one from y. The rationale is to enforce locality in the resulting overall distance [35]. Given strings x of length m and y of length n≥m and integers β≥1 and q>0, the β-blockwise q-gram distance D_{β,q}(x,y) is defined as:
$${} \sum_{j=0}^{\beta-1}D_{q}\left(\!x\left[\!\frac{jm}{\beta} \ldots \frac{(j+1)m}{\beta}-1\!\right], y\left[\!\frac{jn}{\beta} \ldots \frac{(j+1)n}{\beta}-1\!\right]\right). $$
We assume that the lengths m of x and n of y are both multiples of β, so that x and y are partitioned into β blocks, each of size m/β and n/β, respectively.
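The two distances above translate directly into code; the following Python sketch is only an illustration of the definitions (it is not the implementation used in MARS, which relies on the algorithm of [35]):

```python
from collections import Counter

def qgram_profile(x, q):
    """Counts of the q-grams occurring in x (zero entries are implicit)."""
    return Counter(x[i:i + q] for i in range(len(x) - q + 1))

def qgram_distance(x, y, q):
    """D_q(x, y): L1 distance between the q-gram profiles of x and y."""
    gx, gy = qgram_profile(x, q), qgram_profile(y, q)
    return sum(abs(gx[v] - gy[v]) for v in set(gx) | set(gy))

def blockwise_qgram_distance(x, y, q, beta):
    """D_{beta,q}(x, y): sum of q-gram distances over beta aligned blocks.
    Assumes len(x) and len(y) are multiples of beta, as stated in the text."""
    m, n = len(x), len(y)
    return sum(qgram_distance(x[j * m // beta:(j + 1) * m // beta],
                              y[j * n // beta:(j + 1) * n // beta], q)
               for j in range(beta))
```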
Algorithm MARS
We present MARS, a heuristic algorithm for improving MCSA using refined sequences. For a set of d input sequences s_0,…,s_{d−1}, the task is to output an array R of size d such that the rotations s_i^{R[i]}, for all 0≤i<d, form the set of refined (rotated) sequences, which are then input into the preferred MSA algorithm to obtain an improved alignment. MARS is based on a three-stage heuristic approach:
Initially, a d×d matrix M, holding two values e and r per cell, is computed, where M[i,j].e holds the edit distance between sequences \(s_{i}^{M[i,j].r}\) and s j . Intuitively, we try to compute the value r that minimises e, that is, the cyclic edit distance.
The neighbour-joining clustering method is carried out on the computed distances to produce a guide tree.
Finally, progressive sequence alignment using refined sequences is carried out using the sequence ordering in the guide tree.
Stage 1. Pairwise cyclic edit distance
In this stage, we make use of a heuristic method for computing the cyclic edit distance between two strings. This method is based on Grossi et al's alignment-free algorithm [35] for circular sequence comparison, where the β-blockwise q-gram distance between two circular sequences x and y is computed. Specifically, the algorithm finds the rotation r of x such that the β-blockwise q-gram distance between x r and y is minimal.
The second step of this stage involves a refinement of the rotation for a pair of sequences, to obtain a more accurate value for r. An input parameter \(0 < P \leq \frac {\beta }{3}\) is used to create refined sequences of length \(3 \times P \times \frac {m}{\beta }\) using x r and y, where m is the length of x r. The first refined sequence is \({x^{r}_{0}}{x^{r}_{1}}{x^{r}_{2}}\): \({x^{r}_{0}}\) is a prefix (of P out of β blocks) of string x r; \({x^{r}_{1}}\) is a string of the same length as the prefix consisting only of letter $∉Σ; and \({x^{r}_{2}}\) is a suffix (of P out of β blocks) of string x r. The same is done for string y, resulting in a refined sequence of the same form y 0 y 1 y 2. Note that large values for P would result in long sequences, improving the refinement of the rotation, but slowing down the computation. A score is calculated for all rotations of these two smaller sequences using Needleman-Wunsch [4] or Gotoh's algorithm [5], making use of substitution matrices for nucleotides or amino acids accordingly. The rotation with the maximum score is identified as the new best-aligned rotation and r is updated if required.
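A minimal sketch of how such a refined sequence can be assembled is given below (the helper name and data layout are assumptions for illustration only; the block length is m/β and '$' stands for the padding letter outside Σ, as described above).

```cpp
#include <string>

// Build the refined sequence x0 x1 x2 from a rotation xr: a prefix of P blocks,
// a run of '$' of the same length, and a suffix of P blocks, where each block
// has length block_len = m / beta. The constraint P <= beta/3 stated above
// guarantees that the prefix and suffix do not overlap.
std::string refined_sequence(const std::string &xr, std::size_t P, std::size_t block_len) {
    const std::size_t part = P * block_len;
    const std::string prefix = xr.substr(0, part);
    const std::string filler(part, '$');                  // '$' is not in the alphabet
    const std::string suffix = xr.substr(xr.size() - part);
    return prefix + filler + suffix;
}
```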
The final step of this stage involves computing the edit distance between the new pair of refined sequences. For unit costs, this is done using Myers bit-vector algorithm [45] in time \(O\left (\left \lceil {\frac {m}{w}}\right \rceil n\right)\), where w is the word size of the machine. For non-unit costs this is computed using the standard dynamic programming solution for edit distance [44] computation in time O(m n). Hence, for a dataset with d sequences, a d×d matrix M is populated with the edit distance e and rotation r for each pair of sequences.
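For reference, the standard O(mn) dynamic-programming recurrence mentioned above can be sketched as follows for unit costs (MARS uses Myers' bit-vector algorithm in that case; for non-unit costs the constants 1 below are simply replaced by the corresponding operation costs).

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Textbook dynamic-programming edit distance with unit costs, O(mn) time
// and O(n) space (only two rows of the DP table are kept).
long edit_distance(const std::string &x, const std::string &y) {
    const std::size_t m = x.size(), n = y.size();
    std::vector<long> prev(n + 1), cur(n + 1);
    for (std::size_t j = 0; j <= n; ++j) prev[j] = static_cast<long>(j);   // j insertions from the empty prefix
    for (std::size_t i = 1; i <= m; ++i) {
        cur[0] = static_cast<long>(i);                                     // i deletions to reach the empty prefix
        for (std::size_t j = 1; j <= n; ++j) {
            const long sub = prev[j - 1] + (x[i - 1] == y[j - 1] ? 0 : 1); // match or substitution
            cur[j] = std::min({sub, prev[j] + 1, cur[j - 1] + 1});         // vs. deletion / insertion
        }
        std::swap(prev, cur);
    }
    return prev[n];
}
```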
Remark for Stage 1
The simple cost scheme used in Stage 1 for the pairwise cyclic edit distance is sufficient for computing fast approximate rotations. A more complex (biologically relevant) scoring scheme is used in Stage 3 for refining these initial rotations. A yet more complex scoring scheme, required for the final MSA of the sequences output by MARS, can be carried out later on by using any MSA program, and is therefore beyond the scope of this article.
Stage 2. Guide tree
The guide tree is constructed using Saitou and Nei's neighbour-joining algorithm [46], where a binary tree is produced using the edit distance data from matrix M.
Stage 3. Progressive alignment
The guide tree is used to determine the ordering of the alignment of the sequences. Three types of alignments may occur:
Case 1: A sequence with another sequence;
Case 2: A sequence with a profile;
Case 3: A profile with another profile;
where a profile is an alignment viewed as a sequence by regarding each column as a letter [14]. We also need to extend the alphabet to Σ ′=Σ∪{−} to represent insertions or deletions of letters (gaps). For the rest of this stage, we describe our method using the Needleman-Wunsch algorithm for simplicity although Gotoh's algorithm is also applicable.
For Case 1, where only two sequences are to be aligned, note that rotation r has been previously computed and stored in matrix M during Stage 1 of the algorithm. These two sequences are aligned using the Needleman-Wunsch algorithm and stored as a new profile made up of the alignment of the two individual sequences, which now include gaps. In this case, for two sequences s i and s j , we set R[i]:=M[i,j].r and R[j]:=0, as the second sequence is left unrotated.
The remaining two cases of alignments are a generalisation of the pairwise circular sequence alignment to profiles. In the alignment of a pair of sequences, matrix M provides a reference as to which rotation r is required. In the case of a sequence and a profile (Case 2), this may also indirectly be used as we explain below.
As previously seen, when two sequences s i and s j are aligned, one sequence s j remains unrotated. This pair then becomes a profile which we will call profile A. Given that the same occurs for another pair of sequences, profile B is created, also with one unrotated sequence, \(s_{j^{\prime}}\). When profile A is aligned with profile B, another profile, profile C, is created. In this case, only the sequences in profile B are rotated to be aligned with profile A. This results in s j being left unrotated in profile C, where s j previously occurred in profile A. Given a sequence s k to be aligned with profile C, this sequence has a current rotation of 0 as it has not yet been aligned with another sequence or a profile. We can identify which rotation is needed to rotate sequence s k to be aligned with profile C by using the single rotation M[k,j].r.
The same condition applies when aligning two profiles (Case 3). All sequences in profile B will need to be rotated to be aligned with profile A. However, once a single sequence s j in profile A and a single sequence \(s_{j^{\prime}}\) in profile B with r=0 have been identified, note that \(s_{j^{\prime}}\) has already been aligned with other sequences. This means gaps may have been inserted and M[j′,j].r will no longer be an accurate rotation. By counting the total number g of individual gaps inserted in \(s_{j^{\prime}}\), between position 0 and the single rotation M[j′,j].r of \(s_{j^{\prime}}\), the new suitable rotation for profile B would be M[j′,j].r+g.
Consider the following sequences:
s 0: TAGTAGCT
s 1: AAGTAAGCTA
s 2: AAGCCTTTAGT
s 3: AAGTAAGCT
s 4: TTAATATAGCC
Let profile A be:
s 0: A - G - C - - TTA - GT
s 1: AAG - C - - TAAAGT
s 2: AAGCC - TTTA - GT
Let profile B be:
s 3: A - - - AGTAAG - C - - T
s 4: A - ATA - TA - GCC - TT
Profile C:
s 0: A - G - C - - TT - A - - GT
s 1: AAG - C - - TA - A - AGT
s 2: AAGCC - TTT - A - - GT
s 3: AAG - C - - TA - - - AGT
s 4: A - GCC - TTA - ATA - T
By looking at the original set of sequences, it is clear s 2 in profile A and s 3 in profile B have a rotation of 0. The other sequences have been rotated and aligned with the remaining sequences in their profile. It is also clear from the original sequences that M[3,2].r=4. When aligning profile B with profile A, the rotation of r=4 is no longer accurate due to gaps inserted in s 3. As g=3 gaps have been inserted between positions 0 and r of sequence s 3, the final accurate rotation for profile B is M[ 3,2].r+g=4+3=7 (see profile C). □
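The gap-counting rule of this example can be sketched as follows (an illustrative helper; it assumes gaps are encoded as '-' and that the rotation r counts letters of the original, ungapped sequence). Applied to the aligned s 3 above with r=4 it returns 4+3=7, matching the example.

```cpp
#include <string>

// Adjust a rotation r for a sequence that has been aligned (and so may contain
// gaps): count the gaps g inserted before the r-th letter of the original
// sequence and return r + g.
std::size_t adjusted_rotation(const std::string &aligned, std::size_t r) {
    std::size_t g = 0, letters = 0, pos = 0;
    while (pos < aligned.size() && letters < r) {
        if (aligned[pos] == '-') ++g;   // a gap inserted before the r-th letter
        else ++letters;                 // a letter of the original sequence
        ++pos;
    }
    return r + g;
}
```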
In the instance when a sequence is to be aligned with a profile or two profiles are to be aligned, a generalisation of the Needleman-Wunsch algorithm is used, similar to that by [47], to compute the alignment score. Profile A will always hold the largest number of sequences, allowing profile B with fewer sequences to be rotated.
A frequency matrix F is stored, which holds the frequency of occurrence of each letter in each column of profile A. The scoring scheme below is used for each alignment, where S[i,j] holds the alignment score for column i in profile A and column j in profile B, gA is the cost of inserting a gap into profile A, and gB likewise for profile B. Matrix S is initialised in the same way as in the Needleman-Wunsch algorithm, and sim(B[k,j],c) denotes the similarity score between letter c∈Σ′ and the letter at column j of row k (representing sequence s k ) in profile B.
$$\begin{array}{@{}rcl@{}} S[i,j] &=& \max \left\{ {\begin{array}{l} S[i-1,j-1] + \textit{pScore}(i,j) \\ S[i-1,j] + \textit{gB} \\ S[i,j-1] + \textit{gA} \end{array}} \right.\\ \textit{pScore}(i,j) &=& \sum_{c \in \Sigma'} \sum_{0 \leq k < |B|} \textit{sim}(B[k,j],c)\times F[c,i] \end{array} $$
This scoring scheme could be applied naïvely to profile A and every rotation of profile B to find the maximum score, equating to the best-aligned rotation. However, as information about rotations has already been computed in Stage 1, we may use only some part of profile B to further refine these rotations. This refinement is required due to the heuristic computation of all pairwise cyclic edit distances in Stage 1 of the algorithm. To this end, we generalise the second step of Stage 1 to profiles. That step involves refining the rotation for a pair of sequences by considering only the two ends of each sequence, creating two refined sequences. Similarly, here we refine the rotation for a pair of profiles by considering only the two ends of each profile, turning them into profiles of refined sequences. The rotation r with the maximum score according to the aforementioned scoring scheme is identified as the best-aligned rotation, and array R is updated by adding r to the current rotation in R[i], for all s i in profile B.
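To make the scoring scheme concrete, the sketch below computes pScore(i,j) by summing over the rows of profile B and the letters of the extended alphabet. The data layout chosen for F and B and the toy sim function are assumptions for illustration only; MARS uses proper substitution matrices here.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// pScore(i, j): score of aligning column i of profile A against column j of
// profile B, summing sim(B[k][j], c) weighted by the frequency F[c][i] of
// letter c in column i of profile A, over all rows k of B and all letters c.
double pScore(const std::vector<std::string> &B,                          // profile B, one row per sequence
              const std::unordered_map<char, std::vector<double>> &F,     // per-column letter frequencies of profile A
              std::size_t i, std::size_t j) {
    auto sim = [](char a, char c) { return a == c ? 1.0 : -1.0; };        // placeholder similarity scores
    double score = 0.0;
    for (const std::string &row : B)                                      // 0 <= k < |B|
        for (const auto &entry : F)                                       // c in the extended alphabet
            score += sim(row[j], entry.first) * entry.second[i];
    return score;
}
```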
MARS was implemented in the C++ programming language as a program to compute the rotations (cyclic shifts) required to best align a set of input sequences. Given a set of d sequences in multiFASTA format, the length ℓ of the β blocks to be used, the length q of the q-grams, and a real number P for the refinement, MARS computes an array R according to the algorithm described in the "Implementation" section. There is also a number of optional input parameters related to Gotoh's algorithm, such as the gap opening and extension penalty scores for pairwise and multiple sequence alignment. A different substitution matrix can be used for scoring nucleotide or amino acid matches and mismatches. The output of MARS is another multiFASTA file consisting of d refined sequences, produced using the rotations computed in R. The output of MARS can then be used as input to the preferred MSA program, such as Clustal Ω, MUSCLE, or T-Coffee.
The implementation is distributed under the GNU General Public License (GPL), and it is available freely at http://github.com/lorrainea/mars. Experimental results were also produced for Cyclope and BEAR to compare their performance against MARS. The experiments were conducted on a computer using an Intel Core i5-4690 CPU at 3.50 GHz under GNU/Linux. All programs were compiled with g++ version 4.8.5 at optimisation level 3 (O3).
DNA datasets were simulated using INDELible [48], which produces sequences in a multiFASTA file. Rates for insertions, deletions, and substitutions are defined by the user to vary the similarity between the sequences. All datasets used in the experiments are denoted in the form A.B.C, where A represents the number of sequences in the dataset; B the average length of the sequences; and C the percentage of dissimilarity between the sequences. Substitution rates of 5, 20, and 35% were used to produce the datasets under the Jukes and Cantor (JC69) [49] substitution model. The insertion and deletion rates were set to 4 and 6%, respectively, relative to a substitution rate of 1.
Nine datasets were simulated to evaluate the accuracy of the proposed method. Each dataset consists of a file with a varying number of sequences, all with an average length of 2500 base pairs (bp). The files in Datasets 1−3 each contain 12 sequences. Those in Datasets 4−6 each contain 25 sequences; and Datasets 7−9 contain 50 sequences. All input datasets referred to in this section are publicly maintained at the MARS website.
For all datasets, we made use of the following values for the mandatory parameters of MARS: q=5, ℓ=50, and P=1.0. Table 1 shows the results for the synthetic datasets made up of three files which each contained 12 sequences (Datasets 1–3). The first column shows results for the original datasets aligned using Clustal Ω. All sequences in these datasets were then randomly rotated, denoted in Table 1 by A.B.C.rot. The second column shows the results produced when MARS was first used to refine the sequences in the A.B.C.rot dataset, to find the best-aligned rotations; and then aligned them again using Clustal Ω. The third and fourth columns show likewise using MUSCLE to align the sequences. Tables 2 and 3 show the results produced for Datasets 4–6 and 7–9, respectively.
Table 1 Standard genetic measures for Datasets 1-3
Table 2 Standard genetic measures for Datasets 4–6
To evaluate the accuracy of MARS, seven standard genetic measures were used: the length of the MSA; the number of polymorphic sites (PM sites); the numbers of transitions and transversions; the numbers of substitutions, insertions, and deletions; as well as the average distance between each pair of sequences in the dataset (AVPD).
Remark for accuracy
The use of standard genetic measures to test the accuracy of MARS with synthetic data is sufficient. This is due to the fact that the main purpose of this test is not to check whether we obtain an MSA that is biologically relevant. The ultimate task here was to show that when MARS is applied on the A.B.C.rot datasets before MSA is performed we obtain MSAs whose standard genetic measures are similar or even identical to the MSAs of the A.B.C datasets (and this cannot occur by chance) when using the same MSA program.
The results indeed show that MARS performs extremely well for all datasets. This can be seen through the high similarity between the measurements for the original and the refined datasets. Notice that, in particular with MUSCLE, we obtain an identical or lower average pairwise distance in 8 out of 9 cases between the original datasets and the refined datasets produced by using MARS, despite the fact that we had first randomly rotated all sequences (compare the A.B.C to the A.B.C.rot columns).
RAxML [50], a maximum-likelihood-based program for phylogenetic analyses, was used to identify the similarity between the phylogenetic trees inferred using the original and refined datasets. A comparison with respect to the phylogenetic trees obtained using MUSCLE and RAxML was made between the alignment of the original datasets and that of the datasets produced by refining the A.B.C.rot datasets using MARS, BEAR, and Cyclope. The Robinson–Foulds (RF) metric was used to measure the distance between each pair of trees. The same parameter values were used for MARS: q=5, ℓ=50, and P=1.0. The fixed-length approximate string matching method, with parameter values w=40 and k=25 under the edit distance model, was used for BEAR, where w is the factor length used and k is the maximum distance allowed. Parameter v was used for Cyclope to compute, similar to MARS, a tree-guided alignment.
Table 4 shows that the relative RF distance between the original datasets and those refined with MARS is 0 in all cases, showing that MARS has been able to identify the best-aligned rotations, with respect to the inferred trees, for all nine datasets, outperforming BEAR and Cyclope, for which we obtain non-zero values in some cases.
Table 4 Relative RF distance between trees obtained with original and refined datasets
Real data
In this section we present the results for three datasets used to evaluate the effectiveness of MARS with real data. The first dataset (Mammals) includes 12 mtDNA sequences of mammals, the second dataset (Primates) includes 16 mtDNA sequences of primates, and the last one (Viroids) includes 18 viroid RNA sequences. All input datasets referred to in this section are publicly maintained at the MARS website. The average sequence length for Mammals is 16,777 bp, for Primates is 16,581 bp, and for Viroids is 363 bp.
Table 5 shows the results from the original alignments and the alignments produced after refining these datasets with MARS. It is evident that using MARS produces a significantly better alignment for these real datasets, which can specifically be seen through the results produced by aligning with MUSCLE. For instance, the average pairwise distance in the MSA of Primates is reduced by around 5% when MARS is applied before MSA is performed with MUSCLE.
Table 5 Standard genetic measures for real data
Since time-accuracy is a standard trade-off of heuristic methods, in order to evaluate the time performance of the programs, we compared the resulting MSA along with the time taken to produce it using BEAR, Cyclope, and MARS with MUSCLE. Parameter values h=100 and k=60 were used to measure accuracy for the Mammals and Primates datasets for BEAR; w=40 and k=25 were used for the Viroids dataset. Parameter v was used for Cyclope to compute a tree-guided alignment. The following parameter values were used to test the Mammals and Primates datasets for MARS: q=5, ℓ=100, and P=2.0; q=4, ℓ=25, and P=1.0 were used to test the Viroids dataset.
Table 6 shows the time taken to process each dataset; for the sake of succinctness, Table 6 only presents the average pairwise distance as the measure of MSA quality. The results show that MARS has the best time-accuracy performance: BEAR is the fastest program for two of the three datasets, but produces very low-quality MSAs; Cyclope is very slow but produces much better MSAs than BEAR; and MARS produces better MSAs than both BEAR and Cyclope and is more than four times faster than Cyclope.
Table 6 Elapsed-time comparison using real data
A common reliability measure of MSAs is the computation of the transitive consistency score (TCS) [51]. The TCS has been shown to outperform existing programs used to identify structurally correct portions of an MSA, as well as programs which aim to improve phylogenetic tree reconstruction [8]. BEAR, Cyclope, and MARS were used to identify the best rotations for the sequences in the Viroids dataset; the output of each, as well as the unrotated dataset, was then aligned using MUSCLE. The TCS computed for the Viroids dataset was 260 when unrotated, and 249, 271, and 293 when rotated with BEAR, Cyclope, and MARS, respectively. The same was done using Clustal Ω to align the output sequences, with a TCS of 249 for the unrotated dataset and scores of 233, 244, and 269 for the rotated datasets, in the same order. These results show that, when using two different MSA programs, MARS obtains a higher TCS than the unrotated dataset in both cases, outperforming BEAR and Cyclope, which do not always obtain a higher TCS compared to that of the unrotated dataset.
A fundamental assumption of all widely-used MSA techniques is that the left- and right-most positions of the input sequences are relevant to the alignment. This is not always the case in the process of MSA of mtDNA, viroid, viral or other genomes, which have a circular molecular structure.
We presented MARS, a new heuristic method for improving Multiple circular sequence Alignment using Refined Sequences. Experimental results, using real and synthetic data, show that MARS improves the alignments, with respect to standard genetic measures and the inferred maximum-likelihood-based phylogenies, and outperforms state-of-the-art methods both in terms of accuracy and efficiency. We anticipate that further development of MARS would be desirable upon dissemination. Our immediate target is to employ low-level code optimisation and thread-level parallelism to further enhance the performance of MARS. A web-service for improving multiple circular sequence alignment based on MARS is already under way.
Availability and requirements
Project name: MARS
Project home page: https://github.com/lorrainea/mars
Operating system: GNU/Linux
Programming language: C++
Other requirements: N/A
License: GNU GPL
AVPD: Average pairwise distance
bp: Base pairs
MARS: Multiple sequence alignment using refined sequences
MCSA: Multiple circular sequence alignment
MSA: Multiple sequence alignment
mtDNA: Mitochondrial DNA
PM sites: Polymorphic sites
RF: Robinson-Foulds
SP-score: Sum-of-pairs score
TCS: Transitive consistency score
Fitch WM. Distinguishing homologous from analogous proteins. Syst Biol. 1970; 19(2):99–113. doi:10.2307/2412448.
Maes M. Polygonal shape recognition using string-matching techniques. Pattern Recogn. 1991; 24(5):433–40. doi:10.1016/0031-3203(91)90056-B.
Cambouropoulos E, Crawford T, Iliopoulos CS. Pattern processing in melodic sequences: Challenges, caveats and prospects. Comput Hum. 2001; 35(1):9–21. doi:10.1023/A:1002646129893.
Needleman SB, Wunsch CD. A general method applicable to the search for similarities in the amino acid sequences of two proteins. J Mol Biol. 1970; 48:443–53. doi:10.1016/0022-2836(70)90057-4.
Gotoh O. An improved algorithm for matching biological sequences. J Mol Biol. 1982; 162:705–8. doi:10.1016/0022-2836(82)90398-9.
Smith TF, Waterman MS. Identification of common molecular subsequences. J Mol Biol. 1981; 147(1):195–7. doi:10.1016/0022-2836(81)90087-5.
Vinga S, Almeida J. Alignment-free sequence comparison—a review. Bioinformatics. 2003; 19(4):513–23. doi:10.1093/bioinformatics/btg005.
Chatzou M, Magis C, Chang J, Kemena C, Bussotti G, Erb I, Notredame C. Multiple sequence alignment modeling: methods and applications. Brief Bioinform. 2015:1–15. doi:10.1093/bib/bbv099.
Xiong J. Essential Bioinformatics. Texas A&M University: Cambridge University Press; 2006. doi:10.1017/CBO9780511806087 http://dx.doi.org/10.1017/CBO9780511806087. Cambridge Books Online.
Kumar S, Filipski A. Multiple sequence alignment: in pursuit of homologous DNA positions. Genome Res. 2007; 17(2):127–35. doi:10.1101/gr.5232407.
Phillips A, Janies D, Wheeler W. Multiple sequence alignment in phylogenetic analysis. Mol Phylogenet Evol. 2000; 16(3):317–30. doi:10.1006/mpev.2000.0785.
Simossis VA, Heringa J. Integrating protein secondary structure prediction and multiple sequence alignment. Curr Protein Pept Sci. 2004; 5(4):249–66. doi:10.2174/1389203043379675.
Wang L. On the complexity of multiple sequence alignment. J Comput Biol. 1994; 1:337–48. doi:10.1089/cmb.1994.1.337.
Thomson JD, Higgins DG, Gibson TJ. CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice. Nucleic Acids Res. 1994; 22:4673–680. doi:10.1093/nar/22.22.4673.
Edgar RC. MUSCLE: multiple sequence alignment with high accuracy and high throughput. Nucleic Acids Res. 2004; 32:1792–1797. doi:10.1093/nar/gkh340.
Notredame C, Higgins DG, Heringa J. T-coffee: a novel method for fast and accurate multiple sequence alignment. J Mol Biol. 2000; 302(1):205–17. doi:10.1006/jmbi.2000.4042.
Tan G, Muffato M, Ledergerber C, Herrero J, Goldman N, Gil M, Dessimoz C. Current methods for automated filtering of multiple sequence alignments frequently worsen single-gene phylogenetic inference. Syst Biol. 2015; 64(5):778–91. doi:10.1093/sysbio/syv033.
Talavera G, Castresana J. Improvement of phylogenies after removing divergent and ambiguously aligned blocks from protein sequence alignments. Syst Biol. 2007; 56(4):564–77. doi:10.1080/10635150701472164.
Capella-Gutierrez S, Silla-Martinez JM, Gabaldon T. trimAl: a tool for automated alignment trimming in large-scale phylogenetic analyses. Bioinformatics. 2009; 25:1972–3. doi:10.1093/bioinformatics/btp348.
Dress AWM, Flamm C, Fritzsch G, Grünewald S, Kruspe M, Prohaska SJ, Stadler PF. Noisy: Identification of problematic columns in multiple sequence alignments. Algorithm Mol Biol. 2008; 3:1–10. doi:10.1186/1748-7188-3-7.
Kuck P, Meusemann K, Dambach J, Thormann B, von Reumont BM, Wagele JW, Misof B. Parametric and non-parametric masking of randomness in sequence alignments can be improved and leads to better resolved trees. Front Zool. 2010; 7:1–12. doi:10.1186/1742-9994-7-10.
Criscuolo A, Gribaldo S. BMGE (block mapping and gathering with entropy): a new software for selection of phylogenetic informative regions from multiple sequence alignments. BMC Evol Biol. 2010; 10:1–21. doi:10.1186/1471-2148-10-210.
Wu M, Chatterji S, Eisen JA. Accounting for alignment uncertainty in phylogenomics. PLoS ONE. 2012; 7:1–10. doi:10.1371/journal.pone.0030288.
Penn O, Privman E, Ashkenazy H, Landan G, Graur D, Pupko T. GUIDANCE: a web server for assessing alignment confidence scores. Nucleic Acids Res. 2010; 38(suppl 2):23–8. doi:10.1093/nar/gkq443.
Craik DJ, Allewell NM. Thematic minireview series on circular proteins. J Biol Chem. 2012; 287:26999–7000. doi:10.1074/jbc.R112.390344.
Helinski DR, Clewell DB. Circular DNA. Ann Rev Biochem. 1971; 40:899–942. doi:10.1146/annurev.bi.40.070171.004343.
Kasamatsu H, Vinograd J. Replication of circular DNA in eukaryotic cells. Ann Rev Biochem. 1974; 43:695–719. doi:10.1146/annurev.bi.43.070174.003403.
Brodie R, Smith AJ, Roper RL, Tcherepanov V, Upton C. Base-By-Base: Single nucleotide-level analysis of whole viral genome alignments. BMC Bioinform. 2004; 5(1):96. doi:10.1186/1471-2105-5-96.
Weiner J, Bornberg-Bauer E. Evolution of circular permutations in multidomain proteins. Mol Biol Evol. 2006; 23(4):734–43. doi:10.1093/molbev/msj091.
Sievers F, Wilm A, Dineen D, Gibson TJ, Karplus K, Li W, Lopez R, McWilliam H, Remmert M, Söding J, Thompson JD, Higgins DG. Fast, scalable generation of high-quality protein multiple sequence alignments using clustal omega. Mol Syst Biol. 2011; 7:539. doi:10.1038/msb.2011.75.
Edgar RC. MUSCLE: a multiple sequence alignment method with reduced time and space complexity. BMC Bioinforma. 2004; 5:1–19. doi:10.1186/1471-2105-5-113.
Fernandes F, Pereira L, Freitas AT. CSA: an efficient algorithm to improve circular DNA multiple alignment. BMC Bioinforma. 2009; 10:1–13. doi:10.1186/1471-2105-10-230.
Fritzsch G, Schlegel M, Stadler PF. Alignments of mitochondrial genome arrangements: Applications to metazoan phylogeny. J Theor Biol. 2006; 240(4):511–20. doi:10.1016/j.jtbi.2005.10.010.
Maes M. On a cyclic string-to-string correction problem. Inf Process Lett. 1990; 35(2):73–8. doi:10.1016/0020-0190(90)90109-B.
Grossi R, Iliopoulos CS, Mercas R, Pisanti N, Pissis SP, Retha A, Vayani F. Circular sequence comparison: algorithms and applications. Algorithm Mol Biol. 2016; 11:12. doi:10.1186/s13015-016-0076-6.
Crochemore M, Fici G, Mercas R, Pissis SP. Linear-time sequence comparison using minimal absent words & applications In: Kranakis E, Navarro G, Chávez E, editors. LATIN 2016: Theoretical Informatics: 12th Latin American Symposium, Ensenada, Mexico, April 11-15, 2016, Proceedings. Lecture Notes in Computer Science. Springer Berlin Heidelberg: 2016. p. 334–46. doi:10.1007/978-3-662-49529-2_25.
Barton C, Iliopoulos CS, Pissis SP. Fast algorithms for approximate circular string matching. Algorithm Mol Biol. 2014; 9:9. doi:10.1186/1748-7188-9-9.
Barton C, Iliopoulos CS, Pissis SP. Average-case optimal approximate circular string matching In: Dediu A-H, Formenti E, Martin-Vide C, Truthe B, editors. Language and Automata Theory and Applications. Lecture Notes in Computer Science. Springer Berlin Heidelberg: 2015. p. 85–96. doi:10.1007/978-3-319-15579-1_6.
Mosig A, Hofacker IL, Stadler PF. Comparative analysis of cyclic sequences: Viroids and other small circular RNAs In: Giegerich R, Stoye J, editors. Lecture Notes in Informatics. Proceedings GCB: 2006. p. 93–102. http://subs.emis.de/LNI/Proceedings/Proceedings83/article5487.html.
Ukkonen E. On-line construction of suffix trees. Algorithmica. 1995; 14:249–60. doi:10.1007/BF01206331.
Barton C, Iliopoulos CS, Kundu R, Pissis SP, Retha A, Vayani F. Accurate and efficient methods to improve multiple circular sequence alignment In: Bampis E, editor. Experimental Algorithms. Lecture Notes in Computer Science. Springer International Publishing Switzerland: 2015. p. 247–58. doi:10.1007/978-3-319-20086-6_19.
Hogeweg P, Hesper B. The alignment of sets of sequences and the construction of phyletic trees: An integrated method. J Mol Evol. 1984; 20(2):175–86. doi:10.1007/BF02257378.
Crochemore M, Hancart C, Lecroq T. Algorithms on Strings. New York: Cambridge University Press; 2014.
Damerau FJ. A technique for computer detection and correction of spelling errors. Commun ACM. 1964; 7:171–6. doi:10.1145/363958.363994.
Myers G. A fast bit-vector algorithm for approximate string matching based on dynamic programming. J ACM. 1999; 46:395–415. doi:10.1145/316542.316550.
Saitou N, Nei M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987; 4:406–25.
Wang G, Dunbrack RL. Scoring profile-to-profile sequence alignments. Protein Sci. 2004; 13(6):1612–1626. doi:10.1110/ps.03601504.
Fletcher W, Yang Z. INDELible: a flexible simulator of biological sequence evolution. Mol Biol Evol. 2009; 8:1879–88. doi:10.1093/molbev/msp098.
Jukes TH, Cantor CR. Evolution of Protein Molecules. New York: Academy Press; 1969.
Stamatakis A. RAxML Version 8: A tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics. 2014; 30:1312–3. doi:10.1093/bioinformatics/btu033.
Chang JM, Tommaso PD, Notredame C. TCS: A new multiple sequence alignment reliability measure to estimate alignment accuracy and improve phylogenetic tree reconstruction. Mol Biol Evol. 2014. doi:10.1093/molbev/msu117.
We would like to acknowledge King's College London for funding open access for this article.
LAKA is supported by an EPSRC grant (Doctoral Training Grant #EP/M50788X/1).
The datasets generated during and/or analysed during the current study are available in the GitHub repository, https://github.com/lorrainea/mars.
SPP conceived the study. LAKA and SPP designed the solution. LAKA implemented the solution and conducted the experiments. LAKA and SPP wrote the manuscript. The final version of the manuscript is approved by all authors.
Department of Informatics, King's College London, Strand, London, WC2R 2LS, UK
Lorraine A. K. Ayad & Solon P. Pissis
Correspondence to Solon P. Pissis.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Ayad, L.A., Pissis, S.P. MARS: improving multiple circular sequence alignment using refined sequences. BMC Genomics 18, 86 (2017). https://doi.org/10.1186/s12864-016-3477-5
Circular sequences
q-grams
Progressive alignment | CommonCrawl |
Conserved quantity along geodesic and metric
I'm studying General Relativity on Schutz's book. On Chapter 7 he talks about conserved quantities along geodesics, with the equation \begin{equation} m\frac{dp_{\beta}}{d\lambda}=\frac{1}{2}g_{\nu\alpha,\beta}p^\nu p^\alpha \end{equation} and he concludes that "if all the components of the metric are independent of $x^\beta$ for some index $\beta$ then $p_\beta$ is a constant along any particle's trajectory".
For example, in the well-known Schwarzschild metric, $p_0$ is conserved as the metric is independent of $t$. But I could perform a coordinate change to make the metric "time dependent". Does this mean that this concept of conserved quantities along geodesics is coordinate-dependent? Is there a preferred reference frame in which this quantity is conserved?
I'm probably confused about the concept of reference frame and coordinate system. I'll try to state the source of my confusion. Schutz says that, with the Schwarzschild metric \begin{equation} ds^2=-e^{2\Phi(r)}dt^2+e^{2\Lambda(r)}dr^2+r^2d\Omega^2 \end{equation} "since the metric is independent of $t$ any particle that follows a geodesic has constant momentum component $p_0\equiv -E$". Then he states that "a local inertial observer at rest (momentarily) at any radius $r$ of the spacetime measures a different energy, namely $E^*=Ee^{-\Phi}$".
What does this imply? Does this imply that when an observer which is at rest at some point of the spacetime measures a quantity I should use a locally Minkowskian coordinate system (tangent space of the point P of the observer on the Manifold) and in that coordinate system the metric is not independent of time, since he sees that this quantity changes according to the point of spacetime he measures the quantity from? (Indeed, $\Phi$ is a function of $r$). Will any observer ever see this quantity conserved when he measures it or is it just a mathematical construct?
general-relativity differential-geometry conservation-laws metric-tensor geodesics
Luthien
$\begingroup$ Notice that if you make a coordinate transformation then the component $p_0$ would also change--and that changed component would not be conserved. $\endgroup$ – Feynmans Out for Grumpy Cat Aug 15 '18 at 11:07
$\begingroup$ There is a coordinate-independent way to state all this, which is that we get a conserved quantity if the spacetime has a Killing vector, defined by the (coordinate-independent) Killing equation. Is there a preferred reference frame in which this quantity is conserved? As a side note, coordinate systems are not frames of reference, and frames of reference are not coordinate systems. We don't have frames of reference in GR, except locally. $\endgroup$ – Ben Crowell Aug 15 '18 at 14:06
$\begingroup$ Thanks for both your comments, I edited my question to better specify the source of my confusion $\endgroup$ – Luthien Aug 15 '18 at 14:56
Does this imply that when an observer which is at rest at some point of the spacetime measures a quantity I should use a locally Minkowskian coordinate system (tangent space of the point P of the observer on the Manifold) and in that coordinate system the metric is not independent of time, since he sees that this quantity changes according to the point of spacetime he measures the quantity from?
I haven't read Schutz, but from reading your question it sounds like his presentation of this topic has some deficiencies, and your confusion may be natural given those deficiencies. He's discussing this in terms of the (non-covariant) derivative of the metric with respect to a coordinate, which immediately creates some serious problems. That quantity simply isn't measurable. If you want to measure the metric or its derivatives, you end up with the following restrictions:
A local observer can't measure the metric. (This is for the same reason that you can't measure an absolute potential energy. The metric plays a role in GR analogous to that of the potential in Newtonian gravity.)
A local observer can't measure the derivative of the metric. (That derivative would basically be the gravitational field, which is not measurable because of the equivalence principle.)
A local observer can measure the second derivative of the metric, which is essentially a measure of tidal stresses.
So Schutz's presentation makes use of a derivative that has no physical interpretation.
Yes, any time any observer measures a vector quantity (such as the energy-momentum vector), they are implicitly doing so in some local Minkowski frame. (They could, for example, measure the vector's inner product with some other vector, but then they are effectively using this other vector as a coordinate axis of some Minkowski frame.)
Will any observer ever see this quantity conserved when he measures it or is it just a mathematical construct?
To verify this conservation law, the observer needs to have global information, not just local information. Basically they need to measure the quantity $E^*$ in a local static frame, then use their global knowledge (of the metric and of their position in the spacetime) to determine $E$. They can then verify that $E$ is conserved.
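As a side note, the relation $E^*=Ee^{-\Phi}$ quoted in the question is just the projection of the particle's four-momentum onto the static observer's normalized four-velocity; under the static metric written in the question, a short sketch of that standard computation is \begin{equation} -e^{2\Phi}\,(u^t)^2=-1\ \Rightarrow\ u^\mu=\bigl(e^{-\Phi},0,0,0\bigr),\qquad E^*=-p_\mu u^\mu=-p_t\,u^t=E\,e^{-\Phi}, \end{equation} using $p_t\equiv-E$. So the conserved $E$ and the locally measured $E^*$ differ exactly by the redshift factor.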
The inability to determine such a conservation law based on purely local information is baked in to the structure of GR. Energy-momentum is a vector, and you can't compare vectors at different points in spacetime except by parallel transport. You can certainly verify that a test particle's energy-momentum vector is preserved under parallel transport along its own geodesic of motion, but you end up with a triviality, which is essentially that the test particle had the same free-fall motion that you did. This is just a test of the equivalence principle, and it holds even in a spacetime that does not have any symmetry.
To resolve the issues you're talking about in a more satisfactory way than in Schutz's presentation, you really need to use the notion of a Killing vector.
Ben Crowell
Richtmyer–Meshkov instability on a quasi-single-mode interface
Yu Liang, Zhigang Zhai, Juchun Ding, Xisheng Luo
Journal: Journal of Fluid Mechanics / Volume 872 / 10 August 2019
Experiments on Richtmyer–Meshkov instability of quasi-single-mode interfaces are performed. Four quasi-single-mode air/ $\text{SF}_{6}$ interfaces with different deviations from the single-mode one are generated by the soap film technique to evaluate the effects of high-order modes on amplitude growth in the linear and weakly nonlinear stages. For each case, two different initial amplitudes are considered to highlight the high-amplitude effect. For the single-mode and saw-tooth interfaces with high initial amplitude, a cavity is observed at the spike head, providing experimental evidence for the previous numerical results for the first time. For the quasi-single-mode interfaces, the fundamental mode is the dominant one such that it determines the amplitude linear growth, and subsequently the impulsive theory gives a reasonable prediction of the experiments by introducing a reduction factor. The discrepancy in linear growth rates between the experiment and the prediction is amplified as the quasi-single-mode interface deviates more severely from the single-mode one. In the weakly nonlinear stage, the nonlinear model valid for a single-mode interface with small amplitude loses efficacy, which indicates that the effects of high-order modes on amplitude growth must be considered. For the saw-tooth interface with small amplitude, the amplitudes of the first three harmonics are extracted from the experiment and compared with the previous theory. The comparison proves that each initial mode develops independently in the linear and weakly nonlinear stages. A nonlinear model proposed by Zhang & Guo (J. Fluid Mech., vol. 786, 2016, pp. 47–61) is then modified by considering the effects of high-order modes. The modified model is proved to be valid in the weakly nonlinear stage even for the cases with high initial amplitude. More high-order modes are needed to match the experiment for the interfaces with a more severe deviation from the single-mode one.
Effects of non-periodic portions of interface on Richtmyer–Meshkov instability
Xisheng Luo, Yu Liang, Ting Si, Zhigang Zhai
Journal: Journal of Fluid Mechanics / Volume 861 / 25 February 2019
The development of a non-periodic $\text{air}\text{/}\text{SF}_{6}$ gaseous interface subjected to a planar shock wave is investigated experimentally and theoretically to evaluate the effects of the non-periodic portions of the interface on the Richtmyer–Meshkov instability. Experimentally, five kinds of discontinuous chevron-shaped interfaces with or without non-periodic portions are created by the extended soap film technique. The post-shock flows and the interface morphologies are captured by schlieren photography combined with a high-speed video camera. A periodic chevron-shaped interface, which is multi-modal (81 % fundamental mode and 19 % high-order modes), is first considered to evaluate the impulsive linear model and several typical nonlinear models. Then, the non-periodic chevron-shaped interfaces are investigated and the results show that the existence of non-periodic portions significantly changes the balanced position of the initial interface, and subsequently disables the nonlinear model which is applicable to the periodic chevron-shaped interface. A modified nonlinear model is proposed to consider the effects of the non-periodic portions. It turns out that the new model can predict the growth of the shocked non-periodic interface well. Finally, a method is established using spectrum analysis on the initial shape of the interface to separate its bubble structure and spike structure such that the new model can apply to any random perturbed interface. These findings can facilitate the understanding of the evolution of non-periodic interfaces which are more common in reality.
Mach stem deformation in pseudo-steady shock wave reflections
Xiaofeng Shi, Yujian Zhu, Jiming Yang, Xisheng Luo
The deformation of the Mach stem in pseudo-steady shock wave reflections is investigated numerically and theoretically. The numerical simulation provides the typical flow patterns of Mach stem deformation and reveals the differences caused by high-temperature gas effects. The results also show that the wall jet, which causes Mach stem deformation, can be regarded as a branch of the mainstream from the first reflected shock. A new theoretical model for predicting the Mach stem deformation is developed by considering volume conservation. The theoretical predictions agree well with the numerical results in a wide range of test conditions. With this model, the wall-jet velocity and the inflow velocity from the Mach stem are identified as the two dominating factors that convey the influence of high-temperature thermodynamics. The mechanism of high-temperature gas effects on the Mach stem deformation phenomenon are then discussed.
An elaborate experiment on the single-mode Richtmyer–Meshkov instability
Lili Liu, Yu Liang, Juchun Ding, Naian Liu, Xisheng Luo
Journal: Journal of Fluid Mechanics / Volume 853 / 25 October 2018
Published online by Cambridge University Press: 23 August 2018, R2
Print publication: 25 October 2018
High-fidelity experiments of Richtmyer–Meshkov instability on a single-mode air/ $\text{SF}_{6}$ interface are carried out at weak shock conditions. The soap-film technique is extended to create single-mode gaseous interfaces which are free of small-wavelength perturbations, diffusion layers and three-dimensionality. The interfacial morphologies captured show that the instability evolution evidently involves the smallest experimental uncertainty among all existing results. The performances of the impulsive model and other nonlinear models are thoroughly examined through temporal variations of the perturbation amplitude. The individual growth of bubbles or spikes demonstrates that all nonlinear models can provide a reliable forecast of bubble development, but only the model of Zhang & Guo (J. Fluid Mech., vol. 786, 2016, pp. 47–61) can reasonably predict spike development. The distinct images of the interface morphology obtained also provide a rare opportunity to extract interface contours such that a spectral analysis of the interfacial contours can be performed, which realizes the first direct validation of the high-order nonlinear models of Zhang & Sohn (Phys. Fluids, vol. 9, 1997, pp. 1106–1124) and Vandenboomgaerde et al. (Phys. Fluids, vol. 14 (3), 2002, pp. 1111–1122) in terms of the fundamental mode and high-order harmonics. It is found that both models show a very good and almost identical accuracy in predicting the first two modes. However, the model of Zhang & Sohn (1997) becomes much more accurate in modelling the third-order harmonics due to the fewer simplifications used.
Long-term effect of Rayleigh–Taylor stabilization on converging Richtmyer–Meshkov instability
Xisheng Luo, Fu Zhang, Juchun Ding, Ting Si, Jiming Yang, Zhigang Zhai, Chih-yung Wen
The Richtmyer–Meshkov instability on a three-dimensional single-mode light/heavy interface is experimentally studied in a converging shock tube. The converging shock tube has a slender test section so that the non-uniform feature of the shocked flow is amply exhibited in a long testing time. A deceleration phenomenon is evident in the unperturbed interface subjected to a converging shock. The single-mode interface presents three-dimensional characteristics because of its minimum surface feature, which leads to the stratified evolution of the shocked interface. For the symmetry interface, it is quantitatively found that the perturbation amplitude experiences a rapid growth to a maximum value after shock compression and finally drops quickly before the reshock. This quick reduction of the interface amplitude is ascribed to a significant Rayleigh–Taylor stabilization effect caused by the deceleration of the light/heavy interface. The long-term effect of the Rayleigh–Taylor stabilization even leads to a phase inversion on the interface before the reshock when the initial interface has sufficiently small perturbations. It is also found that the amplitude growth is strongly suppressed by the three-dimensional effect, which facilitates the occurrence of the phase inversion.
On the interaction of a planar shock with a three-dimensional light gas cylinder
Juchun Ding, Ting Si, Mojun Chen, Zhigang Zhai, Xiyun Lu, Xisheng Luo
Experimental and numerical investigations on the interaction of a planar shock wave with two-dimensional (2-D) and three-dimensional (3-D) light gas cylinders are performed. The effects of initial interface curvature on flow morphology, wave pattern, vorticity distribution and interface movement are emphasized. In experiments, a wire-restriction method based on the soap film technique is employed to generate N $_{2}$ cylinders surrounded by SF $_{6}$ with well-characterized shapes, including a convex cylinder, a concave cylinder with a minimum-surface feature and a 2-D cylinder. The high-speed schlieren pictures demonstrate that fewer disturbance waves exist in the flow field and the evolving interfaces develop in a more symmetrical way relative to previous studies. By combining the high-order weighted essentially non-oscillatory construction with the double-flux scheme, numerical simulation is conducted to explore the detailed 3-D flow structures. It is indicated that the shape and the size of 3-D gas cylinders in different planes along the vertical direction change gradually due to the existence of both horizontal and vertical velocities of the flow. At very early stages, pressure oscillations in the vicinity of evolving interfaces induced by complex waves contribute much to the deformation of the 3-D gas cylinders. As time proceeds, the development of the shocked volume would be dominated by the baroclinic vorticity deposited on the interface. In comparison with the 2-D case, the oppositely (or identically) signed principal curvatures of the concave (or convex) SF $_{6}$ /N $_{2}$ boundary cause complex high pressure zones and additional vorticity deposition, and the upstream interface from the symmetric slice of the concave (or convex) N $_{2}$ cylinder moves with an inhibition (or a promotion). Finally, a generalized 3-D theoretical model is proposed for predicting the upstream interface movements of different gas cylinders and the present experimental and numerical findings are well predicted.
Experimental study on a sinusoidal air/SF $_{6}$ interface accelerated by a cylindrically converging shock
Fan Lei, Juchun Ding, Ting Si, Zhigang Zhai, Xisheng Luo
Journal: Journal of Fluid Mechanics / Volume 826 / 10 September 2017
Print publication: 10 September 2017
Ritchmyer–Meshkov instability on an air/SF $_{6}$ interface is experimentally studied in a coaxial converging shock tube by a high-speed laser sheet imaging technique. An unperturbed case is first examined to obtain the characteristics of the converging shock and the shocked interface. For sinusoidal interfaces, the wave pattern and the interface morphology of the whole process are clearly observed. It is quantitatively found that the perturbation amplitude first decreases due to the shock compression, then experiences a rapid growth to a maximum value and finally drops quickly before the reshock. The reduction of growth rate is ascribed to the Rayleigh–Taylor stabilization caused by the interface deceleration motion that is present in the converging circumstance. It is noted that the influence of the wavenumber on the amplitude growth is very little before the reshock, but becomes significant after the reshock.
The Richtmyer–Meshkov instability of a 'V' shaped air/ $\text{SF}_{6}$ interface
Xisheng Luo, Ping Dong, Ting Si, Zhigang Zhai
The Richtmyer–Meshkov instability on a 'V' shaped air/SF $_{6}$ gaseous interface is experimentally studied in a shock tube. By the soap film technique, a discontinuous interface without supporting mesh is formed so that the initial conditions of the interface can be accurately controlled. Five 'V' shaped air/ $\text{SF}_{6}$ interfaces with different vertex angles ( $60^{\circ }$ , $90^{\circ }$ , $120^{\circ }$ , $140^{\circ }$ and $160^{\circ }$ ) are created where the ratio of the initial interface amplitude to the wavelength varies to highlight the effects of initial condition on the flow characteristics. The wave patterns and interface morphologies are clearly identified in the high-speed schlieren sequences, which show that the interface deforms in a less pronounced manner with less vortices generated as the vertex angle increases. A regime change is observed in the interface width growth rate near a vertex angle of $160^{\circ }$ , which provides an experimental evidence for the numerical results obtained by McFarland et al. (Phys. Scr. vol. T155, 2013, 014014). The growth rate of interface width in the linear phase is compared with the theoretical predictions from the classical impulsive model and a modified linear model, and the latter is proven to be effective for a moderate to large initial amplitude. It is found that the initial growth rate of the interface width is a non-monotone function of the initial vertex angle (amplitude–wavelength ratio), i.e. the interface width growth rate in the linear stage experiences an increase and then a decrease as the vertex angle increases. A similar conclusion was also reached by Dell et al. (Phys. Plasmas, vol. 22, 2015, 092711) numerically for a sinusoidal interface. Finally, the general behaviour of the interface width growth in the nonlinear stage can be well captured by the nonlinear model proposed by Dimonte & Ramaprabhu (Phys. Fluids, vol. 22, 2010, 014104).
Experimental investigation of cylindrical converging shock waves interacting with a polygonal heavy gas cylinder
Ting Si, Tong Long, Zhigang Zhai, Xisheng Luo
Journal: Journal of Fluid Mechanics / Volume 784 / 10 December 2015
Print publication: 10 December 2015
The interaction of cylindrical converging shock waves with a polygonal heavy gas cylinder is studied experimentally in a vertical annular diaphragmless shock tube. The reliability of the shock tube facility is verified in advance by capturing the cylindrical shock movements during the convergence and reflection processes using high-speed schlieren photography. Three types of air/SF6 polygonal interfaces with cross-sections of an octagon, a square and an equilateral triangle are formed by the soap film technique. A high-speed laser sheet imaging method is employed to monitor the evolution of the three polygonal interfaces subjected to the converging shock waves. In the experiments, the Mach number of the incident cylindrical shock at its first contact with each interface is maintained to be 1.35 for all three cases. The results show that the evolution of the polygonal interfaces is heavily dependent on the initial conditions, such as the interface shapes and the shock features. A theoretical model for circulation initially deposited along the air/SF6 polygonal interface is developed based on the theory of Samtaney & Zabusky (J. Fluid Mech., vol. 269, 1994, pp. 45–78). The circulation depositions along the initial interface result in the differences in flow features among the three polygonal interfaces, including the interface velocities and the perturbation growth rates. In comparison with planar shock cases, there are distinct phenomena caused by the convergence effects, including the variation of shock strength during imploding and exploding (geometric convergence), consecutive reshocks on the interface (compressibility), and special behaviours of the movement of the interface structures (phase inversion).
On the interaction of a planar shock with an $\text{SF}_{6}$ polygon
Xisheng Luo, Minghu Wang, Ting Si, Zhigang Zhai
Journal: Journal of Fluid Mechanics / Volume 773 / 25 June 2015
The interaction of a planar shock wave ( $M\approx 1.2$ ) with an $\text{SF}_{6}$ polygonal inhomogeneity surrounded by air is experimentally investigated. Six polygons including a square, two types of rectangle, two types of triangle, and a diamond are generated by the soap film technique developed in our previous work, in which thin pins are used as angular vertexes to avoid the pressure singularities caused by the surface tension. The evolutions of the shock-accelerated $\text{SF}_{6}$ polygons are captured by a high-speed schlieren system from which wave systems and the interface characteristics can be clearly identified. Both regular and irregular refraction phenomena are observed outside the volume, and more complex wave patterns, including transmitted shock, refracted shock, Mach stem and the interactions between them, are found inside the volume. Two typical irregular refraction phenomena (free precursor refraction, FPR, and free precursor von Neumann refraction, FNR) are observed and analysed, and the transition from FPR to FNR is found, providing the experimental evidence for the transition between different wave patterns numerically found in the literature. Combined with our previous work (Zhai et al., J. Fluid Mech., vol. 757, 2014, pp. 800–816), the reciprocal transitions between FPR and FNR are experimentally confirmed. The velocities and trajectories of the triple points are further measured and it is found that the motions of the triple points are self-similar or pseudo-stationary. Besides the shock dynamics phenomena, the evolutions of these shocked heavy polygonal volumes, which are quite different from the light ones, are captured and found to be closely related to their initial shapes. Specifically, for square and rectangular geometries, the different width–height ratios result in different behaviours of shock–shock interaction inside the volume, and subsequently different features for the outward jet and the interface. Quantitatively, the time-variations of the interface scales, such as the width and the normalized displacements of the edges, are obtained and compared with those from previous work. The comparison illustrates the superiority of the interface formation method and the significant effect of the initial interface shape on the interface features. Furthermore, the characteristics of the vortex core, including the velocity and vortex spacing, are experimentally measured, and the vortex velocity is compared with those from some circulation models to check the validity of the models. The results in the present work enrich understanding of the shock refraction phenomenon and the database of research into Richtmyer–Meshkov instability (RMI).
On the interaction of a planar shock with a light polygonal interface
Zhigang Zhai, Minghu Wang, Ting Si, Xisheng Luo
The interaction of a planar shock wave with a polygonal ${\mathrm{N}}_2$ volume surrounded by ${\mathrm{SF}}_6$ is investigated experimentally and numerically. Three polygonal interfaces (square, triangle and diamond) are formed by the soap film technique developed in our previous work, in which thin pins are introduced as angular vertexes to connect adjacent sides of polygonal soap films. The evolutions of the shock-accelerated polygonal interfaces are then visualized by a high-speed schlieren system. Wave systems and interface structures can be clearly identified in experimental schlieren images, and agree well with the numerical ones. Quantitatively, the movement of the distorted interface, and the length and height of the interface structures are further compared and good agreements are achieved between experimental and numerical results. It is found that the evolution of these polygonal interfaces is closely related to their initial shapes. In the square interface, two vortices are generated shortly after the shock impact around the left corner and dominate the flow field at late stages. In the triangular and diamond cases, the most remarkable feature is the small '${\mathrm{SF}}_6$ jet' which grows constantly with time and penetrates the downstream boundary of the interface, forming two independent vortices. These distinct morphologies of the three polygonal interfaces also lead to the different behaviours of the interface features including the length and height. It is also found that the velocities of the vortex pair predicted from the theory of Rudinger and Somers (J. Fluid Mech., vol. 7, 1960, pp. 161–176) agree with the experimental ones, especially for the square case. Typical free precursor irregular refraction phenomena and the transitions among them are observed and analysed, which gives direct experimental evidence for wave patterns and their transitions at a slow/fast interface. The velocities of triple points and shocks are experimentally measured. It is found that the transmitted shock near the interface boundary has weakened into an evanescent wave.
Experimental study of Richtmyer-Meshkov instability in a cylindrical converging shock tube
Ting Si, Zhigang Zhai, Xisheng Luo
Journal: Laser and Particle Beams / Volume 32 / Issue 3 / September 2014
The interaction of a cylindrical converging shock wave with an initially perturbed gaseous interface is studied experimentally. The cylindrical converging shock is generated in an ordinary shock tube but with a specially designed test section, in which the incident planar shock wave is directly converted into a cylindrical one. Two kinds of typical initial interfaces involving gas bubble and gas cylinder are employed. A high-speed video camera combined with schlieren or planar Mie scattering photography is utilized to capture the evolution process of flow structures. The distribution of baroclinic vorticity on the interface induced by the cylindrical shock and the reflected shock from the center of convergence results in distinct phenomena. In the gas bubble case, the shock focusing and the jet formation are observed and the turbulent mixing of two fluids is promoted because of the gradually changed shock strength and complex shock structures in the converging part. In the gas cylinder case, a counter-rotating vortex pair is formed after the impact of the converging shock and its rotating direction may be changed when interacting with the reflected shock for a relatively long reflection distance. The variations of the interface displacements and structural dimensions with time are further measured. It is found that these quantities are different from those in the planar counterpart because of the shock curvature, the Mach number effect and the complex shock reflection within the converging shock tube test section. Therefore, the experiments reported here exhibit the great potential of this experimental method in study of the Richtmyer-Meshkov instability induced by converging shock waves.
The Richtmyer–Meshkov instability of a three-dimensional air/SF6 interface with a minimum-surface feature
Xisheng Luo, Xiansheng Wang, Ting Si
Journal: Journal of Fluid Mechanics / Volume 722 / 10 May 2013
Published online by Cambridge University Press: 04 April 2013, R2
Print publication: 10 May 2013
A novel method to create a discontinuous gaseous interface with a minimum-surface feature by the soap film technique is developed for three-dimensional (3D) Richtmyer–Meshkov instability (RMI) studies. The interface formed is free of supporting mesh and the initial condition can be well controlled. Five air/SF6 interfaces with different amplitudes are realized in shock-tube experiments. Time-resolved schlieren and planar Mie-scattering photography are employed to capture the motion of the shocked interface. It is found that the instability at the linear stage in the symmetry plane grows much slower than the predictions of previous two-dimensional (2D) impulsive models, which is ascribed to the opposite principal curvatures of the minimum surface. The 2D impulsive model is extended to describe the general 3D RMI. A quantitative analysis reveals a good agreement between experiments and the extended linear model for all the configurations including both the 2D and 3D RMIs at their early stages. An empirical model that combines the early linear growth with the late-time nonlinear growth is also proposed for the whole evolution process of the present configuration.
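For reference, the classical two-dimensional impulsive (Richtmyer) model that serves as the baseline here predicts, in its standard form, a constant post-shock linear growth rate

$$\dot{a} = k\,\Delta V\,A^{+}\,a_{0}^{+}, \qquad A^{+} = \frac{\rho_{2}^{+}-\rho_{1}^{+}}{\rho_{2}^{+}+\rho_{1}^{+}},$$

where $k$ is the perturbation wavenumber, $\Delta V$ the velocity jump imparted by the shock, $a_{0}^{+}$ the post-shock amplitude and $A^{+}$ the post-shock Atwood number. The extension discussed in the abstract generalizes this growth law to genuinely three-dimensional perturbations such as the minimum-surface interface, whose opposite principal curvatures reduce the growth below the 2D prediction.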
On condensation-induced waves
WAN CHENG, XISHENG LUO, M. E. H. van DONGEN
Complex wave patterns caused by unsteady heat release due to cloud formation in confined compressible flows are discussed. Two detailed numerical studies of condensation-induced waves are carried out. First, the response of a flow of nitrogen in a slender Laval nozzle to a sudden addition of water vapour at the nozzle entrance is considered. Condensation occurs just downstream of the nozzle throat, which initially leads to upstream- and downstream-moving shocks and an expansion fan downstream of the condensation front. Then, the flow becomes oscillatory and the expansion fan disappears, while upstream and much weaker downstream shocks are repeatedly generated. For a lower initial humidity, only a downstream starting shock is formed and a steady flow is established. Second, homogeneous condensation in an unsteady expansion fan in humid nitrogen is considered. In the initial phase, two condensation-induced shocks are found, moving upstream and downstream. The upstream-moving shock changes the shape of the expansion fan and has a strong influence on the condensation process itself. It even quenches the nucleation process locally, which leads to a renewed condensation process further downstream. This process is repeated with asymptotically decreasing strength. The repeated interaction of the condensation-induced shocks with the main expansion wave leads to a distortion of the expansion wave towards the shape that can be expected on the basis of phase equilibrium, i.e. a self-similar wave structure consisting of a dry part, a plateau of constant state and a wet part. The strengths of the condensation-induced waves, for the Laval nozzle flow as well as for the expansion fan, appear to be in qualitative agreement with the results from the analytical Rayleigh–Bartlmä model.
Effects of homogeneous condensation in compressible flows: Ludwieg-tube experiments and simulations
XISHENG LUO, GRAZIA LAMANNA, A. P. C. HOLTEN, M. E. H. VAN DONGEN
Journal: Journal of Fluid Mechanics / Volume 572 / February 2007
Effects of homogeneous nucleation and subsequent droplet growth in compressible flows in humid nitrogen are investigated numerically and experimentally. A Ludwieg tube is employed to produce expansion flows. Corresponding to different configurations, three types of experiment are carried out in such a tube. First, the phase transition in a strong unsteady expansion wave is investigated to demonstrate the mutual interaction between the unsteady flow and the condensation process and also the formation of condensation-induced shock waves. The role of condensation-induced shocks in the gradual transition from a frozen initial structure to an equilibrium structure is explained. Second, the condensing flow in a slender supersonic nozzle G2 is considered. Particular attention is given to condensation-induced oscillations and to the transition from symmetrical mode-1 oscillations to asymmetrical mode-2 oscillations in a starting nozzle flow, as first observed by Adam & Schnerr. The transition is also found numerically, but the amplitude, frequency and transition time are not yet well predicted. Third, a sharp-edged obstacle is placed in the tube to generate a starting vortex. Condensation in the vortex is found. Owing to the release of latent heat of condensation, an increase in the pressure and temperature in the vortex core is observed. Condensation-induced shock waves are found, for a sufficiently high initial saturation ratio, which interact with the starting vortex, resulting in a very complex flow. As time proceeds, a subsonic or transonic free jet is formed downstream of the sharp-edged obstacle, which becomes oscillatory for a relatively high main-flow velocity and for a sufficiently high humidity.
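As background (not the specific droplet model used in these studies), the homogeneous nucleation rate on which such condensation models are built is usually written in the classical-nucleation-theory form

$$J = J_{0}\exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right), \qquad \Delta G^{*} = \frac{16\pi\,\sigma^{3}\,v_{l}^{2}}{3\,(k_{B}T\ln S)^{2}},$$

where $S$ is the saturation ratio, $\sigma$ the surface tension, $v_{l}$ the molecular volume of the liquid and $J_{0}$ a kinetic prefactor. The $(\ln S)^{-2}$ dependence inside the exponential makes the nucleation rate extremely sensitive to the local expansion history, consistent with the strong coupling between condensation and wave dynamics reported in these experiments.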
On phase transition in compressible flows: modelling and validation
XISHENG LUO, BART PRAST, M. E. H. van DONGEN, H. W. M. HOEIJMAKERS, JIMING YANG
A physical model for compressible flows with phase transition is described, in which all the processes of phase transition, i.e. nucleation, droplet growth, droplet evaporation and de-nucleation, are incorporated. The model is focused on dilute mixtures of vapour and droplets in a carrier gas with typical maximum liquid mass fraction smaller than 0.02. The new model is based on a reinterpretation of Hill's method of moments of the droplet size distribution function. Starting from the general dynamic equation, it is emphasized that nucleation or de-nucleation correspond to the rates at which droplets enter or leave droplet size space, respectively. Nucleation and de-nucleation have to be treated differently in agreement with their differences in physical nature. Attention is given to the droplet growth model that takes into account Knudsen effects and temperature differences between droplets and gas. The new phase transition model is then combined with the Euler equations and results in a new numerical method: ASCE2D. The numerical method is first applied to the problem of shock/expansion wave formation in a closed shock tube with humid nitrogen as a driver gas. Nucleation and droplet growth are induced by the expansion wave, and in turn affect the structure of the expansion wave. When the main shock, reflected from the end wall of the low-pressure section, passes the condensation zone, evaporation and de-nucleation occur. As a second example, the problem of the flow of humid nitrogen in a pulse-expansion wave tube, designed to study nucleation and droplet growth in monodisperse clouds, is investigated experimentally and numerically. | CommonCrawl |
Aaron Hoffman 1, and Matt Holzer 2,
Franklin W. Olin College of Engineering, Needham, MA 02492, USA
Department of Mathematical Sciences, George Mason University, Fairfax, VA 22030, USA
Received June 2017 Revised January 2018 Published June 2018
We study the dynamics of the Fisher-KPP equation on the infinite homogeneous tree and Erdős-Rényi random graphs. We assume initial data that is zero everywhere except at a single node. For the case of the homogeneous tree, the solution will either form a traveling front or converge pointwise to zero. This dichotomy is determined by the linear spreading speed and we compute critical values of the diffusion parameter for which the spreading speed is zero and maximal and prove that the system is linearly determined. We also study the growth of the total population in the network and identify the exponential growth rate as a function of the diffusion coefficient, α. Finally, we make predictions for the Fisher-KPP equation on Erdős-Rényi random graphs based upon the results on the homogeneous tree. When α is small we observe via numerical simulations that mean arrival times are linearly related to distance from the initial node and the speed of invasion is well approximated by the linear spreading speed on the tree. Furthermore, we observe that exponential growth rates of the total population on the random network can be bounded by growth rates on the homogeneous tree and provide an explanation for the sub-linear exponential growth rates that occur for small diffusion.
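For readers who want to reproduce the flavour of the random-graph experiments, the following sketch (not the authors' code) integrates a graph version of the Fisher-KPP equation, $u_i' = \alpha\sum_{j\sim i}(u_j - u_i) + u_i(1 - u_i)$, from a single seeded node of an Erdős-Rényi graph and records first-passage ("arrival") times at a fixed threshold. The plain graph-Laplacian coupling and all parameter values below are illustrative assumptions and may differ from the normalization used in the paper.

import numpy as np
import networkx as nx

def fisher_kpp_arrival_times(n=20000, p=3.0/20000, alpha=0.35, dt=0.01,
                             t_max=80.0, threshold=0.5, seed=0):
    # sparse Erdos-Renyi graph with expected degree n*p, seeded at node 0
    G = nx.fast_gnp_random_graph(n, p, seed=seed)
    A = nx.adjacency_matrix(G).astype(float)             # sparse adjacency matrix
    deg = np.asarray(A.sum(axis=1)).ravel()
    u = np.zeros(n)
    u[0] = 1.0                                           # initial mass at a single node
    arrival = np.full(n, np.inf)
    arrival[0] = 0.0
    t = 0.0
    while t < t_max:
        lap = A @ u - deg * u                            # graph Laplacian coupling
        u = np.clip(u + dt * (alpha * lap + u * (1.0 - u)), 0.0, 1.0)
        t += dt
        newly = (u >= threshold) & np.isinf(arrival)
        arrival[newly] = t                               # record arrival times
    dist = nx.single_source_shortest_path_length(G, 0)   # graph distance from the seed
    return arrival, dist

Plotting the mean arrival time against graph distance for small $\alpha$ reproduces the nearly linear relationship described above, and the slope of a best-fit line gives an estimate of the invasion speed.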
Keywords: Invasion fronts, linear spreading speed, homogeneous tree, random graph, Fisher-KPP equation.
Mathematics Subject Classification: Primary: 37L60; Secondary: 35R02, 35C07, 05C80.
Citation: Aaron Hoffman, Matt Holzer. Invasion fronts on graphs: The Fisher-KPP equation on homogeneous trees and Erdős-Rényi graphs. Discrete & Continuous Dynamical Systems - B, 2019, 24 (2) : 671-694. doi: 10.3934/dcdsb.2018202
A.-L. Barabási and R. Albert, Emergence of scaling in random networks, Science, 286 (1999), 509-512. doi: 10.1126/science.286.5439.509. Google Scholar
V. Batagelj and U. Brandes, Efficient generation of large random networks, Phys. Rev. E, 71 (2005), 036113. doi: 10.1103/PhysRevE.71.036113. Google Scholar
A. Bers, Space-time evolution of plasma instabilities-absolute and convective, in Basic Plasma Physics: Selected Chapters, Handbook of Plasma Physics, Volume 1 eds. A. A. Galeev & R. N. Sudan, (1984), 451-517.Google Scholar
B. Bollobás, Random Graphs, volume 73 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, second edition, 2001. doi: 10.1017/CBO9780511814068. Google Scholar
M. Bramson, Convergence of solutions of the Kolmogorov equation to travelling waves, Mem. Amer. Math. Soc., 44 (1983), iv+190pp. doi: 10.1090/memo/0285. Google Scholar
L. Brevdo and T. J. Bridges, Absolute and convective instabilities of spatially periodic flows, Philos. Trans. Roy. Soc. London Ser. A, 354 (1996), 1027-1064. doi: 10.1098/rsta.1996.0040. Google Scholar
R. J. Briggs, Electron-Stream Interaction with Plasmas, MIT Press, Cambridge, 1964.Google Scholar
D. Brockmann and D. Helbing, The hidden geometry of complex, network-driven contagion phenomena, Science, 342 (2013), 1337-1342. doi: 10.1126/science.1245200. Google Scholar
R. Burioni, S. Chibbaro, D. Vergni and A. Vulpiani, Reaction spreading on graphs, Phys. Rev. E, 86 (2012), 055101. doi: 10.1103/PhysRevE.86.055101. Google Scholar
X. Chen, Existence, uniqueness, and asymptotic stability of traveling waves in nonlocal evolution equations, Adv. Differential Equations, 2 (1997), 125-160. Google Scholar
G. Chinta, J. Jorgenson and A. Karlsson, Heat kernels on regular graphs and generalized Ihara zeta function formulas, Monatsh. Math., 178 (2015), 171-190. doi: 10.1007/s00605-014-0685-4. Google Scholar
F. Chung and S. -T. Yau, Coverings, heat kernels and spanning trees, Electron. J. Combin., 6 (1999), Research Paper 12, 21 pp. Google Scholar
V. Colizza, R. Pastor-Satorras and A. Vespignani, Reaction-diffusion processes and metapopulation models in heterogeneous networks, Nat Phys, 3 (2007), 276-282. doi: 10.1038/nphys560. Google Scholar
R. Durrett, Random Graph Dynamics, volume 20 of Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press, Cambridge, 2007. Google Scholar
P. Erdős and A. Rényi, On random graphs. I, Publ. Math. Debrecen, 6 (1959), 290-297. Google Scholar
R. A. Fisher, The wave of advance of advantageous genes, Annals of Human Genetics, 7 (1937), 355-369. doi: 10.1111/j.1469-1809.1937.tb02153.x. Google Scholar
J. Hindes, S. Singh, C. R. Myers and D. J. Schneider, Epidemic fronts in complex networks with metapopulation structure, Phys. Rev. E, 88 (2013), 012809.Google Scholar
M. Holzer, A proof of anomalous invasion speeds in a system of coupled Fisher-KPP equations, Discrete Contin. Dyn. Syst., 36 (2016), 2069-2084. doi: 10.3934/dcds.2016.36.2069. Google Scholar
M. Holzer and A. Scheel, Criteria for pointwise growth and their role in invasion processes, J. Nonlinear Sci., 24 (2014), 661-709. doi: 10.1007/s00332-014-9202-0. Google Scholar
A. Kolmogorov, I. Petrovskii and N. Piscounov, Étude de l'équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique, Moscow Univ. Math. Bull., 1 (1937), 1-25. Google Scholar
N. E. Kouvaris, H. Kori and A. S. Mikhailov, Traveling and pinned fronts in bistable reaction-diffusion systems on networks, PLoS ONE, 7 (2012), e45029. doi: 10.1371/journal.pone.0045029. Google Scholar
H. Matano, F. Punzo and A. Tesei, Front propagation for nonlinear diffusion equations on the hyperbolic space, J. Eur. Math. Soc. (JEMS), 17 (2015), 1199-1227. doi: 10.4171/JEMS/529. Google Scholar
B. Mohar and W. Woess, A survey on spectra of infinite graphs, Bull. London Math. Soc., 21 (1989), 209-234. doi: 10.1112/blms/21.3.209. Google Scholar
M. E. J. Newman, The structure and function of complex networks, SIAM Review, 45 (2003), 167-256. doi: 10.1137/S003614450342480. Google Scholar
M. A. Porter and J. P. Gleeson, Dynamical Systems on Networks, volume 4 of Frontiers in Applied Dynamical Systems: Reviews and Tutorials. Springer, Cham, 2016. A tutorial. doi: 10.1007/978-3-319-26641-1. Google Scholar
B. Sandstede and A. Scheel, Absolute and convective instabilities of waves on unbounded and large bounded domains, Phys. D, 145 (2000), 233-277. doi: 10.1016/S0167-2789(00)00114-7. Google Scholar
S. H. Strogatz, Exploring complex networks, Nature, 410 (2001), 268-276. doi: 10.1038/35065725. Google Scholar
W. van Saarloos, Front propagation into unstable states, Physics Reports, 386 (2003), 29-222. Google Scholar
A. Vespignani, Modelling dynamical processes in complex socio-technical systems, Nature Physics, 8 (2012), 32-39. doi: 10.1038/nphys2160. Google Scholar
D. J. Watts and S. H. Strogatz, Collective dynamics of "small-world" networks, Nature, 393 (1998), 440-442. Google Scholar
H. F. Weinberger, Long-time behavior of a class of biological models, SIAM Journal on Mathematical Analysis, 13 (1982), 353-396. doi: 10.1137/0513028. Google Scholar
B. Zinner, G. Harris and W. Hudson, Traveling wavefronts for the discrete Fisher's equation, J. Differential Equations, 105 (1993), 46-62. doi: 10.1006/jdeq.1993.1082. Google Scholar
Figure 1. The linear spreading speed for (5), calculated numerically as a function of $\alpha$ for $k = 3$ (red), $k = 4$ (black) and $k = 5$ (blue). Note the critical values $\alpha_2(k)$ for which the spreading speed is zero and $\alpha_1(k)$ where the speed is maximal. Also note that as $\alpha\to 0$, these spreading speeds appear to approach a common curve
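A back-of-the-envelope version of this computation, assuming the linearization at $u = 0$ of the level-reduced system has the form $u_n' = \alpha(u_{n-1} + k\,u_{n+1} - (k+1)u_n) + u_n$ (the exact normalization in the paper may differ), looks for exponential envelopes $u_n \sim e^{-\gamma(n - st)}$ and minimizes the resulting envelope speed $s(\gamma)$ over the decay rate $\gamma$:

import numpy as np

def spreading_speed(alpha, k, gammas=np.linspace(1e-3, 10.0, 20000)):
    # envelope speed s(gamma) obtained from the ansatz u_n ~ exp(-gamma*(n - s*t))
    s = (alpha * (np.exp(gammas) + k * np.exp(-gammas) - (k + 1)) + 1.0) / gammas
    return s.min()

for k in (3, 4, 5):
    speeds = [(a, spreading_speed(a, k)) for a in np.linspace(0.01, 3.0, 300)]
    a_max = max(speeds, key=lambda t: t[1])[0]           # estimate of alpha_1(k): speed maximal
    a_zero = next(a for a, s in speeds if s <= 0.0)      # estimate of alpha_2(k): speed reaches zero
    print(k, round(a_max, 2), round(a_zero, 2))

Under this assumed linearization the speed vanishes once $\alpha \geq 1/(\sqrt{k}-1)^2$, which is consistent with the qualitative picture described in the caption.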
Figure 2. Critical rates of diffusion for periodic trees with period $m = 2$. On the left, we plot $\alpha_1$ as a function of $k_1$ with $k_2$ fixed to preserve the mean degree. On the right, we plot $\alpha_2$ as a function of $k_1$. Note that in both cases the periodic heterogeneity increases the critical diffusion rates
Figure 3. Numerical simulations of (2) with $k = 3$ and for $\alpha = 0.2$ (left), $\alpha = 0.8$ (middle) and $\alpha = 2.2$ (right). The blue curves are $u_n(t)$ while the red curves depict the normalized population at each level, i.e. $w_n(t)/\max_n(w_n(t))$. Note that $0.2 < \alpha_1(3) < 0.8 < \alpha_2(3) < 2.2$. For $\alpha = 0.2$, we observe that the maximal population is concentrated at the front interface. For $\alpha = 0.8$, the maximal population is concentrated ahead of the front interface. Finally, for $\alpha = 2.2$ the local population at any fixed node converges to zero, but the total population grows and eventually is concentrated at the final level of the tree
Figure 4. On the left, we compare predictions for the exponential growth rate of the maximum of $w_n(t)$ as a function of $\alpha$ (blue line) against the exponential growth rates of $M(t)$ observed in direct numerical simulations (asterisks) for $k = 5$. On the right, we compare numerically observed spreading speeds for $w_n(t)$ (asterisks) versus linear spreading speeds determined numerically from the pinched double root criterion applied to $\tilde{d}_s(\gamma,\lambda)$ (blue line). Here we have taken $k = 5$
Figure 5. Arrival times for an Erdős-Rényi graph with $N = 60,000$ and expected degree $k_{ER} = 2$. Various values of $\alpha$ are considered. In green is the best fit linear approximation for the mean arrival times for nodes with distance between $3$ and $12$ from the initial location
Figure 6. Speed associated with the mean arrival times in numerical simulations on an Erdős-Rényi graph with $N = 60,000$ and expected degree $k_{ER} = 2$ is shown in asterisks. The blue curve is the spreading speed predicted by the analysis in Section 2 for the homogeneous tree with $k = 2.54$, found by numerically computing roots of (6). This value is chosen since it is one less than the mean degree of the network over those nodes with distance between $3$ and $12$ from the original location
Figure 7. Growth rate of the total population for Erdős-Rényi graph. On the left, $N = 500,000$ and $\alpha = 0.1,0.35,0.6,0.85$. Larger values of $\alpha$ correspond to faster growth rates. On the right is the case of $N = 60,000$ with the same values of $\alpha$
Figure 8. Numerically calculated exponential growth rate for the Erdős-Rényi graph. On the left, $N = 500,000$ and observed growth rates are plotted as circles. The asterisks are the corresponding growth rates in the homogeneous tree with depth $13$. The lower curve is degree $k = 3$ while the larger curve is degree $k = 4$. On the right are the same computations, but for the Erdős-Rényi graph with $N = 60,000$ and for homogeneous trees with $k = 2$ and $k = 3$
Gregoire Nadin. How does the spreading speed associated with the Fisher-KPP equation depend on random stationary diffusion and reaction terms?. Discrete & Continuous Dynamical Systems - B, 2015, 20 (6) : 1785-1803. doi: 10.3934/dcdsb.2015.20.1785
Lina Wang, Xueli Bai, Yang Cao. Exponential stability of the traveling fronts for a viscous Fisher-KPP equation. Discrete & Continuous Dynamical Systems - B, 2014, 19 (3) : 801-815. doi: 10.3934/dcdsb.2014.19.801
Matt Holzer. A proof of anomalous invasion speeds in a system of coupled Fisher-KPP equations. Discrete & Continuous Dynamical Systems - A, 2016, 36 (4) : 2069-2084. doi: 10.3934/dcds.2016.36.2069
Matthieu Alfaro, Arnaud Ducrot. Sharp interface limit of the Fisher-KPP equation. Communications on Pure & Applied Analysis, 2012, 11 (1) : 1-18. doi: 10.3934/cpaa.2012.11.1
Wenxian Shen, Zhongwei Shen. Transition fronts in nonlocal Fisher-KPP equations in time heterogeneous media. Communications on Pure & Applied Analysis, 2016, 15 (4) : 1193-1213. doi: 10.3934/cpaa.2016.15.1193
Hiroshi Matsuzawa. A free boundary problem for the Fisher-KPP equation with a given moving boundary. Communications on Pure & Applied Analysis, 2018, 17 (5) : 1821-1852. doi: 10.3934/cpaa.2018087
Christian Kuehn, Pasha Tkachov. Pattern formation in the doubly-nonlocal Fisher-KPP equation. Discrete & Continuous Dynamical Systems - A, 2019, 39 (4) : 2077-2100. doi: 10.3934/dcds.2019087
Matthieu Alfaro, Arnaud Ducrot. Sharp interface limit of the Fisher-KPP equation when initial data have slow exponential decay. Discrete & Continuous Dynamical Systems - B, 2011, 16 (1) : 15-29. doi: 10.3934/dcdsb.2011.16.15
Benjamin Contri. Fisher-KPP equations and applications to a model in medical sciences. Networks & Heterogeneous Media, 2018, 13 (1) : 119-153. doi: 10.3934/nhm.2018006
François Hamel, James Nolen, Jean-Michel Roquejoffre, Lenya Ryzhik. A short proof of the logarithmic Bramson correction in Fisher-KPP equations. Networks & Heterogeneous Media, 2013, 8 (1) : 275-289. doi: 10.3934/nhm.2013.8.275
Margarita Arias, Juan Campos, Cristina Marcelli. Fastness and continuous dependence in front propagation in Fisher-KPP equations. Discrete & Continuous Dynamical Systems - B, 2009, 11 (1) : 11-30. doi: 10.3934/dcdsb.2009.11.11
James Nolen, Jack Xin. KPP fronts in a one-dimensional random drift. Discrete & Continuous Dynamical Systems - B, 2009, 11 (2) : 421-442. doi: 10.3934/dcdsb.2009.11.421
Feng Cao, Wenxian Shen. Spreading speeds and transition fronts of lattice KPP equations in time heterogeneous media. Discrete & Continuous Dynamical Systems - A, 2017, 37 (9) : 4697-4727. doi: 10.3934/dcds.2017202
Patrick Martinez, Jean-Michel Roquejoffre. The rate of attraction of super-critical waves in a Fisher-KPP type model with shear flow. Communications on Pure & Applied Analysis, 2012, 11 (6) : 2445-2472. doi: 10.3934/cpaa.2012.11.2445
Aijun Zhang. Traveling wave solutions with mixed dispersal for spatially periodic Fisher-KPP equations. Conference Publications, 2013, 2013 (special) : 815-824. doi: 10.3934/proc.2013.2013.815
Karel Hasik, Sergei Trofimchuk. Slowly oscillating wavefronts of the KPP-Fisher delayed equation. Discrete & Continuous Dynamical Systems - A, 2014, 34 (9) : 3511-3533. doi: 10.3934/dcds.2014.34.3511
Tzong-Yow Lee and Fred Torcaso. Wave propagation in a lattice KPP equation in random media. Electronic Research Announcements, 1997, 3: 121-125.
Michiel Bertsch, Danielle Hilhorst, Hirofumi Izuhara, Masayasu Mimura, Tohru Wakasa. A nonlinear parabolic-hyperbolic system for contact inhibition and a degenerate parabolic Fisher-KPP equation. Discrete & Continuous Dynamical Systems - A, 2019, 0 (0) : 1-26. doi: 10.3934/dcds.2019226
Mei Li, Zhigui Lin. The spreading fronts in a mutualistic model with advection. Discrete & Continuous Dynamical Systems - B, 2015, 20 (7) : 2089-2105. doi: 10.3934/dcdsb.2015.20.2089
Yanni Zeng, Kun Zhao. On the logarithmic Keller-Segel-Fisher/KPP system. Discrete & Continuous Dynamical Systems - A, 2019, 39 (9) : 5365-5402. doi: 10.3934/dcds.2019220
Aaron Hoffman Matt Holzer | CommonCrawl |
Delayed payment policy in multi-product single-machine economic production quantity model with repair failure and partial backordering
JIMO Home
Continuous-time mean-variance portfolio selection with no-shorting constraints and regime-switching
doi: 10.3934/jimo.2018190
A primal-dual interior-point method capable of rapidly detecting infeasibility for nonlinear programs
Yu-Hong Dai 1,2, , Xin-Wei Liu 3,, and Jie Sun 4,5,6,
LSEC, ICMSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
School of Mathematical Sciences, University of Chinese Academy of Sciences
Institute of Mathematics, Hebei University of Technology, Tianjin 300401, China
Institute of Mathematics, Hebei University of Technology, Tianjin, China
School of Science, Curtin University, Perth, Australia
School of Business, National University of Singapore, Singapore
Received July 2018 Revised August 2018 Published December 2018
Fund Project: The first draft of this paper was completed on December 2, 2014. The first author is supported by the Chinese NSF grants (nos. 11631013, 11331012 and 81173633) and the National Key Basic Research Program of China (no. 2015CB856000). The second author is supported by the Chinese NSF grants (nos. 11671116 and 11271107) and the Major Research Plan of the NSFC (no. 91630202). The third author is supported by Grant DP-160101819 of Australia Research Council
Full Text(HTML)
With the help of a logarithmic barrier augmented Lagrangian function, we can obtain closed-form solutions of slack variables of logarithmic-barrier problems of nonlinear programs. As a result, a two-parameter primal-dual nonlinear system is proposed, which corresponds to the Karush-Kuhn-Tucker point and the infeasible stationary point of nonlinear programs, respectively, as one of two parameters vanishes. Based on this distinctive system, we present a primal-dual interior-point method capable of rapidly detecting infeasibility of nonlinear programs. The method generates interior-point iterates without truncation of the step. It is proved that our method converges to a Karush-Kuhn-Tucker point of the original problem as the barrier parameter tends to zero. Otherwise, the scaling parameter tends to zero, and the method converges to either an infeasible stationary point or a singular stationary point of the original problem. Moreover, our method has the capability to rapidly detect the infeasibility of the problem. Under suitable conditions, the method can be superlinearly or quadratically convergent to the Karush-Kuhn-Tucker point if the original problem is feasible, and it can be superlinearly or quadratically convergent to the infeasible stationary point when the problem is infeasible. Preliminary numerical results show that the method is efficient in solving some simple but hard problems, where the superlinear convergence to an infeasible stationary point is demonstrated when we solve two infeasible problems in the literature.
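The perturbed-KKT machinery in this abstract is easiest to see on a toy convex quadratic program. The sketch below is the textbook primal-dual scheme (slack variables and multipliers kept strictly positive, one Newton step per barrier value, fraction-to-the-boundary step rule); it is not the algorithm proposed in the paper, and the problem data at the end are made up purely for illustration.

import numpy as np

def pdip_qp(Q, q, A, b, mu=1.0, sigma=0.2, tol=1e-8, max_iter=100):
    # minimize 0.5*x'Qx + q'x  subject to  A x >= b, via slacks s = A x - b
    m, n = A.shape
    x = np.zeros(n)
    s = np.ones(m)                       # slacks, kept > 0
    z = np.ones(m)                       # multipliers, kept > 0
    for _ in range(max_iter):
        r_dual = Q @ x + q - A.T @ z     # stationarity residual
        r_prim = A @ x - s - b           # primal feasibility residual
        r_comp = s * z - mu              # perturbed complementarity
        if max(np.linalg.norm(r_dual), np.linalg.norm(r_prim),
               np.linalg.norm(s * z)) < tol:
            break
        # Newton system for the perturbed KKT conditions in (x, s, z)
        K = np.block([
            [Q,                 np.zeros((n, m)), -A.T             ],
            [A,                -np.eye(m),         np.zeros((m, m))],
            [np.zeros((m, n)),  np.diag(z),        np.diag(s)      ],
        ])
        d = np.linalg.solve(K, -np.concatenate([r_dual, r_prim, r_comp]))
        dx, ds, dz = d[:n], d[n:n + m], d[n + m:]
        alpha = 1.0                      # fraction-to-the-boundary rule keeps s, z > 0
        for v, dv in ((s, ds), (z, dz)):
            neg = dv < 0
            if np.any(neg):
                alpha = min(alpha, 0.995 * np.min(-v[neg] / dv[neg]))
        x, s, z = x + alpha * dx, s + alpha * ds, z + alpha * dz
        mu *= sigma                      # drive the barrier parameter to zero
    return x, s, z

# Example: minimize (x1-1)^2 + (x2-2)^2 subject to x1 + x2 >= 4, x1 >= 0, x2 >= 0
Q = 2.0 * np.eye(2); q = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]]); b = np.array([4.0, 0.0, 0.0])
print(pdip_qp(Q, q, A, b)[0])            # approximately [1.5, 2.5]

The infeasibility-detection question studied in the paper arises when no point satisfies the constraints at all; in that case a naive scheme like the one above has no KKT point to converge to, and the paper's two-parameter system is designed so that a different parameter tends to zero and the iterates converge to an infeasible (or singular) stationary point instead.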
Keywords: Nonlinear programming, constrained optimization, infeasibility, interior-point method, global and local convergence.
Mathematics Subject Classification: Primary: 90C30, 90C51; Secondary: 90C26.
Citation: Yu-Hong Dai, Xin-Wei Liu, Jie Sun. A primal-dual interior-point method capable of rapidly detecting infeasibility for nonlinear programs. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2018190
R. Andreani, E. G. Birgin, J. M. Martinez and M. L. Schuverdt, Augmented Lagrangian methods under the constant positive linear dependence constraint qualification, Math. Program., 111 (2008), 5-32. doi: 10.1007/s10107-006-0077-1. Google Scholar
P. Armand and J. Benoist, A local convergence property of primal-dual methods for nonlinear programming, Math. Program., 115 (2008), 199-222. doi: 10.1007/s10107-007-0136-2. Google Scholar
P. Armand, J. C. Gilbert and S. Jan-Jégou, A feasible BFGS interior point algorithm for solving convex minimization problems, SIAM J. Optim., 11 (2000), 199-222. doi: 10.1137/S1052623498344720. Google Scholar
I. Bongartz, A. R. Conn, N. I. M. Gould and P. L. Toint, CUTE: Constrained and Unconstrained Testing Environment, ACM Tran. Math. Software, 21 (1995), 123-160. Google Scholar
J. V. Burke, F. E. Curtis and H. Wang, A sequential quadratic optimization algorithm with rapid infeasibility detection, SIAM J. Optim., 24 (2014), 839-872. doi: 10.1137/120880045. Google Scholar
J. V. Burke and S. P. Han, A robust sequential quadratic programming method, Math. Program., 43 (1989), 277-303. doi: 10.1007/BF01582294. Google Scholar
R. H. Byrd, Robust Trust-Region Method for Constrained Optimization, Paper presented at the SIAM Conference on Optimization, Houston, TX, 1987. Google Scholar
R. H. Byrd, F. E. Curtis and J. Nocedal, Infeasibility detection and SQP methods for nonlinear optimization, SIAM J. Optim., 20 (2010), 2281-2299. doi: 10.1137/080738222. Google Scholar
R. H. Byrd, J. C. Gilbert and J. Nocedal, A trust region method based on interior point techniques for nonlinear programming, Math. Program., 89 (2000), 149-185. doi: 10.1007/PL00011391. Google Scholar
R. H. Byrd, M. E. Hribar and J. Nocedal, An interior point algorithm for large-scale nonlinear programming, SIAM J. Optim., 9 (1999), 877-900. doi: 10.1137/S1052623497325107. Google Scholar
R. H. Byrd, G. Liu and J. Nocedal, On the local behaviour of an interior point method for nonlinear programming, in Numerical Analysis 1997 (eds. D. F. Griffiths and D. J. Higham), Addison-Wesley Longman, Reading, MA, 380 (1998), 37-56. Google Scholar
R. H. Byrd, M. Marazzi and J. Nocedal, On the convergence of Newton iterations to non-stationary points, Math. Program., 99 (2004), 127-148. doi: 10.1007/s10107-003-0376-8. Google Scholar
L. F. Chen and D. Goldfarb, Interior-point $\ell_2$-penalty methods for nonlinear programming with strong global convergence properties, Math. Program., 108 (2006), 1-36. doi: 10.1007/s10107-005-0701-5. Google Scholar
F. E. Curtis, A penalty-interior-point algorithm for nonlinear constrained optimization, Math. Program. Comput., 4 (2012), 181-209. doi: 10.1007/s12532-012-0041-4. Google Scholar
A. S. El-Bakry, R. A. Tapia, T. Tsuchiya and Y. Zhang, On the formulation and theory of the Newton interior-point method for nonlinear programming, J. Optim. Theory Appl., 89 (1996), 507-541. doi: 10.1007/BF02275347. Google Scholar
A. V. Fiacco and G. P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization Techniques, John Wiley and Sons, New York, 1968; republished as Classics in Appl. Math. 4, SIAM, Philadelphia, 1990. doi: 10.1137/1.9781611971316. Google Scholar
R. Fletcher, Practical Methods for Optimization. Vol. 2: Constrained Optimization, John Wiley and Sons, Chichester, 1980. Google Scholar
A. Forsgren and P. E. Gill, Primal-dual interior methods for nonconvex nonlinear programming, SIAM J. Optim., 8 (1998), 1132-1152. doi: 10.1137/S1052623496305560. Google Scholar
A. Forsgren, Ph. E. Gill and M. H. Wright, Interior methods for nonlinear optimization, SIAM Review, 44 (2002), 525-597. doi: 10.1137/S0036144502414942. Google Scholar
D. M. Gay, M. L. Overton and M. H. Wright, A primal-dual interior method for nonconvex nonlinear programming, in Advances in Nonlinear Programming, (ed. Y.-X. Yuan), Kluwer Academic Publishers, Dordrecht, 14 (1998), 31-56. doi: 10.1007/978-1-4613-3335-7_2. Google Scholar
E. M. Gertz and Ph. E. Gill, A primal-dual trust region algorithm for nonlinear optimization, Math. Program., 100 (2004), 49-94. doi: 10.1007/s10107-003-0486-3. Google Scholar
N. I. M. Gould, D. Orban and Ph. L. Toint, An interior-point $\ell_1$-penalty method for nonlinear optimization, in Recent Developments in Numerical Analysis and Optimization, Proceedings of NAOIII 2014, Springer, Verlag, 134 (2015), 117-150. doi: 10.1007/978-3-319-17689-5_6. Google Scholar
W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Eco. and Math. Systems 187, Springer-Verlag, Berlin, New York, 1981. doi: 10.1007/BF00934594. Google Scholar
X.-W. Liu, G. Perakis and J. Sun, A robust SQP method for mathematical programs with linear complementarity constraints, Comput. Optim. Appl., 34 (2006), 5-33. doi: 10.1007/s10589-005-3075-y. Google Scholar
X.-W. Liu and J. Sun, A robust primal-dual interior point algorithm for nonlinear programs, SIAM J. Optim., 14 (2004), 1163-1186. doi: 10.1137/S1052623402400641. Google Scholar
X.-W. Liu and Y.-X. Yuan, A robust algorithm for optimization with general equality and inequality constraints, SIAM J. Sci. Comput., 22 (2000), 517-534. doi: 10.1137/S1064827598334861. Google Scholar
X.-W. Liu and Y.-X. Yuan, A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties, Math. Program., 125 (2010), 163-193. doi: 10.1007/s10107-009-0272-y. Google Scholar
J. Nocedal, F. Öztoprak and R. A. Waltz, An interior point method for nonlinear programming with infeasibility detection capabilities, Optim. Methods Softw., 29 (2014), 837-854. doi: 10.1080/10556788.2013.858156. Google Scholar
J. Nocedal and S. Wright, Numerical Optimization, Springer-Verlag New York, Inc., 1999. doi: 10.1007/b98874. Google Scholar
J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York and London, 1970. Google Scholar
D. F. Shanno and R. J. Vanderbei, Interior-point methods for nonconvex nonlinear programming: Orderings and higher-order methods, Math. Program., 87 (2000), 303-316. doi: 10.1007/s101070050116. Google Scholar
P. Tseng, Convergent infeasible interior-point trust-region methods for constrained minimization, SIAM J. Optim., 13 (2002), 432-469. doi: 10.1137/S1052623499357945. Google Scholar
M. Ulbrich, S. Ulbrich and L. N. Vicente, A globally convergent primal-dual interior-point filter method for nonlinear programming, Math. Program., 100 (2004), 379-410. doi: 10.1007/s10107-003-0477-4. Google Scholar
A. Wächter and L. T. Biegler, Failure of global convergence for a class of interior point methods for nonlinear programming, Math. Program., 88 (2000), 565-574. doi: 10.1007/PL00011386. Google Scholar
A. Wächter and L. T. Biegler, Line search filter methods for nonlinear programming: Motivation and global convergence, SIAM J. Optim., 16 (2005), 1-31. doi: 10.1137/S1052623403426556. Google Scholar
A. Wächter and L. T. Biegler, On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming, Math. Program., 106 (2006), 25-57. doi: 10.1007/s10107-004-0559-y. Google Scholar
M. H. Wright, Why a pure primal Newton barrier step may be infeasible?, SIAM J. Optim., 5 (1995), 1-12. doi: 10.1137/0805001. Google Scholar
S. J. Wright, On the convergence of the Newton/Log-barrier method, Math. Program., 90 (2001), 71-100. doi: 10.1007/PL00011421. Google Scholar
Y.-X. Yuan, On the convergence of a new trust region algorithm, Numer. Math., 70 (1995), 515-539. doi: 10.1007/s002110050133. Google Scholar
Y. Zhang, Solving large-scale linear programs by interior-point methods under the MATLAB environment, Optim. Methods Softw., 10 (1998), 1-31. doi: 10.1080/10556789808805699. Google Scholar
Table 1. Output for test problem (TP1)
$ l $ $ f_l $ $ v_l $ $ \|\phi_l\|_{\infty} $ $ \|\psi_l\|_{\infty} $ $ \beta_l $ $ \rho_l $ $ k $
0 5 16.6132 129.6234 129.6234 0.1000 3.3226 -
1 0.1606 2.0205 4.8082 0.7313 0.1000 0.0972 3
2 -0.0149 2.0002 0.0989 0.0445 0.1000 0.0020 4
3 -0.0036 2.0000 0.0029 0.0018 0.1000 3.1595e-06 3
4 -0.0029 2.0000 3.1674e-06 2.8185e-06 0.1000 1.0000e-09 1
5 0.0018 2.0000 1.0011e-09 6.7212e-10 - - -
0 -20 126.6501 2.8052e+03 2.8052e+03 0.1000 6.3325 -
1 -172.5829 172.7978 1.0948e+03 6.2866 0.1000 0.8719 6
8 -0.1999 0.4472 9.2732e-10 9.2732e-10 - - -
0 20 2.8284 9.9557 9.9557 0.1000 1 -
1 0.2305 0.4167 0.8900 0.7008 0.0100 1 4
3 0.1690 0.1630 0.0503 0.0022 0.0100 4.7328e-06 1
4 0.8561 2.9531e-04 3.1379e-06 3.1379e-06 0.0100 1.0000e-09 14
5 0.9028 1.2372e-04 9.3463e-08 9.3463e-08 - - -
Table 4. The last $ 4 $ inner iterations corresponding to $ l = 4 $ for test problem (TP4)
$ k $ $ f_k $ $ v_k $ $ \|\phi_k\|_{\infty} $ $ \|\psi_k\|_{\infty} $ $ x_{k1} $ $ x_{k2} $
11 0.8500 5.7136e-04 5.6703e-04 5.6703e-04 1.0780 0.0001
12 0.8548 3.0434e-04 1.2222e-05 1.2222e-05 1.0754 -0.0002
Boshi Tian, Xiaoqi Yang, Kaiwen Meng. An interior-point $l_{\frac{1}{2}}$-penalty method for inequality constrained nonlinear optimization. Journal of Industrial & Management Optimization, 2016, 12 (3) : 949-973. doi: 10.3934/jimo.2016.12.949
Yanqin Bai, Pengfei Ma, Jing Zhang. A polynomial-time interior-point method for circular cone programming based on kernel functions. Journal of Industrial & Management Optimization, 2016, 12 (2) : 739-756. doi: 10.3934/jimo.2016.12.739
Behrouz Kheirfam, Morteza Moslemi. On the extension of an arc-search interior-point algorithm for semidefinite optimization. Numerical Algebra, Control & Optimization, 2018, 8 (2) : 261-275. doi: 10.3934/naco.2018015
Soodabeh Asadi, Hossein Mansouri. A Mehrotra type predictor-corrector interior-point algorithm for linear programming. Numerical Algebra, Control & Optimization, 2019, 9 (2) : 147-156. doi: 10.3934/naco.2019011
Yanqin Bai, Xuerui Gao, Guoqiang Wang. Primal-dual interior-point algorithms for convex quadratic circular cone optimization. Numerical Algebra, Control & Optimization, 2015, 5 (2) : 211-231. doi: 10.3934/naco.2015.5.211
Behrouz Kheirfam. A full Nesterov-Todd step infeasible interior-point algorithm for symmetric optimization based on a specific kernel function. Numerical Algebra, Control & Optimization, 2013, 3 (4) : 601-614. doi: 10.3934/naco.2013.3.601
Yanqin Bai, Lipu Zhang. A full-Newton step interior-point algorithm for symmetric cone convex quadratic optimization. Journal of Industrial & Management Optimization, 2011, 7 (4) : 891-906. doi: 10.3934/jimo.2011.7.891
Siqi Li, Weiyi Qian. Analysis of complexity of primal-dual interior-point algorithms based on a new kernel function for linear optimization. Numerical Algebra, Control & Optimization, 2015, 5 (1) : 37-46. doi: 10.3934/naco.2015.5.37
Yinghong Xu, Lipu Zhang, Jing Zhang. A full-modified-Newton step infeasible interior-point algorithm for linear optimization. Journal of Industrial & Management Optimization, 2016, 12 (1) : 103-116. doi: 10.3934/jimo.2016.12.103
Yongjian Yang, Zhiyou Wu, Fusheng Bai. A filled function method for constrained nonlinear integer programming. Journal of Industrial & Management Optimization, 2008, 4 (2) : 353-362. doi: 10.3934/jimo.2008.4.353
Behrouz Kheirfam, Guoqiang Wang. An infeasible full NT-step interior point method for circular optimization. Numerical Algebra, Control & Optimization, 2017, 7 (2) : 171-184. doi: 10.3934/naco.2017011
Guoqiang Wang, Zhongchen Wu, Zhongtuan Zheng, Xinzhong Cai. Complexity analysis of primal-dual interior-point methods for semidefinite optimization based on a parametric kernel function with a trigonometric barrier term. Numerical Algebra, Control & Optimization, 2015, 5 (2) : 101-113. doi: 10.3934/naco.2015.5.101
Songqiang Qiu, Zhongwen Chen. An adaptively regularized sequential quadratic programming method for equality constrained optimization. Journal of Industrial & Management Optimization, 2017, 13 (5) : 1-14. doi: 10.3934/jimo.2019075
Igor Griva, Roman A. Polyak. Proximal point nonlinear rescaling method for convex optimization. Numerical Algebra, Control & Optimization, 2011, 1 (2) : 283-299. doi: 10.3934/naco.2011.1.283
Wen-ling Zhao, Dao-jin Song. A global error bound via the SQP method for constrained optimization problem. Journal of Industrial & Management Optimization, 2007, 3 (4) : 775-781. doi: 10.3934/jimo.2007.3.775
Changjun Yu, Kok Lay Teo, Liansheng Zhang, Yanqin Bai. On a refinement of the convergence analysis for the new exact penalty function method for continuous inequality constrained optimization problem. Journal of Industrial & Management Optimization, 2012, 8 (2) : 485-491. doi: 10.3934/jimo.2012.8.485
Chunlin Hao, Xinwei Liu. Global convergence of an SQP algorithm for nonlinear optimization with overdetermined constraints. Numerical Algebra, Control & Optimization, 2012, 2 (1) : 19-29. doi: 10.3934/naco.2012.2.19
Qiang Long, Changzhi Wu. A hybrid method combining genetic algorithm and Hooke-Jeeves method for constrained global optimization. Journal of Industrial & Management Optimization, 2014, 10 (4) : 1279-1296. doi: 10.3934/jimo.2014.10.1279
Liming Sun, Li-Zhi Liao. An interior point continuous path-following trajectory for linear programming. Journal of Industrial & Management Optimization, 2019, 15 (4) : 1517-1534. doi: 10.3934/jimo.2018107
Zheng-Hai Huang, Shang-Wen Xu. Convergence properties of a non-interior-point smoothing algorithm for the P*NCP. Journal of Industrial & Management Optimization, 2007, 3 (3) : 569-584. doi: 10.3934/jimo.2007.3.569
Yu-Hong Dai Xin-Wei Liu Jie Sun | CommonCrawl |
mirror symmetry with algebraic geometry?
Why is it that mirror symmetry has many relations with algebraic geometry, rather than with complex geometry or differential geometry? (In other words, how is it that solutions to polynomials become relevant, given that these do not appear in the physics which motivates mirror symmetry?) I would especially appreciate nontechnical answers.
mirror-symmetry ag.algebraic-geometry mp.mathematical-physics
Ryan Budney
yuyuyuyu
$\begingroup$ This question has undergone a close/rewrite/reopen cycle, and so I've just deleted the remaining (now irrelevant comments). These comments, and the discussion on the rewrite, are preserved at tea.mathoverflow.net/discussion/826/…. $\endgroup$ – Scott Morrison♦ Dec 12 '10 at 2:48
Here are a few scattered observations:
Our ability to construct examples (e.g. of CY manifolds) is limited, and the tools of algebraic geometry are perfectly suited to doing so (as has been noted).
Toric varieties are a source of many examples -- Batyrev-Borisov pairs -- and they are even "more" than algebraic, they're combinatorial. In fact, the whole business is really about integers in the end, so combinatorics reigns supreme.
The fuzziness of $A_\infty$ structures is more suited to algebraic topology than to geometry.
Continuity of certain structures (which are created from counting problems) across walls, scores some points for analysis over combinatorics and algebra.
Elliptic curves are only "kinda" algebraic, and the mirror phenomenon there is certainly transcendental.
Physics indeed does not care too much about how the spaces are constructed, but (as has been noted) even the non-topological version of mirror symmetry is an equivalence of a very algebraic structure (which includes representations of superconformal algebras).
I was hoping to unify these idle thoughts into a coherent response, but I don't think I can. Maybe the algebraic geometric aspects just grew faster because the mathematics is "easier" (or at least better understood by more mathematicians): witness the slow uptake of BCOV and its antiholomorphicity within mathematics.
To respond personally: these days, I try to transfer the algebraic and symplectic structures to combinatorics so that I can hold them in my hand and try to understand them better.
Eric Zaslow
$\begingroup$ Thanks for this answer, Eric. It cleared some things up for me. $\endgroup$ – Spiro Karigiannis Dec 15 '10 at 14:05
Here is my impression ...
(I am very much a non-expert in the physics (and probably the mathematics too) so I may well be wrong about some of these things.)
Algebraic geometry sometimes enters the picture in string theory and physics because, while we start with, say, a compact Kähler manifold, for some reason or another we maybe get an integral Kähler class (for example see this MO question), and thus our manifold is projective by the Kodaira embedding theorem, and thus it is algebraic by Chow's theorem. Conversely, we may be actually interested in possibly non-algebraic compact Kähler manifolds in the physics or string theory, but the algebraic manifolds will provide at least a pretty big class of nice examples to play with.
And at least for smooth projective algebraic varieties, GAGA theorems tell us that many things (like for example, sheaf cohomology) are the same whether we consider our space as an algebraic variety or as an analytic thing. For the B-model side of mirror symmetry, I think this is how algebraic geometry (as opposed to complex analytic geometry) generally comes into play --- via GAGA theorems or at least "GAGA principles". For example, it is a fact that analytic coherent sheaves on smooth projective varieties are algebraic. From this it follows that, at least for smooth projective varieties, the derived category of coherent sheaves is the same whether we look at things algebraically or analytically. (I'm guessing, but I don't know for a fact, that in the physics the analytic objects are the a priori relevant ones.)
Another interesting issue is the fact that algebraic geometry often appears even on the A-model side of mirror symmetry, which is supposed to be the symplectic side of the story. I don't really know anything about this, so maybe someone else can say more, but there's some work on, for example, the relation between the symplectic version of Gromov-Witten theory and the algebraic geometry version of Gromov-Witten theory -- they're supposed to coincide in the case of smooth projective varieties. It's perhaps not too surprising, since the symplectic version of GW theory involves J-holomorphic curves after all, but it's definitely not a trivial result.
I suppose the naive explanation for the appearance of surfaces is that they're worldsheets of strings, but I don't really know the explanation for why the surfaces should have complex structures, i.e. why they should be Riemann surfaces (and by the way, it is also a basic fact that any compact Riemann surface is algebraic) nor do I know why the maps from the curves to the target manifolds should be holomorphic or J-holomorphic. I hope that other MO users, especially people who know about string theory and physics, can say more about these things...
Kevin H. Lin
$\begingroup$ Great answer. I am also not an expert in the physics, but I think I remember reading somewhere that the worldsheets should be oriented (hence they admit conformal structures) and that perhaps (?) the physics doesn't care which conformal structure we put? This is part (perhaps faulty) recollection and part guessing. $\endgroup$ – Spiro Karigiannis Dec 11 '10 at 18:25
$\begingroup$ Yeah, I think it's something like that... in my imagination, the conformal/complex structure comes from having, say, an orientation and a Riemannian metric, and then the complex structure is automatically integrable by dimension reasons. As for the choice of conformal/complex structure, I imagine that the relevant path integrals are over the space of all such - I think this is how moduli spaces come into play. $\endgroup$ – Kevin H. Lin Dec 11 '10 at 18:37
$\begingroup$ I think (but I'm absolutely no expert) that the worldsheet of the string naturally comes equipped with a Riemannian metric (and an orientation), and some relevant Lagrangian is (at least in "topological field theory"?) conformally invariant, so what you actually care of is the conformal structure. Conformal structure in dim=1 gives holomorphic structure. Algebraicity follows -as KL said- merely because compact Riemann surfaces are algebraic. $\endgroup$ – Qfwfq Dec 24 '10 at 1:31
Kevin Lin gave a great technical answer to this question (which should be accepted) but I would like to add some more "philosophical" reasons for this:
[1] algebraic geometry methods are easier to apply and much more well-developed. I don't mean algebraic geometry is easy, I just mean that the tools, by their nature, give more concrete results (for example, on toric varieties), as opposed to geometric analysis methods, which by their nature often yield non-constructive or non-explicit results.
[2] the algebraic geometers got into the Mirror Symmetry game much earlier and made more rapid progress than the differential geometers. (And they wrote many of the books.)
I would still appreciate more answers, though. Perhaps people who are very adept at having a dual existence (like Eric Zaslow) can contribute their opinions?
Spiro Karigiannis
Part of the physics motivation for mirror symmetry involves properties of the chiral ring of N=2 superconformal field theories. Some of these have a description in terms of the polynomials appearing in algebraic geometry. One of the earliest references on this is Algebraic Geometry and Effective Lagrangians, Emil J. Martinec, Phys.Lett.B217:431,1989. There are many papers discussing the relation between these "Landau-Ginzburg" models and mirror symmetry. See for example the paper by Berglund and Katz, http://arXiv.org/pdf/hep-th/9406008.
Jeff Harvey
The following is a rough outline of the most elementary structures that appear in a physics discussion of mirror symmetry. It turns out that the physics actually leads in two ways directly to polynomial equations that describe the varieties. On the most basic level string theory deals with Calabi-Yau manifolds that provide the extra dimensions needed to go from 10 dimensions to the physical 4 dimensions that we live in. Calabi-Yau manifolds are described by polynomials, hence it is not too unexpected (in retrospect) that the phenomenon of mirror symmetry would be discovered by constructing enough polynomials describing enough Calabi-Yau spaces. And so it was. On a more fundamental, string theoretic level, the conformal field theory on the string worldsheet has a mean field theory limit in which it is described by a so-called Landau-Ginzburg potential. This Landau-Ginzburg potential in turn has a classical limit in which it describes a polynomial that defines a hypersurface in a toric variety. It is precisely this polynomial that describes the Calabi-Yau variety corresponding to the underlying conformal field theory. Mirror symmetry is a simple operation on the worldsheet, defining a sign flip in the charge of the fields, but it is not too surprising that this operation on the worldsheet is reflected in the form of the polynomial, hence the precise structure of the Calabi-Yau space.
Laie
$\begingroup$ "Calabi-Yau manifolds are described by polynomials" --- Aaron Bergman's answer here mathoverflow.net/questions/30629/… suggests that this issue is a bit subtle... $\endgroup$ – Kevin H. Lin Dec 11 '10 at 18:57
$\begingroup$ It's true that in complex dimension greater than or equal to 3, all Calabi-Yau manifolds are projective. (It's not true for K3 surfaces.) However, I didn't think this was crucial. From what I've heard, it's the existence of a parallel spinor (for supersymmetry) that forces one to use Calabi-Yau manifolds. In fact, I believe currently many string theorists are considering non-Kahler complex manifolds with trivial canonical bundle, and these are probably not all algebraic (although I am not sure...) $\endgroup$ – Spiro Karigiannis Dec 11 '10 at 19:09
$\begingroup$ @Kevin Lin: Thanks for pointing out that other MO question. Indeed, I always assume that Calabi-Yau means holonomy exactly SU(n), because otherwise it really reduces to a "simpler" situation. It's holonomy SU(n) that yields exactly 2 parallel spinors. See hal.archives-ouvertes.fr/docs/00/12/60/70/PDF/2000jmp.pdf for example $\endgroup$ – Spiro Karigiannis Dec 11 '10 at 19:18
$\begingroup$ Is it really true that all or even most of the conformal field theories on the string worldsheet have a limit giving an LG potential that defines a Calabi-Yau hypersurface in a toric variety? This seems to me like it should put serious restrictions on the possible Hodge numbers for such Calabi-Yaus that would not be apparent (to me) from starting out with a type II sigma model. Or perhaps there are just a lot more toric ambient spaces than I assume. $\endgroup$ – Chris Brav Dec 12 '10 at 18:07
$\begingroup$ As I mentioned, the idea was to very briefly explain very roughly in two different ways how it can be understood that polynomial structures appear in the context of mirror symmetry. For this purpose I focused on certain relevant classes of CYs that have been important in the past. It is at this point in time not possible for many reasons to make universally valid statements about the relation between CYs and CFTs. It is intriguing though that the limits for the Hodge numbers of weighted CY hypersurfaces obtained in one of the 1990 mirror papers are still valid limits for all known CYs. $\endgroup$ – Laie Dec 12 '10 at 19:47
It's probably slightly offbeat, verging on the mystical, and my apologies if it sounds a bit ridiculous, but I reckon mirror symmetry may ultimately derive from sets of degrees of freedom $x_i$ satisfying:
$ x_1 + x_2 .. + x_N = 0 $
$ x_1 x_2 .. x_N = 1 $
For small N, such as N = 4, this variety is birationally equivalent to all kinds of different forms, some with a tantalizingly "physical" appearance.
Also, for larger N, it can clearly "split" (either exactly or approximately) into the union of lower-dimensional varieties of the same form.
Obvious symmetries are $ x_i \rightarrow 1 / x_i $ and (for even N) $ x_i \rightarrow - x_i $, and I dare say there are others.
It would be very interesting to know if these varieties are Calabi-Yau manifolds. But that would be better discussed in another thread.
John R Ramsden
$\begingroup$ Well, it does sound a bit ridiculous. The second equation is not even homogeneous, so it does not describe a projective variety. I really do not think mirror symmetry has anything at all to do with the Calabi-Yau manifold being symmetric in its own right in any way. This is a quite mysterious and very complicated duality between different Calabi-Yau manifolds which can have different topologies. $\endgroup$ – Spiro Karigiannis Dec 12 '10 at 13:13
$\begingroup$ Well, one can easily make the second one homogeneous. But I take your point about the different topologies, especially if these can differ for manifolds with the same dimension. $\endgroup$ – John R Ramsden Dec 12 '10 at 17:20
$\begingroup$ Mirror symmetry in the context of the question is a technical term that arises from equivalences of certain superconformal field theories. It is not specifically about varieties that possess lots of automorphisms, as you seem to suggest. $\endgroup$ – S. Carnahan♦ Dec 15 '10 at 8:37
$\begingroup$ Perhaps my comment wasn't quite as ridiculous as I/we suspected. See Definition 1, on page 8, of the recent ArXiv paper arxiv.org/abs/1105.2052 titled "Topological recursion and mirror curves". Their second "multiplicative" equation is slightly different to the one I quoted, involving as it does the exponents which they say represent charges. But aside from that, and a scaling which introduces a constant in the "additive" equation, their pair closely resembles mine! $\endgroup$ – John R Ramsden May 14 '11 at 11:01
| CommonCrawl |
Enhancing docosahexaenoic acid production of Schizochytrium sp. by optimizing fermentation using central composite design
Jun Ding1,
Zilin Fu1,
Yingkun Zhu1,
Junhao He1,
Lu Ma1 &
Dengpan Bu1
BMC Biotechnology volume 22, Article number: 39 (2022)
Docosahexaenoic acid (DHA) can improve human and animal health, particularly through its anti-inflammatory, antioxidant, anticancer, neurological, and visual functions. Schizochytrium sp. is a marine heterotrophic protist producing oil with a high DHA content, which is widely used in animal and food production. However, fermentation conditions strongly affect the growth and DHA content of Schizochytrium sp. Thus, this study aimed to enhance the DHA yield and concentration of Schizochytrium sp. I-F-9 by optimizing the fermentation medium. First, a single-factor design was conducted to select a target carbon and nitrogen source from several generic sources (glucose, sucrose, glycerol, maltose, corn steep liquor, yeast extract, urea, peptone, and ammonium sulfate). The Plackett–Burman design and the central composite design (CCD) were then used to optimize the fermentation medium. Schizochytrium sp. in 50 mL of fermentation broth was cultured in a 250-mL shake flask at 28 °C and 200 rpm for 120 h before the cell pellet was collected. Subsequently, the cell walls were disrupted with hydrochloric acid, and the fatty acids were extracted with n-hexane. The DHA content was detected by gas chromatography. The single-factor test indicated that glucose and peptone, respectively, significantly improved the DHA content of Schizochytrium sp. compared with the other carbon and nitrogen sources. Glucose, sodium glutamate, and sea salt were the key factors affecting DHA production in the Plackett–Burman test (P = 0.0247). The CCD result showed that DHA production was elevated by 34.73% compared with the initial yield (from 6.18 ± 0.063 to 8.33 ± 0.052 g/L). Therefore, the results of this study demonstrate an efficient strategy to increase the DHA yield and content of Schizochytrium sp.
The beneficial effects of docosahexaenoic acid (C22:6 n-3; DHA) have been extensively and systematically explored in humans and animals for decades [1]. According to Zhang and Spite [2] and Zhang et al. [3], DHA can regulate inflammation, oxidative stress, immunity, and cholesterol metabolism, which can help prevent cancer, diabetes, and thrombosis. In addition, as a long-chain unsaturated fatty acid, DHA is an essential substrate of phospholipids, triglycerides, and some free fatty acids in vertebrate animals. Thus, DHA plays an important role in human and animal health [4]. The rapidly increasing worldwide requirement for DHA has intensified the demand for DHA production [5].
The main sources of DHA are seafood, mainly fish and algae [6]. DHA yield from fish oil has been limited by increasing environmental and food safety concerns, such as the maintenance of ecological diversity and heavy metal pollution, so that DHA production is insufficient to meet the growing demand [7, 8]. Therefore, Schizochytrium sp. has been developed as an alternative source for DHA production [9, 10]. In 1964, Goldstein and Belsky isolated Schizochytrium sp. from Long Island Sound and classified it within the Thraustochytriaceae [11]. Subsequently, many studies [12] have confirmed that Schizochytrium sp. is one of the most commercially attractive and valuable sources of DHA [13] and a heterotrophic unicellular strain that can be safely used as a dietary supplement [7, 14]. Clinical trials have shown that the bioactivities of microbial-derived DHA are comparable to those of fish oil in reducing plasma triglycerides, promoting redox properties, and protecting the cardiovascular system [8, 15]. Compared with other marine heterotrophic protists, Schizochytrium sp. has greater potential for DHA production, with a high lipid concentration accounting for 36–84% of biomass, in which the DHA concentration can exceed 62% of the total lipid [13, 16, 17].
Because Schizochytrium sp. has advantages over fish oil, extensive studies have been conducted to promote DHA biosynthesis in Schizochytrium sp. [18] using mutagenesis screening, adaptive evolution, multi-omics technologies, and metabolic engineering [13]. Furthermore, multiple studies have focused on optimizing the fermentation process to improve DHA production and biomass, including improving the nutritional conditions [19] (carbon, nitrogen, and exogenous additives) and growth conditions [20, 21] (osmotic pressure, dissolved oxygen (DO), pH, and aeration). For use in animal production, a Schizochytrium strain should reach high biomass while efficiently accumulating lipids, preferably rich in DHA. Therefore, it is necessary to optimize the culture conditions to maximize biomass and DHA yield. Fu et al. [22] obtained DHA-rich Schizochytrium sp. S1 by mutagenesis and then optimized the fermentation to improve the DHA yield of Schizochytrium sp. S1 from 5.41 to 6.52 g/L. Zhao et al. [23] obtained a strain with high DHA content by atmospheric and room temperature plasma (ARTP) mutagenesis combined with malonic acid chemical screening, and then used an optimized culture strategy to increase DHA production 1.8-fold. Because efficient microbial DHA production depends on the growth period, the composition of the medium, and the mode of fermentation, each new strain of Schizochytrium sp. should be optimized for its own culture conditions. Accordingly, suitable fermentation conditions for DHA production by Schizochytrium sp. I-F-9 were investigated in this study. The influence of the fermentation medium on DHA production was examined using a single-factor experimental design in conjunction with a central composite experimental design.
This study determined the values of critical process parameters affecting DHA production in Schizochytrium sp., with sodium glutamate as the main stimulator. The fermentation process and experimental design are shown in Fig. 1. First, the best carbon and nitrogen sources for the fermentation of Schizochytrium sp. were selected from common carbon and nitrogen sources using a single-factor experiment. Under shake-flask fermentation conditions, 100 g/L glucose, sucrose, glycerol, or maltose as the carbon source and 10 g/L corn steep liquor, yeast extract, urea, peptone, or ammonium sulfate as the nitrogen source were used for fermentation to screen the best carbon and nitrogen sources (other conditions remained unchanged). Each trial was set up with three replicates.
Experimental design process
The most influential factors were then identified by a series of experiments using a Plackett–Burman design (Design-Expert version 11.0.0). The evaluated factors were as follows: glucose, peptone, sodium glutamate, KH2PO4, MgSO4·7H2O, and sea salt. Each independent variable was tested at a low (−) and a high (+) level. The low level of a variable was taken as the current fermentation condition, and the high level was 1.25 times the low level. Table 1 lists the analyzed factors, their values, and the corresponding levels. The 12 runs generated by the Design-Expert software are listed in Table 2. The effects of each factor (A–F), the significance value (P-value), and the F-value (F-test results) are presented in this study. When P < 0.05, the factor was considered a significant parameter influencing DHA production.
Table 1 Variables range of Plackett–Burman design
Table 2 Plackett–Burman design of the experiments
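For readers without access to Design-Expert, the 12-run layout in Table 2 can be reproduced with a short script. The sketch below is only illustrative: it uses the standard N = 12 Plackett–Burman generator row and maps coded levels onto the initial medium concentrations via the 1.25× rule described above, so the run order and exact matrix may differ from the Design-Expert output.

```python
import numpy as np

# Standard Plackett-Burman generator row for an N = 12 run design (Plackett & Burman, 1946).
GEN_12 = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

def pb12_design(n_factors):
    """Build a 12-run Plackett-Burman design for up to 11 two-level factors.

    Rows 1-11 are cyclic shifts of the generator row; row 12 is all -1.
    Returns a (12, n_factors) array of coded levels (-1 = low, +1 = high).
    """
    rows = [np.roll(GEN_12, shift) for shift in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows)[:, :n_factors]

# Six factors (A-F): glucose, peptone, sodium glutamate, KH2PO4, MgSO4.7H2O, sea salt.
# Low level = initial medium concentration (g/L); high level = 1.25 x low (per the text).
low = np.array([100.0, 5.6, 20.0, 2.5, 7.2, 15.0])
high = 1.25 * low

coded = pb12_design(6)
actual = np.where(coded > 0, high, low)  # translate coded levels into g/L concentrations
print(actual.round(2))
```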
After screening for the most significant factors influencing the DHA content, the central composite design (CCD) was used to determine the parameter values that gave the optimal DHA yield. The Design-Expert software was used to generate the list of experiments. The CCD involved five coded levels, − 2, − 1, 0, 1, and 2, and the trial design was established using the central and axial points (Table 3). Based on the results of the Plackett–Burman design, the non-significant factors were maintained at the low level in the CCD experiment. Six replicates of the central point were employed (Table 4). The results of the CCD experiments were fitted with a quadratic polynomial equation by multiple regression modeling using the Design-Expert software, and the optimal point was predicted.
Table 3 Variables range of CCD experiments
Table 4 CCD of the experiments
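Similarly, the coded CCD layout for the three factors retained from the Plackett–Burman screen (eight factorial points at ±1, six axial points at ±2, and six centre replicates) can be sketched as follows. This reproduces only the coded matrix, not the Design-Expert run order or the actual concentrations listed in Table 4.

```python
import itertools
import numpy as np

def ccd_coded(n_center=6, alpha=2.0):
    """Coded central composite design for three factors (A, B, C).

    8 factorial points (+/-1 on every axis), 6 axial points (+/- alpha on one axis),
    and n_center replicates of the centre point (0, 0, 0).
    """
    factorial = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
    axial = np.vstack([alpha * np.eye(3), -alpha * np.eye(3)])
    center = np.zeros((n_center, 3))
    return np.vstack([factorial, axial, center])

design = ccd_coded()
print(design.shape)  # (20, 3): 20 runs x 3 factors
```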
Microbial strain
The strain Schizochytrium sp. I-F-9 (referred to as I-F-9 henceforth) was previously obtained in our laboratory by ARTP mutagenesis of Schizochytrium sp. (ATCC 20888). Schizochytrium sp. (ATCC 20888) was purchased from the China Guangdong Microbial Culture Center and preserved in the Ruminant Nutrition Laboratory (Institute of Animal Science, Chinese Academy of Agricultural Sciences). Cell preservation and transfer followed the method of Zhao et al. (2017) [24].
Fermentation condition
The seed culture medium consisted of 30 g/L glucose, 10 g/L peptone, 5 g/L yeast extract, and 15 g/L sea salt. The initial fermentation medium consisted of 100 g/L glucose, 5.6 g/L peptone, 20 g/L sodium glutamate, 2.5 g/L KH2PO4, 7.2 g/L MgSO4, 12.8 g/L Na2SO4, 0.4 g/L CaCl2, and 15 g/L sea salt. The medium was autoclaved at 115 °C for 30 min. The vitamin solution contained 0.1 g/L VB1, 0.1 g/L VB6, and 0.01 g/L VB12; it was sterilized through a 0.22-μm filter and added to the medium. All chemicals were purchased from Solarbio (Beijing, China), except for sea salt, which was purchased from Jiangxi Haiding Technology Company Limited (Jiangxi, China). The stored cells were transferred into 50 mL of seed medium (in 250-mL flasks) and cultured for 48 h at 28 °C with 200 rpm stirring. After 48 h of seed culture, a 10% v/v inoculum was transferred into the initial fermentation medium and incubated for 120 h at 28 °C with 200 rpm agitation (in 250-mL shake flasks).
Assay of dry cell weight
I-F-9 growth was monitored using the dry cell weight (DCW). A 30-mL sample of fermentation broth was harvested every 24 h to test the DCW, total lipids, and DHA production. For the cell growth curves, all flasks were incubated under the same conditions for 24, 48, 72, 96, 120, 144, and 168 h, respectively. At each sampling time, three shake flasks were randomly selected for cell collection. The fermentation broth was transferred into a pre-weighed 50-mL tube and centrifuged at 8000 rpm for 15 min. The precipitate was then washed twice with double-distilled water and centrifuged again, and the dry weight of the cells was measured after 24 h of freeze-drying. The DCW was calculated as follows:
$$\text{DCW (g/L)} = \frac{\text{Freeze-dried cell weight (g)}}{\text{Fermentation broth volume (L)}}$$
Total lipid extraction
Total lipid extraction was performed following a modified version of the method of Zhao et al. [24], as follows: the cells were disrupted by incubating a mixture of 1 g of freeze-dried powder and 8 mL of hydrochloric acid (6 mol/L) in a water bath at 65 °C for 1 h. The total fatty acids were extracted with 10 mL of n-hexane. The extraction was repeated three times, and the n-hexane was evaporated with a rotary nitrogen blower to harvest the total lipids. The total lipid yield was calculated as follows:
$$\text{Total lipid yield (g/L)} = \frac{\text{Total lipid weight (g)}}{1\ \text{g freeze-dried powder}} \times \text{DCW (g/L)}$$
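The derived quantities used throughout the Results (DCW, total lipid yield, and volumetric DHA productivity) are simple ratios based on the formulas above. A minimal Python helper might look like the following; the example numbers are taken from the 120-h data reported below (a DHA yield of about 6.78 g/L, within the reported 6.73–6.84 g/L range).

```python
def dcw_g_per_l(freeze_dried_cell_weight_g, broth_volume_l):
    """Dry cell weight (g/L) from the freeze-dried pellet of a broth sample."""
    return freeze_dried_cell_weight_g / broth_volume_l

def total_lipid_yield_g_per_l(total_lipid_weight_g, powder_weight_g, dcw):
    """Total lipid yield (g/L): lipid per gram of freeze-dried powder scaled by DCW."""
    return (total_lipid_weight_g / powder_weight_g) * dcw

def dha_productivity_mg_per_l_h(dha_yield_g_per_l, fermentation_time_h):
    """Volumetric DHA productivity in mg/(L h)."""
    return dha_yield_g_per_l * 1000.0 / fermentation_time_h

# Example: ~6.78 g/L DHA at 120 h gives ~56.5 mg/(L h), matching the reported 56.51 +/- 2.05.
print(round(dha_productivity_mg_per_l_h(6.78, 120), 1))
```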
DHA yield and fatty acid analysis
Fatty acid methylation was performed according to a modified version of a previously described method [25], as follows: 80 µL of oil sample was added to a tube containing 1 mL of 1 M KOH–methanol. The tubes were heated in a water bath at 65 °C for 30 min. After the tubes had cooled to room temperature, 2 mL of BF3–methanol was added and the tubes were again held in a water bath at 65 °C for 30 min. When the tubes had cooled to room temperature, 1 mL of n-hexane was added to extract the fatty acid methyl esters (FAMEs). The tubes were vortexed for 1 min, and then 1 mL of saturated sodium chloride was added to remove moisture. The FAME samples were centrifuged at 3000 rpm for 2 min to separate the precipitate. Qualitative and quantitative analyses of the FAMEs followed our previous study [26] using the Agilent MassHunter Workstation Software (B.07.01, Agilent Technologies). The FAMEs were identified by comparing retention times with those of a methyl cis-4,7,10,13,16,19-DHA standard (CAS: 301-01-9, Solarbio, Beijing, China) and GLC NESTLE 37MIX (BYG8010, Solarbio, Beijing, China) (Additional file 1: Fig. S1). Standard curves for the DHA standard were then created based on five different methyl-DHA concentrations and the corresponding peak areas (Additional file 1: Fig. S2).
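Quantification therefore reduces to reading sample peak areas off a linear five-point standard curve. A minimal sketch of fitting and applying such a curve is given below; the concentrations and peak areas are placeholders, not the actual calibration data shown in Additional file 1.

```python
import numpy as np

# Hypothetical five-point calibration: methyl-DHA concentration (mg/mL) vs. GC peak area.
conc = np.array([0.1, 0.25, 0.5, 1.0, 2.0])            # placeholder standard concentrations
area = np.array([1.1e4, 2.7e4, 5.4e4, 1.08e5, 2.2e5])  # placeholder peak areas

slope, intercept = np.polyfit(area, conc, 1)  # fit concentration as a linear function of area
r2 = np.corrcoef(area, conc)[0, 1] ** 2

def dha_conc(sample_area):
    """Estimate methyl-DHA concentration of a sample from its GC peak area."""
    return slope * sample_area + intercept

print(f"R^2 = {r2:.4f}, sample estimate = {dha_conc(7.5e4):.3f} mg/mL")
```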
Effects of the different fermentation times and different carbon and nitrogen sources on fermentation
As shown in Fig. 2a, the DHA yield remained essentially constant at 6.73–6.84 g/L as the fermentation time increased from 120 to 168 h. Between 24 and 168 h of fermentation, the highest fermentation efficiency, 56.51 ± 2.05 mg (L h)−1, was achieved at 120 h. Therefore, 120 h was determined to be the optimum fermentation time for I-F-9.
Biomass, total lipid, and DHA yields of I-F-9 at different fermentation times and with different carbon and nitrogen sources. a Effects of different fermentation times on biomass, total lipid, and DHA production. b Fermentation medium containing 100 g/L glucose, sucrose, glycerol, or maltose as the carbon source and c fermentation medium containing 10 g/L corn steep liquor, yeast extract, urea, peptone, or ammonium sulfate as the nitrogen source
An overview of the biomass, total lipid, and DHA yield of I-F-9, incubated in 250-mL shake flasks containing 50 mL of fermentation broth, is shown in Fig. 2b, c. Different carbon and nitrogen sources greatly influenced the fermentation of I-F-9. In the fermentation broths with different carbon sources, the biomass of I-F-9 ranged between 6.64 and 34.83 g/L, and the DHA production ranged between 0.10 and 7.00 g/L. When supplied as the carbon source, glucose and glycerol supported significantly higher I-F-9 growth and DHA accumulation than the other treatments: the biomass was 34.83 g/L with glycerol as the carbon source, and DHA production was 7.00 g/L with glucose as the carbon source. In addition, peptone was the best of the different nitrogen sources, with biomass and DHA yields of 34.83 and 6.22 g/L, respectively.
Screening significant growth parameters according to the Plackett–Burman design
The designed conditions for the 12 runs and a data column presenting the DHA production of each run are shown in Table 5. DHA production of I-F-9 varied between 4.551 and 8.443 g/L, depending on the culture parameters. The data were analyzed with the Design-Expert software. Following the software's defaults, factors with P ≤ 0.05 were considered the most influential, while factors with P > 0.05 were considered less significant. Table 6 lists the significance levels of the parameters highly correlated with the DHA concentration (P < 0.05), among which glucose (B), sodium glutamate (C), and sea salt (F) were identified as the most significant parameters for further optimization.
Table 5 Results of the Plackett–Burman design
Table 6 Significance (P values) of model and each variable using the Plackett–Burman design on the DHA production
Optimization of medium components by CCD
The purpose of the CCD design was to identify the effect of different combinations of glucose, sodium glutamate and sea salt on the DHA production of I-F-9 cultured in a 250-mL shake flask. The results are displayed in Table 7. The DHA production of I-F-9 varied between 3.557 and 8.238 g/L depending on the incubation parameters (Table 7).
Table 7 Results of the central composite design
The predicted values of DHA production obtained from the regression equation, together with the experimental data, are given in Table 7. The experimental and predicted values of DHA yield were in good agreement. The corresponding analysis of variance (ANOVA) is given in Table 8. The regression model developed for DHA yield was significant (P = 0.0001), and the lack-of-fit test indicated that the quadratic equation was an appropriate model for the regression analysis (P = 0.2521 > 0.05) (Table 9). The regression coefficient R2 = 0.9598 indicated that the test results were plausible. The F test showed that the effects of B, BC, A2, B2, and C2 on DHA yield were significant, whereas the effects of A, C, AB, and AC were not (Table 8).
Table 8 Coefficients of the second-order polynomial model in 23 central composite design
Table 9 ANOVA for the central composite design (R2 = 0.9293; Adj R2 = 0.8656)
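The fit statistics in Tables 8 and 9 can be recomputed from the experimental and model-predicted yields in Table 7. A generic sketch is shown below, assuming the two columns of Table 7 are available as arrays; nine model terms (excluding the intercept) correspond to the full quadratic model in three factors.

```python
import numpy as np

def r_squared(y_obs, y_pred, n_params):
    """Return (R^2, adjusted R^2) for a fitted response surface model.

    n_params is the number of model terms excluding the intercept
    (9 for a full quadratic model in three factors).
    """
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_obs - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    n = len(y_obs)
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)
    return r2, adj_r2
```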
The DHA yields obtained from the series of CCD experiments were analyzed by regression using a quadratic polynomial equation. The two regression equations are expressed in terms of coded factors and actual factors, with the factors denoted by capital letters: A, glucose; B, sodium glutamate; and C, sea salt.
The final equation in terms of coded factors:
$$\begin{aligned} \text{DHA yield} &= 7.598 + 0.107139A - 0.897631B + 0.128103C - 0.116089AB \\ &\quad - 0.129967AC + 1.08156BC - 0.344184A^{2} - 0.888183B^{2} - 0.554962C^{2} \end{aligned}$$
The final equation in terms of actual factors:
$$\begin{aligned} \text{DHA yield} &= -63.5387 + 0.681357A + 2.56018B + 0.828327C - 0.00371486AB \\ &\quad - 0.00554526AC + 0.230732BC - 0.00220278A^{2} - 0.142109B^{2} - 0.157856C^{2} \end{aligned}$$
The regression analysis of the equations was performed utilizing the Design-Expert software (Design-Expert version 11.0.0) to obtain the model-optimized values of the medium components. Figure 3 shows the isoresponse contour lines of the medium components for optimized DHA production. The predicted optimal conditions were 118.71 g/L glucose, 20.00 g/L sodium glutamate, and 15.16 g/L sea salt.
Contour plots depicting the response surface of DHA yield correlated to the levels of the variables: a glucose and sodium glutamate (C = 0); b glucose and sea salt (B = 0); and c sodium glutamate and sea salt (A = 0)
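The predicted optimum can also be checked numerically from the actual-factor equation above. The sketch below maximizes the fitted quadratic over assumed factor ranges with SciPy and recovers a predicted yield of about 8.1 g/L near the reported optimum; the bounds are illustrative and are not necessarily the ranges used in Design-Expert.

```python
import numpy as np
from scipy.optimize import minimize

def dha_yield(x):
    """Fitted quadratic model in actual factors: A = glucose, B = sodium glutamate, C = sea salt (g/L)."""
    A, B, C = x
    return (-63.5387 + 0.681357 * A + 2.56018 * B + 0.828327 * C
            - 0.00371486 * A * B - 0.00554526 * A * C + 0.230732 * B * C
            - 0.00220278 * A ** 2 - 0.142109 * B ** 2 - 0.157856 * C ** 2)

# Hypothetical search ranges (g/L) for A, B, C; chosen to bracket the experimental region.
bounds = [(75.0, 125.0), (10.0, 30.0), (7.5, 22.5)]
res = minimize(lambda x: -dha_yield(x), x0=[100.0, 20.0, 15.0], bounds=bounds)
print(res.x.round(2), round(dha_yield(res.x), 2))  # optimum composition and predicted yield (g/L)
```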
Cultivation on an optimized medium
To evaluate the growth and DHA production of I-F-9 in the CCD-optimized medium, cells were cultured in 250-mL shake flasks containing 50 mL of the optimized medium and fermented for 120 h at 28 °C and 200 rpm. After 120 h of cultivation in the optimal medium containing 118.71 g/L glucose, 20.00 g/L sodium glutamate, and 15.16 g/L sea salt, the biomass, DHA concentration, and DHA productivity were 39.23 ± 0.56 g/L, 21.23 ± 0.038%, and 69.41 ± 0.43 mg/L/h, respectively. The DHA and lipid production after fermentation were 8.33 ± 0.074 g/L and 30.24 ± 2.66 g/L, which were 34.73% and 19.34% greater, respectively, than those prior to optimization (6.18 ± 0.09 g/L and 25.34 ± 2.11 g/L).
Several studies have confirmed the significant benefits of seafood with high contents of long-chain polyunsaturated fatty acids for human and animal health [27,28,29], in which DHA, a polyunsaturated fatty acid (PUFA), is considered a major factor [30]. At present, the main sources of DHA on the market are fish oils and microbial lipids [21]. Schizochytrium sp. is a promising producer of microbial DHA and has been granted Generally Recognized as Safe (GRAS) status by the United States Food and Drug Administration [31]. Variations in fermentation parameters have a significant impact on DHA production by Schizochytrium, including the carbon and nitrogen sources, DO management, the osmotic pressure of the medium, pH management, and temperature control. As such, DHA production can be greatly enhanced by optimizing the medium formula and the fermentation process. Optimization of the culture environment, such as intermittent oxygen treatment [32] and low-temperature incubation [33], can improve DHA production by Schizochytrium. In terms of pH regulation, Zhao et al. found that Schizochytrium grew best under neutral conditions, whereas DHA synthesis increased under acidic conditions; therefore, a two-stage pH control was developed that achieved a DHA yield of 11.44 g/L in Schizochytrium sp. AB-610 [24]. The ARTP-mutagenized Schizochytrium sp. I-F-9 used in this study had a higher oil content and DHA production than the original wild strain. To further increase DHA production, optimizing the nutritional conditions to obtain higher cell densities is an important issue.
The CCD is a response surface methodology widely used for the fermentation optimization of various products, including food, beverages, and pharmaceuticals [34]. It describes the effects of interactions between parameters in linear and quadratic models. Here, the optimization of I-F-9 for DHA production was conducted in three phases: a single-factor test to select the optimum carbon and nitrogen sources, followed by a Plackett–Burman design to screen the most influential variables, and finally response surface optimization.
The carbon and nitrogen sources in the medium have a significant effect on lipid synthesis in fungi [35]. This study showed that I-F-9 produced the highest DHA content with glucose and peptone as the carbon and nitrogen source, respectively, which is consistent with Bajpai's findings [36]. In contrast, sucrose, maltose, urea, yeast extract, and ammonium sulfate as carbon and nitrogen sources had limited effects on the cell growth of I-F-9, resulting in significantly lower DHA production than the other sources [37]. As a monosaccharide, glucose is an important carbon source in microbial fermentation [38]. Previous studies have shown that sugars can affect the growth and metabolism of microorganisms; for example, bilberry yeast grows better on glucose, whereas Komagataella grows better on fructose [39]. DHA production in the glucose medium was higher than that in the glycerol medium in this study. Acetyl-CoA is the main substrate for PUFA synthesis and can be generated from both glucose and glycerol. The stoichiometry of glucose metabolism is about 1.1 mol of acetyl-CoA per 100 g, whereas approximately 1.1 mol of acetyl-CoA is generated from 110 g of glycerol [40, 41]. Therefore, glucose can produce more acetyl-CoA per gram than glycerol, which might explain the results mentioned above. Peptones are a complex nitrogen source containing various nutrients, including proteins, peptides, and free amino acids, as well as lower levels of carbohydrates, lipids, minerals, vitamins, and growth factors. Besides improving cell biomass, peptones can promote overall cell development compared with other nitrogen sources [42, 43].
Glucose, sodium glutamate, and sea salt were found to have the greatest effects on DHA production by I-F-9 in the Plackett–Burman design, which is in agreement with the findings of Manikan et al. [44]. Bajpai et al. [45] confirmed that the glucose concentration does not affect the proportion of DHA in the lipids but can significantly affect cellular biomass and lipid content, and consequently DHA production. Ethier et al. [46] reported that carbon sources can affect the biomass of Schizochytrium sp. and may influence PUFA synthesis. Sodium glutamate, a simple organic nitrogen source, is thought to promote biomass and increase the lipid yield of thraustochytrids. Glutamate usually occurs in the ocean as its sodium salt (C5H8NNaO4) and is a major nonessential amino acid for marine organisms. Manikan et al. [44] reported that an optimum sodium glutamate concentration can lead to higher DHA yields in Aurantiochytrium, and as an important nutrient it is vital for any new strain aiming at significant levels of DHA production [47]. Sodium glutamate promotes high DHA production possibly because it regulates the activities of acetyl-CoA carboxylase [48] and glucose-6-phosphate dehydrogenase [49], both of which supply substrates for fatty acid synthesis (acetyl coenzyme A and NADPH). Besides the carbon and nitrogen sources, trace minerals can also affect the growth and lipid production of Schizochytrium [50]. Here, sea salt, which contains a variety of essential minerals, was used as the main trace element supplement for I-F-9.
Response surface design has been widely used to optimize many fermentation process parameters, including medium composition [34]. Most of the variation in response surface optimization can be explained by the regression equation [51]. In this study, the ANOVA of the DHA yield for I-F-9 showed an F value of 14.59, indicating that the parameters in the model had a significant impact on the response. The model P value of 0.0001 demonstrates that the regression equation was statistically highly significant at the 95% confidence level. Moreover, the lack-of-fit F value (1.88) means that the lack of fit was non-significant relative to the pure error. The R2 and Adj R2 for DHA yield were 0.9293 and 0.8656, respectively. Adequate precision estimates the signal-to-noise ratio, and a ratio higher than 4 is desirable [52]; the ratio of 11.958 indicated an adequate signal.
To date, many studies have attempted to elevate the DHA yield of Schizochytrium sp. using various fermentation models and strategies. Table 10 compares the DHA yields of various thraustochytrid strains grown on glucose or glycerol as the main carbon source with that of I-F-9. Although higher DHA yields were obtained in other studies, those results were obtained at bioreactor scale [53, 54] or using two-stage control [24]. Manikan et al. [44] screened the optimum medium components through response surface methodology in shake flasks and then applied them in a 5-L bioreactor, showing that the biomass, total fatty acid, and DHA production of Aurantiochytrium sp. SW1 were 17.8 g/L, 9.6 g/L, and 4.23 g/L in the shake flask and 24.46 g/L, 9.4 g/L, and 4.5 g/L in the bioreactor, respectively. Huang et al. [32] first screened the glycerol concentration of the medium in shake flasks and then used a 5-L bioreactor with intermittent oxygen control (maintaining a 50% DO level), which finally increased DHA production from 1.4 to 20.3 g/L. Accordingly, it is speculated that the DHA yield and content of I-F-9 could be further improved by high-cell-density cultivation in a bioreactor. To enhance the DHA yield of Schizochytrium sp., Fu et al. [22] used low-energy ion mutagenesis combined with a staining-based selection method and fermentation optimization, achieving a DHA yield and content of 6.52 g/L and 11.78%, respectively. The increase in DHA production in this study may be attributed to the increased glucose concentration in the medium. Yokochi et al. [10] demonstrated that there might be an optimum glucose concentration that promotes the growth of Schizochytrium. Hong et al. [54] carried out a one-factor design for glucose optimization, finally elevating the DHA yield to 2.8 g/L with a DHA productivity of 38.9 mg/L/h. In the present study, the new strain I-F-9, obtained by ARTP mutagenesis in our laboratory, was optimized with a more sophisticated CCD experimental design for the fermentation medium. The DHA yield of strain I-F-9 was 8.33 ± 0.074 g/L with a DHA productivity of 69.41 ± 0.43 mg (L h)−1, which was higher than that under our original culture conditions (an improvement of 34.73%) and higher than most previous reports.
Table 10 Summary of DHA production of various thraustochytrids compared with I-F-9
Schizochytrium powder has been used as a feed supplement in animal husbandry for decades [55]. Feeding lactating cows with Schizochytrium powder can improve milk quality in the dairy industry [56]. In goats, it can reduce methane production [57], and in lambs [58] and heifers [59] it can increase n-3 PUFA in the muscle. The DHA concentrations in previous studies and in our study are summarized in Table 10. Xu et al. [60] reported that the DHA content is over 10% of the dry matter in Aurantiochytrium sp. (Schizochytrium sp.), compared with about 24% in fish oil. In this study, the optimized DHA concentration was 21.23%, which is higher than in most similar shake-flask studies.
In conclusion, the CCD method was applied to optimize the DHA yield using a second-order response surface model fitted to the experimental data. The model predicted a DHA production of 8.10 g/L. The optimized DHA yield of I-F-9 was 8.33 ± 0.074 g/L in the shake flask, which is close to the predicted value. High DHA yields of Schizochytrium sp. can be obtained by the present method, which is potentially applicable to future production. The present study provides a fundamental basis for potentially using Schizochytrium sp. as a direct-fed microbial in the animal and food industries.
The data that support the findings of this study are available from the corresponding author on reasonable request.
Whelan J, Rust C. Innovative dietary sources of n-3 fatty acids. Annu Rev Nutr. 2006;26:75–103. https://doi.org/10.1146/annurev.nutr.25.050304.092605.
Zhang MJ, Spite M. Resolvins: anti-inflammatory and proresolving mediators derived from omega-3 polyunsaturated fatty acids. Annu Rev Nutr. 2012;32:203–27. https://doi.org/10.1146/annurev-nutr-071811-150726.
Zhang TT, et al. Health benefits of dietary marine DHA/EPA-enriched glycerophospholipids. Prog Lipid Res. 2019;75: 100997. https://doi.org/10.1016/j.plipres.2019.100997.
Li J, et al. Health benefits of docosahexaenoic acid and its bioavailability: a review. Food Sci Nutr. 2021;9(9):5229–43. https://doi.org/10.1002/fsn3.2299.
Salem N Jr, Eggersdorfer M. Is the world supply of omega-3 fatty acids adequate for optimal human nutrition? Curr Opin Clin Nutr Metab Care. 2015;18(2):147–54.
Castejon N, Senorans FJ. Enzymatic modification to produce health-promoting lipids from fish oil, algae and other new omega-3 sources: a review. N Biotechnol. 2020;57:45–54. https://doi.org/10.1016/j.nbt.2020.02.006.
Falk MC, et al. Developmental and reproductive toxicological evaluation of arachidonic acid (ARA)-Rich oil and docosahexaenoic acid (DHA)-Rich oil. Food Chem Toxicol. 2017;103:270–8. https://doi.org/10.1016/j.fct.2017.03.011.
Wijendran V, Hayes KC. Dietary n-6 and n-3 fatty acid balance and cardiovascular health. Annu Rev Nutr. 2004;24:597–615. https://doi.org/10.1146/annurev.nutr.24.012003.132106.
Russo GL, et al. Sustainable production of food grade omega-3 oil using aquatic protists: reliability and future horizons. N Biotechnol. 2021;62:32–9.
Yokochi T, Honda D, Higashihara T, Nakahara T. Optimization of docosahexaenoic acid production by Schizochytrium limacinum SR21. Appl Microbiol Biotechnol. 1998;49(1):72–6. https://doi.org/10.1007/s002530051139.
Darley WM, Porter D, Fuller MS. Cell wall composition and synthesis via Golgi-directed scale formation in the marine eucaryote, Schizochytrium aggregatum, with a note on Thraustochytrium sp. Arch Mikrobiol. 1973;90(2):89–106. https://doi.org/10.1007/BF00414512.
Heo S-W, et al. Application of Jerusalem artichoke and lipid-extracted algae hydrolysate for docosahexaenoic acid production by Aurantiochytrium sp. KRS101. J Appl Phycol. 2020;32(6):3655–66. https://doi.org/10.1007/s10811-020-02207-z.
Chi G, et al. Production of polyunsaturated fatty acids by Schizochytrium (Aurantiochytrium) spp. Biotechnol Adv. 2022;55: 107897. https://doi.org/10.1016/j.biotechadv.2021.107897.
Lewis KD, et al. Toxicological evaluation of arachidonic acid (ARA)-rich oil and docosahexaenoic acid (DHA)-rich oil. Food Chem Toxicol. 2016;96:133–44. https://doi.org/10.1016/j.fct.2016.07.026.
Erkkila AT, et al. Higher plasma docosahexaenoic acid is associated with reduced progression of coronary atherosclerosis in women with CAD. J Lipid Res. 2006;47(12):2814–9. https://doi.org/10.1194/jlr.P600005-JLR200.
Aasen IM, et al. Thraustochytrids as production organisms for docosahexaenoic acid (DHA), squalene, and carotenoids. Appl Microbiol Biotechnol. 2016;100(10):4309–21. https://doi.org/10.1007/s00253-016-7498-4.
Du F, et al. Biotechnological production of lipid and terpenoid from thraustochytrids. Biotechnol Adv. 2021;48: 107725. https://doi.org/10.1016/j.biotechadv.2021.107725.
Valentine REAM. Single-cell oils as a source of omega-3 fatty acids: an overview of recent advances. J Am Oil Chem Soc. 2013;90:167–82.
Sukenik A, Wahnon R. Biochemical quality of marine unicellular algae with special emphasis on lipid composition. I. Isochrysis galbana. Aquaculture. 1991;97(1):61–72. https://doi.org/10.1016/0044-8486(91)90279-G.
Molina Grima E, et al. EPA from Isochrysis galbana. Growth conditions and productivity. Process Biochem. 1992;27(5):299–305. https://doi.org/10.1016/0032-9592(92)85015-T.
Nazir Y, et al. Optimization of culture conditions for enhanced growth, lipid and docosahexaenoic acid (DHA) production of Aurantiochytrium SW1 by response surface methodology. Sci Rep. 2018;8(1):8909. https://doi.org/10.1038/s41598-018-27309-0.
Fu J, et al. Enhancement of docosahexaenoic acid production by low-energy ion implantation coupled with screening method based on Sudan black B staining in Schizochytrium sp. Bioresour Technol. 2016;221:405–11. https://doi.org/10.1016/j.biortech.2016.09.058.
Zhao B, et al. Enhancement of Schizochytrium DHA synthesis by plasma mutagenesis aided with malonic acid and zeocin screening. Appl Microbiol Biotechnol. 2018;102(5):2351–61. https://doi.org/10.1007/s00253-018-8756-4.
Zhao B, et al. Improvement of docosahexaenoic acid fermentation from Schizochytrium sp. AB-610 by staged pH control based on cell morphological changes. Eng Life Sci. 2017;17(9):981–8. https://doi.org/10.1002/elsc.201600249.
Ren LJ, et al. Enhanced docosahexaenoic acid production by reinforcing acetyl-CoA and NADPH supply in Schizochytrium sp. HX-308. Bioprocess Biosyst Eng. 2009;32(6):837–43. https://doi.org/10.1007/s00449-009-0310-4.
Sun LL, et al. Odd- and branched-chain fatty acids in milk fat from Holstein dairy cows are influenced by physiological factors. Animal. 2022;16(6): 100545. https://doi.org/10.1016/j.animal.2022.100545.
Bos DJ, et al. Effects of omega-3 polyunsaturated fatty acids on human brain morphology and function: What is the evidence? Eur Neuropsychopharmacol. 2016;26(3):546–61. https://doi.org/10.1016/j.euroneuro.2015.12.031.
Mallick R, Basak S, Duttaroy AK. Docosahexaenoic acid,22:6n–3: its roles in the structure and function of the brain. Int J Dev Neurosci. 2019;79:21–31. https://doi.org/10.1016/j.ijdevneu.2019.10.004.
Yu X, et al. Effects of the application of general anesthesia with propofol during the early stage of pregnancy on brain development and function of SD rat offspring and the intervention of DHA. Neurol Res. 2019;41(11):1008–14. https://doi.org/10.1080/01616412.2019.1672381.
Swanson D, Block R, Mousa SA. Omega-3 fatty acids EPA and DHA: health benefits throughout life. Adv Nutr. 2012;3(1):1–7. https://doi.org/10.3945/an.111.000893.
Ratledge C. Omega-3 biotechnology: errors and omissions. Biotechnol Adv. 2012;30(6):1746–7.
Huang TY, Lu WC, Chu IM. A fermentation strategy for producing docosahexaenoic acid in Aurantiochytrium limacinum SR21 and increasing C22:6 proportions in total fatty acid. Biores Technol. 2012;123:8–14. https://doi.org/10.1016/j.biortech.2012.07.068.
Hu F, et al. Low-temperature effects on docosahexaenoic acid biosynthesis in Schizochytrium sp. TIO01 and its proposed underlying mechanism. Biotechnol Biofuels. 2020;13:172. https://doi.org/10.1186/s13068-020-01811-y.
Pal D, et al. Optimization of medium composition to increase the expression of recombinant human interferon-beta using the Plackett–Burman and central composite design in E. coli SE1. 3 Biotech. 2021;11(5):226. https://doi.org/10.1007/s13205-021-02772-1.
Holdsworth JE, Ratledge C. Lipid turnover in oleaginous yeasts. Microbiology. 1988;134(2):339–46. https://doi.org/10.1099/00221287-134-2-339.
Bajpai P, Bajpai PK, Ward OP. Production of docosahexaenoic acid by Thraustochytrium aureum. Appl Microbiol Biotechnol. 1991;35(6):706–10.
Li ZY, Ward OP. Production of docosahexaenoic acid by Thraustochytrium roseum. J Ind Microbiol. 1994;13(4):238–41. https://doi.org/10.1007/BF01569755.
Wang Z, et al. Sugar profile regulates the microbial metabolic diversity in Chinese Baijiu fermentation. Int J Food Microbiol. 2021;359: 109426. https://doi.org/10.1007/s00253-018-8756-4.
Liu C, et al. Raw material regulates flavor formation via driving microbiota in Chinese liquor fermentation. Front Microbiol. 2019;10:1520. https://doi.org/10.3389/fmicb.2019.01520.
Fakas S, et al. Evaluating renewable carbon sources as substrates for single cell oil production by Cunninghamella echinulata and Mortierella isabellina. Biomass Bioenerg. 2009;33(4):573–80. https://doi.org/10.1016/j.biombioe.2008.09.006.
Polbrat T, Konkol D, Korczynski M. Optimization of docosahexaenoic acid production by Schizochytrium SP.—a review. Biocatal Agric Biotechnol. 2021;35:66. https://doi.org/10.1016/j.bcab.2021.102076.
Kujawska N, et al. Optimizing docosahexaenoic acid (DHA) production by Schizochytrium sp. grown on waste glycerol. Energies. 2021;14(6):1685. https://doi.org/10.3390/en14061685.
Orak T, et al. Chicken feather peptone: a new alternative nitrogen source for pigment production by Monascus purpureus. J Biotechnol. 2018;271:56–62. https://doi.org/10.1016/j.jbiotec.2018.02.010.
Manikan V, Kalil MS, Hamid AA. Response surface optimization of culture medium for enhanced docosahexaenoic acid production by a Malaysian thraustochytrid. Sci Rep. 2015;5:8611. https://doi.org/10.1038/srep08611.
Bajpai P, Bajpai P, Ward O. Optimization of production of docosahexaenoic acid (DHA) byThraustochytrium aureum ATCC 34304. J Am Oil Chem Soc. 1991;68(7):509–14.
Ethier S, et al. Continuous culture of the microalgae Schizochytrium limacinum on biodiesel-derived crude glycerol for producing docosahexaenoic acid. Bioresour Technol. 2011;102(1):88–93. https://doi.org/10.1016/j.biortech.2010.05.021.
Shene C, et al. Microbial oils and fatty acids: effect of carbon source on docosahexaenoic acid (C22: 6 n-3, DHA) production by thraustochytrid strains. J Soil Sci Plant Nutr. 2010;10(3):207–16. https://doi.org/10.4067/S0718-95162010000100002.
Kowluru A, et al. Activation of acetyl-CoA carboxylase by a glutamate- and magnesium-sensitive protein phosphatase in the islet beta-cell. Diabetes. 2001;50(7):1580–7. https://doi.org/10.2337/diabetes.50.7.1580.
Lan WZ, Qin WM, Yu LJ. Effect of glutamate on arachidonic acid production from Mortierella alpina. Lett Appl Microbiol. 2002;35(4):357–60. https://doi.org/10.1046/j.1472-765x.2002.01195.x.
Nagano N, et al. Effect of trace elements on growth of marine eukaryotes, tharaustochytrids. J Biosci Bioeng. 2013;116(3):337–9. https://doi.org/10.1016/j.jbiosc.2013.03.017.
Wu K, et al. Application of the response surface methodology to optimize the fermentation parameters for enhanced docosahexaenoic acid (DHA) production by Thraustochytrium sp. ATCC 26185. Molecules. 2018;23(4):66. https://doi.org/10.3390/molecules23040974.
Muthukumar M, Mohan D, Rajendran M. Optimization of mix proportions of mineral aggregates using Box Behnken design of experiments. Cem Concr Compos. 2003;25(7):751–8.
Chang G, et al. Fatty acid shifts and metabolic activity changes of Schizochytrium sp. S31 cultured on glycerol. Bioresour Technol. 2013;142:255–60. https://doi.org/10.1016/j.biortech.2013.05.030.
Hong WK, et al. Production of lipids containing high levels of docosahexaenoic acid by a newly isolated microalga, Aurantiochytrium sp. KRS101. Appl Biochem Biotechnol. 2011;164(8):1468–80. https://doi.org/10.1007/s12010-011-9227-x.
Amorim ML, et al. Microalgae proteins: production, separation, isolation, quantification, and application in food and feed. Crit Rev Food Sci Nutr. 2021;61(12):1976–2002. https://doi.org/10.1080/10408398.2020.1768046.
Marques JA, et al. Increasing dietary levels of docosahexaenoic acid-rich microalgae: ruminal fermentation, animal performance, and milk fatty acid profile of mid-lactating dairy cows. J Dairy Sci. 2019;102(6):5054–65. https://doi.org/10.3168/jds.2018-16017.
Mavrommatis A, et al. Alterations in the rumen particle-associated microbiota of goats in response to dietary supplementation levels of Schizochytrium spp. Sustainability. 2021;13(2):66. https://doi.org/10.3390/su13020607.
Diaz MT, et al. Feeding microalgae increases omega 3 fatty acids of fat deposits and muscles in light lambs. J Food Compos Anal. 2017;56:115–23. https://doi.org/10.1016/j.jfca.2016.12.009.
Rodriguez-Herrera M, et al. Feeding microalgae at a high level to finishing heifers increases the long-chain n-3 fatty acid composition of beef with only small effects on the sensory quality. Int J Food Sci Technol. 2018;53(6):1405–13. https://doi.org/10.1111/ijfs.13718.
Xu XD, et al. The strategies to reduce cost and improve productivity in DHA production by Aurantiochytrium sp.: From biochemical to genetic respects. Appl Microbiol Biotechnol. 2020;104(22):9433–47. https://doi.org/10.1007/s00253-020-10927-y.
Chen W, et al. Improvement in the docosahexaenoic acid production of Schizochytrium sp. S056 by replacement of sea salt. Bioprocess Biosyst Eng. 2016;39(2):315–21. https://doi.org/10.1007/s00449-015-1517-1.
Lin Y, et al. Optimization of enzymatic cell disruption for improving lipid extraction from Schizochytrium sp. through response surface methodology. J Oleo Sci. 2018;67(2):215–24. https://doi.org/10.5650/jos.ess17166.
We thank project participants Zhiguo Guo, Zitai Guo, Ran Yi, and Xiaowei Duan for discussions of the experimental design and results. We acknowledge the Institute of Animal Sciences, Chinese Academy of Agricultural Sciences, for providing the research support needed for this work. We also thank International Science Editing (http://www.internationalscienceediting.com) for editing this manuscript.
This research was partially supported by the National Key Research and Development Program of China (2018YFE0101400), the Key Research and Development Program of the Ningxia Hui Autonomous Region (2021BEF02018), the Scientific Research Project for Major Achievements of the Agricultural Science and Technology Innovation Program (ASTIP) (No. ASTIP-IAS07-1), and the Beijing Dairy Industry Innovation Team (BAIC06-2022).
State Key Laboratory of Animal Nutrition, Institute of Animal Sciences, Chinese Academy of Agricultural Sciences, No. 2 Yuanmingyuan West Road, Beijing, 100193, China
Jun Ding, Zilin Fu, Yingkun Zhu, Junhao He, Lu Ma & Dengpan Bu
Jun Ding
Zilin Fu
Yingkun Zhu
Junhao He
Lu Ma
Dengpan Bu
JD collected the data, contributed to the conception and design of the study, and drafted and edited the manuscript. ZF, YZ, JH, LM, and DB contributed to the analysis and critically revised the work. All authors read and approved the manuscript.
Correspondence to Lu Ma or Dengpan Bu.
All authors consent for publication.
The authors declare no conflicts or competing interests.
Additional file 1. Chromatogram of the DHA methyl ester standard and standard curves of the DHA methyl ester standard.
Ding, J., Fu, Z., Zhu, Y. et al. Enhancing docosahexaenoic acid production of Schizochytrium sp. by optimizing fermentation using central composite design. BMC Biotechnol 22, 39 (2022). https://doi.org/10.1186/s12896-022-00769-z
Center composite design
Plackett–Burman design
Schizochytrium sp. | CommonCrawl |