Spectral multiplicities for ergodic flows

Alexandre I. Danilenko and Mariusz Lemańczyk

Institute for Low Temperature Physics & Engineering, National Academy of Sciences of Ukraine, 47 Lenin Ave., Kharkov, 61164, Ukraine
Faculty of Mathematics and Computer Science, N. Copernicus University, ul. Chopina 12/18, 87-100 Toruń

Received August 2010; Revised February 2011; Published March 2013

Let $E$ be a subset of positive integers such that $E\cap\{1,2\}\ne\emptyset$. A weakly mixing finite measure preserving flow $T=(T_t)_{t\in\Bbb R}$ is constructed such that the set of spectral multiplicities (of the corresponding Koopman unitary representation generated by $T$) is $E$. Moreover, for each non-zero $t\in\Bbb R$, the set of spectral multiplicities of the transformation $T_t$ is also $E$. These results are partly extended to actions of some other locally compact second countable Abelian groups.

Keywords: Ergodic flow, spectral multiplicities.

Mathematics Subject Classification: Primary: 37A10, 37A3.

Citation: Alexandre I. Danilenko, Mariusz Lemańczyk. Spectral multiplicities for ergodic flows. Discrete & Continuous Dynamical Systems - A, 2013, 33 (9) : 4271-4289. doi: 10.3934/dcds.2013.33.4271
Non-topological solutions in a generalized Chern-Simons model on torus

Youngae Lee

National Institute for Mathematical Sciences, Academic exchanges, KT Daeduk 2 Research Center, 70 Yuseong-daero 1689 beon-gil, Yuseong-gu, Daejeon, 34047, Republic of Korea

* Corresponding author

Received August 2016; Revised February 2017; Published April 2017

We consider a quasi-linear elliptic equation with Dirac source terms arising in a generalized self-dual Chern-Simons-Higgs gauge theory. In this paper, we study doubly periodic vortices with an arbitrary vortex configuration. First, we show that under the doubly periodic condition there are only two types of solutions, topological and non-topological, as the coupling parameter goes to zero. Moreover, we succeed in constructing non-topological solutions with $k$ bubbles, where $k\in\mathbb{N}$ is any given number. To find a solution, we analyze the structure of the quasi-linear elliptic equation carefully and apply the method developed in the recent work [16].

Keywords: Generalized self-dual Chern-Simons model, doubly periodic vortices, bubbling non-topological solution.

Mathematics Subject Classification: Primary: 35J47, 35J15; Secondary: 58J37.

Citation: Youngae Lee. Non-topological solutions in a generalized Chern-Simons model on torus. Communications on Pure & Applied Analysis, 2017, 16 (4) : 1315-1330. doi: 10.3934/cpaa.2017064

References:

[1] J. Burzlaff, A. Chakrabarti and D. H. Tchrakian, Generalized self-dual Chern-Simons vortices, Phys. Lett. B, 293 (1992), 127-131. doi: 10.1016/0370-2693(92)91490-Z.
[2] L. A. Caffarelli and Y. S. Yang, Vortex condensation in the Chern-Simons Higgs model: an existence theorem, Comm. Math. Phys., 168 (1995), 321-336.
[3] D. Chae and O. Y. Imanuvilov, The existence of non-topological multivortex solutions in the relativistic self-dual Chern-Simons theory, Comm. Math. Phys., 215 (2000), 119-142. doi: 10.1007/s002200000302.
[4] D. Chae and O. Y. Imanuvilov, Non-topological solutions in the generalized self-dual Chern-Simons-Higgs theory, Calc. Var. Partial Differential Equations, 16 (2003), 47-61. doi: 10.1007/s005260100141.
[5] H. Chan, C. C. Fu and C. S. Lin, Non-topological multivortex solutions to the self-dual Chern-Simons-Higgs equation, Comm. Math. Phys., 231 (2002), 189-221. doi: 10.1007/s00220-002-0691-6.
[6] K. Choe, Uniqueness of the topological multivortex solution in the self-dual Chern-Simons theory, J. Math. Phys., 46 (2005), 012305, 22 pp. doi: 10.1063/1.1834694.
[7] K. Choe, Asymptotic behavior of condensate solutions in the Chern-Simons-Higgs theory, J. Math. Phys., 48 (2007), 103501, 17 pp. doi: 10.1063/1.2785821.
[8] K. Choe and N. Kim, Blow-up solutions of the self-dual Chern-Simons-Higgs vortex equation, Ann. Inst. H. Poincaré Anal. Non Linéaire, 25 (2008), 313-338. doi: 10.1016/j.anihpc.2006.11.012.
[9] W. Ding, J. Jost, J. Li, X. Peng and G. Wang, Self duality equations for Ginzburg-Landau and Seiberg-Witten type functionals with 6th order potentials, Comm. Math. Phys., 217 (2001), 383-407. doi: 10.1007/s002200100377.
[10] D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, 2nd edition, Springer, Berlin, 1983. doi: 10.1007/978-3-642-61798-0.
[11] X. Han, Existence of doubly periodic vortices in a generalized Chern-Simons model, Nonlinear Anal. Real World Appl., 16 (2014), 90-102. doi: 10.1016/j.nonrwa.2013.09.009.
[12] J. Hong, Y. Kim and P. Y. Pac, Multi-vortex solutions of the Abelian Chern-Simons-Higgs theory, Phys. Rev. Lett., 64 (1990), 2230-2233. doi: 10.1103/PhysRevLett.64.2230.
[13] R. Jackiw and E. J. Weinberg, Self-dual Chern-Simons vortices, Phys. Rev. Lett., 64 (1990), 2234-2237. doi: 10.1103/PhysRevLett.64.2234.
[14] A. Jaffe and C. Taubes, Vortices and Monopoles, Birkhäuser, Boston, 1980.
[15] C. S. Lin and S. Yan, Bubbling solutions for relativistic abelian Chern-Simons model on a torus, Comm. Math. Phys., 297 (2010), 733-758. doi: 10.1007/s00220-010-1056-1.
[16] C. S. Lin and S. Yan, Existence of bubbling solutions for Chern-Simons model on a torus, Arch. Ration. Mech. Anal., 207 (2013), 353-392. doi: 10.1007/s00205-012-0575-7.
[17] M. Nolasco and G. Tarantello, On a sharp Sobolev-type inequality on two dimensional compact manifolds, Arch. Ration. Mech. Anal., 145 (1998), 161-195. doi: 10.1007/s002050050127.
[18] M. Nolasco and G. Tarantello, Double vortex condensates in the Chern-Simons-Higgs theory, Calc. Var. Partial Differential Equations, 9 (1999), 31-94. doi: 10.1007/s005260050132.
[19] J. Spruck and Y. Yang, Topological solutions in the self-dual Chern-Simons theory: existence and approximation, Ann. Inst. H. Poincaré Anal. Non Linéaire, 12 (1995), 75-97.
[20] G. 't Hooft, A property of electric and magnetic flux in nonabelian gauge theories, Nucl. Phys. B, 153 (1979), 141-160. doi: 10.1016/0550-3213(79)90465-6.
[21] G. Tarantello, Multiple condensate solutions for the Chern-Simons-Higgs theory, J. Math. Phys., 37 (1996), 3769-3796. doi: 10.1063/1.531601.
[22] G. Tarantello, Selfdual Gauge Field Vortices: An Analytical Approach, Progress in Nonlinear Differential Equations and their Applications, Birkhäuser Boston, Inc., Boston, 2008. doi: 10.1007/978-0-8176-4608-0.
[23] D. H. Tchrakian and Y. Yang, The existence of generalised self-dual Chern-Simons vortices, Lett. Math. Phys., 36 (1996), 403-413. doi: 10.1007/BF00714405.
[24] Y. Yang, Chern-Simons solitons and a nonlinear elliptic equation, Helv. Phys. Acta, 71 (1998), 573-585.
[25] Y. Yang, Solitons in Field Theory and Nonlinear Analysis, Springer Monographs in Mathematics, Springer-Verlag, New York, 2001. doi: 10.1007/978-1-4757-6548-9.
Jacobian Linearization in MATLAB

A linearized form of a nonlinear system is obtained from its Jacobian, together with a suitable cost function; the underlying model of the double inverted pendulum on a cart (DIPC) is built from the Euler-Lagrange equation, with the Lagrangian defined as the difference between kinetic and potential energy. A quadcopter's nonlinear flight mechanics can likewise be implemented in MATLAB/Simulink. The following briefly reviews key features of the linearization technique for differential equations. The Jacobian of a function with respect to a scalar is the first derivative of that function; for a vector-valued function of several variables, it is the matrix of all first-order partial derivatives. Partial feedback linearization is a related technique. In the output of MATLAB's fsolve, fjac is the Jacobian (or, in the single-variable case, the derivative). Note that some systems cannot use Jacobian-based linearization at all. Linearization, in short, uses a linear system to approximate the behavior of a nonlinear system near a point.
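To make the Jacobian definition concrete, here is a minimal sketch of approximating it by forward differences (written in Python/NumPy since the idea is language-independent; the helper name numerical_jacobian is ours, not from any toolbox):

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-7):
    """Approximate the Jacobian of f at x by forward differences."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x), dtype=float)
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps            # perturb one variable at a time
        J[:, j] = (np.asarray(f(xp)) - f0) / eps
    return J

# f(x, y) = [x^2 * y, 5x + sin(y)]; the exact Jacobian at (1, 0) is [[0, 1], [5, 1]]
f = lambda v: np.array([v[0]**2 * v[1], 5*v[0] + np.sin(v[1])])
J = numerical_jacobian(f, [1.0, 0.0])
print(np.round(J, 4))
```

The same matrix is what MATLAB's linearization tools assemble internally, column by column, when an analytic Jacobian is unavailable.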
This post shows one way to linearize a nonlinear state equation at a steady-state setpoint in MATLAB. Kalman and particle filters, linearization functions, and motion models: the Sensor Fusion and Tracking Toolbox provides estimation filters that are optimized for specific scenarios, such as linear or nonlinear motion models, linear or nonlinear measurement models, or incomplete observability. Currently, the "modern" approach to SLAM is to represent the robot's trajectory as a graph: its poses become nodes, and measurements between those poses become edges. We can represent the transfer functions derived above for the inverted pendulum system within MATLAB using the following commands. Computing the Jacobian at each iteration may be cumbersome, and the point of quasi-Newton methods is to avoid that computation. A related exercise: write a MATLAB function that returns the equilibrium current, voltage, and state-space matrices as a function of r. Poles in the right half of the complex plane cause unstable behavior. Antiwindup schemes can be designed for controlling systems with hard limits on input magnitudes.
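The quasi-Newton idea mentioned above can be sketched with Broyden's method (a hedged illustration; the broyden helper and the example system are our choices): the Jacobian is formed once at the initial guess and then updated by a rank-one formula instead of being recomputed every iteration.

```python
import numpy as np

def F(x):
    # Example system: x0^2 + x1^2 - 4 = 0 and x0 - x1 = 0; root at (sqrt(2), sqrt(2)).
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def broyden(F, x0, J0, tol=1e-10, maxit=50):
    """Broyden's method: one initial Jacobian J0, then rank-one updates."""
    x, J = np.asarray(x0, float), np.asarray(J0, float)
    for _ in range(maxit):
        dx = np.linalg.solve(J, -F(x))            # linearized step
        x_new = x + dx
        dF = F(x_new) - F(x)
        J = J + np.outer(dF - J @ dx, dx) / (dx @ dx)  # Broyden update, no re-evaluation
        x = x_new
        if np.linalg.norm(F(x)) < tol:
            break
    return x

J0 = np.array([[2.0, 2.0], [1.0, -1.0]])  # exact Jacobian at the initial guess (1, 1)
root = broyden(F, [1.0, 1.0], J0)
print(np.round(root, 6))                  # both components close to sqrt(2)
```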
Feedback linearization is an exact linearization process that computes state and feedback transformations to linearize a nonlinear system, allowing nonlinear controllers to be designed using linear techniques. For the pendulum, we see that with a change in initial data as small as 10^-12 radians, the change in behavior is enormous: the pendulum spins in the opposite direction. A Simulink detail: global variables defined in the model Configuration Parameters Simulation Target > Custom Code pane and referenced by a System object are not shared with Stateflow and the MATLAB Function block. It is instructive to compare the Jacobian from the linearization to the phase plane of the system. We are aware that this treatment of feedback linearization control of a single-link manipulator with joint elasticity is far from complete, but it provides a solid basis for studying and applying advanced feedback linearization control. Figure 1: block diagram of a speed control system for an automobile (reference speed vr, throttle and actuator, engine force F, disturbance force Fd, vehicle speed v, controller). However, as we mentioned, MATLAB would just not terminate for the feedback linearization controller.
As mentioned, there are two exceptions to the rule that the phase portrait near an equilibrium point can be classified by the linearization at that equilibrium point. Linearization can be applied only if the Jacobian matrix exists. This series also covers some of the snags that can be avoided when linearizing nonlinear models in MATLAB and Simulink once you have a more practical understanding of how linearization is accomplished within them. Typical exercises in solving nonlinear ODE and PDE problems: compute the Jacobian of a 2x2 system; linearize a 1D problem with a nonlinear coefficient. Even when very high accuracy is required, linearization transformations are of value in providing good initial estimates for a subsequent nonlinear iteration. OpenModelica is an open-source Modelica-based modeling and simulation environment intended for industrial and academic usage. These dynamics need to be linearized in order to implement a linear controller. Beware that the MPT toolbox has a linearize function, which was shadowing the original MATLAB command, and sometimes the order of declaration is incorrect so that some parameters evaluate to NaN or Inf. An equilibrium point and the linearization about it can be found by calculating the Jacobian; this does not work with transfer-function (tf) objects in MATLAB, since the Laplace transform already assumes linearity.
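As a worked instance of finding the linearization at an equilibrium point (a sketch; the damped pendulum x1' = x2, x2' = -(g/l) sin x1 - b x2 and its parameter values are our illustrative choice), the partial derivatives are computed by hand and the Jacobian is evaluated at each equilibrium:

```python
import numpy as np

g, l, b = 9.81, 1.0, 0.5  # illustrative parameters

# Damped pendulum: x1' = x2, x2' = -(g/l)*sin(x1) - b*x2.
# Hand-computed partial derivatives assembled into the Jacobian:
def jacobian(x1, x2):
    return np.array([[0.0, 1.0],
                     [-(g / l) * np.cos(x1), -b]])

for name, eq in [("hanging (0,0)", (0.0, 0.0)), ("inverted (pi,0)", (np.pi, 0.0))]:
    A = jacobian(*eq)
    eigs = np.linalg.eigvals(A)
    print(name, "eigenvalues:", np.round(eigs, 3), "stable:", bool(np.all(eigs.real < 0)))
```

The hanging equilibrium gives eigenvalues with negative real parts (stable), while the inverted one has a positive real eigenvalue (unstable), matching the phase-plane picture.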
An FEM MATLAB code for linear and nonlinear bending analysis of plates exposes routines such as Assemble(kss,ktt,FF,ke,kt,f,index) and BoundaryCondition(typeBC,coordinates,loadstep,updateintforce). In impedance imaging, the Jacobian matrix calculated at a constant reference conductivity is commonly obtained by studying the first-order perturbation of the conductivity on each element [1]. The linearization is a first-order approximation. The absolute value of the Jacobian determinant at p gives the factor by which the function f expands or shrinks volumes near p; this is why it occurs in the general substitution rule. Linearization is the process of taking the gradient of a nonlinear function with respect to all variables and creating a linear representation at that point; the concept of the Jacobian applies to functions of several variables, not just one. In continuum mechanics, the deformation gradient is used to separate rigid-body translations and rotations from deformations, which are the source of stresses. In optimization, a well-known compact linearization technique initially proposed for quadratic assignment problems can be generalized to a broader class of binary quadratic problems. A common user complaint: "I have tried to tune PID controller parameters, but the plant cannot be linearized."
The final aim is to get a model in the LPV form (3): x' = A(ρ)x + B(ρ)u, y = C(ρ)x + D(ρ)u, where the parameter ρ belongs to some compact set. The Jacobian is not an approximation per se, but you can use it to linearize something, much the same way a tangent line linearizes a curve. Stability is assessed by linearizing the given differential equation, setting up the Jacobian, and examining the eigenvalues of the linearization: if at least one eigenvalue of the Jacobian matrix has real part greater than zero, then the steady state is unstable. Some systems contain discontinuities; for example, the process model might be jump-linear [14], in which the parameters can change abruptly, or the sensor might return highly quantized measurements [15]. MATLAB users who maintain or develop their own nonlinear FE or MBD codes may wish to speed up computations using the Parallel Computing Toolbox. More central-difference formulas: the formulas for f'(x0) in the preceding section required that the function be computable at abscissas lying on both sides of x0, which is why they are called central-difference formulas. In fluid dynamics, the Euler equations are a set of quasilinear hyperbolic equations governing adiabatic and inviscid flow; they are named after Leonhard Euler. Simulink Control Design software linearizes models using a block-by-block approach.
A forum answer on control design: the appropriate solution for your issue is feedback linearization (FBL); unlike a traditional (Jacobian) linearization, which linearizes the model approximately around a specific equilibrium point, FBL keeps the nonlinearity of your model. LPVTools is a MATLAB toolbox for simulation, analysis, and design of parameter-dependent control systems using the linear parameter-varying (LPV) framework. In Mathematica, to use JacobianMatrix you first need to load the Vector Analysis package using Needs["VectorAnalysis`"]. Linearization and Newton's method are closely related: consider the problem of finding a solution to a system of nonlinear equations whose Jacobian is sparse. Recall that an ODE is stiff if it exhibits behavior on widely varying timescales. In Simulink, data defined in the MATLAB workspace is available in the block diagram. We will focus on two-dimensional systems, but the techniques used here also work in n dimensions. One can compute all of the 2-by-2 Jacobians that follow by hand, but in some cases it is tedious and hard to get right on the first try. For Newton-Raphson-based solvers, the major cost per iteration lies in the computation of the Jacobian matrix [6]. The Jacobian matrix and linearization in this lecture are similar to the power-flow Jacobian and the Newton-Raphson method developed in EE369.
Notice, incidentally, that we stored the handle to the plot of the linearization in h and used this handle to change the color of the line. To overcome these drawbacks, feedback linearization (FL) was introduced, based on differential-geometric methods. Newton's method is an algorithm for finding the roots of differentiable functions that uses iterated local linearization of the function to approximate those roots. A comparative study has also been made of the unscented Kalman filter (UKF) and the extended Kalman filter (EKF) for sensorless control of a permanent-magnet synchronous motor drive. The procedure introduced is based on the Taylor series expansion and on knowledge of nominal system trajectories and nominal system inputs. Taylor series can likewise be used to obtain central-difference formulas for higher derivatives, and forward kinematic equations can be linearized using the multivariable Taylor series.
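The Taylor-series route to central-difference formulas can be checked numerically; a small sketch (Python, with a test function of our choosing):

```python
import math

def d1_central(f, x, h):
    """First derivative: f'(x) ~ (f(x+h) - f(x-h)) / (2h), truncation error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2_central(f, x, h):
    """Second derivative: f''(x) ~ (f(x+h) - 2 f(x) + f(x-h)) / h^2, error O(h^2)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x, h = 0.5, 1e-4
print(d1_central(math.sin, x, h), math.cos(x))    # both approximately 0.87758
print(d2_central(math.sin, x, h), -math.sin(x))   # both approximately -0.47943
```

Both formulas follow directly from adding and subtracting the Taylor expansions of f(x+h) and f(x-h), which is why the odd-order error terms cancel.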
F(x) being the Jacobian of F is called Newton's method. (20) they are the same and the linearization describes the dynamics as well. Friday, December 4, 2009. Finite Elements for Nonlinear Problems Computer Lab 2 In this computer lab we apply nite element method to nonlinear model prob-lems and study two of the most common techniques for solving the resulting nonlinear system of equations, namely, xed point, or Piccard, iteration, and Newton's method. So then my approximate-- what's my linearized equation?. simulations have been carried out on computer algebra system: Maple and Matlab to illustrate the behavior of the considered system for a long time. (a) Model decoupling using V and as inputs while Pe and Qe as measurements. Både matrisen och dess determinant kan ibland något informellt benämnas. Description of the cart-pole system An inverted pendulum is a classic problem in nonlinear dynamics and control. nonlinear constraints, however, Deflnition 3. To overcome these drawbacks, Feedback Linearization (FL) was introduced based on differential geometry mathematics. However, I get different numerical values than if I do the Jacobian by hand and evaluate it on the mesh grid. f(Y2)(Jacobian)(dY ). The numereous eigenvalues poles were seen an the rigth axis side of the complex plane. To linearize your model, you must specify the portion of the model you want to linearize using linear analysis points; that is, linearization inputs and outputs, and loop openings. Taylor series can be used to obtain central-difference formulas for the higher derivatives. This can be taken care of by using a truncated. Solving ODEs in MATLAB So the idea will be to linearize, to look very near that critical point, that point. The software individually linearizes each block in your Simulink model and produces the linearization of the overall system by combining the individual block linearizations. 
FEM MATLAB Code for Linear and Nonlinear Bending Analysis of Plates/NonlinearAnalysisofPlates/ Assemble(kss,ktt,FF,ke,kt,f,index) BoundaryCondition(typeBC,coordinates,loadstep,updateintforce). 8 30 –9 00 Coffee/Tea. I worked as a Python developer for Seagate for a summer, developing interfaces to manufacturing and test log tools, as well as implementing a real-time data analysis application. If the eigenvalues of the Jacobian matrix all have real parts less than zero, then the steady state is stable. P/NP grading. Literature Search – A large body of publications pertaining to robot kinematics, path planning, and control exists. You can use the model to gain evidence that that the model is valid by seeing whether the predictions obtained match with data for which you already know the correct values. 1 Numerical Methods for Integration, Part 1 In the previous section we used MATLAB's built-in function quad to approximate definite integrals that could not be evaluated by the Fundamental Theorem of Calculus. To overcome these drawbacks, Feedback Linearization (FL) was introduced based on differential geometry mathematics. To overcome the difficulty the linearization is required. The numerical schemes considered are based on a continuous linearization of. (See [Fiacco and McCormick 1968], for a proof. , the so-called Perron effect) (Leonov and Kuznetsov 2006) A strictly positive maximal Lyapunov exponent is often considered as a definition of deterministic chaos. Note, in order to avoid confusion with the i-th component of a vector, we set now the iteration counter as a superscript x(i) and no longer as a subscript x i. Simulations are done with the help of Matlab which shows that the bird's leg can be stabilise if they are double inverted pendulums. Find the partial derivatives Write down the Jacobian matrix Find the eigenvalues of the Jacobian matrix. 
1 Full state feedback control using Jacobian linearization Although the feedback linearization controller is not able to stabilize the cart posi-tion, we observed in the previous simulation (Figure 2) that it swung up the pendu-lum close to the upright position and kept the cart near where it started. Thanks for contributing an answer to Stack Overflow! Please be sure to answer the question. Finally answering our initial question, the dynamical system described by eq. (20) they are the same and the linearization describes the dynamics as well. Classical Lorenz Equations were linearization and then Jacobian matrix was obtained by MATLAB software in embedded system and eigenvalues calculated. For example, x 2 1−x2 1 = 0, 2−x 1x 2 = 0, is a system of two equations in two unknowns. We conclude that our model, at least as it is solved on MATLAB, fails at the initial data point (π,0). Consult department office for detailed information. 3 Gain Scheduling E. To study complex chaotic systems and their appearance in natural and manmade feedback systems. Gowher, The exponential regression model presupposes that this model is valid for your situation (based on theory or past experience). Future use of the results is to be used for the design of a landing gear based on the principles of a bird's leg. Jacobian with Respect to Scalar. These dynamics need to be linearized in order to implement a linear controller. Exact Linearization Algorithm. Asking for help, clarification, or responding to other answers. It is useful to remember that the chain rule in this case is in the form dz dt = dz dx dx dt + dz dy dy dt. The second exception is where the linearization is a centre. That's the Jacobian matrix. Note that you can give names to the outputs (and inputs) to differentiate between the cart's position and the pendulum's position. Section IV is the Jacobi linearization section. 1) The precise assumptions on the problem class will be stated in Section 3 below. 
0, if your FMU contains both Co-Simulation and Model Exchange elements, the block detects this state and prompts you to select the operation mode for the block. Li (1/19/2006) 1 Jacobian Linearization. The numerical schemes considered are based on a continuous linearization of. That's the Jacobian matrix. For example, a biologist might model the populations x(t) and y(t) of two interacting species of animals by the. Tuesday, December 1, 2015. Read "Numerical computation of bifurcations in large equilibrium systems in matlab, Journal of Computational and Applied Mathematics" on DeepDyve, the largest online rental service for scholarly research with thousands of academic publications available at your fingertips. A Comparative Study of Kalman Filtering for Sensorless Control of a Permanent-Magnet Synchronous Motor Drive Borsje, P. At each time step, the linearization is performed locally and the resulting Jacobian matrices are then used in the prediction and update states of the Kalman filter algorithm. Some systems contain discontinuities (for example, the process model might be jump-linear [14], in which the parameters can change abruptly , or the sensor might return highly quantized sensor measurements [15]),. The accuracy obtainable with linearization transformations is not as good as that obtainable with the iterative techniques applied directly to the nonlinear problem. Development of finite difference and finite element techniques for solving elliptic, parabolic, and hyperbolic partial differential equations. Briefly, under FEM framework, the potential distribution can be solved by forward model. The writing sample should consist of a minimum of one, to a maximum of three, class papers or publications. And this page and the next, which cover the deformation gradient, are the center of that heart. These dynamics need to be linearized in order to implement a linear controller. 
The core of the toolbox is a collection of functions for model reduction, analysis, synthesis and simulation of LPV systems. Figure 1 illustrates the concept. In this work, I study two feedback linearization, dynamic inversion with zero dynamics stabilization and exact linearization and non-interacting control via dynamic feedback. Exponential Rosenbrock-type methods. (20) they are the same and the linearization describes the dynamics as well. Similarly, we can run Newton's Method by typing the following in the command window: out02=NewtonMethod('testfunc01',0,100,1e-6); And you should see that Newton's Method only used 5 iterations. Kalman and particle filters, linearization functions, and motion models Sensor Fusion and Tracking Toolbox™ provides estimation filters that are optimized for specific scenarios, such as linear or nonlinear motion models, linear or nonlinear measurement models, or incomplete observability. Jacobi matrix. Active 5 years, 7 months ago. 000000000001). INTRODUCTION Predictive control is a method that calculates in every step, and usually in discrete-time domain, the optimal control signal. Newton's Method on a System of Nonlinear Equations Nicolle Eagan, University at Bu↵alo George Hauser, Brown University Research Advisor: Dr. tol means the tolerance limit of the final answer. The Jacobian matrix consists of the elements where , , are the Cartesian coordinates and , , are the variables of the coordinate system coordsys, if specified, or the default coordinate system otherwise. They are named after Leonhard Euler. I also have experience in C++, C, Matlab, and recently Julia. m should be used to set the. Diploma Thesis Supervisor : ,QJ -L t'RVWiO. Linearization at Critical Points Nonlinear odes: fixed points, stability, and the. The Jacobian matrix of a system of smooth ODEs is the matrix of the partial derivatives of the right-hand side with respect to state variables where all derivatives are evaluated at the equilibrium point x=xe. 
In the usual way, we analyze the types of the critical points. Plan of Lecture 1 Linearization around steady state, speed of convergence, slope of saddle path 2 Some transition experiments in the growth model 2/30. beyond the scope of an undergraduate class. The core of the toolbox is a collection of functions for model reduction, analysis, synthesis and simulation of LPV systems. Since stable and unstable equilibria play quite different roles in the dynamics of a system, it is useful to be able to classify equi-librium points based on their stability. However, as we mentioned, MATLAB would just not terminate for the feedback linearization controller. it don't like a traditional (Jacobian) linearization which linearize the model approximately at specific equilibrium point. Jacobi matrix. 1 Linear Parameter Varying(LPV) System E. Section V has details on the nonlinear control design. The GRE Subject Test is not required for admission. Find more Widget Gallery widgets in Wolfram|Alpha. The goal of this section is to introduce a variety of graphs of functions of two variables. What we did here was Jacobian linearization. Stack Overflow for Teams is a private, secure spot for you and your coworkers to find and share information. tw/~jcjeng/Linearization. You can also get a better visual and understanding of the function by using our graphing tool. matlab linearization, transfer functions and stuffs 2016 brazil study abroad program texas a&m university- university of sao paulo k enny a nderson q ueiroz caldas maurÍcio e iji n akai elmer a lexis g amboa p eÑaloza rodolpho v ilela a lves n eves r afael f ernando q uirino magossi m ichel b essani. Centralized control of robotic manipulator. Compare with the results from (a). 1 (2017): 11-18. The software individually linearizes each block in your Simulink model and produces the linearization of the overall system by combining the individual block linearizations. eigenvalues in the linearization of this model. 
Nonlinear Systems Toolbox 2015 NST15-1 Arthur J. $ The Jacobian, which is essentially a matrix of truncated Taylor series expansions, is used to linearize nonlinear systems about an equilibrium point, but the eigenvalues of the Jacobian matrix, evaluated at equilibrium, must be nonzero in order for the linearization to hold (there are some important. Introduces fundamental concepts in Electrical and Computer engineering and provides insight to the various careers in each field. The STM is a linearization procedure of a dynamical system. 1 the moles of each component in the gridblock. Trimming and Linearization, Part 1: What is Linearization? - Duration: 14:01. Perform robustness analysis, such as gain margins, time-delay margins, and percentage-variation margins at single points in a complex, multi-loop system. The reported work has been implemented in the MATLAB environment for 25 fft ECG records. I am currently trying to port some of my MATLAB code to Scilab. Find more Widget Gallery widgets in Wolfram|Alpha. These eigenvalues are often referred to as the 'eigenvalues of the equilibrium'. Decimals; Fractions; HCF and LCM; Order of Operations; Signed Numbers; Algebra. As a consequence, the Jacobian will not be of full rank. 3 Full state feedback linearization Formally, full state feedback linearization applies to nonlinear ODE control system model. Linearization of Nonlinear Models • Most chemical process models are nonlinear, but they are often linearized to perform a simulation and stability analysis. Some systems contain discontinuities (for example, the process model might be jump-linear [14], in which the parameters can change abruptly , or the sensor might return highly quantized sensor measurements [15]),. To determine whether the linearization results properly capture characteristics of the nonlinear model, such as the anti-resonance around 6. 
Unfortunately, among systems of order higher than two, the full state feedback linearizable ones form a set of "zero mea­ sure", in a certain sense. Over 75 percent of the problems presented in the previous edition have been revised or replaced. Find the partial derivatives Write down the Jacobian matrix Find the eigenvalues of the Jacobian matrix. Plan of Lecture 1 Linearization around steady state, speed of convergence, slope of saddle path 2 Some transition experiments in the growth model 2/30. (20) is confined to the two dimensional phase plane. A selection is listed at the end of this report. Linearization of Nonlinear Systems Objective This handout explains the procedure to linearize a nonlinear system around an equilibrium point. sion, also called Jacobian linearization; A second method proposed by (Leith and Leithead, 1998), perhaps less known, called \velocity based linearization", based on time derivatives of system (1). Feedback Linearization. Computing this Jacobian at each iterations may be cumbersome and the interest of the Quasi-Newton method is avoiding this computation. Perform robustness analysis, such as gain margins, time-delay margins, and percentage-variation margins at single points in a complex, multi-loop system. Partial feedback linearization. CONSTRAINED PREDICTIVE CONTROL OF CONTINUOUS STIRRED EXOTHERMICAL REACTOR USING LINEARIZATION Keywords: Model predictive control, exothermic reactor 1. In order to find the direction of the velocity vectors along the nullclines, we pick a point. quasi-LPV models - Stability and induced L2 norm of LPV systems - Synthesis of LPV controllers based on the two-sided projection lemma. For this we have chosen the programming software Matlab , as it has been the most used in the majority of the subjects in the degree and therefore has suficient command of it. 
1 Full state feedback control using Jacobian linearization Although the feedback linearization controller is not able to stabilize the cart posi-tion, we observed in the previous simulation (Figure 2) that it swung up the pendu-lum close to the upright position and kept the cart near where it started. 3 Gain Scheduling E. Discuss limitations of this linearization approach, and finally, present a modern alternative to linearization through the unscented transform. 4 Design Example Reference: [DP05] G. Hi dear all, I have difficulty to obtain the frequency response of 2 nonlinear differential equations. EXERCISE 1. The concept of the Jacobian can also be applied to functions in more than variables. Consider the case of two lines in the plane. View Yash Shah's profile on LinkedIn, the world's largest professional community. Find more Widget Gallery widgets in Wolfram|Alpha. As I mentioned, there are two exceptions to the rule that the phase portrait near an equilibrium point can be classified by the linearization at that equilibrium point. Notice, incidentally, that we stored the handle to the plot of the linearization in h and used this handle to change the color of the line. This deficiency exists due to the non-uniqueness of the state space representation. The fixed point iteration (and hence also Newton's method) works equally well for systems of equations. This example is illustrated in the Matlab script run newtonmv. 4 Organization 2 describes the usage of CCM and its mathematical model interface for sim-. Equilibrium points- steady states of the system- are an important feature that we look for. Newton-Raphson Method for Nonlinear Systems of Equations. MathsFirst Home; Online Maths Help. Solving nonlinear ODE and PDE problems Compute the Jacobian of a \(2\times 2\) Linearize a 1D problem with a nonlinear coefficient;. The geometric Jacobian is a function of the robot configuration q (joint angles/positions), which is why it is often denoted as J(q). 
Exact Linearization Algorithm. The Jacobian of a function with respect to a scalar is the first derivative of that function. Other readers will always be interested in your opinion of the books you've read. If you have a code returning the residuals and jacobian of the dynamic model, you could in principle replace the matlab routines created by Dynare (using symbolic calculus) provided that you manage to use the same API (and that the Jacobian matrix is organized as done in Dynare). While that would be close enough for most applications, one would expect that we could do better on such a simple problem. Ask Question Asked 5 years, 7 months ago. - Jacobian linearization vs. Asking for help, clarification, or responding to other answers. Compare with the results from (a). Trimming and Linearization, Part 1: What is Linearization? - Duration: 14:01. This needs careful tuning of the parameters c.
CommonCrawl
Nebulised hypertonic saline (3 %) among children with mild to moderately severe bronchiolitis - a double blind randomized controlled trial Aayush Khanal1, Arun Sharma1, Srijana Basnet1, Pushpa Raj Sharma1,2 & Fakir Chandra Gami1 To assess the efficacy of nebulised hypertonic saline (HS) (3 %) among children with mild to moderately severe bronchiolitis. Infants aged 6 weeks to 24 months, with a first episode of wheezing and Clinical Severity scores (Arch Dis Child 67:289-93, 1992) between 1 and 8, were enrolled over a 4-month period. Those with severe disease, co-morbidities, prior wheezing, and recent bronchodilator or steroid use were excluded. Patients were randomized in a double-blind fashion to receive two doses of nebulized 3 % HS (Group 1) or 0.9 % normal saline (Group 2) with 1.5 mg of L-Epinephrine, delivered 30 min apart. Parents were contacted at 24 h and 7 days. The principal outcome measure was the mean change in clinical severity score at the end of 2 h of observation. A total of 100 infants (mean age 9.6 months, range 2–23 months; 61 % males) were enrolled. Patients in both groups had mild to moderately severe disease at presentation. On an intention-to-treat basis, the infants in the HS group had a significant reduction (3.57 ± 1.41) in the mean clinical severity score compared to those in the NS group (2.26 ± 1.15) [p < 0.001; CI: 0.78–1.82]. More children in the HS group (n = 35/50; 70.0 %) were eligible for ER/OPD discharge at the end of 2 h than in the NS group (n = 15/50; 30 %; p < 0.001), and were less likely to need a hospital re-visit within the next 24 h (n = 5/50; 10.0 %, vs. n = 15/50, 30.0 % in the NS group; p < 0.001). The treatment was well tolerated, with no adverse effects.
Nebulized 3 % HS is effective, safe and superior to normal saline for outpatient management of infants with mild to moderately severe viral bronchiolitis: it improves Clinical Severity Scores, facilitates early Out-Patient Department discharge and prevents hospital re-visits and admissions within 24 h of presentation. Trial registration Clinicaltrials.gov NCTID012766821. Registered on January 12, 2011. Bronchiolitis is a common, occasionally severe viral infection of the lower respiratory tract responsible for significant morbidity and mortality in children under two years of age [1]. According to a World Health Organization bulletin, an estimated 150 million new cases of clinical pneumonia (principally pneumonia and bronchiolitis) occur annually [2], 11–20 million of which require hospital admission. Worldwide, 95 % of all cases occur in developing countries [2]. Epidemiologic data show that RSV accounts for about 65 % of hospitalizations due to bronchiolitis [3]. Multiple studies [4–7] have documented variation in diagnostic testing, treatment modalities and outcomes in bronchiolitis, suggesting a lack of consensus on this common disorder. Likewise, despite the frequency of this condition, there is no unanimously accepted, evidence-driven treatment approach [8, 9]. Besides supplemental oxygen, fluids and supportive care, treatment options include bronchodilators, epinephrine and corticosteroids [9]. Hypertonic saline (3 %) is a newer agent that has been found promising in recent studies [10–20]. The proposed mechanisms are improved mucus rheology, reduced airway wall edema, and induction of sputum and cough [12]. A recent meta-analysis [10] showed a consistent improvement in clinical severity scores and suggested that HS may also decrease the length of hospital stay in bronchiolitis.
However, multiple other studies [8, 13, 21–25] have shown equivocal results, with little or no clinical benefit from the use of hypertonic saline (3, 6 or 7 %). Similarly, there is a paucity of data comparing important outcomes such as readiness for discharge, need for repeat hospital visits and hospitalization rates, which are important reflectors of morbidity and economic burden [10]. Given the paucity of rigorously controlled studies in developing countries using 3 % HS, the lack of consensus regarding management of bronchiolitis in our practice, and the opportunity to improve care for this common disorder, this study was conducted to assess the therapeutic efficacy of 3 % HS. Our primary aim was to study the improvement in CS scores, but we also examined readiness for discharge, hospital revisit rates and hospitalization, which reflect the morbidity and financial burden of the disease. This study was a prospective, interventional, double-blind randomized controlled trial. Ethical clearance Written informed consent was obtained from the primary caretaker of each patient prior to enrollment. The study was approved by the Department of Research, Institutional Review Board and Ethics Committee of Tribhuvan University Teaching Hospital. Study participants Subjects were recruited from previously healthy children visiting the Emergency Room (ER) and Out-Patient Department (OPD) of Kanti Children Hospital who met the following inclusion criteria:
- Age between 6 weeks and 2 years
- First episode of wheezing
- Meets the clinical definition of bronchiolitis
- Clinical severity (CS) score (Wang et al. [26]) between 1 and 9 (Table 1)
Table 1 Wang et al. clinical severity score
Bronchiolitis was clinically defined as per the AAP consensus guidelines [4, 27] as the first episode of acute wheezing in children less than two years of age, starting as a viral upper respiratory infection (coryza, cough or fever).
Exclusion criteria were:
- Any underlying disease (e.g., cystic fibrosis, bronchopulmonary dysplasia, cardiac or renal disease)
- Prior history of wheezing
- Diagnosed case of asthma
- Oxygen saturation (SpO2) < 85 % on room air
- CS score > 9
- Progressive respiratory distress requiring mechanical ventilation
- Treatment with bronchodilators within the last 4 h
- Any steroid therapy within 48 h
The study was carried out in the ER, Observation Room (OR) and OPD. Recruitment occurred at the peak of the bronchiolitis season, between January 15th and April 15th, for a duration of 4 months. Patient enrollment occurred on weekdays between 08.00 and 17.00 h. The investigator assessed the children for eligibility and assigned a clinical severity (CS) score as described by Wang et al. [26] (Table 1). Data were collected using standardized forms to document pertinent history and physical examination. Each child's weight, temperature, respiratory rate, SpO2 on room air (determined by pulse oximeter, Siemens), heart rate, CS score and hydration status were recorded. The children were stabilized with antipyretics if necessary (temperature > 38.3 °C) and/or nasal suction if the nose was blocked. Supplemental oxygen by face mask was provided to maintain SpO2 > 90 %. Patients determined to be in a life-threatening condition were immediately managed accordingly and were not considered further for the study. The study drugs were prepared by a pharmacist and administered by an ER/OPD nurse, and compliance with medication administration was assured by the investigator's direct observation of each nebulization. All eligible patients were randomly assigned to one of two groups: Group 1 (n = 50) received inhalation of L-Epinephrine 1.5 mg, diluted to 4 ml with 3 % Hypertonic Saline (HS) solution; Group 2 (n = 50) received inhalation of L-Epinephrine 1.5 mg, diluted to 4 ml with 0.9 % Normal Saline solution. The study drug was administered at 0 and 30 min by a jet nebulizer using a face mask.
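The enrollment rules above can be expressed as a simple screening check. The function and its boolean flags below are hypothetical illustrations of the stated criteria, not part of the study's actual workflow:

```python
# Hypothetical helper sketching the stated inclusion/exclusion criteria;
# the name is_eligible and its flags are illustrative, not from the protocol.
def is_eligible(age_months, first_wheeze, meets_bronchiolitis_def, cs_score,
                spo2, comorbidity, prior_wheeze_or_asthma,
                bronchodilator_last_4h, steroid_last_48h, needs_ventilation):
    included = (1.5 <= age_months <= 24        # 6 weeks (~1.5 months) to 2 years
                and first_wheeze
                and meets_bronchiolitis_def
                and 1 <= cs_score <= 9)
    excluded = (spo2 < 85 or cs_score > 9 or comorbidity
                or prior_wheeze_or_asthma or bronchodilator_last_4h
                or steroid_last_48h or needs_ventilation)
    return included and not excluded

# A 9-month-old with a first wheeze, CS score 5, SpO2 95 %, no exclusions:
print(is_eligible(9, True, True, 5, 95, False, False, False, False, False))  # True
```

Note that the SpO2 < 85 % and ventilation criteria act as safety gates: an infant meeting every inclusion criterion is still excluded if either holds.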
The investigator assessed the children's general condition and recorded the CS score, SpO2, RR and HR prior to each drug administration and at 30, 60 and 120 min after the first nebulization. The study design is shown in Fig. 1. Adverse events were defined as heart rate > 200/min, tremor and worsening clinical status. Patients were excluded from the study if the two courses of nebulisation were not delivered, if drug delivery was delayed by 10 min or more (protocol deviation), or if clinical deterioration mandated escalation of therapy and/or support. The investigator contacted the parents via telephone 24 h after ED/OPD discharge to determine the need for any unscheduled hospital visit and hospitalization within 24 h of the OPD/ER visit: the readmission (relapse) rate. The register at the ER/OR was checked daily for any unscheduled visits by the caretakers. Parents were also contacted at the end of 1 week to record any unscheduled medical visits, missed working days of caregivers and persistence of cough. Patients were labeled lost to follow-up after failure to communicate over 3 consecutive attempts on 2 consecutive days (the 7th and 8th day after initial presentation to the OPD/ER). Primary outcome: to compare the mean change in Clinical Severity score among patients with bronchiolitis treated with either L-Epinephrine–3 % hypertonic saline or L-Epinephrine–0.9 % saline. Secondary outcomes: to assess the improvements in SpO2, respiratory rate and heart rate in both intervention groups; to compare discharge readiness at the end of 2 h of observation and readmission rates within 24 h following discharge; and to describe the socioeconomic burden of illness.
Sample size was determined by the following formula:
$$ N = \frac{(z_1 + z_2)^2\,(\sigma_1^2 + \sigma_2^2)}{(\mu_1 - \mu_2)^2} $$
where:
N: sample size per group
z1: the standard normal deviate for the confidence level; for a p value of 0.05, z1 = 1.96
z2: the standard normal deviate for power; 0.84 for a power of 80 %, 1.28 for a power of 90 %
σ1: standard deviation of the outcome variable (clinical severity score) in the 1st intervention group (HS)
σ2: standard deviation of the outcome variable (clinical severity score) in the 2nd intervention group (NS)
μ1: mean change in clinical severity score in the 1st intervention group (HS)
μ2: mean change in clinical severity score in the 2nd intervention group (NS)
Allowing a Type 1 error of 5 % (α = 0.05), z1 = 1.96; for a power of 95 %, z2 = 1.64. The standard deviation of the change in CS score, derived from previous studies [8], was taken as 1.3. We proposed that a difference of 1 point in the CS score between the two intervention groups would be considered clinically significant. To detect this mean difference of 1 unit in the CS score with a power of 95 %, a sample size of 44 in each intervention group was required, giving a total of 88 patients. Allowing for a drop-out/lost-to-follow-up rate of approximately 10 %, 100 patients were enrolled, 50 in each group. Randomization Sequence generation A computer-generated sequence produced by Random Allocation Software [28], identifying patients by a triple-digit mixed numeric code, was used by the study coordinator to allocate patients to treatment groups, and he was the only person with access to the randomization. Type of randomization Block randomization was used, with patients allocated in blocks of 10.
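As an arithmetic check, the sample-size formula above can be reproduced in a few lines (a sketch only; the study itself did not publish code):

```python
import math

def sample_size_per_group(z_alpha, z_beta, sd1, sd2, mean_diff):
    """N = (z1 + z2)^2 * (sd1^2 + sd2^2) / (mean_diff)^2, rounded up."""
    n = (z_alpha + z_beta) ** 2 * (sd1 ** 2 + sd2 ** 2) / mean_diff ** 2
    return math.ceil(n)

# alpha = 0.05 (z1 = 1.96), power = 95 % (z2 = 1.64),
# SD of CS-score change = 1.3 in each group, clinically significant difference = 1 point
n = sample_size_per_group(1.96, 1.64, 1.3, 1.3, 1.0)
print(n)  # 44 per group, i.e. 88 in total before the 10 % drop-out allowance
```

This reproduces the paper's figure of 44 per group; with z2 = 0.84 (80 % power) the same formula gives 27 per group, showing how sensitive N is to the chosen power.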
Allocation concealment After preparation, the study solutions were labeled with the codes and wrapped in envelopes bearing the respective codes. The study solutions were identical in appearance and odor, and their identity was concealed from all participants, care providers, investigators and outcome assessors. Randomization was done by the study coordinator (not otherwise involved in the study), who was the only person with access to the codes. The codes were mixed 3-digit numeric codes. The study solutions, prepared by a pharmacist (not involved in the study), were stored in the non-freezer compartment (2–8 °C) of a refrigerator and discarded if not used within 72 h of preparation. The investigator himself assessed the patients and administered the allocated study solution to each of them. The study was a double-blind randomized controlled trial, with the investigator, the participants and the nurses who delivered the drug all blinded to the treatment assignment. Statistical analysis was performed using SPSS for Windows, Release 16.0 (SPSS Inc., Chicago, IL). Dichotomous events were analyzed using the Chi-square test. Continuous variables were compared by Student's t-test. Statistical significance was defined as a p-value < 0.05. This trial is reported in accordance with the Consolidated Standards of Reporting Trials (CONSORT, 2010) guidelines [29]. A total of 754 children were screened and 146 previously well children were assessed for eligibility in the study period, as shown in Fig. 2. Forty-six children were excluded and 99 patients completed the study. Data were analyzed on an intention-to-treat basis. No significant differences were noted between the study groups with respect to baseline characteristics (p > 0.05) or risk factors for severity, as shown in Table 2 and Table 3 respectively. Patients in both groups had moderately severe bronchiolitis, with mean CS scores above 5.
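To illustrate the dichotomous-outcome analysis, the 2x2 chi-square statistic can be computed by hand for the discharge-eligibility result (35/50 eligible in the HS group vs 15/50 in the NS group). This is an illustrative re-computation using Yates' continuity correction, a common convention for 2x2 tables, not the study's actual SPSS output:

```python
def yates_chi2(a, b, c, d):
    """Chi-square statistic with Yates' continuity correction
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Discharge eligibility at 2 h: HS 35 eligible / 15 not; NS 15 eligible / 35 not
chi2 = yates_chi2(35, 15, 15, 35)
print(round(chi2, 2))  # 14.44, far above the df = 1 critical value of 3.84 at alpha = 0.05
```

A statistic of 14.44 on 1 degree of freedom corresponds to p < 0.001, consistent with the significance level the paper reports for this outcome.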
There was gradual improvement in the CS score over time in both groups, and the effect was more pronounced after the second session of nebulisation at 60 min. Patients who received nebulised HS had a more significant improvement in baseline CS scores (Group 1, a change of 3.57 ± 1.41; Group 2, a change of 2.26 ± 1.15; p < 0.01) at the end of 2 h of therapy, as shown in Table 5. There were also significant differences in the mean changes in CS scores, HR, RR and SpO2 between the two groups from the start of treatment to the end of 2 h of therapy (p < 0.001), as shown in Table 4 and Table 5. More infants who received HS were eligible for discharge at the end of 2 h compared to NS [Group 1, n = 35 (70.0 %); Group 2, n = 15 (30.0 %); p < 0.001], as shown in Table 6. Table 2 Baseline characteristics of the 2 groups Table 3 Distribution of risk factors proposed to contribute to prolonged disease course and/or severity in both intervention groups Table 4 Mean (± SD) CS score, respiratory rate, heart rate and SpO2 for patients in each group at 0, 30, 60 and 120 min of assessment Table 5 Clinical outcomes in the two intervention groups at the end of 2 h of treatment Table 6 Secondary outcomes In addition, in our trial the need for repeat medical visits and hospital admission within 24 h of the initial hospital visit (relapse rate) was 20 % overall, and was significantly lower among infants who received HS compared to those who received NS [Group 1, n = 5 (10.0 %); Group 2, n = 15 (30.0 %); p < 0.001], as shown in Table 6. During the subsequent week, 78 out of 100 patients (Group 1, n = 37; Group 2, n = 41) were accessible. As shown in Table 7, a large number of patients in both groups had persistent cough at the end of 1 week (Group 1, n = 31; Group 2, n = 37; p = 0.45).
Eighteen patients from the HS group and 23 patients from the NS group had at least 1 unscheduled medical visit within 1 week (p = 0.58; overall revisit rate 41/88; 46.5 %), and 3 parents from the HS group and 10 from the NS group reported missing at least 1 day of work (p = 0.17; total missed days of work 10/77; 12.9 %). No adverse events occurred in either treatment group. No children were withdrawn from the trial due to side effects or clinical deterioration. Table 7 Burden placed on caretakers due to bronchiolitis Ours is one of the few studies conducted in South-East Asia with a rigorously controlled design that has not only examined the role of hypertonic saline in improving CS scores but also studied its impact on early discharge eligibility and hospital revisit rates. In both treatment groups the change in CS was greater than 2 points, suggesting that both treatment combinations were effective. Infants with bronchiolitis who received HS were more often eligible for early discharge and less likely to need any hospital revisit within the next 24 h (p < 0.001). This outcome seems particularly relevant for planning resource allocation and staffing in the treatment of this common condition. Similar findings were reported in a recent Cochrane review [10] by Zhang et al., in which a total of 560 patients treated with nebulised 3 % saline had a significantly shorter mean length of hospital stay and a significantly lower post-inhalation clinical score than the 0.9 % saline group in the first three days of treatment (p < 0.001). The improvement in clinical score was observed in both outpatients and inpatients. By contrast, in another double-blind RCT, Wu et al. in 2014 [24] assessed the effect of nebulised 3 % HS on change in RDAI scores, admission rates and length of stay in bronchiolitis.
They concluded that HS given to children in the ED decreased hospital admissions but did not produce any significant difference in Respiratory Distress Assessment Instrument (RDAI) score or length of stay compared to NS. Likewise, Florin et al. [22] conducted an RCT in the ER setting and found that at 1 h after the intervention, the HS group demonstrated significantly less improvement in the median RDAI score compared with the NS group (p < 0.001), and hence concluded that nebulised 3 % saline was not effective in improving RDAI scores in the ER setting. Previously, Sarrell et al. [19] had shown that substituting hypertonic saline for normal saline solution (2 ml) in the inhalation mixture for delivering a bronchodilator improved clinical scores and decreased hospitalization rates in ambulatory children. In hospitalized children with more severe bronchiolitis, nebulized 4 ml hypertonic saline solution with or without epinephrine was found to be a more effective treatment. In our study all infants in both groups recovered; there was no treatment failure and there were no significant adverse events following nebulisation, consistent with the findings previously reported by Ralston et al. [30]. Our study was designed to minimize the common biases and limitations associated with such research. Sampling bias was addressed by block randomization of patients. Blinding was maintained throughout the study period. A single observer's assessments eliminated the chance of inter-observer variability. Despite these measures, some amount of bias and attrition is inevitable. We could not clarify the additional benefit of supportive care alone in infants with bronchiolitis since we did not have a placebo arm. Although we considered important outcomes such as hospital revisit and admission rates, we did not assess the impact of therapy on the length of hospital stay. Our study population consisted mainly of infants who presented early, had mostly mild symptoms and experienced significant benefits.
We are unsure whether similar benefits could be reproduced in infants with a more severe disease presentation. Although we had strict inclusion and exclusion criteria and identified potential risk factors for asthma, we might have included infants with a first episode of asthma. RSV testing was unavailable and hence not done. We are also unsure whether co-infection worsens the outlook in our infants. Our study was conducted at the largest tertiary-level pediatric referral centre of our country, with over 1000 ER visits per month and more than 10,000 OPD visits per month with seasonal variation. Our sampled population characteristics were truly representative of the general population visiting Kanti Children's Hospital. Strict inclusion and exclusion criteria were used to minimize possible confounding effects of uncharacterized and evolving wheezing phenotypes. A well-defined, objective, previously validated scoring system was used to assess the clinical response. Our adequate sample size and double-blind design minimize the common biases and limitations associated with such research. Since our study included only patients with mild to moderate bronchiolitis, caution is needed when extrapolating the results to infants with severe disease. Nebulised 3 % hypertonic saline in combination with epinephrine was effective in reducing clinical severity scores, meeting the eligibility criteria for OPD/ER discharge and reducing the need for hospital admission among ambulatory children with bronchiolitis. We believe this simple, inexpensive, safe and effective intervention could reduce the morbidity of bronchiolitis if its use were generalized in centers caring for pediatric patients. A similar study could be conducted in multicentric settings with a larger sample size, involving more severely affected patients, and with a placebo-control design in order to confirm and extend our results.
Abbreviations AAP: American Academy of Pediatrics; CI: 95 % confidence interval; CS: Clinical severity score; ER: Emergency room; HS: Hypertonic saline; NS: Normal saline; OPD: Out-patient department; L-Epi: L-Epinephrine; SpO2: Oxygen saturation as measured by pulse oximeter; HR: Heart rate; RR: Respiratory rate. Chaudhary K, Sinert R. Is nebulized hypertonic saline solution an effective treatment for bronchiolitis in infants? Ann Emerg Med. 2010;55:120–2. Rudan I, Tomaskovic L, Boschi-Pinto C, Campbell H. Global estimate of the incidence of clinical pneumonia among children under 5 years of age. Bull World Health Organ. 2004;82:895–903. Weber MW, Mulholland EK, Greenwood BM. Respiratory syncytial virus infection in tropical and developing countries. Trop Med Int Health. 1998;3(4):268–80. Zorc JJ, Hall CB. Bronchiolitis: recent evidence on diagnosis and management. Pediatrics. 2010;125:342–9. Behrendt CE, Decker MD, Burch DJ, Watson PH. International variation in the management of infants hospitalized with respiratory syncytial virus. International RSV Study Group. Eur J Pediatr. 1998;157(3):215–20. Christakis DA, Cowan CA, Garrison MM, Molteni R, Marcuse E, Zerr DM. Variation in inpatient diagnostic testing and management of bronchiolitis. Pediatrics. 2005;115(4):878–84. Willson DF, Horn SD, Hendley JO, Smout R, Gassaway J. Effect of practice variation on resource utilization in infants hospitalized for viral lower respiratory illness. Pediatrics. 2001;108(4):851–5. Anil AB, Anil M, Saglam AB, Cetin N, Bal A, Aksu N. High volume normal saline alone is as effective as nebulized salbutamol-normal saline, epinephrine-normal saline, and 3 % saline in mild bronchiolitis. Pediatr Pulmonol. 2010;45:41–7. Wainwright C. Acute viral bronchiolitis in children - a very common condition with few therapeutic options. Paediatr Respir Rev. 2010;11:39–45. Zhang L, Mendoza-Sassi RA, Wainwright C, Klassen TP. Nebulised hypertonic saline solution for acute bronchiolitis in infants. Cochrane Database Syst Rev. 2013, Issue 7. Art. No.: CD006458. doi:10.1002/14651858.CD006458.pub3.
Al-Ansari K, Sakran M, Davidson BL, El Sayyed R, Mahjoub H, Ibrahim K. Nebulised 5 or 3 or 0.9 % saline for treating acute bronchiolitis in infants. J Pediatr. 2010;157:630–4. Mandelberg A, Amirav I. Hypertonic saline or high volume normal saline for viral bronchiolitis: mechanisms and rationale. Pediatr Pulmonol. 2010;45:36–40. Giudice M, Saitta F, Leonardi S. Effectiveness of nebulized hypertonic saline and epinephrine in hospitalized infants with bronchiolitis. Int J Immunopathol Pharmacol. 2012;25(2):485–91. Kuzik BA, Al Qadhi SA, Kent S. Nebulised hypertonic saline in the treatment of viral bronchiolitis in infants. J Pediatr. 2007;151:266–70. Luo Z, Fu Z, Liu E. A randomized controlled trial of nebulised hypertonic saline treatment in hospitalized children with moderate to severe viral bronchiolitis. Clin Microbiol Infect. 2011;17(12):1829–33. Luo Z, Liu E, Luo J, Li S, Zeng F, Yang X, et al. Nebulized hypertonic saline/salbutamol solution treatment in hospitalized children with mild to moderate bronchiolitis. Pediatr Int. 2010;52(2):199–202. Mandelberg A, Tal G, Witzling M, Someck E, Houri S, Balin A, et al. Nebulised 3 % hypertonic saline solution treatment for hospitalized infants with bronchiolitis. Chest. 2003;123:483–7. Sarrell EM, Tal G, Witzling M, Someck E, Houri S, Cohen HA, et al. Nebulised 3 % hypertonic saline solution for treatment of ambulatory children with viral bronchiolitis decreases symptoms. Chest. 2002;122:2015–20. Tal G, Cesar K, Oron A. Hypertonic saline/epinephrine treatment in hospitalized infants with viral bronchiolitis reduces hospitalization stay: 2 years experience. IMAJ. 2006;8:169–73. Ipek IO, Yalchin EU, Sezer RG, Bozaykut A. The efficacy of nebulized salbutamol, hypertonic saline and salbutamol/hypertonic saline combination in moderate bronchiolitis. Pulm Pharmacol Ther. 2011;24(6):633–7. Grewal S, Ali S, McConnell DW, Vandermeer B, Klassen TP.
A randomized trial of nebulised 3 % hypertonic saline with epinephrine in the treatment of acute bronchiolitis in the emergency department. Arch Pediatr Adolesc Med. 2009;163(11):1007–12. Florin TA, Shaw KN, Kittick M, Yaskscoe S, Zorc JJ. Nebulized hypertonic saline for bronchiolitis in the emergency department: a randomized clinical trial. JAMA Pediatr. 2014;168(7):664–702. Teunissen J, Hochs AH, Vaessen-Verberne A, Boehmer AL, Smeets CC, Brackel H, et al. The effect of 3 and 6 % hypertonic saline in viral bronchiolitis: a randomised controlled trial. Eur Respir J. 2014;44(4):913–21. Wu S, Baker C, Lang ME. Nebulized hypertonic saline for bronchiolitis: a randomized controlled trial. JAMA Pediatr. 2014;168(7):657–63. Jacobs J, Foster M, Wan J, Pershad J. 7 % hypertonic saline in acute bronchiolitis: a randomized controlled trial. Pediatrics. 2014;133:e8. Wang EE, Milner R, Allen U, Maj H. Bronchodilators for treatment of mild bronchiolitis: a factorial randomised trial. Arch Dis Child. 1992;67:289–93. Subcommittee on Diagnosis and Management of Bronchiolitis. Diagnosis and management of bronchiolitis. Pediatrics. 2006;118:1774–93. Saghaei M. Random allocation software for parallel group randomised trials. BMC Med Res Methodol. 2004;4:26. Schultz KF, Altman DG, Moher D. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomized trials. BMJ. 2010;340:c332. Ralston S, Hill V, Martinez M. Nebulised hypertonic saline without adjunctive bronchodilators for children with bronchiolitis. Pediatrics. 2010;126:e520–5. Department of Pediatrics, Tribhuvan University Teaching Hospital, Institute of Medicine, Maharajgunj-44600, P.O. Box 1524, Kathmandu, Nepal Aayush Khanal, Arun Sharma, Srijana Basnet, Pushpa Raj Sharma & Fakir Chandra Gami Kathmandu Medical College Teaching Hospital, Sinamangal, Kathmandu, Nepal Pushpa Raj Sharma Aayush Khanal Arun Sharma Srijana Basnet Fakir Chandra Gami Correspondence to Aayush Khanal.
AK: selected and designed the study, enrolled the patients and prepared the results. AS: edited the study protocol including design, randomization and blinding techniques, made amendments where required, and performed critical appraisal. SB: revised the study protocol, performed the statistical analysis and critical appraisal. PS: reviewed and edited the study protocol and performed critical appraisal. FG: reviewed and edited the study protocol and performed critical appraisal. All authors read and approved the final manuscript. Khanal, A., Sharma, A., Basnet, S. et al. Nebulised hypertonic saline (3 %) among children with mild to moderately severe bronchiolitis - a double blind randomized controlled trial. BMC Pediatr 15, 115 (2015). https://doi.org/10.1186/s12887-015-0434-4
Segmentation in dermatological hyperspectral images: dedicated methods Robert Koprowski1 & Paweł Olczyk2 Segmentation of hyperspectral medical images is one of many image segmentation tasks which require profiling. This profiling involves either the adjustment of existing, known image segmentation methods or a proposal of new methods dedicated to hyperspectral image segmentation. Given the size of the analysed data, the time of analysis is of major importance. Therefore, the authors propose three new dedicated methods of hyperspectral image segmentation with special attention to the time of analysis. The segmentation methods presented in this paper were tested on and profiled for images acquired from different hyperspectral cameras, including the SOC710 Hyperspectral Imaging System and the Specim sCMOS-50-V10E. Correct functioning of the methods was tested on over 10,000 2D images constituting a sequence of over 700 registrations of the areas of the left and right hand and the forearm. As a result, three new methods of hyperspectral image segmentation are proposed: fast analysis of emissivity curves (SKE), 3D segmentation (S3D) and hierarchical segmentation (SH). They have the following features: they are fully automatic; allow for the implementation of fast segmentation; are profiled for hyperspectral image segmentation; use emissivity curves in model form; can be applied to any type of object, not necessarily biological; and are faster (SKE—2.3 ms, S3D—1949 ms, SH—844 ms on a computer with an Intel® Core i7 4960X CPU 3.6 GHz) and more accurate (SKE—accuracy 79 %, S3D—90 %, SH—92 %) than typical methods known from the literature. Profiling and/or proposing new methods of hyperspectral image segmentation is an indispensable element of developing software. This ensures speed, repeatability and low sensitivity of the algorithm to changing parameters.
Image segmentation is the process of dividing an image into parts defined as areas which are homogeneous in terms of selected properties. Today, segmentation is one of the most widely used [1, 2] and overused words in the area of biomedical image analysis (the PubMed database contains 23,202 results using the search strategy "all fields"; Scopus, 150,147 results using the same strategy). Practically everywhere a region of interest (ROI) is separated using any method, the authors call it segmentation [3, 4]. Of course, in some cases this is justified [5]. Formally, however, image segmentation methods are divided into four main groups [6–12]: point operations, edge methods, region methods and hybrid methods. Today, the most developed are the region methods, which include [13–16]: region growing, region merging, region splitting, split and merge, and watershed segmentation. Not all segmentation methods can be applied in the analysis of medical hyperspectral images [17]. The point methods, based on the selection of an appropriate binarization threshold, are the most common. The watershed and hybrid methods are equally popular. Their application is limited by two main elements. The first is segmentation of an object whose limit, or contour, is a change in the brightness of pixels; it is most often calculated for one or three (RGB) brightness levels. The second is the lack of dependence of segmentation on a reference (expected) spectrum. Therefore, in comparison with the segmentation of grey-level or colour images, hyperspectral imaging offers many more opportunities [18]. However, this wealth of data is not always used by the authors of segmentation algorithms. The main limitation is the above-mentioned lack of methods dedicated to hyperspectral image segmentation. This type of dedicated method should use, apart from conventional 2D image analysis, spatial information for individual wavelengths.
One of the significant factors facilitating this type of analysis is the organization of data written to *.raw, *.dat or *.cube files by a hyperspectral camera. Typical organization of this type of data is shown in Fig. 1. Illustrative system of data organization in *.raw, *.dat and *.cube files (obtained from a hyperspectral camera) and the result of conversion to a sequence of 2D images The data are saved in a file by a hyperspectral camera (Fig. 1a) sequentially, starting with the full spectral range for the first (m-th) row of the matrix. In the next steps, data for subsequent rows of the matrix of the image L GRAY (m, n, i) are saved in the file. Therefore, segmentation methods can be easily implemented (minimizing computational complexity) using the spectral amplitude values stored for each wavelength for one pixel [19, 20]. In the stored *.raw, *.dat or *.cube file this is one (n-th) column (see Fig. 1a). In terms of data organization in a file, access to these data is at the beginning (Fig. 1a, b). This type of data organization has a significant influence on the access time. An additional favourable element is that the first row of each 2D image is stored consecutively among the first data read from the *.raw, *.dat or *.cube files. This specific data organization and the specificity of hyperspectral imaging were further exploited in the dedicated methods of hyperspectral image segmentation presented in the following sections. The presented segmentation methods are related to images acquired from different hyperspectral cameras (SOC710 Hyperspectral Imaging System, Specim sCMOS-50-V10E). In total, more than 10,000 2D images were acquired, constituting a sequence of more than 700 registrations, for 20 healthy subjects (40 % women) aged 20–55 years. The test areas were the left and right hand as well as the forearm. The subjects expressed their free and informed consent for the study.
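The row-by-row storage described above corresponds to a band-interleaved-by-line (BIL) layout, in which reading the spectra of the first image row touches only the start of the file. A minimal numpy sketch (toy sizes and band-major order within each line are our assumptions; the paper's cubes are 696 × 520 × 128):

```python
import numpy as np
import tempfile, os

# Sketch of BIL-like storage: for each image row m, the full set of
# I spectral bands is written before row m+1 begins.
M, N, I = 8, 6, 4                    # toy rows, columns, bands
cube = np.arange(M * I * N, dtype=np.uint16).reshape(M, I, N)

path = os.path.join(tempfile.mkdtemp(), "demo.raw")
cube.tofile(path)

# Reading the spectra of the first image row touches only the first
# I*N values at the beginning of the file; the whole cube never loads.
first_row = np.memmap(path, dtype=np.uint16, mode="r", shape=(I, N))
pixel_spectrum = np.asarray(first_row[:, 0])   # all I bands of pixel (1, 1)
print(pixel_spectrum.tolist())                 # [0, 6, 12, 18]
```

This is why the fast screening method described later can operate on the first matrix row without parsing the full file.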
The images were acquired in accordance with the Declaration of Helsinki. Data were obtained retrospectively and no measurements or tests were carried out on the subjects as part of this work. This work only describes new techniques for image analysis. The discussed methods were tested on hyperspectral images from public databases and on images described in the authors' earlier works, for example [5]. The processing methods presented in this article were tested on a computer with an Intel® Core i7 4960X CPU 3.6 GHz. The resolution of acquired images L GRAY (m, n, i), where m is the row, n the column and i the frame number, was standardized to M × N × I = 696 × 520 × 128 pixels (where M is the number of rows, N the number of columns and I the number of bands). The spectral range for the 128 bands was from 0.4 to 1.0 µm, the dynamic range 12 bit, the line rate 33 lines/s, and pixels per line 520. The test subjects were illuminated by halogen lamps with a power of 100 W and linear radiation in the range of camera operation. The new dedicated methods of hyperspectral image segmentation fall into three areas: fast analysis of emissivity curves (absorption) of an object, 3D segmentation, and hierarchical segmentation. All of them were preceded by image pre-processing. Image pre-processing Pre-processing involved median filtering of the image L GRAY (m, n, i) with a mask sized M h × N h × I h = 3 × 3 × 3 pixels. The size of the filter mask was selected based on the maximum size of a single artefact, which was not larger than 4 pixels. The resulting image L MED (m, n, i) was further calibrated on the basis of the calibration bar which, at each registration, was located in the upper part of the image [5], or on the basis of the recorded dark L DARK (m, n, i) and white L WHITE (m, n, i) images, allowing for conventional calibration and normalization of the image.
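A minimal numpy sketch of this pre-processing step (our own illustration, not the authors' code): a 3 × 3 × 3 median filter followed by conventional dark/white reflectance calibration. The clipping to [0, 1] is an assumption of standard reflectance normalization.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median3d(vol, k=3):
    # 3x3x3 median filter over an M x N x I cube (edge-padded)
    pad = k // 2
    v = np.pad(vol, pad, mode="edge")
    win = sliding_window_view(v, (k, k, k))
    return np.median(win, axis=(-3, -2, -1))

def calibrate(raw, dark, white):
    # Conventional reflectance calibration:
    # L_C = (L_MED - L_DARK) / (L_WHITE - L_DARK), clipped to [0, 1]
    return np.clip((raw - dark) / (white - dark + 1e-12), 0.0, 1.0)

rng = np.random.default_rng(0)
raw   = rng.uniform(0.2, 0.8, size=(10, 10, 8))   # toy cube M x N x I
dark  = np.full_like(raw, 0.1)                    # recorded dark frame
white = np.full_like(raw, 0.9)                    # recorded white frame
l_med = median3d(raw)
l_c   = calibrate(l_med, dark, white)
print(l_c.shape)  # (10, 10, 8)
```

The resulting cube plays the role of L_C(m, n, i) in the methods that follow.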
The resulting image after this pre-processing, L C (m, n, i), was the basis for further transformations. It should be emphasized here that median filtering with a fixed mask size is one of the simplest filtration methods. In a more developed form, median filtering should be adaptive. The necessity of an adaptive method is caused by the imaging character of hyperspectral cameras: for each camera, the 2D images acquired at the edges of the wavelength range are the noisiest [21]. Therefore, one possibility is an adaptive filtration method based on enlarging the mask size of the median filter for 2D images at the band edge. As shown in [21], the mask size changes depending on the wavelength, in the range from M h × N h = 7 × 7 pixels to M h × N h = 3 × 3 pixels. In extreme cases, when there is no information on which 2D image will be analysed, a mask size of M h × N h = 7 × 7 pixels can be adopted, accepting the removal of some minor details in the 2D images for the middle wavelengths [22–26]. Fast analysis of emissivity curves According to the authors, the analysis of emissivity curves should be the most commonly used technique of hyperspectral image segmentation. Due to the specific data organisation, it can be used even when reading only the beginning of the data in the *.raw, *.dat or *.cube file. Therefore, it can be used for a rough, screening test of the compliance of spectrum amplitudes with a model (the expected waveform). The practical realization of such segmentation relates to the analysis of the first row of the matrix L GRAY (m = 1, n, i) or, after image pre-processing, of the matrix L C (m, n, i).
Assuming that the reference waveform is L PAT (i), the analysis for the first column n = 1 and the subsequent ones was formulated as: $$L_{D} \left( {m,\;n = 1,\;i} \right) = \frac{{L_{C} \left( {m,\;n = 1,\;i} \right) - L_{PAT} \left( i \right)}}{{L_{PAT} \left( i \right)}}$$ For the adopted tolerance threshold p r of the difference between the reference and measured emissivity, the number of individual wavelengths λ (index i) for which the values L D (m, n, i) stay below the adopted threshold p r is counted; the obtained results constitute a new matrix L R (m, n), i.e.: $$L_{R} \left( {m,\;n} \right) = \mathop \sum \limits_{i = 1}^{I} L_{BR} \left( {m,\;n,\;i} \right)$$ $$L_{BR} \left( {m,\;n,\;i} \right) = \left\{ {\begin{array}{*{20}l} 1 & {\text{if}}\;{L_{D} \left( {m,\;n,\;i} \right) < p_{r} } \\ 0 & {\text{otherwise}} \\ \end{array} } \right.$$ Depending on the needs (speed of calculations), this analysis can be narrowed down to the afore-mentioned wavelength values i at m = 1 and i = 1. Table 1 shows the summary of mean analysis times for different values of m and i for an Intel® Core i7 4960X CPU 3.6 GHz. As shown in Table 1, the mean analysis time depends largely on the value of m. This is the result of the time needed to read the information from the *.raw, *.dat or *.cube data files (see Fig. 1a). The highest values are obtained at the first reading, and they are equal to 2.3 ms. This increased time results from getting access to data on the disk (in the tested operating system, Windows 7 Professional). The time of reading further data for the next values of m and i increases linearly and, for i ∈ (1, 100) and m = 100, it is 1.6 ms. For comparison, the time of reading any full image L GRAY (m, n, i) for i = 1 or i = 100 is a minimum of 10 ms.
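The screening described by Eqs. (1)–(3) can be sketched as a few numpy lines (a toy illustration, not the authors' code; taking the absolute value of L_D in the tolerance test is our reading, since the prose and Eq. (3) differ on the direction of the comparison):

```python
import numpy as np

def ske(l_c_row, l_pat, p_r=0.1):
    """l_c_row: (N, I) spectra of one image row; l_pat: (I,) reference.
    Returns L_R: per-pixel count of bands within the tolerance p_r."""
    l_d = (l_c_row - l_pat) / l_pat               # Eq. (1): relative deviation
    l_br = (np.abs(l_d) < p_r).astype(int)        # Eq. (3): tolerance test
    return l_br.sum(axis=1)                       # Eq. (2): L_R per pixel

l_pat = np.linspace(0.3, 0.9, 128)                # toy reference curve
row = np.tile(l_pat, (520, 1))                    # pixels matching the model
row[100:] *= 1.5                                  # pixels far from the model
l_r = ske(row, l_pat)
print(int(l_r[0]), int(l_r[200]))                 # 128 0
```

A matching pixel scores the full 128 bands; a pixel whose spectrum deviates by 50 % scores 0, which is the raw material for the L_V map defined next.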
Table 1 Summary of mean analysis times for different values of m and i for an Intel® Core i7 4960X CPU 3.6 GHz [the values are given in (ms)] The sum of the difference values between the reference and measured waveforms, L R (m, n), is the basis for changing the colour space and, thus, for segmentation. The image L V (m, n), defined below, can be represented in grey levels: $$L_{V} \left( {m,\;n} \right) = 1 - \frac{{\left| {L_{R} \left( {m,\;n} \right)} \right|}}{{\mathop {\hbox{max} }\limits_{m,n} \left| {L_{R} \left( {m,\;n} \right)} \right|}}$$ In this case, interpretation of the obtained results is intuitive: the value "0" is the maximum value of the error L V (m, n), while values close to "1" are the minimum error values. A sample image L V (m, n) is shown in Fig. 2. Segmentation of an object (or objects) present in a scene can be performed based on the matrix L V (m, n) or L R (m, n). Sample segmentation results for various binarization thresholds (values 0.2, 0.4, 0.6 and 0.8) are presented in Fig. 3. It should be underlined that this result is for segmentation based on the first row of the matrix of 2D images for specific wavelengths, while the time of segmentation on the Intel® Core i7 4960X CPU 3.6 GHz did not exceed 2.3 ms. Sample results obtained for the image of a healthy thumb: a colour image created on the basis of 3 matrices L GRAY (520, 450 and 650 nm); b matrix L V (m, n) for the artificial colour palette; c reference waveform L PAT (i) Sample results of segmentation (binarization) for the fast method of segmentation for binarization thresholds of images L V (m, n) equal to: a 0.2; b 0.4; c 0.6 and d 0.8 3D segmentation A different approach to segmentation is the analysis of the whole sequence of i images L C (m, n, i). For this purpose, the ROI containing a fragment of the segmented object is marked either manually or automatically in the image L C (m, n, i).
On this basis, the waveform L PAT (i) is created as: $$L_{PAT} \left( i \right) = \frac{1}{{M_{R} \cdot N_{R} }}\mathop \sum \limits_{m,n \in ROI} L_{C} \left( {m,\;n,\;i} \right)$$ where M R and N R are the number of rows and columns of the ROI respectively. The image sequence L V (m, n, i) is calculated in the next step: $$L_{V} \left( {m,\;n,\;i} \right) = L_{C} \left( {m,\;n,\;i} \right) - L_{PAT} \left( i \right)$$ The sequence L V (m, n, i) contains many artefacts resulting from the noise occurring in the border images L V (m, n, i) for i ∈ {1, 2, I − 1, I}. Therefore, 3D filtration was proposed, which enables median filtering of each pixel in its 8-neighbourhood system. A schematic diagram of the filtration process for a three-dimensional plane of an object (border) is shown in Fig. 4a. The filtration relates to the 8-neighbourhood of each pixel of the matrix L V (m, n, i), on the basis of which the median value is calculated. The median filter is applied to the plane formed from the object, in this case sized 30 × 30 pixels (see Fig. 4a). On this basis, the filtered image L S (m, n), after binarization with the thresholds p r1 , p r2 and p r3 equal to 30, 20 and 10 % respectively (thresholding with the upper threshold), is shown using colours (red, green and blue respectively) in Fig. 4b. This image, L S (m, n), resulting from binarization with a properly selected (manually or automatically) threshold, is the result of segmentation. Schematic diagram of filtration for the 8-neighbourhood system: a location of a sample pixel of the matrix L V (m, n, i); b sample results, image L S (m, n) for the image of a healthy thumb.
Red, green and blue indicate the results of thresholding of the image L S (m, n) with the thresholds p r1 = 30 %, p r2 = 20 % and p r3 = 10 % As can be concluded from the above description, due to the specificity of this segmentation method, the time of analysis is at the level of 1949 ms on the Intel® Core i7 4960X CPU 3.6 GHz. Hierarchical segmentation The dedicated hierarchical method gives the best segmentation results. The word hierarchical refers here to the hierarchy in the resolution analysis of the image L C (m, n, i). The analysis begins with the resolution of the image L C (m, n, i) reduced to M/8 × N/8. Then, an analysis of the k-nearest neighbours was proposed. The k-nearest neighbour analysis was implemented for three features, w(1), w(2) and w(3). These features are the absolute values of differences between the analysed image L C (m, n, i) and the reference waveform L PAT (i) for three selected wavelengths. The three selected wavelengths are, in this case, the local maxima of the waveform L PAT (i). In general, the number of features [local maxima of the waveform L PAT (i)] is arbitrary; it is limited only by computational complexity (the need to determine individual local maxima). In this example, to facilitate visualization (a 3D graph), the number of sought local maxima was limited to three [three features w(1), w(2) and w(3)]. The segmentation method therefore starts with classification into 2, 3, 4 and 5 regions, taking into account the features w(1), w(2) and w(3)—Fig. 5. Simplified block diagram of the proposed hierarchical segmentation method As the number of regions (the number of classes, i.e. the number of segmented objects) is generally not known, the value of the standard deviation of the mean (std) is used as a criterion. The number of classes is increased from 2 to 5 regions.
The correct number of classes is the one for which the value of the inter-class mean square error (std) is the smallest. Figure 6 shows example segmentation results for the thumb (Fig. 2a). The results were obtained for two, three and four classes and three features [w(1), w(2) and w(3)]. Sample result of segmentation based on the analysis of the nearest neighbours: a, c, e graphs of changes in the value for the next features w(1), w(2) and w(3) and b, d, f results of segmentation for 2, 3 and 4 classes respectively For the sample results shown in Fig. 6, the smallest value of the mean square error (std = 0.21) was obtained for 3 classes. The image L HC (m, n, i) created in this way is the basis for increasing its resolution and automatically designating the segmented areas in the source image L GRAY (m, n, i) or in the pre-processed image L C (m, n, i) (Fig. 5). Hyperspectral image segmentation [27–29] with the hierarchical method raises yet another new question, concerning the variability of the size/shape of the object in the segmentation for subsequent i-th values, taking into consideration the criterion described by formula (4). The segmented object may change its shape while fulfilling the criterion (4) for the subsequent i-th images. A simple binarization conducted for the subsequent i-th images does not render satisfying results, due to the possibility of adding new objects (rather than changing the size of the existing segmented object).
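The class-count selection just described can be sketched with a toy, numpy-only grouping on the three features (our own k-means-style stand-in with a deterministic initialization; the paper's exact nearest-neighbour implementation and std criterion details are not reproduced):

```python
import numpy as np

def classify(x, k, iters=50):
    # k centers spread deterministically across the feature range
    centers = np.linspace(x.min(axis=0), x.max(axis=0), k)
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

def within_class_std(x, labels):
    # the std-style criterion used to compare candidate class counts
    return float(np.mean([x[labels == j].std() for j in np.unique(labels)]))

rng = np.random.default_rng(1)
# hypothetical pixels: three groups in the w(1), w(2), w(3) feature space
x = np.vstack([rng.normal(c, 0.03, size=(50, 3)) for c in (0.1, 0.5, 0.9)])
# the method increases the class count from 2 to 5 and keeps the minimum-std one
scores = {k: within_class_std(x, classify(x, k)) for k in range(2, 6)}
labels3 = classify(x, 3)
print(sorted(set(labels3.tolist())))  # [0, 1, 2]
```

With well-separated feature groups, the three-class grouping recovers the group structure exactly, mirroring the std = 0.21 minimum reported for 3 classes in Fig. 6.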
Therefore, erosion L E (m, n, i) with conditional dilatation L D (m, n, i) was proposed, performed for each i-th binary image starting from L BIN (m, n, i = 1), i.e.: $$L_{E} \left( {m,\;n,\;i = 1} \right) = \left\{ {\begin{array}{*{20}l} {L_{BIN} \left( {m,\;n,\;i = 1} \right)} & {\text{if}}\;{p_{c} \left( {m,\;n,\;i = 1} \right) < p_{dc} } \\ {\mathop {\hbox{min} }\limits_{{m_{S} ,n_{S} \in SE}} \left( {L_{BIN} \left( {m + m_{S} ,\;n + n_{S} ,\;i = 1} \right)} \right)} & {\text{otherwise}} \\ \end{array} } \right.$$ $$L_{D} \left( {m,\;n,\;i = 1} \right) = \left\{ {\begin{array}{*{20}l} {L_{BIN} \left( {m,\;n,\;i = 1} \right)} & {\text{if}}\;{p_{c} \left( {m,\;n,\;i = 1} \right) > p_{ec} } \\ {\mathop {\hbox{max} }\limits_{{m_{S} ,n_{S} \in SE}} \left( {L_{BIN} \left( {m + m_{S} ,\;n + n_{S} ,\;i = 1} \right)} \right)} & {\text{otherwise}} \\ \end{array} } \right.$$ $$p_{c} \left( {m,\;n,\;i} \right) = \frac{1}{{M_{S} \cdot N_{S} }}\mathop \sum \limits_{{m_{S} = 1}}^{{M_{S} }} \mathop \sum \limits_{{n_{S} = 1}}^{{N_{S} }} L_{GRAY} \left( {m + m_{S} ,\;n + n_{S} ,\;i} \right)$$ where m S , n S are the row and column positions within the mask SE sized M S × N S , which depends on the size of the segmented object. The mask used in this case has a size of M S × N S = 17 × 17 pixels. The values of the thresholds p ec and p dc are set by the user once for all images from the given type of camera. In the present case of the SOC710 Hyperspectral Imaging System, the value was p ec = p dc = 0.5. The results of segmentation without and with erosion and conditional dilatation are shown in Fig. 7. Sample segmentation of the objects without and with erosion and conditional dilatation: a the results of segmentation in the case of no erosion and conditional dilatation—e.g.
as a result of a simplified method; b, c sample images for extreme values i = 10 and i = 110 showing the level of noise for extreme wavelengths; and d the result of segmentation of the same object with the use of erosion and conditional dilatation

Experimental and discussion

The three newly proposed segmentation methods need to be tested in practice in order to confirm their accuracy and time of analysis. The basis for the performed experiments is:

- the three methods described in this article: fast analysis of emissivity curves (SKE), 3D segmentation (S3D), and hierarchical segmentation (SH);
- three known segmentation methods (for quantitative comparison of the results obtained): the method based on brightness thresholding (SPJ), whose binarization threshold is selected manually and automatically using Otsu's formula [18]; the watershed method (SWS), preceded by filtration with an averaging filter whose mask size is in the range from 3 × 3 to 9 × 9 pixels; and the method based on mathematical morphology (SMM), especially erosion and conditional dilation;
- the manual method of object selection, considered further as the benchmark (SP).

Over 700 complete sequences of images L_GRAY(m,n,i) (a total of more than 10,000 2D images), whose acquisition conditions are given in the "Materials" section, were subjected to segmentation. Common measures were used to compare the quality of the detected objects: TP and TN, the numbers of true-positive and true-negative pixels, and FP and FN, the numbers of false-positive and false-negative pixels. On their basis, sensitivity TPR = TP/(TP + FN), specificity SPC = TN/(TN + FP) and balanced accuracy ACC = (TPR + SPC)/2 were determined. The obtained mean values of TPR, SPC and ACC for the compared segmentation methods are presented in Table 2.
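The quality measures just defined can be computed directly from a detected binary mask and the benchmark (SP) mask; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def segmentation_quality(pred, truth):
    """TPR (sensitivity), SPC (specificity) and balanced accuracy ACC
    computed from a detected binary mask and the benchmark mask."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    TP = np.sum( pred &  truth)   # correctly detected object pixels
    TN = np.sum(~pred & ~truth)   # correctly rejected background pixels
    FP = np.sum( pred & ~truth)   # false-positive pixels
    FN = np.sum(~pred &  truth)   # false-negative pixels
    TPR = TP / (TP + FN)          # sensitivity
    SPC = TN / (TN + FP)          # specificity
    ACC = (TPR + SPC) / 2         # balanced accuracy
    return TPR, SPC, ACC
```

Averaging these three values over all analysed sequences gives entries of the kind compared in Table 2.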
Table 2 Comparison of the proposed dedicated methods of hyperspectral image segmentation with the known methods and the benchmark

As Table 2 shows, the segmentation based on the hierarchical approach proposed in this article has the greatest value of ACC (ACC = 92). The other proposed segmentation methods reach similarly high ACC values. Among the known, non-profiled segmentation methods, the method based on brightness thresholding has the largest value of ACC. However, it is a semi-automatic method whose results depend closely on how the binarization threshold is selected. The other features of the discussed segmentation methods look slightly different (Table 3).

Table 3 Comparison of other features of the proposed dedicated and well-known methods of hyperspectral image segmentation

The results for a PC with an Intel® Core i7 4960X CPU at 3.6 GHz clearly show the superiority of the 3D segmentation method (S3D), whose analysis time does not depend on the number of detected (segmented) objects. Moreover, this method is not sensitive to image rotation, owing to the idea of its operation shown in Fig. 4, where no direction is privileged. Of course, fast analysis of emissivity curves is the quickest: its analysis time for m = 1 and i = 1 is equal to 2.3 ms (Table 1). The other known segmentation methods consume several times more CPU time. As mentioned in the "Background" section, numerous similar segmentation methods are known from the literature. The approach utilizing the emissivity curves of a given object (the SKE method) is used in many applications such as Specim spectral imaging [30, 31], hyperspectral imaging systems [5, 18] or ImageJ [32]. However, in none of these applications has preliminary segmentation based on the emissivity curve been achieved in a time as short as 2.3 ms.
For example, in [33] the authors obtain an analysis time of 100 ms using an Intel® Core i5 CPU M460 @ 2.5 GHz with 4 GB RAM. Usually, the analysis time increases, e.g. in Specim systems, because the registered sequence of i-th 2D images is pre-processed. A significant element of this processing is lowering the resolution of the images; only then is the prepared 3D image analysed. When many different objects are registered, the process becomes burdensome, unnecessarily increasing the involvement of the operator. In the case of the second and third methods in question (S3D and SH), the situation is different. These methods, especially SH based on erosion and conditional dilatation, are not used in any commercial applications concerning hyperspectral imaging, and they have not been presented in this use, namely for the correction of hyperspectral images. The majority of authors, for instance in the articles [4, 6, 8, 11, 13], perform segmentation on a single selected 2D image from the i images of a sequence. Another widely tested approach is clustering in the space of particular wavelengths, e.g. in [6]. Neither type of segmentation copes with two issues: the variable noise level of hyperspectral images at the extreme values of the band, and the correction of the shape of the segmented object for subsequent i-th images. In particular, the correction of the object shape for subsequent i-th 2D images of the series is especially important for determining the compatibility of its emissivity curves with the calibration curve. Therefore, the methods presented in this paper are new segmentation methods which provide a new quality in a considerably shorter time compared to the known methods of hyperspectral image segmentation.
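As a concrete reading of the conditional erosion and dilatation used by the SH method (the formulas for L_E, L_D and p_c given earlier), the following is a minimal NumPy sketch for a single image slice, assuming the grey-level image is normalised to [0, 1]. Function names and the edge handling are illustrative, while the 17 × 17 mask and p_ec = p_dc = 0.5 follow the text:

```python
import numpy as np

def window_stack(img, k):
    # all k*k shifted copies of img (edge padding), stacked along axis 0
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    M, N = img.shape
    return np.stack([p[dm:dm + M, dn:dn + N]
                     for dm in range(k) for dn in range(k)])

def conditional_morphology(l_bin, l_gray, k=17, p_ec=0.5, p_dc=0.5):
    """Conditional erosion L_E and dilatation L_D of one binary slice L_BIN,
    gated by the local mean p_c of the grey-level image L_GRAY."""
    shifted = window_stack(l_bin, k)
    p_c = window_stack(l_gray, k).mean(axis=0)               # local mean brightness
    l_e = np.where(p_c < p_dc, l_bin, shifted.min(axis=0))   # erosion, gated
    l_d = np.where(p_c > p_ec, l_bin, shifted.max(axis=0))   # dilatation, gated
    return l_e, l_d
```

Where the local brightness p_c falls below p_dc the pixel is left untouched by the erosion, and symmetrically for the dilatation, so the object shape is corrected only where the image supports it.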
The three presented new segmentation methods have the following advantages:
- they are fully automatic,
- they enable fast segmentation,
- they are profiled to hyperspectral image segmentation,
- they use emissivity curves as the model,
- they can be applied to any type of object (not necessarily biological),
- they are faster and more precise than conventional methods known from the literature.

Work is currently underway to implement the described segmentation methods, especially the first (fast) method, in DSP digital circuits. The aim is initial object segmentation already at the acquisition phase. This will make it possible to automatically limit the acquired image and, thus, reduce the hyperspectral camera operation time.

Abbreviations: SKE: fast analysis of emissivity curves; S3D: 3D segmentation; SH: hierarchical segmentation; SPJ: method based on brightness thresholding; SWS: watershed method; SMM: method based on mathematical morphology; SP: manual method of object selection (benchmark).

Pang B, Zhang D, Wang K. The bi-elliptical deformable contour and its application to automated tongue segmentation in Chinese medicine. IEEE Trans Med Imaging. 2005;24(8):946–56. Morgan P, Frankish C. Image quality, compression and segmentation in medicine. Audiov Media Med. 2002;25(4):149–54. Suetens P, Bellon E, Vandermeulen D, Smet M, Marchal G, Nuyts J, Mortelmans L. Image segmentation: methods and applications in diagnostic radiology and nuclear medicine. Eur J Radiol. 1993;17(1):14–21. Piqueras S, Krafft C, Beleites C, Egodage K, Eggeling F, Guntinas-Lichius O, Popp J, Tauler R, Juan A. Combining multiset resolution and segmentation for hyperspectral image analysis of biological tissues. Anal Chim Acta. 2015;881:24–36. Koprowski R, Wilczyński S, Wróbel Z, Błońska-Fajfrowska B. Calibration and segmentation of skin areas in hyperspectral imaging for the needs of dermatology. Biomed Eng Online. 2014;13:113. Veganzones MA, Tochon G, Dalla-Mura M, Plaza AJ, Chanussot J. Hyperspectral image segmentation using a new spectral unmixing-based binary partition tree representation.
IEEE Trans Image Process. 2014;23(8):3574–89. Porwik P. Efficient spectral method of identification of linear Boolean function. Control Cybern. 2004;33(4):663–78. Fu D, Xie XS. Reliable cell segmentation based on spectral phasor analysis of hyperspectral stimulated Raman scattering imaging data. Anal Chem. 2014;86(9):4115–9. Hennessy R, Bish S, Tunnell JW, Markey MK. Segmentation of diffuse reflectance hyperspectral datasets with noise for detection of melanoma. Conf Proc IEEE Eng Med Biol Soc. 2012;2012:1482–5. Eches O, Benediktsson JA, Dobigeon N, Tourneret JY. Adaptive Markov random fields for joint unmixing and segmentation of hyperspectral images. IEEE Trans Image Process. 2013;22(1):5–16. Piqueras S, Duponchel L, Tauler R, Juan A. Resolution and segmentation of hyperspectral biomedical images by multivariate curve resolution-alternating least squares. Anal Chim Acta. 2011;705(1–2):182–92. Zhang Q, Plemmons R, Kittle D, Brady D, Prasad S. Joint segmentation and reconstruction of hyperspectral data with compressed measurements. Appl Opt. 2011;50(22):4417–35. Tarabalka Y, Chanussot J, Benediktsson JA. Segmentation and classification of hyperspectral images using minimum spanning forest grown from automatically selected markers. IEEE Trans Syst Man Cybern B Cybern. 2010;40(5):1267–79. Liu Z, Yan JQ, Zhang D, Li QL. Automated tongue segmentation in hyperspectral images for medicine. Appl Opt. 2007;46(34):8328–34. Christensen MP, Euliss GW, McFadden MJ, Coyle KM, Milojkovic P, Haney MW, Gracht J, Athale RA. ACTIVE-EYES: an adaptive pixel-by-pixel image-segmentation sensor architecture for high-dynamic-range hyperspectral imaging. Appl Opt. 2002;41(29):6093–103. Gao L, Smith RT. Optical hyperspectral imaging in microscopy and spectroscopy—a review of data acquisition. J Biophotonics. 2015;8(6):441–56. Foster KR, Koprowski R, Skufca JD. Machine learning, medical diagnosis, and biomedical engineering research—commentary. BioMed Eng OnLine. 2014;13:94. Koprowski R. 
Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab. J Biophotonics. 2015;8(11–12):935–43. Mitchell T. Machine learning. New York: McGraw Hill; 1997. p. 414. Krzanowski WJ. Principles of multivariate analysis, a user's perspective. New York: Oxford University Press; 1988. p. 608. Koprowski R. Processing hyperspectral medical images. Berlin: Springer; 2017. p. 140. Lefèvre S, Aptoula E, Perret B, Weber J. Morphological template matching in color images. In: Advances in low-level color image processing. Berlin: Springer; 2013. Galeano J, Jolivot R, Marzani F. Analysis of human skin hyper-spectral images by non-negative matrix factorization. Adv Soft Comput. 2011;7095:431–42. Jia H, Ding S, Meng L, Fan S. A density-adaptive affinity propagation clustering algorithm based on spectral dimension reduction. Neural Comput Appl. 2014;25:1557–67. Carlinet E, Géraud T. MToS: a tree of shapes for multivariate images. IEEE Trans Image Process. 2015;24(12):5330–42. Halimi A, Dobigeon N, Tourneret JY. Unsupervised unmixing of hyperspectral images accounting for endmember variability. IEEE Trans Image Process. 2015;24(12):4904–17. Grana M, Chyzhyk D. Image understanding applications of lattice autoassociative memories. IEEE Trans Neural Netw Learn Syst. 2015. Piqueras S, Krafft C, Beleites C, Egodage K, von Eggeling F, Guntinas-Lichius O, Popp J, Tauler R, de Juan A. Combining multiset resolution and segmentation for hyperspectral image analysis of biological tissues. Anal Chim Acta. 2015;881:24–36. Banas K, Banas A, Gajda M, Pawlicki B, Kwiatek WM, Breese MB. Pre-processing of Fourier transform infrared spectra by means of multivariate analysis implemented in the R environment. Analyst. 2015;140(8):2810–4. Lin Y, Puttonen E, Hyyppä J. Investigation of tree spectral reflectance characteristics using a mobile terrestrial line spectrometer and laser scanner. Sensors (Basel). 2013;13(7):9305–20.
Serranti S, Cesare D, Marini F, Bonifazi G. Classification of oat and groat kernels using NIR hyperspectral imaging. Talanta. 2013;103:276–84. Pasqualin C, Gannier F, Yu A, Malécot CO, Bredeloux P, Maupoil V. SarcOptiM for ImageJ: high frequency online sarcomere length computing on stimulated cardiomyocytes. Am J Physiol Cell Physiol. 2016;311:C277–83. Koprowski R, Wilczyński S, Wróbel Z, Kasperczyk S, Błońska-Fajfrowska B. Automatic method for the dermatological diagnosis of selected hand skin features in hyperspectral imaging. Biomed Eng Online. 2014;22(13):47. RK suggested the algorithm for image analysis and processing, implemented it and analyzed the images. PO performed the measurements and assessment, from the medical perspective, of the usefulness of the segmentation method in question in medical practice. Both authors read and approved the final manuscript. The Authors would like to thank Mr. Raphael Stachiewicz from ENFORMATIC Sp. z. o. o., Poland, for providing the hyperspectral camera. The Authors wish to sincerely thank Doctor Sławomir Wilczyński from Medical University of Silesia in Katowice in Poland for the possibility of using the collected hyperspectral images. This work was financially supported by Medical University of Silesia in Katowice, Grant number KNW-1-023/N/6/O. Department of Biomedical Computer Systems, University of Silesia, Bedzinska 39, 41-200, Sosnowiec, Poland Robert Koprowski Department of Community Pharmacy, School of Pharmacy and Division of Laboratory Medicine in Sosnowiec, Medical University of Silesia in Katowice, Kasztanowa 3, 41-200, Sosnowiec, Poland Paweł Olczyk Correspondence to Robert Koprowski. Koprowski, R., Olczyk, P. Segmentation in dermatological hyperspectral images: dedicated methods. BioMed Eng OnLine 15, 97 (2016). https://doi.org/10.1186/s12938-016-0219-5 Fast segmentation method Conditional erosion Conditional dilatation Hyperspectral imaging
September 2011, 4(3): 767-783. doi: 10.3934/krm.2011.4.767

Asymptotic limit of nonlinear Schrödinger-Poisson system with general initial data

Qiangchang Ju 1, Fucai Li 2, and Hailiang Li 3
1 Institute of Applied Physics and Computational Mathematics, Box 8009-28, Beijing 100088, China
2 Department of Mathematics, Nanjing University, Nanjing 210093
3 Department of Mathematics and Institute of Mathematics and Interdisciplinary Science, Capital Normal University, Beijing 100037, China

Received January 2011; Revised May 2011; Published August 2011

The asymptotic limit of the nonlinear Schrödinger-Poisson system with general WKB initial data is studied in this paper. It is proved that the current, defined by the smooth solution of the nonlinear Schrödinger-Poisson system, converges to the strong solution of the incompressible Euler equations plus a term of fast singular oscillating gradient vector fields when both the Planck constant $\hbar$ and the Debye length $\lambda$ tend to zero. The proof involves homogenization techniques, theories of symmetric quasilinear hyperbolic systems, and elliptic estimates; the key point is to establish estimates that are uniformly bounded with respect to both the Planck constant and the Debye length.

Keywords: nonlinear Schrödinger-Poisson system, semi-classical limit, quasi-neutral limit, incompressible Euler equations.

Mathematics Subject Classification: Primary: 35Q55; Secondary: 35B4.

Citation: Qiangchang Ju, Fucai Li, Hailiang Li. Asymptotic limit of nonlinear Schrödinger-Poisson system with general initial data. Kinetic & Related Models, 2011, 4 (3) : 767-783. doi: 10.3934/krm.2011.4.767
Generalized Lorentz Transformation
Thread starter: unified
Tags: lorentz transformations, special relativity

Summary: The problem is to generalize the Lorentz transformation to two dimensions.

Relevant Equations

Lorentz transformation along the positive x-axis:
$$\begin{pmatrix} \bar{x}^0 \\ \bar{x}^1 \\ \bar{x}^2 \\ \bar{x}^3 \end{pmatrix} = \begin{pmatrix} \gamma & -\gamma \beta & 0 & 0 \\ -\gamma \beta & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x^0 \\ x^1 \\ x^2 \\ x^3 \end{pmatrix}$$

Lorentz transformation along the positive y-axis:
$$\begin{pmatrix} \bar{x}^0 \\ \bar{x}^1 \\ \bar{x}^2 \\ \bar{x}^3 \end{pmatrix} = \begin{pmatrix} \gamma & 0 & -\gamma \beta & 0 \\ 0 & 1 & 0 & 0 \\ -\gamma \beta & 0 & \gamma & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x^0 \\ x^1 \\ x^2 \\ x^3 \end{pmatrix}$$

Velocity transformations from S to S', where S' moves along the positive x-axis at speed v relative to S:
$$u'_x = \frac{u_x - v}{1 - \frac{u_x v}{c^2}}, \qquad u'_y = \frac{u_y}{\gamma\left(1 - \frac{u_x v}{c^2}\right)}$$

Problem: $\bar{S}$ moves at velocity $\vec{v} = \beta c(\cos\phi\, \hat{x} + \sin\phi\, \hat{y})$ relative to S, with axes parallel and origins coinciding at time $t = \bar{t} = 0$. Find the Lorentz transformation from S to $\bar{S}$.

Attempt at a solution

Let S' move at velocity $\beta c \cos\phi\, \hat{x}$ relative to S. We can find the velocity of $\bar{S}$ relative to S' using the velocity transformations:
$$u'_x = 0, \qquad u'_y = \gamma_x \beta c \sin\phi, \qquad \text{where } \gamma_x = \frac{1}{(1 - \beta^2 \cos^2\phi)^{1/2}}.$$

We can express the coordinates in $\bar{S}$ given the coordinates in S' by the y-axis Lorentz transformation, with
$$\bar{\beta} = \beta \sin\phi\, \gamma_x, \qquad \bar{\gamma} = \frac{\gamma}{\gamma_x}, \qquad \text{where } \gamma = \frac{1}{(1 - \beta^2)^{1/2}},$$
and we can express the coordinates in S' given the coordinates in S by the x-axis transformation with
$$\beta' = \beta \cos\phi, \qquad \gamma' = \gamma_x.$$

Therefore we can express the coordinates in $\bar{S}$ given the coordinates in S as follows:
$$\begin{pmatrix} \frac{\gamma}{\gamma_x} & 0 & -\frac{\gamma}{\gamma_x} \beta \sin\phi\, \gamma_x & 0 \\ 0 & 1 & 0 & 0 \\ -\frac{\gamma}{\gamma_x} \beta \sin\phi\, \gamma_x & 0 & \frac{\gamma}{\gamma_x} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \gamma_x & -\gamma_x \beta \cos\phi & 0 & 0 \\ -\gamma_x \beta \cos\phi & \gamma_x & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x^0 \\ x^1 \\ x^2 \\ x^3 \end{pmatrix} = \begin{pmatrix} \gamma & -\gamma \beta \cos\phi & -\gamma \beta \sin\phi & 0 \\ -\gamma_x \beta \cos\phi & \gamma_x & 0 & 0 \\ -\gamma \gamma_x \beta \sin\phi & \gamma \gamma_x \beta^2 \sin\phi \cos\phi & \frac{\gamma}{\gamma_x} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x^0 \\ x^1 \\ x^2 \\ x^3 \end{pmatrix}$$

This final matrix represents the solution to the problem. But this solution is wrong, as the correct matrix is given as follows:
$$\begin{pmatrix} \gamma & -\gamma \beta \cos\phi & -\gamma \beta \sin\phi & 0 \\ -\gamma \beta \cos\phi & \gamma \cos^2\phi + \sin^2\phi & (\gamma - 1)\sin\phi \cos\phi & 0 \\ -\gamma \beta \sin\phi & (\gamma - 1)\sin\phi \cos\phi & \gamma \sin^2\phi + \cos^2\phi & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

I am unable to determine what is wrong with this solution; any suggestions would be appreciated.

PeterDonis
Moderator's note: Moved to advanced physics homework forum.

Jianbing_Shao
The product of two pure boosts in different directions will not be a pure boost anymore:
$$\exp(\zeta^1 X_1)\cdot\exp(\zeta^2 X_2)\neq\exp(\zeta^1 X_1+\zeta^2 X_2)$$
because the boosts in different directions do not commute with each other.

Jianbing_Shao said:
I noticed that of course.
But isn't it true that the matrix gives ##\bar S## in terms of S? I don't see how the logic could be wrong. We give ##\bar S## in terms of S' and S' in terms of S, thus ##\bar S## in terms of S. I agree the formula is not a boost, but isn't it still correct?

PeroK
unified said:
There's a thread about this from a few weeks ago. It depends how you define the axes in the final frame. What you have is a valid transformation. For example, the time dilation effect will be correct. But, if you assume that the frame ##\bar S## is moving with velocity ##\vec v## in frame ##S##, then frame ##S## is not moving with velocity ##-\vec v## in your frame ##\bar S##. In fact, that requirement is usually an unstated assumption in this problem: the assumption that you want symmetry of relative velocity between ##S## and ##\bar S##. Your approach does not achieve this, but transforms to a rotated version of the "required" coordinates for ##\bar S##.

PeroK said:
If what I have is a valid transformation, then I'm confused about what it is calculating. Given the coordinates in S, does it give the coordinates in ##\bar S##?

How do you define the coordinates in ##\bar S##? There's no single orientation of axes. The axes can be in any directions you want. Take a simple boost in the x-direction from ##S## to ##S'##. There's nothing to say that ##S'## must take the same x-axis as ##S##. It's perfectly valid for ##S'## to define the direction of motion as the ##y## direction, or to rotate its coordinates in any way. It's by convention that the coordinates in ##S'## are defined in a certain way with respect to the ##S## coordinates. If you have ##\bar S## moving in both the x and y directions relative to ##S##, then it's not so obvious how you want to define the x, y axes in ##\bar S## with respect to the x, y axes in ##S##. The only thing that must be true is that the magnitude of the velocity of ##S## must be correct. But any orientation of axes where this holds is a valid coordinate system for ##\bar S##.
However, the coordinate system where the velocity of ##S## is ##-\vec v## is by convention the one given for the 2D boost.

PS: note that for a boost along the x-axis, all the axes in ##S'## can be taken to be parallel to those in ##S##, i.e. the two sets of axes coincide when the origins coincide. But if the boost is not along a single coordinate axis, then the mutually orthogonal x, y axes in one frame cannot coincide with mutually orthogonal x', y' axes in the other. It's a good exercise to check this out.

Perhaps you can think of the problem in such a way:
$$\exp(\zeta^1 X_1)\exp(\zeta^2 X_2)=\lim_{n\rightarrow \infty}\prod^{n}_{i=1}\exp\left(\frac{\zeta^1 X_1}{n}\right) \prod^{n}_{j=1}\exp\left(\frac{\zeta^2 X_2}{n}\right)$$
$$\exp(\zeta^1 X_1+\zeta^2 X_2)=\lim_{n\rightarrow \infty}\prod^{n}_{i=1}\exp\left(\frac{\zeta^1 X_1}{n}\right)\exp\left(\frac{\zeta^2 X_2}{n}\right)$$
Obviously the two formulas describe different physical processes.

I'm not sure whether the problem is well posed. I interpreted it differently from what the discussion suggests: I think what's asked for is the rotation-free Lorentz boost along an arbitrary velocity ##\vec{v}## between the two frames. This will give a symmetric Lorentz-transformation matrix. It's not difficult to derive from the Lorentz transformation in one direction by writing it in 3D covariant terms. Another understanding of the question seems to be that it asks to do the transformation in two steps, i.e., first a boost in the 1-direction and then a boost in the 2-direction. This, however, does not lead to a rotation-free boost, for the reasons stated above: the composition of two rotation-free boosts in different directions leads to a Lorentz boost followed by a rotation.
For the rotation-free boost, see https://itp.uni-frankfurt.de/~hees/pf-faq/srt.pdf
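The two readings of the problem discussed in this thread can be checked numerically (c = 1, coordinates t, x, y; the script below is an illustrative sketch): composing the x-boost and the y-boost from the original attempt gives a valid Lorentz transformation that is not symmetric, the quoted answer is the symmetric rotation-free boost, and the two differ by exactly a spatial (Wigner) rotation:

```python
import numpy as np

def pure_boost(bx, by):
    # rotation-free boost for velocity (bx, by): the symmetric matrix
    # quoted in the thread as the expected answer (c = 1, coords t, x, y)
    b2 = bx * bx + by * by
    g = 1.0 / np.sqrt(1.0 - b2)
    return np.array([
        [g,       -g * bx,                     -g * by],
        [-g * bx, 1 + (g - 1) * bx * bx / b2,  (g - 1) * bx * by / b2],
        [-g * by, (g - 1) * bx * by / b2,      1 + (g - 1) * by * by / b2],
    ])

beta, phi = 0.6, 0.5
bx, by = beta * np.cos(phi), beta * np.sin(phi)

# the original attempt: x-boost to S', then y-boost to S-bar
gx = 1.0 / np.sqrt(1.0 - bx * bx)
M = pure_boost(0.0, by * gx) @ pure_boost(bx, 0.0)

B = pure_boost(bx, by)              # the symmetric rotation-free boost
eta = np.diag([-1.0, 1.0, 1.0])

assert np.allclose(M.T @ eta @ M, eta)   # M preserves the metric...
assert not np.allclose(M, M.T)           # ...but is not a pure boost
assert np.allclose(B, B.T)               # the quoted answer is symmetric

# both M and B take S to a frame moving with the same velocity, so they can
# differ only by a spatial rotation: R = M B^{-1} is the Wigner rotation
R = M @ np.linalg.inv(B)
assert np.allclose(R[0], [1.0, 0.0, 0.0])
assert np.allclose(R[:, 0], [1.0, 0.0, 0.0])
print("Wigner rotation angle:", np.arctan2(R[2, 1], R[1, 1]))
```

This makes concrete the point that the two-step construction is a valid coordinate choice for the moving frame, just a rotated one.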
A quantitative method for estimating the adaptedness in a physiological study
Vladimir N. Melnikov ORCID: orcid.org/0000-0001-5786-18701

Existing mathematical models of individual adaptation are mostly reductionist by nature. Researchers usually consider the subject adapted a priori, based only on the fact of continued or prolonged influence of the harmful factor. This paper describes a method that allows physiological adaptedness to experimental challenges to be assessed on the basis of a holistic approach and quantitative criteria. The suggested method comprises simple equations and incorporates into the model an indicator that differentiates functions with regard to their significance for determining physiological adaptedness, considered as an outcome of the adaptive process. The proposed empirical model makes it possible to compare subjects with respect to their resistance to several loads. Physiological parameters were differentiated with regard to their significance for assessing adaptedness. Two examples are considered: animal adaptation to exercise after physical training and after plant adaptogen administration. The calculated index of adaptedness is useful in that it replaces wordy descriptions of large tables that reveal alterations in numerous parameters of many subjects under study.

Adaptation, as an active process of responding to challenges, and adaptedness, as a result of this process, mean achieving a positive outcome, i.e. survival and reproduction, in the face of adversity. In a wider sense, adaptation includes behavioral, physiological, structural, and genetic changes upon environmental impacts that are beyond the biologically adequate ranges. Apart from the traditional key environmental components (warmth, food, salt, water, microelements), modern living organisms, and especially human beings, face a growing need to adjust to the rising amount and variability of new materials, drugs, and chemicals in their milieu.
It is physiological adaptation that involves 'active resistance' and secures that positive outcome, living without disease, if and when the preceding behavioral adaptation proves ineffective or insufficient. Adaptability as a fundamental property of living matter is widely studied by experimental biology and medicine. Every specialist in the field encounters difficulties in selecting criteria of adaptedness and analysing the results of multifactor experiments, wherein the state of the subject is assessed by a set of numerous parameters. Many researchers a priori consider the investigated subject adapted to a given factor solely on the basis of its prolonged influence. There have been some previous attempts to quantify adaptedness [1]. Most of them deal with Darwinian adaptation considered as biological fitness in the context of reproductive success; some relate to pharmacological tolerance as a result of the adaptive process [2, 3]. Researchers consider adaptation in terms of tolerance, stability, constancy, resistance, and coping, in view of the stress conception [4]. An analysis of the existing literature reveals that authors stretch the meaning of the term and interpret it loosely, from genes and the millisecond time scale [5] to such long-term weather-related health outcomes as cardiovascular mortality [6]. Most studies focus on distinct systems or functions: neural networks [7], visual perception [8], blood circulation [9], carbon dioxide transport [10], physical performance [11, 12]. Pries and co-authors [13] have proposed a mathematical model to explain how a complex interaction of various stimuli can lead to vascular adaptation. Gorban et al. [14] have designed an algorithm based on the number and extent of correlations between parameters characterising the state of the population under study. The existing models are mostly reductionist in nature.
The empirical model proposed here is built at the organism level and based upon a holistic approach, introducing quantitative criteria of individual adaptation. This study is the first to incorporate into the model an indicator that differentiates functions with regard to their significance for determining physiological adaptedness, considered as an outcome of the adaptive process.

Description of the method

In order to estimate the degree of adaptedness to a given harmful factor, probably the most direct way is to subject the organism to the action of that factor, applying functional "resolving" or "provoking" loads which force the physiological system to reveal its adaptive capabilities, for example those attained in the course of training or pharmacological pre-treatment. The proposed method, involving simple mathematical operations, makes it possible to obtain an index of adaptedness of biosystems to various environmental or experimental conditions.

CASE I: two organisms, repeated-measures design

Let us assume that we have two organisms B and C chosen at random from a homogeneous population. Suppose that the individual B was subjected to P, a disturbing effect of the environment or an internal stressor. Let us then try to answer the question of how P influenced the resistance of B to the short-term action of another factor, say Q, to which both B and C are equally subjected for comparison in an experiment with a parallel design (Fig. 1). As a rule, P precedes the action of Q (not necessarily, however) and is accompanied by a change in the resistance of the organism to Q. Qualitatively, both effects may be of the same nature. This is observed, for instance, in the effect of prolonged moderate physical training on the tolerance of animals to muscular load, or in the effect of preventive administration of a poison on the resistance of organisms to subsequent acute poisoning.
Yet, the above two effects may differ in nature, as in studies of the effects of pharmacological preparations, pretreatment, or preconditioning (P) on adaptation to hypoxia, hypothermia, and so on (Q).

Fig. 1 Scheme of the physiological experiment appropriate for the model described

Out of the multitude of possible situations, let us take the following four states of the subjects studied: (1) the baseline intact state prior to the action of P (F0); (2) the state subsequent to P and prior to Q (F′); (3) the state immediately after Q (F″); and (4) the state following the lapse of time ∆t after the termination of the action of Q (F‴). It is clear that the first and second states for subject C coincide: \( {F}_C^{0\kern0.5em } \) = \( {F}_C^{\prime \kern0.5em } \) (P is absent). The assumption \( {F}_B^{0\kern0.5em } \) = \( {F}_C^{0\kern0.5em } \) is also justifiable. The entire set of physiological parameters reflecting the state of the biosystem during the action of a given threatening factor may be tentatively divided into at least three categories [15]. The first includes parameters, mainly homeostatic [16], which primarily and directly change as a result of the action of an entropic agent: body temperature during cooling or overheating, blood oxygen saturation during hypo- or hyperoxia, concentration of lactic acid and content of energy resources (hepatic and muscle glycogen, glucose, FFA, ATP levels) during physical exercise, etc. The second category includes parameters which reflect changes in adaptive, or allostatic [16], functions and mechanisms, whose work is aimed at normalizing the initially changed characters, countering the adverse effects of the entropic factor, and leveling off shifts in homeostatic variables [17,18,19]. As in the cases of hypoxia [20] and physical load [12], such parameters include heart and respiration rates, cardiac output, secretion of the adaptive hormones corticosteroids and catecholamines [21], etc.
Variables of the third category do not change at all under such an action; in other words, they characterize functions indifferent to the influencing factor. The boundary between the first and second groups is sometimes tentative. The values of some functions may alter with environmental changes in a manner depending on the severity and timing of stress: from initially reacting mechanisms they may become pronouncedly allostatic, and vice versa. Nevertheless, it is usually possible to establish such a boundary in each concrete situation from the physiological point of view. Changes in the parameters of the homeostatic and adaptive categories under the effect of a given factor should be accounted for with opposing signs in assessing the adaptedness of an organism to that factor. Despite the continuing action of the perturbing agent, the initially changed physiological characters become normalized, or less pronounced, in the state of adaptation. Contrariwise, the adaptive mechanisms work with greater intensity. The latter premise has been suggested as a result of an analysis of experiments performed in different studies. Thus, for instance, it is known that trained athletes in response to intensive physical loads (F″) show higher cortisol secretion [22], mainly due to the decrease in the post-training value (F′) of basal secretion [23, 24]. Untrained people, however, do not display such increased secretion, or even show a decline thereof. Cases of normalization of a given adaptive function in the process of adaptation, observed by many investigators, indicate a "redistribution of roles" among functions, when a less powerful mechanism has exhausted its possibilities and is replaced by another, more powerful and stable one that rarely "actuates" (in states remote from exhaustion).
Let us assess the state of the tested organisms by the set of parameters K1, K2, …, Kj, …, Km, which, in our view, are the most important in estimating adaptedness to Q. Theoretically, a situation may arise wherein, out of the entire set, we choose for investigation only parameters that do not change under the action of Q; then F′ = F″. In practice, however, this is almost improbable, since the investigator usually has sufficient information as to which characteristics should change under Q. The difference K″ – K′ characterizes the change in K under the action of Q. Since the responding variables can change in either the positive or the negative direction from the baseline, their reactions should be taken in absolute value. In order to compare such differences irrespective of the sign and absolute value of the parameter, we will further use the relative magnitude ∣K″ – K′∣/K′. To assess the recovery, during rest, of a characteristic that had changed as a result of the action of Q, let us consider the fraction ∣K‴ – K′∣/∣K‴ – K″∣. The numerator shows how closely the changed parameter has "returned" to its value prior to the action of Q; within the present model, a smaller difference indicates better tolerance. Physiological practice indicates that the fraction is not sensitive to K′, so the possible effect of the difference between K′B and K′C for a given variable may be ignored in such an approximate algorithm. The difference K‴ – K″ characterizes the rate of change of the parameter during the given period of time ∆t. It is suggested that the higher the rate of recovery, and hence the greater the difference K‴ – K″, the more adapted the organism.
Precisely for that reason, the rate index appears in the denominator, so that the fraction $$ \mid {K}^{{\prime\prime\prime} }-K^{\prime}\mid /\mid {K}^{{\prime\prime\prime} }-K^{{\prime\prime}}\mid \to \min, $$ when approaching the state of adaptation, wherein vitally important constants are kept at a normal level or, in other words, when the stability of the usual level of specific vital activity, viability, and reproduction of the biosystem is maintained under conditions of continuing action of a stressor. In this case, we proceed from the following criteria of adaptedness: first, an organism adapted to Q responds to its action with lesser shifts in homeostasis, i.e. lesser deviations from the "tranquil" values of the initially changing parameters, during more intensive work of the adaptive mechanisms; secondly, in the state of adaptation, the recovery of the indices that changed under Q is accelerated. These criteria may be expressed mathematically as follows: $$ \left\{\begin{array}{c}\frac{\mid {K}^{{\prime\prime} }-{K}^{\prime}\mid }{K^{\prime }}\kern0.5em \to \min, \mathrm{for}\ \mathrm{the}\ \mathrm{primarily}\ \mathrm{changing}\ \mathrm{homeostatic}\ \mathrm{variable};\\ {}\frac{\mid {K}^{{\prime\prime} }-{K}^{\prime}\mid }{K^{\prime }}\kern0.5em \to \max, \mathrm{for}\ \mathrm{the}\ \mathrm{variable}\ \mathrm{representing}\ \mathrm{the}\ \mathrm{adaptive}\ \mathrm{function}/\mathrm{process};\\ {}\frac{\mid {K}^{{\prime\prime\prime} }-{K}^{\prime}\mid }{\mid {K}^{{\prime\prime\prime} }-{K}^{{\prime\prime}}\mid}\kern0.5em \to \min, \mathrm{for}\ \mathrm{the}\ \mathrm{recovery}\ \mathrm{process};\end{array}\right.
$$ or for many parameters $$ \left\{\begin{array}{c}{\sum}_{j=1}^m\frac{\mid {K}_j^{{\prime\prime} }-{K}_j^{\prime }\mid }{K_j^{\prime }}\to \min, \\ {}{\sum}_{h=1}^d\frac{\mid {K}_h^{{\prime\prime} }-{K}_h^{\prime }\mid }{K_h^{\prime }}\to \max, \\ {}{\sum}_{j=1}^m\frac{\mid {K}_j^{{\prime\prime\prime} }-{K}_j^{\prime}\mid }{\mid {K}_j^{{\prime\prime\prime} }-{K}_j^{{\prime\prime}}\mid }+{\sum}_{h=1}^d\frac{\mid {K}_h^{{\prime\prime\prime} }-{K}_h^{\prime}\mid }{\mid {K}_h^{{\prime\prime\prime} }-{K}_h^{{\prime\prime}}\mid}\to \min, \end{array}\right. $$ on approaching the state of physiological adaptation to Q. Here j is the index of a directly and primarily changing parameter (j = 1, 2, …, m), and h the index of a variable representing the work of adaptive allostatic functions (h = 1, 2, …, d). Naturally, all the differences should be within the limits of physiological adequacy of reactions. The index of the adaptedness of subject B to the action of Q is: $$ {a}_B={\sum}_{j=1}^m\left(-\frac{\mid {K}_{Bj}^{{\prime\prime} }-{K}_{Bj}^{\prime}\mid }{K_{Bj}^{\prime }}-\frac{\mid {K}_{Bj}^{{\prime\prime\prime} }-{K}_{Bj}^{\prime}\mid }{\mid {K}_{Bj}^{{\prime\prime\prime} }-{K}_{Bj}^{{\prime\prime}}\mid}\right)+{\sum}_{h=1}^d\left(\frac{\mid {K}_{Bh}^{{\prime\prime} }-{K}_{Bh}^{\prime}\mid }{K_{Bh}^{\prime }}-\frac{\mid {K}_{Bh}^{{\prime\prime\prime} }-{K}_{Bh}^{\prime}\mid }{\mid {K}_{Bh}^{{\prime\prime\prime} }-{K}_{Bh}^{{\prime\prime}}\mid}\right). $$ The formula for aC is the same. It should be noted that the equalities mB = mC and dB = dC must hold; otherwise, the difference between aB and aC would depend on the differing numbers of parameters tested, since the index a is calculated as a sum. If aB > aC, there are grounds, on the basis of the above criteria, to say that B is more adapted to Q than C. In the case of the contrary inequality, the conclusion should be that P had an unfavorable effect on B, revealed in a lower resistance of B to Q.
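As an illustrative sketch (not the author's software), the index can be computed directly from the recorded states; the function name and data layout below are my own assumptions:

```python
def adaptedness_index(homeostatic, adaptive):
    """Index of adaptedness a for one subject.

    Each argument is a list of (k1, k2, k3) triples for one parameter:
    k1 = K' (after P, before Q), k2 = K'' (immediately after Q),
    k3 = K''' (after the rest period Delta-t).  Homeostatic terms enter
    with minus signs (smaller shifts are better); the adaptive reaction
    term enters with plus; the recovery fraction is always subtracted.
    """
    a = 0.0
    for k1, k2, k3 in homeostatic:
        a += -abs(k2 - k1) / k1 - abs(k3 - k1) / abs(k3 - k2)
    for k1, k2, k3 in adaptive:
        a += abs(k2 - k1) / k1 - abs(k3 - k1) / abs(k3 - k2)
    return a
```

Two subjects are compared by evaluating the function on identical parameter sets (mB = mC, dB = dC); the larger value indicates the better-adapted subject. Note that the formula presumes K‴ ≠ K″, i.e. a rest period long enough for measurable recovery.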
The magnitude of (1) essentially depends on ∆t. When ∆t is small, K‴ is close to K″, and the difference K‴ – K′ may be large; in this case, ∣K‴ – K′∣/∣K‴ – K″∣ >> 1. Such situations should be avoided, as they are difficult to analyse by means of the method suggested, since the above fraction would dominate the index of adaptedness. Note that in formula (2) the terms ∣K″ – K′∣/K′ are usually fractions of unity or close to it. To avoid this difficulty, two solutions are possible. The first is to divide the fraction by a constant number, performing this operation for the given parameter Ki for all the subjects compared; however, one may be somewhat subjective in selecting the constant, and this would put Ki in an unequal position compared with the other parameters. Hence, the second, preferable, option is to select such a duration of the rest period as renders K‴ significantly different from K″ and close to K′, while the fraction in (1) remains less than unity. At the same time, the interval ∆t = t3 – t2 (Fig. 2) should not be too long (usually from several dozen minutes to several hours), since, during a prolonged rest period, one may become "involved" in a phase of supercompensation of the investigated parameter, when its value would for a second time differ from the baseline level. This primarily concerns indices that characterize certain reserves or resources of the organism, which become exhausted under the effect of Q.

Fig. 2 Two different variants of recovery dynamics for more adapted (dotted line) and less adapted (solid line) organisms

It is desirable that tB = tC. Only in that case is there no need to introduce ∆t into formulae (1) and (2); moreover, this excludes the influence of a possible nonlinearity in time of the recovery process, such nonlinearity being dependent on P.
Special consideration should be given to cases when the sign of the alteration in a parameter under the test load Q (K″ – K′) changes to the opposite, depending on whether or not P affected the organism. Thus, for instance, in subject C the variable increases under the functional load (\( {K}_C^{{\prime\prime} } \) > \( {K}_C^{\prime } \), K″ – K′ > 0), while under the same load organism B responds by a decreased parameter (\( {K}_B^{{\prime\prime} } \) < \( {K}_B^{\prime } \), K″ – K′ < 0). Such situations are rare, but they are interesting from the theoretical point of view and afford much information on the mechanisms of adaptive reactions. Let us now assume that, as a result of the action of P, the mechanism responsible for maintaining Ki within its norms has "exhausted" its possibilities, and the value of the index is either at its possible upper (lower) boundary or has returned to its initial level, but has not correspondingly increased (decreased) in response to the action of Q. At the same time, an organism unaffected by P possesses a large reserve for changing the index toward one of its boundaries while its reserves remain unchanged. In this case, P has probably had an adverse effect on the system, violating the natural course of the response to the action of Q. In all cases, "paradoxical" alterations with a p value > 0.05, i.e. statistically negligible, should be considered insignificant and taken as zero. In this connection, the greater the number of parameters considered, the lower the probability of an erroneous conclusion, because, in a vast amount of data, an individual, possibly chance, anomalous alteration of a given parameter is leveled off to a greater degree. Let us introduce into (2) a coefficient that characterizes the importance of each parameter for assessing adaptedness.
As a criterion of importance, let us take S, the stability index, calculated by subtracting the coefficient of variation from unity: Sx = 1 – \( \mathrm{SD}/\overline{K_x^0} \). Here SD is the standard deviation, and \( \overline{K_x^o} \) the arithmetic mean of the variational series comprising the values of the given parameter Kx for a group of intact animals whose state corresponds to Fo: $$ \overline{K_x^o}=\frac{1}{l}\kern0.62em {\sum}_{y=1}^l{K}_{xy}^o, $$ where x is the (fixed) index of the parameter, x = 1, 2, …, (m + d), and y the index of the individual (y = 1, 2, …, l). The realistic minimum value of the coefficient S is about 0.5; that is, the statistical distribution of the random magnitude \( {K}_x^0 \) in a population of intact individuals should be close to normal. Cases when S → 0, or SD ≈ mean, characteristic of Poisson and asymmetrical distributions, are not considered by this model. To estimate S more accurately, it should be calculated on a sufficiently large sample of intact individuals. The coefficient S depends on SD and shows the degree of variability of a given parameter in a group of subjects. Obviously, constant parameters of the internal medium (pH, osmotic blood pressure, body temperature in homeotherms, etc.) have low standard deviations and S values close to unity. Let us now consider how to take account of the degree of stability of an initially changing parameter when assessing its importance for estimating adaptedness. Considerable alterations in ultrastable parameters under the action of Q indicate a low resistance of the organism to Q. Contrariwise, an organism adapted to Q responds to the latter by changes in characters of low stability, while no or insignificant changes take place in stable and ultrastable constants. Hence, ultrastable characteristics are more important for assessing adaptedness than labile ones.
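A minimal sketch of computing S from baseline data (the function name is mine; the sample SD is assumed, in line with the text):

```python
from statistics import mean, stdev

def stability(baseline_values):
    """Stability coefficient S = 1 - SD / mean, computed over the
    baseline (F0) values of one parameter in a group of intact
    subjects.  For roughly normal data S falls between ~0.5 and 1;
    S near 0 (SD comparable to the mean) is outside the model."""
    return 1.0 - stdev(baseline_values) / mean(baseline_values)
```

For instance, tightly clustered baseline readings (a homeostatic constant) yield S close to unity, whereas a parameter varying severalfold across intact animals yields a much lower S.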
To cite an example, let us assume that in a given organism subjected to the action of Q, a parameter of low stability (S = 0.7) changed by 5 units, and an ultrastable one (S = 0.9) by 10 units; contrariwise, in another organism the low-stability and ultrastable characters changed by 10 and 5 units, respectively. Then a1 = 5 + 10 = 15, a2 = 10 + 5 = 15, and a1 = a2. Introducing into the formula the corresponding stability coefficient before each parameter changes the sums as follows: a1 = 0.7 × 5 + 0.9 × 10 = 12.5; a2 = 0.7 × 10 + 0.9 × 5 = 11.5, and a2 < a1. Since shifts in the primarily changing parameters enter the index with a negative sign, this leads to the correct conclusion that the second organism, whose ultrastable character changed less, is more adapted to Q. The argumentation concerning adaptive functions is different in nature. The labile functions, which are characterized by initially intensive work but are energetically disadvantageous and possess low "adaptation capability", are later gradually replaced by deep, highly stable functions, which afford the organism substantial gains in the process of adaptation. Thus, people adapted to hypoxia have respiration and heart rates close to normal, while the changes taking place in their tissues and cells remain substantial [25]. Hence, adaptation achieved through ultrastable, deep adaptive mechanisms should be considered preferable. In the process of adaptation, there is a possibility of "normalization" of a "low-powered" adaptive mechanism, with replacement of its functions by another, more stable, phylogenetically older and ontogenetically earlier mechanism. Taking this into consideration, all experiments should be planned in such a way as to make it possible to measure the indices of at least three adaptive functions which differ considerably in their stability coefficients.
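The arithmetic of this example is easy to check; the snippet below simply reproduces the weighted sums (variable names are mine):

```python
# Stability-weighted sums from the worked example: raw shifts of 5 and
# 10 units in parameters with stability coefficients 0.7 (labile) and
# 0.9 (ultrastable).
s_labile, s_ultra = 0.7, 0.9

a1 = s_labile * 5 + s_ultra * 10   # organism 1: ultrastable shifted by 10
a2 = s_labile * 10 + s_ultra * 5   # organism 2: ultrastable shifted by 5

# Unweighted, both organisms sum to 15 and are indistinguishable;
# the S-weights separate them (12.5 vs 11.5).
```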
Considering the terms of the recovery process, quicker restoration of highly stable parameters to the initial level during rest indicates better adaptedness; in this case, too, ultrastable parameters are more important for our purpose than low-stability ones. Accounting for the above considerations, after the introduction of the coefficient S, which differentiates all characters with respect to their importance for assessing adaptedness, formula (3) for the integral index becomes: $$ {a}_B={\sum}_{j=1}^m{S}_j\left(-\frac{\mid {K}_{Bj}^{{\prime\prime} }-{K}_{Bj}^{\prime}\mid }{K_{Bj}^{\prime }}-\frac{\mid {K}_{Bj}^{{\prime\prime\prime} }-{K}_{Bj}^{\prime}\mid }{\mid {K}_{Bj}^{{\prime\prime\prime} }-{K}_{Bj}^{{\prime\prime}}\mid}\right)+{\sum}_{h=1}^d{S}_h\left(\frac{\mid {K}_{Bh}^{{\prime\prime} }-{K}_{Bh}^{\prime}\mid }{K_{Bh}^{\prime }}-\frac{\mid {K}_{Bh}^{{\prime\prime\prime} }-{K}_{Bh}^{\prime}\mid }{\mid {K}_{Bh}^{{\prime\prime\prime} }-{K}_{Bh}^{{\prime\prime}}\mid}\right). $$

CASE II: two groups of organisms, repeated-measures parallel experimental design

Let us now consider the case when two groups of subjects, numbering nB and nC and selected at random from a qualitatively homogeneous population, are subjected to the effects of P and Q. Let us assume that, as in the first case with two organisms, the concrete experimental techniques allow us to record the necessary parameters for each subject in the four (or three, for group C) above-said states (Fo, F′, F″, F‴), the measurement procedures themselves not affecting the state of the subject.
Then, having composed and conventionally treated variational series based on the calculated individual magnitudes ai and ar, one can obtain the mean values $$ \left\{\begin{array}{l}{A}_B=\overline{a_B}=\frac{1}{{\mathrm{n}}_{\mathrm{B}}}\;{\sum \limits}_{i=1}^{{\mathrm{n}}_{\mathrm{B}}}{a}_i=\frac{1}{{\mathrm{n}}_{\mathrm{B}}}{\sum \limits}_{i=1}^{{\mathrm{n}}_{\mathrm{B}}}\left[{\sum \limits}_{j=1}^m{S}_j\left(-\frac{\mid {K}_{ij}^{{\prime\prime} }-{K}_{ij}^{\prime}\mid }{K_{ij}^{\prime }}-\frac{\mid {K}_{ij}^{{\prime\prime\prime} }-{K}_{ij}^{\prime}\mid }{\mid {K}_{ij}^{{\prime\prime\prime} }-{K}_{ij}^{{\prime\prime}}\mid}\right)+{\sum \limits}_{h=1}^d{S}_h\left(\frac{\mid {K}_{ih}^{{\prime\prime} }-{K}_{ih}^{\prime}\mid }{K_{ih}^{\prime }}-\frac{\mid {K}_{ih}^{{\prime\prime\prime} }-{K}_{ih}^{\prime}\mid }{\mid {K}_{ih}^{{\prime\prime\prime} }-{K}_{ih}^{{\prime\prime}}\mid}\right)\right]\\ {}{A}_C=\overline{a_C}=\frac{1}{{\mathrm{n}}_C}{\sum \limits}_{r=1}^{{\mathrm{n}}_C}{a}_r\end{array}\right. $$ and standard deviations, construct confidence intervals, and estimate the significance of the difference between AB and AC with any pre-set probability of error. In formulae (5), i is the index of the individual in group B, r the index of an organism in group C, j the index of a directly changing homeostatic parameter, and h the index of a parameter of allostatic functions. The multipliers 1/nB and 1/nC may be omitted from the formulae if nB = nC.

CASE III: parallel design

The next design is widespread in experimental biology and physiology, where the imperfection of investigation techniques, or their specific properties, often makes it necessary to kill the animal in order to obtain certain characteristics of its internal medium. This is the case for almost all morphological and biochemical methods of investigation, with the exception of those involving biological fluids.
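For Case II, the group-level step can be sketched as follows (the function name and the numerical indices are illustrative assumptions, not data from the paper): individual indices ai are averaged within each group, and the SDs support confidence intervals on the difference.

```python
from statistics import mean, stdev

def group_index(individual_indices):
    """Group-level adaptedness A (formula 5): the mean of the
    individual indices a_i, returned together with the sample SD
    needed for confidence intervals on A_B - A_C."""
    return mean(individual_indices), stdev(individual_indices)

# hypothetical individual indices computed beforehand for each subject
a_B = [-0.10, -0.30, -0.20, -0.15]   # group B (n_B = 4)
a_C = [-1.80, -2.10, -2.30, -1.90]   # group C (n_C = 4)

A_B, sd_B = group_index(a_B)
A_C, sd_C = group_index(a_C)
# A_B > A_C here, so group B would be judged more resistant to Q
```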
In this case, several animals are killed in each F state, and, on the basis of their individual values, the mean value of each parameter (and, if needed, its SD) is estimated: $$ \overline{K_t^{\prime}} = \frac{1}{g}\;{\sum}_{f=1}^g{K}_{tf}^{\prime}, $$ where f is the index of the individual, g the number of individuals used for obtaining parameter t in state F′, and t the parameter index, t = 1, 2, …, (m + d). Accounting for this important circumstance, each parameter in (4) should be replaced by its mean value (6). In Case III it is more difficult to calculate the standard deviation for the index of adaptedness than in Case II, even though there are mean errors for each parameter in each of the states: the problem is to calculate the standard deviation of the sum in (4) after the corresponding replacement of K by \( \overline{K} \) in accordance with (6). Let us consider, as an example, the calculation of the index of adaptedness for several groups of animals. This case may be assigned to the last and apparently most complex variant of the algorithm described. The experiment was performed by the author in collaboration with Drs. A.V. Shulga, E. Khasina, and G. Bezdetko. The numerical application of the method is presented in Tables 1 and 2.

Table 1 Biochemical parameters in rats subjected to physical training, eleutherosides administration, and acute exercise

Table 2 Calculation of the index of rat adaptedness to short-term intensive muscular work

Sexually mature male Wistar rats from groups D and E (Table 1) were subcutaneously injected with a 0.5% aqueous solution of a sum of eleutherosides (active substances from the roots of the Far Eastern plant Eleutherococcus senticosus). This remedy, like ginseng, belongs to the adaptogenic herbs [26, 27], which are known to increase stress resistance and physical and mental performance without increasing oxygen consumption [28]. The injections were made at a dose of 5 mg/kg twice a day for fourteen days.
Animals from groups C and E were trained daily for a fortnight by swimming with a 6% load on the tail; the initial session lasted 12 min, with 2 min added every subsequent day. Animals from group C were injected with an isotonic solution of sodium chloride. The animals were killed 46 h after completing the last swimming session. Immediately before that, they were subjected to a functional load involving 15-min swimming with a 6% load in water at 29–31 °C. Some of the rats were decapitated prior to swimming (state of rest). The results of the experiment and their treatment for obtaining the index of adaptedness are presented in Tables 1 and 2. The concentration of 11-hydroxycorticosteroids, the major of which in rats is corticosterone, is considered an adaptive parameter and, in accordance with criteria (2), was used with a positive sign in calculating the index. Plus and minus signs for each physiological or biochemical variable in Tables 2 and 4 do not denote the direction of changes but play a technical (auxiliary) role, indicating the sign to be assigned to the given parameter when calculating the sum. Analyzing the bottom line of Table 2, one may conclude that trained animals (group C) are the most resistant to short-term intensive muscular load. Untrained and trained animals from groups D and E, which were administered eleutherosides, "rank" second and third, respectively. Intact rats from group B proved to be the least resistant to the above load. The most noteworthy fact is that the injection of eleutherosides mimics training and exerts a protective effect; this conclusion is evident from the comparison of the A values for groups B and D. Let us now consider one more example, in which the index of adaptedness was calculated for two groups of race horses differing in the functional status of the individuals [29]. The load intensity corresponded to about 80% of the maximum.
This case takes into account the recovery of variables 1 h after the test run (Tables 3 and 4). Perspiration, assessed visually and semi-quantitatively, was not included in the analysis. The stability coefficients of the other indices were found as the means of two magnitudes calculated separately for \( {F}_1^{\prime } \) and \( {F}_2^{\prime } \).

Table 3 Biochemical and physiological variables in horses before and after racing

Table 4 Calculation of the index of adaptedness of race horses to physical loads

An increased pulse rate, up to the definite limit at which indices of heart efficiency begin to decline, is undoubtedly an adaptive trait. Adaptive responses also include an increased rate of oxygen release by the blood and an increased concentration of hemoglobin. Analyzing the changes in the variables studied, as well as the energetic efficiency of the animals, Epstein and Rudoy [29] characterized the horses from groups 1 and 2 as robust and weak, respectively. Thus, by means of precise quantitative calculations, the proposed method confirmed the conclusion of the above authors regarding the functional state of the animals from the groups compared. In discussion, the following important notes should be made. The index of adaptedness A has no independent value whatsoever unless compared with the index of another group (or other groups) of organisms studied in parallel. According to the differentiation offered by Prosser and Brown [30], only regulating organisms, not conformers, can be analyzed by this method; that is, the proposed algorithm is not applicable to "poikilo-organisms", non-homeostatic species that are unable to maintain constant parameters of the milieu interieur. The calculation of SD for evaluating interindividual variability, and hence functional stability, makes sense only for normally, or at least symmetrically, distributed variables; therefore, non-Gaussian parameters cannot be introduced into the proposed model.
Further, the model cannot handle parameters with nonlinear responses, particularly those demonstrating exponential dynamics during the recovery process. When referring to the topic, such terms as resistance, stability, tolerance, fitness, acclimation, coping, and adaptation are often used interchangeably. This, however, is hardly justified and, being a theoretical question, remains to be substantiated; it should be the subject of a special study. To be sure, the calculation of the suggested index of adaptedness cannot replace a detailed analysis of all the changes observed, since such an analysis is essential for elucidating specific mechanisms of adaptation. The model may be applied for assessing the functional state and resistance of athletes under different kinds of loads. It may also be used to study cross-adaptation to two or several constraints, and in all experiments whose scheme may be represented by Fig. 1. A probable application of the method is the screening of pharmacological substances that in one way or another affect the process of adaptation. Thus, the index of adaptedness makes it possible to quantify the integral response of biosystems to the action of disturbing factors. The said index is useful in that it replaces wordy descriptions of large tables that reveal alterations in numerous parameters of many of the subjects studied. The use of the model for analyzing experimental results would unavoidably force the investigator to take a more careful approach in designing the experiment and in selecting the respective test groups, states, and parameters. This, in turn, would lead to a more thorough methodological substantiation of the investigations in question. The program for calculating the index of adaptedness, implemented in the Excel package, for any experimental data, together with other materials, is freely available from the author upon request.

11-HCS: 11-hydroxycorticosteroids
ATP: Adenosine triphosphate
FFA: Free fatty acids
P50: Blood oxygen release capacity at 50% HbO2
SD: Standard deviation

1. Peck JR, Waxman D.
What is adaptation and how it should be measured? J Theor Biol. 2018;447:190–8. Peper A. A theory of drug tolerance and dependence I: a conceptual analysis. J Theor Biol. 2004;229:477–90. Peper A. A theory of drug tolerance and dependence II: the mathematical model. J Theor Biol. 2004;229:491–500. Tonhajzerova I, Mestanik M. New perspectives in the model of stress response. Physiol Res. 2017;66:S173–85. Shi W, et al. Adaptation with transcriptional regulation. Sci Rep. 2017;7:42648. Masselot P, et al. A new look at weather-related health impacts through functional regression. Sci Rep. 2018;8:15241. Yadav S, Sood A. Adaptation in networks: a review. Int J Eng Comput Sci. 2013;2:3278–81. De Palo G, et al. Common dynamical features of sensory adaptation in photoreceptors and olfactory sensory neurons. Sci Rep. 2013;3:1251. Dawson EA, et al. Do acute effects of exercise on vascular function predict adaptation to training? Eur J Appl Physiol. 2018;118:523–30. O'Neill DP, Robbins PA. A mechanistic physiochemical model of carbon dioxide transport in blood. J Appl Physiol. 2017;122:283–95. Busso T, et al. Modeling of adaptations to physical training by using a recursive least squares algorithm. J Appl Physiol. 1997;82:1685–93. Wood RE, et al. Applying a mathematical model to training adaptation in a distant runner. Eur J Appl Physiol. 2005;94:310–6. Pries AR, Secomb TW, Gaehtgens P. Structural adaptation and stability of microvascular networks: theory and simulations. Am J Physiology. 1998;275:H349–60. Gorban AN, et al. Law of the minimum paradoxes. Bull Math Biol. 2011;73:2013–44. Baffy G, Loscalzo J. Complexity and network dynamics in physiological adaptation: an integrated view. Physiol Behav. 2014;131:49–56. Arminjon M. Birth of the allostatic model: from Cannon's biocracy to critical physiology. J History Biol. 2016;49:397–423. Selye H. The stress of life. London: Longmans Green; 1956. Romero LM, Dickens MJ, Cyr NE. 
The reactive scope model – a new model integrating homeostasis, allostasis, and stress. Horm Behav. 2009;55:375–89. McEwen BS. Stress: homeostasis, rheostasis, reactive scope, allostasis and allostatic load. In: Reference module in neuroscience and biobehavioral psychology: Elsevier; 2017. https://doi.org/10.1016/B978-0-12-809324-5.02867-4. Richalet J-P. A proposed classification of environmental adaptation: the example of high altitude. Rev Environ Sci Biotechnol. 2007;6:223–9. Cannon W. The wisdom of the body. New York: W.W. Norton & Co.; 1932. Paccotti P, et al. Effects of high-intensity isokinetic exercise on salivary cortisol in athletes with different training schedules: relationships to serum cortisol and lactate. Int J Sports Med. 2005;26:747–55. Roberts CK, et al. Resistance training increases SHBG in overweight/obese young men. Metabolism. 2013;62:725–33. Grandys M, et al. The importance of the training-induced decrease in basal cortisol concentration in the improvement in muscular performance in humans. Physiol Res. 2016;65:109–20. Bailey DM, Davis B. Physiological implications of altitude training for endurance performance: a review. Br J Sport Med. 1997;31:183–90. Brekhman II, Dardymov IV. New substances of plant origin which increase nonspecific resistance. Annu Rev Pharmacol. 1969;9:419–30. Panossian A. Understanding adaptogenic activity: specificity of the pharmacological action of adaptogens and other phytochemicals. Ann N Y Acad Sci. 2017;140:49–64. Oliynyk S, Oh S. Actoprotective effect of ginseng: improving mental and physical performance. J Ginseng Res. 2013;37:144–66. Epstein IM, Rudoy VB. On the dependence of blood oxygen release upon physical load and functional state of the organism of sport horses. In: Korobkov AV, editor. Problems in Sport Physiol. Moscow: Inst. Fiz. Cult; 1972. p. 120–3. Rus. Prosser CL, Brown FA. Comparative animal physiology. Philadelphia: Saunders; 1961. 
No funding bodies were involved in the design, analysis, or writing of the manuscript. Institute of Physiology and Basic Medicine, P.O. Box 237, 4, Timakov Str., Novosibirsk, 630117, Russia Vladimir N. Melnikov The author read and approved the final manuscript. Correspondence to Vladimir N. Melnikov. The animals were cared for in accordance with the Institutional Guide for the Care and Use of Laboratory Animals. The experimental protocol was approved by the Institute Ethical Committee. The author declares that he has no competing interests. Melnikov, V.N. A quantitative method for estimating the adaptedness in a physiological study. Theor Biol Med Model 16, 15 (2019) doi:10.1186/s12976-019-0111-7 Physiological adaptation Plant adaptogens
How to construct matrix of regular and "flipped" 2-qubit CNOT? When constructing the matrices for the two CNOTs based on the target and control qubit, I can use the reasoning: "If $q_0==|0\rangle$, everything simply passes through", resulting in an Identity-style matrix $\begin{bmatrix}1&0\\0&1\end{bmatrix}$ in the top left. "If $q_0==|1\rangle$, we need to let $q_0$ pass and flip $q_1$", resulting in a Pauli X $\begin{bmatrix}0&1\\1&0\end{bmatrix}$ in the bottom right. $CNOT \equiv \begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&1\\0&0&1&0\end{bmatrix}$ "If $q_1==|0\rangle$, everything simply passes through" results in leaving $|00\rangle$ and $|10\rangle$ unaffected. "If $q_1==|1\rangle$, we need to let $q_1$ pass and flip $q_0$", mapping $|01\rangle$ to $|11\rangle$ and $|11\rangle$ to $|01\rangle$. This all seems to check out, but here comes my question: I would like to know if there is a more mathematical way to express this, just as there is when combining, for instance, two Hadamard gates: $H \otimes H \equiv \frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\\ \end{bmatrix} \otimes \frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\\ \end{bmatrix} = \frac{1}{2}\begin{bmatrix}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1 \end{bmatrix}$ And a bonus question: how can I, using notation like "CNOT", show which qubit is the control bit and which is the target bit? quantum-gate matrix-representation pauli-gates Thomas Hubregtsen $\begingroup$ You might enjoy this post about introducing an algebraic "control value": algassert.com/impractical-experiments/2015/05/17/… $\endgroup$ – Craig Gidney Jan 11 '19 at 11:18 $\begingroup$ I wrote a blog post a couple of years ago about how to describe CNOT gates and Control-U gates that you may find helpful. It is basically the same as the Projection operator answer but goes into a bit more detail and has examples in Python using the QuDotPy open-source quantum computing library.
You can find the blog post here: Quantum Control Gates in Python $\endgroup$ – Perry Sakkaris Jan 12 '19 at 1:39 The CNOT gate is a 2-qubit gate and, consequently, its operation cannot be expressed as the tensor product of two one-qubit gates, as in the example you gave with the Hadamard gates. An easy way to check that such a matrix cannot be expressed as the tensor product of two other matrices is to take matrices $A =\begin{pmatrix}a & b \\ c & d\end{pmatrix}$ and $B=\begin{pmatrix}e & f \\ g & h\end{pmatrix}$ and work out which values their elements would have to take so that $CNOT = A\otimes B$. Following this reasoning: $A\otimes B= \begin{pmatrix}ae &af & be & bf \\ ag & ah & bg & bh \\ ce & cf & de & df\\cg & ch & dg & dh\end{pmatrix} $ It is clear that $b=0$ and $c=0$ are needed, as all the elements they affect must be zero, and we can take $a=1$ and $d=1$ because some of the elements they affect must be non-zero. This leaves the matrix as $\begin{pmatrix}e &f & 0 & 0 \\ g & h & 0 & 0 \\ 0 & 0 & e & f\\0 & 0 & g & h\end{pmatrix}$. From this matrix it is pretty straightforward to see that the CNOT matrix cannot be obtained: for example, we would need $e=1$ so that the $(1,1)$ element is $1$, but also $e=0$ because the $(3,3)$ element of CNOT is $0$; this is a contradiction, proving that CNOT cannot be expressed in such a way. The same happens with the other definition of CNOT. From a notation point of view, to show which qubit is the target and which the control, I usually use $CNOT(1\rightarrow2)=\begin{pmatrix}1 &0& 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1\\0 & 0 & 1 & 0\end{pmatrix}$. $CNOT(1\leftarrow2)=\begin{pmatrix}1 &0& 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0\\0 & 1 & 0 & 0\end{pmatrix}$. This way it is known that the target qubit is the one being pointed at by the arrow. One relationship you can easily express mathematically is the one between $CNOT(1\rightarrow2)$ and $CNOT(1\leftarrow2)$, which is readily proved to be $CNOT(1\leftarrow2) = (H\otimes H)CNOT(1\rightarrow 2)(H\otimes H)$.
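The Hadamard-conjugation identity at the end of this answer is easy to check numerically; here is a quick sketch in plain NumPy (not part of the original answer):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# CNOT(1->2): first qubit is the control, second is the target.
cnot_12 = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])

# CNOT(1<-2): second qubit is the control, first is the target.
cnot_21 = np.array([[1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])

HH = np.kron(H, H)
flipped = HH @ cnot_12 @ HH  # (H (x) H) CNOT(1->2) (H (x) H)
```

Comparing `flipped` with `cnot_21` entry by entry confirms that conjugating by $H\otimes H$ swaps control and target.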
Josu Etxezarreta Martinez $\begingroup$ Thanks for the great answer. Small note: I believe your CNOT(1←2) matrix contains a typo, as both $|01\rangle$ and $|11\rangle$ are being mapped to $|11\rangle$, creating a problem with reversibility. $\endgroup$ – Thomas Hubregtsen Jan 11 '19 at 9:38 $\begingroup$ Yes, you are right. I have edited it to the correct one. $\endgroup$ – Josu Etxezarreta Martinez Jan 11 '19 at 9:43 $$ CNOT = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix} $$ But what does this matrix mean? The above matrix means: on a two qubit system (such as $\left|00\right>$, $\left|10\right>$, $\left|11\right>$, etc.), if the first qubit is a one, apply the NOT gate (X) to the second qubit. That's cool, but what if you have a 4 qubit system $\left(\left|0101\right>\right)$ or an 8 qubit system $\left(\left|00010011\right>\right)$? Also, let's say on the 8 qubit system you want to apply the X gate to the 7th qubit if the 2nd is one; how would you go about constructing this matrix? Cookbook Method Most books and articles will discuss a cookbook method where you build a matrix based on what the operation does to the base states. Let's work through this for the 2 qubit CNOT gate above. The above gate says: if the first qubit is 1, then apply an X to the 2nd qubit. An X just changes the qubit from a 0 to a 1 or a 1 to a 0. The base states for a two qubit system are: $\left|00\right>,\, \left|01\right>,\, \left|10\right>,\, \left|11\right>$ How do I know those are the 4 states? Quantum qubit systems grow as $2^q$ where $q$ is the number of qubits. For a two qubit system $q=2$, therefore there are 4 basis states. Then you count from 0 to 3 in binary. That is all.
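The counting rule just described fits in one line of Python; a small illustrative helper (the function name is mine, not from the answer):

```python
def basis_states(q):
    """Labels of the computational basis of a q-qubit system:
    count from 0 to 2**q - 1 in binary, zero-padded to q digits."""
    return [format(i, f"0{q}b") for i in range(2 ** q)]

print(basis_states(2))  # ['00', '01', '10', '11']
```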
First, this is how CNOT operates on the base states: $$ CNOT\left|00\right> \rightarrow \left|00\right> $$$$ CNOT\left|01\right> \rightarrow \left|01\right> $$$$ CNOT\left|10\right> \rightarrow \left|11\right> $$$$ CNOT\left|11\right> \rightarrow \left|10\right> $$ To build a matrix you use the following trick: \begin{align*} CNOT &= \begin{bmatrix}\left<00|CNOT|00\right> && \left<00|CNOT|01\right> && \left<00|CNOT|10\right> && \left<00|CNOT|11\right> \\ \left<01|CNOT|00\right> && \left<01|CNOT|01\right> && \left<01|CNOT|10\right> && \left<01|CNOT|11\right> \\ \left<10|CNOT|00\right> && \left<10|CNOT|01\right> && \left<10|CNOT|10\right> && \left<10|CNOT|11\right> \\ \left<11|CNOT|00\right> && \left<11|CNOT|01\right> && \left<11|CNOT|10\right> && \left<11|CNOT|11\right> \end{bmatrix} \\ \\ &= \begin{bmatrix}\left<00|00\right> && \left<00|01\right> && \left<00|11\right> && \left<00|10\right> \\ \left<01|00\right> && \left<01|01\right> && \left<01|11\right> && \left<01|10\right> \\ \left<10|00\right> && \left<10|01\right> && \left<10|11\right> && \left<10|10\right> \\ \left<11|00\right> && \left<11|01\right> && \left<11|11\right> && \left<11|10\right> \end{bmatrix} \\ \\ &=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix} \end{align*} This is nice, but now suppose you want a CNOT gate between the 3rd and 9th qubit of a 10 qubit system? You will have $2^{10}$ base states and a $2^{10} \times 2^{10}$ matrix! Good luck with the cookbook method! What we need is an algorithmic method that we can code up in Python. The Algorithmic Method A better algorithmic way to think about quantum control gates is by using operators and tensor products. Suppose we have a two qubit system. 
To say "If the first qubit is $\left|0\right>$ leave the second qubit alone": $$ \left|0\rangle\langle0\right| \otimes I $$ To leave a qubit alone you apply the identity operator / matrix $I$. To say "If the first qubit is $\left|1\right>$ apply X to the second qubit": $$ \left|1\rangle\langle1\right| \otimes X $$ Now put them together by adding them: "If the first qubit is $\left|0\right>$ leave the second qubit alone, and if the first qubit is $\left|1\right>$ apply X to the second qubit": $$ \left|0\rangle\langle0\right| \otimes I + \left|1\rangle\langle1\right| \otimes X $$ In [3]: import numpy as np
from qudotpy import qudot

zero_matrix = qudot.ZERO.ket * qudot.ZERO.bra
one_matrix = qudot.ONE.ket * qudot.ONE.bra
CNOT = np.kron(zero_matrix, np.eye(2)) + np.kron(one_matrix, qudot.X.matrix)
print(CNOT)
[[ 1.+0.j  0.+0.j  0.+0.j  0.+0.j]
 [ 0.+0.j  1.+0.j  0.+0.j  0.+0.j]
 [ 0.+0.j  0.+0.j  0.+0.j  1.+0.j]
 [ 0.+0.j  0.+0.j  1.+0.j  0.+0.j]]
Nice! This is a good algorithmic way to make CNOT gates (or control gates in general). For example, suppose now you want a CNOT gate in a 10 qubit system where the 3rd qubit is the control and the 9th qubit is the target: "If the third qubit is $\left|0\right>$ leave the ninth qubit alone": $$ I \otimes I \otimes \left|0\rangle\langle0\right| \otimes I \otimes I \otimes I \otimes I \otimes I \otimes I \otimes I $$ "If the third qubit is $\left|1\right>$ apply X to the ninth qubit": $$ I \otimes I \otimes \left|1\rangle\langle1\right| \otimes I \otimes I \otimes I \otimes I \otimes I \otimes X \otimes I $$ Add those two expressions together and you have your 10 qubit CNOT gate. The above algorithm is straightforward to implement in Python.
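For readers without qudotpy, the same sum-of-tensor-products construction can be sketched in plain NumPy. The function below is my own naming (qubits numbered 1..n, as in the answer), and it handles the 10-qubit example directly:

```python
import numpy as np

P0 = np.array([[1.0, 0.0], [0.0, 0.0]])  # |0><0|
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])  # |1><1|
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def controlled_x(n, control, target):
    """n-qubit CNOT with the given control and target positions (1-based)."""
    def chain(ctrl_op, tgt_op):
        # Tensor together n single-qubit operators: identity everywhere
        # except at the control and target slots.
        ops = [I2] * n
        ops[control - 1] = ctrl_op
        ops[target - 1] = tgt_op
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out
    # "control is |0>: leave target alone" + "control is |1>: flip target"
    return chain(P0, I2) + chain(P1, X)

cnot = controlled_x(2, control=1, target=2)   # the familiar 4x4 CNOT
big = controlled_x(10, control=3, target=9)   # a 1024 x 1024 matrix
```

Because each term is built with `np.kron`, the matrix size grows as $2^n \times 2^n$, exactly as the answer warns.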
Martin Vesely Perry Sakkaris I like to use the projectors $$ P_0=|0\rangle\langle 0|\equiv\left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right)\qquad P_1=|1\rangle\langle 1|\equiv\left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right) $$ to express the intuition of the controlled-not when constructing it: $$ CNOT=P_0\otimes\mathbb{I}+P_1\otimes X. $$ Here you can see very plainly the statement "if the first qubit is in $|0\rangle$, do nothing (apply identity) to the second qubit, while if the first qubit is in $|1\rangle$, apply the bit flip". Similarly, the reversed version is $$ \mathbb{I}\otimes P_0+X\otimes P_1. $$ I'm not aware of a standard notation to specify the direction of the controlled-not. Whenever I've needed to do it, I've needed to do it a lot, and went for something very compact: $C^i_j$ to indicate a controlled-not controlled by qubit $i$ and targeting qubit $j$. But it's far from standard, so whatever convention you pick, you'd want to state it quite explicitly. $\begingroup$ Interesting. I find this a clear mathematical way to describe both CNOT gates. There is still a large gap in my understanding that I am trying to fill, so a bit of an ignorant question: is a projector a valid single-qubit gate? My first thought would be that projectors cannot be seen as a gate on a single qubit in a quantum circuit, as they do not appear to be reversible? But when seen in the scope of two qubits, you can use them as you did, making both your answer and Josu's valid? ("its operation cannot be expressed by the tensor product of two one-qubit gates") $\endgroup$ – Thomas Hubregtsen Jan 11 '19 at 10:20 $\begingroup$ A projector is not a valid unitary gate (it's more like a measurement), but you can add together combinations to make something that is unitary overall. For example, you can think of Pauli Z as $Z=P_0-P_1$. The distinguishing feature of the other answer is not the gate aspect.
CNOT simply cannot be written as the tensor product of two matrices. The way around it is that you need sums of tensor products. $\endgroup$ – DaftWullie Jan 11 '19 at 10:27
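The projector identities in this answer, and the $Z = P_0 - P_1$ remark from the comments, can all be checked in a few lines of NumPy; a sketch:

```python
import numpy as np

P0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # |1><1|
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

cnot = np.kron(P0, I) + np.kron(P1, X)        # control on the first qubit
cnot_rev = np.kron(I, P0) + np.kron(X, P1)    # control on the second qubit

# Each projector alone is idempotent (P @ P == P), hence not unitary,
# but the sums above are unitary, and Z == P0 - P1.
```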
October 2013, 33(10): 4401-4410. doi: 10.3934/dcds.2013.33.4401 On the large deviation rates of non-entropy-approachable measures Masayuki Asaoka 1, and Kenichiro Yamamoto 2, Department of Mathematics, Kyoto University, 606-8502, Kyoto, Japan School of Information Environment, Tokyo Denki University, 2-1200 Buseigakuendai, Inzai-shi, Chiba 270-1382 Received October 2010 Revised March 2013 Published April 2013 We construct a non-ergodic maximal entropy measure of a $C^{\infty}$ diffeomorphism with positive entropy such that neither the entropy nor the large deviation rate of the measure is influenced by that of ergodic measures near it. Keywords: Large deviation, rate approachable, ergodic measure, weakly approachable, entropy approachable. Mathematics Subject Classification: Primary: 37C40; Secondary: 60F1. Citation: Masayuki Asaoka, Kenichiro Yamamoto. On the large deviation rates of non-entropy-approachable measures. Discrete & Continuous Dynamical Systems, 2013, 33 (10) : 4401-4410. doi: 10.3934/dcds.2013.33.4401 L. Barreira and Ya. B. Pesin, "Lyapunov Exponents and Smooth Ergodic Theory," University Lecture Series, 23, American Mathematical Society, Providence, RI, 2002. A. Dembo and O. Zeitouni, "Large Deviations Techniques and Applications," $2^{nd}$ edition, Applications of Mathematics (New York), 38, Springer-Verlag, New York, 1998. M. Denker, C. Grillenberger and K. Sigmund, "Ergodic Theory on Compact Spaces," Lecture Notes in Mathematics, 527, Springer-Verlag, Berlin-New York, 1976. A. Eizenberg, Y. Kifer and B. Weiss, Large deviations for $\mathbb{Z}^d$-actions, Comm. Math. Phys., 164 (1994), 433-454. doi: 10.1007/BF02101485. H. Föllmer and S. Orey, Large deviations for the empirical field of a Gibbs measure, Ann. Probab., 16 (1988), 961-977. doi: 10.1214/aop/1176991671.
F. Hofbauer, Generic properties of invariant measures for continuous piecewise monotonic transformations, Monatsh. Math., 106 (1988), 301-312. doi: 10.1007/BF01295288. C.-E. Pfister and W. G. Sullivan, Large deviations estimates for dynamical systems without the specification property. Application to the $\beta$-shifts, Nonlinearity, 18 (2005), 237-261. doi: 10.1088/0951-7715/18/1/013. Y. Pomeau and P. Manneville, Intermittent transition to turbulence in dissipative dynamical systems, Comm. Math. Phys., 74 (1980), 189-197. doi: 10.1007/BF01197757. M. Qian, J.-S. Xie and S. Zhu, "Smooth Ergodic Theory for Endomorphisms," Lecture Notes in Mathematics, 1978, Springer-Verlag, Berlin-New York, 2009. doi: 10.1007/978-3-642-01954-8. K. Sigmund, Generic properties of invariant measures for axiom-A diffeomorphisms, Invent. Math., 11 (1970), 99-109. doi: 10.1007/BF01404606. K. Yamamoto, On the weaker forms of the specification property and their applications, Proc. Amer. Math. Soc., 137 (2009), 3807-3814. doi: 10.1090/S0002-9939-09-09937-7. L.-S. Young, Some large deviation results for dynamical systems, Trans. Amer. Math. Soc., 318 (1990), 525-543. doi: 10.2307/2001318.
Statistical exponential formulas for homogeneous diffusion Matthew B. Rudd1, Department of Mathematics, Sewanee: The University of the South, Sewanee, TN 37383 Received: February 28, 2014 Revised: March 31, 2014 Let $\Delta^{1}_{p}$ denote the $1$-homogeneous $p$-Laplacian, for $1 \leq p \leq \infty$. This paper proves that the unique bounded, continuous viscosity solution $u$ of the Cauchy problem \begin{eqnarray} u_{t} - ( \frac{p}{ N + p - 2 } ) \Delta^{1}_{p} u = 0 \quad \mbox{for} \quad x \in R^N \quad \mbox{and} \quad t > 0 , \\ \\ u(\cdot,0) = u_0 \in BUC(R^N), \end{eqnarray} is given by the exponential formula \begin{eqnarray} u(t) := \lim_{n \to \infty}{ ( M^{t/n}_{p} )^{n} u_{0} } \ , \end{eqnarray} where the statistical operator $M^h_p \colon BUC( R^{N} ) \to BUC( R^{N} )$ is defined by \begin{eqnarray} (M^{h}_{p} \varphi)(x) := (1-q) \operatorname{median}_{\partial B(x,\sqrt{2h})}{ \{ \varphi \} } + q \int_{\partial B(x,\sqrt{2h})}{ \varphi \, ds } \end{eqnarray} when $1 \leq p \leq 2$, with $q := \frac{ N ( p - 1 ) }{ N + p - 2 }$, and by \begin{eqnarray} (M^{h}_{p} \varphi )(x) := ( 1 - q ) \operatorname{midrange}_{\partial B(x,\sqrt{2h})}{ \{ \varphi\} } + q \int_{\partial B(x,\sqrt{2h})}{ \varphi \, ds } \end{eqnarray} when $p \geq 2$, with $q = \frac{ N }{ N + p - 2 }$. Possible extensions to problems with Dirichlet boundary conditions are mentioned briefly. Nonlinear diffusion, nonlinear semigroups, exponential formulas, homogeneous $p$-Laplacian, parabolic $p$-Laplacian. Mathematics Subject Classification: Primary: 35K, 47H20; Secondary: 37L05, 47H05. G. Akagi, P. Juutinen and R. Kajikiya, Asymptotic behavior of viscosity solutions for a degenerate parabolic equation associated with the infinity-Laplacian, Math.
Ann., 343 (2009), 921-953.doi: 10.1007/s00208-008-0297-1. G. Akagi and K. Suzuki, Existence and uniqueness of viscosity solutions for a degenerate parabolic equation associated with the infinity-Laplacian, Calc. Var. Partial Differential Equations, 31 (2008), 457-471.doi: 10.1007/s00526-007-0117-6. L. Alvarez, F. Guichard, P.-L. Lions and J.-M. Morel, Axioms and fundamental equations of image processing, Arch. Rational Mech. Anal., 123 (1993), 199-257.doi: 10.1007/BF00375127. M. Bardi, M. G. Crandall, L. C. Evans, H. M. Soner and P. E. Souganidis, Viscosity solutions and applications, vol. 1660 of Lecture Notes in Mathematics, Springer-Verlag, Berlin, 1997, Lectures given at the 2nd C.I.M.E. Session held in Montecatini Terme, June 12-20, 1995, Edited by I. Capuzzo Dolcetta and P. L. Lions, Fondazione C.I.M.E.. [C.I.M.E. Foundation]. G. Barles and P. E. Souganidis, Convergence of approximation schemes for fully nonlinear second order equations, Asymptotic Anal., 4 (1991), 271-283. G. Barles and C. Georgelin, A simple proof of convergence for an approximation scheme for computing motions by mean curvature, SIAM J. Numer. Anal., 32 (1995), 484-500.doi: 10.1137/0732020. F. Catté, F. Dibos and G. Koepfler, A morphological scheme for mean curvature motion and applications to anisotropic diffusion and motion of level sets, SIAM J. Numer. Anal., 32 (1995), 1895-1909.doi: 10.1137/0732085. M. G. Crandall and T. M. Liggett, Generation of semi-groups of nonlinear transformations on general Banach spaces, Amer. J. Math., 93 (1971), 265-298. M. G. Crandall, H. Ishii and P.-L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.), 27 (1992), 1-67.doi: 10.1090/S0273-0979-1992-00266-5. E. DiBenedetto, Degenerate Parabolic Equations, Universitext, Springer-Verlag, New York, 1993.doi: 10.1007/978-1-4612-0895-2. L. C. Evans and J. Spruck, Motion of level sets by mean curvature. I, J. 
Differential Geom., 33 (1991), 635-681. L. C. Evans and J. Spruck, Motion of level sets by mean curvature. II, Trans. Amer. Math. Soc., 330 (1992), 321-332.doi: 10.2307/2154167. L. C. Evans and J. Spruck, Motion of level sets by mean curvature. III, J. Geom. Anal., 2 (1992), 121-150.doi: 10.1007/BF02921385. L. C. Evans, Convergence of an algorithm for mean curvature motion, Indiana Univ. Math. J., 42 (1993), 533-557.doi: 10.1512/iumj.1993.42.42024. L. C. Evans and J. Spruck, Motion of level sets by mean curvature. IV, J. Geom. Anal., 5 (1995), 77-114.doi: 10.1007/BF02926443. Y. Giga, S. Goto, H. Ishii and M.-H. Sato, Comparison principle and convexity preserving properties for singular degenerate parabolic equations on unbounded domains, Indiana Univ. Math. J., 40 (1991), 443-470.doi: 10.1512/iumj.1991.40.40023. Y. Giga, Surface Evolution Equations, vol. 99 of Monographs in Mathematics, Birkhäuser Verlag, Basel, 2006, A level set approach. J. A. Goldstein, Semigroups of Linear Operators and Applications, Oxford Mathematical Monographs, The Clarendon Press Oxford University Press, New York, 1985. A. Grigor'yan, Heat Kernel and Analysis on Manifolds, vol. 47 of AMS/IP Studies in Advanced Mathematics, American Mathematical Society, Providence, RI, 2009. D. Hartenstine and M. Rudd, Asymptotic statistical characterizations of $p$-harmonic functions of two variables, Rocky Mountain J. Math., 41 (2011), 493-504.doi: 10.1216/RMJ-2011-41-2-493. D. Hartenstine and M. Rudd, Statistical functional equations and $p$-harmonious functions, Adv. Nonlinear Stud., 13 (2013), 191-207. J. Heinonen, T. Kilpeläinen and O. Martio, Nonlinear Potential Theory of Degenerate Elliptic Equations, Oxford Mathematical Monographs, The Clarendon Press Oxford University Press, New York, 1993, Oxford Science Publications. T. Ilmanen, P. Sternberg and W. P. Ziemer, Equilibrium solutions to generalized motion by mean curvature, J. Geom. Anal., 8 (1998), 845-858. 
Infinitely many solutions for a perturbed Schrödinger equation

Rossella Bartolo 1, Anna Maria Candela 2, and Addolorata Salvatore 3
1 Dipartimento di Meccanica, Matematica e Management, Politecnico di Bari, Via E. Orabona 4, 70125 Bari, Italy
2 Dipartimento di Matematica, Università degli Studi di Bari Aldo Moro, Campus-via E. Orabona 4, 70125 Bari
3 Dipartimento di Matematica, Università degli Studi di Bari "Aldo Moro", Via E. Orabona 4, 70125 Bari

Received September 2014; Revised August 2015; Published November 2015. Under a Creative Commons license.

We find multiple solutions for a nonlinear perturbed Schrödinger equation by means of the so-called Bolle's method.

Keywords: Nonlinear Schrödinger equation, broken symmetry, variational approach, unbounded domain, perturbative method.
Mathematics Subject Classification: Primary: 35Q55; Secondary: 35J60, 35J20, 58E05, 35B3.

Citation: Rossella Bartolo, Anna Maria Candela, Addolorata Salvatore. Infinitely many solutions for a perturbed Schrödinger equation. Conference Publications, 2015, 2015 (special): 94-102. doi: 10.3934/proc.2015.0094
Applied Composite Materials

Simulation of Delamination Growth at CFRP-Tungsten Aerospace Laminates Using VCCT and CZM Modelling Techniques

J. Jokinen, M. Kanerva
First Online: 20 November 2018

Delamination analysis in advanced composites is required in the laminate design phase and also during the operation of composite aerospace structures, to estimate the criticality of flaws and damage. The virtual crack closure technique (VCCT) and cohesive zone modelling (CZM) have been applied to delamination simulation as numerical crack-modelling tools. VCCT and CZM each have unique advantages and disadvantages per application. This study focuses on the application of VCCT to a brittle delamination in a hybrid tungsten–carbon-fibre reinforced composite (CFRP-W) and seeks to identify the challenges posed by very high internal residual stresses and strain energy as well as unstable crack propagation. CFRP-W composites have applications in high-performance, light-weight radiation protection enclosures of satellite electronics and ultra-high-frequency (e.g. 5G) systems. In our work, we present the effects of free-edge stress concentrations and of interfacial separation prior to nodal release on a combined VCCT-CZM model, and compare the results to pure VCCT and pure CZM models of the interfacial crack. Based on the results, parameter notes are given for applying the combined method to delamination analyses of interfaces heavily loaded by internal residual strains.

Keywords: Satellite enclosure, Finite element simulation, Delamination, VCCT, CZM

Carbon-fibre-reinforced plastics (CFRPs) have been used in structures and also in electronics housings of satellites due to the weight efficiency achieved in extreme light-weight concepts [1, 2]. Naturally, composite-based housings can be applied to future 5G technologies with highly optimized attenuation windows in otherwise protected enclosures [3, 4].
In the case of electronics housings, the typical means to realize sufficient protection against various radiation from the (space) environment, and also to control the thermal and electrical conductance of the CFRP-based parts, is to laminate metal foils as part of the CFRP lay-up. Although these enclosures are not always primary load-carrying components, delamination of the CFRP and metal foil would lead to significant deviation of the heat flux, radiation attenuation, and geometry. Therefore, predictive analyses of delamination are required in the design of CFRP-metal laminates for aerospace applications (Fig. 1) [5].

Fig. 1 Three main design phases in the hybrid enclosure design for satellites [1, 5]

The main numerical methods for finite element delamination analyses are the virtual crack closure technique (VCCT) and cohesive zone modelling (CZM). Both methods have their pros and cons in a practical design process: VCCT is primarily applied to structures with an initial flaw [6], while CZM requires fitting parameters beyond the pure fracture toughness, and standardized fitting procedures for them do not exist. VCCT does not rely on energy dissipation by a cohesive zone after nodal release, whereas CZM can incorporate various models of residual stiffness [7] before full release of the contact. Consequently, VCCT is typically applied to brittle crack propagation [8, 9]. The scientific challenge is to understand the limits of brittle crack propagation for VCCT so that dynamic effects remain insignificant. Hybrid CFRP-metal laminates utilize steel or tungsten foils [10], which result in extremely high residual stresses during manufacture [11, 12]. Due to the very high Young's modulus of the unidirectional CFRP plies and, say, tungsten foils, even the smallest difference in thermal expansion leads to a build-up of high interfacial loading between the dissimilar layers.
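The scale of this interfacial loading can be gauged from the CTE mismatch alone. The sketch below is a rough, hedged estimate: the CFRP longitudinal CTE is the value quoted later in Table 1, while the tungsten CTE is an assumed textbook value (about 4.4 × 10⁻⁶ 1/℃), not a value taken from this paper.

```python
# Rough estimate of the free thermal mismatch strain that a CFRP ply
# (fibre direction) and a tungsten foil must accommodate on cool-down.
ALPHA_CFRP_L = -0.43e-6  # 1/degC, longitudinal CTE of UD CFRP (Table 1)
ALPHA_W = 4.4e-6         # 1/degC, assumed textbook CTE of tungsten

def mismatch_strain(delta_t, a1=ALPHA_CFRP_L, a2=ALPHA_W):
    """Difference of the free thermal strains of the two layers."""
    return (a2 - a1) * delta_t

# A 100 degC cool-down (as applied later in the FE model's first step)
# gives a mismatch strain of roughly 4.8e-4:
eps = mismatch_strain(100.0)
```

Even such a small strain, multiplied by the very high moduli of the UD plies and the tungsten foil, translates into substantial interfacial shear and peel stresses.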
Because a separate crack onset phase is typically not simulated, any initial flaw tends to trigger unstable crack propagation during a VCCT analysis. What follows are severe convergence problems in the numerical solution iteration, or a need to apply artificial damping, which introduces an error of unclear magnitude into the solution. In this paper, we study the application of VCCT in the delamination analysis of a CFRP-tungsten (CFRP-W) laminate. A cracked lap shear (CLS) specimen is modelled in 3-D using the finite element method, and the effects of the VCCT and CZM crack models are compared in terms of the crack onset stresses and the crack-tip loading. Additionally, a combined analysis with a separate crack nucleation model, using a CZM zone along with a VCCT zone, is studied.

2.1 Reference Experimental Data

The experimental reference, i.e., the validation data, is based on cracked lap shear (CLS) testing in this study. CLS testing is used for determining mixed-mode fracture toughness values for interfaces and composites [13, 14]. The basic concept of the cracked lap shear specimen and the reference test setup of this study are illustrated in Fig. 2. The case material system is a hybrid system of tungsten foil and CFRP. A detailed description of the laminate configuration and the test procedure can be found in a previous publication [5]. The fracture during CLS testing of CFRP-W specimens is highly brittle and sudden. Several analytical methods have been developed to analyse CLS testing. However, accurate solutions of the interfacial fracture toughness (Gcr) require taking into account the 3-D residual stress distributions and the related residual strain energy, making numerical methods necessary [15].

Fig. 2 Description of the reference test specimen and tensile test setup to be numerically simulated in this study. The cracked lap shear (CLS) specimen consists of a strap part and a lap part.
The pre-crack tip is located at the interface between the strap and the lap.

2.2 Finite Element Method

A 3-D finite element (FE) model of the hybrid CLS specimen was generated using ABAQUS® (Standard, 2017, Dassault Systèmes). The laminate thicknesses of the strap and lap parts were 1.55 mm and 1.5 mm, respectively. The lay-ups of unidirectional CFRP and tungsten (W) were [05/W] and [05] for the strap and the lap, respectively. The total specimen length was 180 mm, the width 20 mm, and the lap length 115 mm. The FE model is shown in Fig. 3.

Fig. 3 The finite element model of the CLS specimen, the applied coordinate system, and the CZM element zone of the combined method along the CFRP-W layer interface

The regions of the specimen that correspond to the test machine gripping points in reality were subjected to boundary conditions (BCs). For all the BC surfaces, the out-of-plane (Z-coordinate) displacements were set to zero (initial value). The thick end of the specimen (without the lap opening) was fixed with regard to the longitudinal displacement. In turn, the other end was subjected to an enforced displacement to simulate real test machine loading. All parts of the geometric model were meshed using reduced-integration continuum elements (C3D8R) with a nominal size of 0.5 mm \(\times \) 0.5 mm. The solution procedure was divided into two separate steps to include the thermal residual stresses. During the first step, a thermal load of 100 ℃ was applied over the model to simulate the cool-down phase of the laminate cure. After the first step, the defined enforced displacement followed. CFRP and tungsten were modelled as linear elastic materials and the material constants are given in Table 1.
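As background to the remark in Sect. 2.1 that analytical CLS solutions fall short here, a classic closed-form estimate of the steady-state ERR follows from the change in membrane strain energy when the load transfers from the full section to the strap alone. The sketch below is that textbook-style compliance estimate under strong assumptions (homogeneous modulus, pure membrane behaviour, no bending, no residual stresses); the numbers are illustrative, not the paper's values.

```python
def cls_err(P, b, E, t_strap, t_total):
    """Steady-state ERR of a CLS specimen from a simple compliance
    argument: behind the crack the load P is carried by the strap
    (thickness t_strap), ahead of it by the full section (t_total).
    G = P^2 / (2 E b^2) * (1/t_strap - 1/t_total)."""
    return P**2 / (2.0 * E * b**2) * (1.0 / t_strap - 1.0 / t_total)

# Illustrative numbers only (assumed modulus; geometry as in Sect. 2.2):
g = cls_err(P=5000.0, b=0.02, E=100e9, t_strap=1.55e-3, t_total=3.05e-3)
```

Because the residual strain energy and 3-D free-edge effects do not enter this expression at all, a numerical model is needed for the CFRP-W case, as the authors state.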
In this study, three different methods were used to study the crack onset and propagation:

- cohesive zone modelling over the entire strap-lap interface;
- the virtual crack closure technique applied over the entire strap-lap interface;
- a combined method, where CZM elements are applied at the interface edges and VCCT covers the propagation over the rest of the specimen.

Table 1 Material data for CLS specimen modelling; for directional properties refer to Fig. 3

Engineering constant (unit)      CFRP [15]    Tungsten [15]
E11 (GPa)
E22, E33 (GPa)
E12, E23, E13 (GPa)
ν12, ν23, ν13 (-)
CTE1 (10^-6 1/℃)                 -0.43
CTE2, CTE3 (10^-6 1/℃)

2.2.1 VCCT Method

VCCT relies on the idea of virtual crack closure, where the force required to hold a nodal connection and the deformation due to a virtual crack opening are used to compute the energy release rate (ERR) related to the crack opening process. The computed ERR value is compared to a critical value to decide on the possible release of the nodal connection. In the event of release, the force-displacement behaviour is presumed linear at the crack tip. The basic concept of VCCT is shown in Fig. 4a.

Fig. 4 a The VCCT method and the related nodal representation; b the combined method of this study

2.2.2 Combined VCCT-CZM Method

CZM techniques for various fracture analyses are versatile, and numerous different applications exist. It has previously been reported that CZM is needed to simulate the interfacial metal-CFRP debonding process in CFRP-W laminates under mixed-mode fracture conditions [15]. Jokinen and Kanerva formulated a procedure for determining the critical fracture toughness value based on the crack onset during testing and, subsequently, for fitting a CZM model with two separate critical traction levels based on the crack propagation. This procedure is accurate and reliable, yet rather element mesh-dependent due to the CZM zone that covers most of the fracture plane.
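The VCCT release logic of Sect. 2.2.1 can be condensed into a few lines. This is a hedged, one-mode illustration of the crack-closure idea, not the ABAQUS implementation; the function names and numbers are mine.

```python
def vcct_err(f_tip, du, da, width):
    """One-mode energy release rate from the virtual crack closure
    idea: the nodal force f_tip holding the tip node pair together,
    times the relative displacement du of the node pair just behind
    the tip, over twice the virtually created area (da * width)."""
    return f_tip * du / (2.0 * da * width)

def release_node(G, G_cr):
    """Release the nodal connection once the computed ERR reaches the
    critical value; after release the behaviour is presumed linear."""
    return G >= G_cr

# Example with a 0.5 mm x 0.5 mm element face (as in the FE mesh):
G = vcct_err(f_tip=10.0, du=1.0e-5, da=0.5e-3, width=0.5e-3)  # 200 J/m^2
```

Note the all-or-nothing character visible in `release_node`: a VCCT tie carries full load until the instant of release, which is exactly why no strain energy can be relieved ahead of the tip, in contrast to a softening CZM element.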
Therefore, a combined method is considered in this study, where the simultaneous application of CZM and VCCT is analysed. In this combined method, CZM simulates the crack onset process (Fig. 2) and VCCT handles the crack propagation after the process zone of the real crack tip has fully developed. The concept of the combined VCCT-CZM method is shown in Fig. 4b. The length of the process zone, i.e. the modelled crack nucleation zone, is \(L = 1.0 \) mm in this study (CZM element size 0.5 mm \(\times \) 0.5 mm), to analyse crack nucleation modelling also at the specimen sides. For the CZM elements with zero thickness, the initial stiffness of the nodal bond was set to 10^15 N/m^3 for all three fracture modes. To analyse the applicability of the combined method to CLS testing and, particularly, to CFRP-W interfaces, the fracture parameters were given the values verified in the current literature [15]. The stress criterion for damage onset (CZM zone) was applied in a quadratic form as follows:
$$ f = \left( \frac{\tau_{1,a}}{\tau_{1}} \right)^{2} + \left( \frac{\tau_{2,a}}{\tau_{2}} \right)^{2} + \left( \frac{\tau_{3,a}}{\tau_{3}} \right)^{2} , $$
where \(\tau \) refers to traction, the subscripts (1, 2, 3) refer to the three fracture modes (mode I, mode II, mode III), and the subscript a to a momentary traction value. For the energy release rate (ERR), the following linear power law was applied to account for mode interaction:
$$ f = \frac{G_{I}}{G_{Icr}} + \frac{G_{II}}{G_{IIcr}} + \frac{G_{III}}{G_{IIIcr}} , $$
where it was presumed that \(G_{IIcr}\) = \(G_{IIIcr}\) in this study. For comparison, a CLS specimen model with a pure CZM interface (0.2 mm elements) was computed according to the description in a previous work [15]. The selected parameter values for all three crack models are shown in Table 2.
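The two criteria above (Eqs. 1 and 2) are simple sums that can be evaluated directly. A minimal sketch with illustrative inputs follows; the actual critical tractions and ERRs are those of Table 2, not the hypothetical values used here.

```python
def stress_onset(t_act, t_cr):
    """Quadratic traction criterion, Eq. (1): damage initiates in a
    CZM element when the returned value reaches 1."""
    return sum((ta / tc) ** 2 for ta, tc in zip(t_act, t_cr))

def err_release(G, G_cr):
    """Linear power law for mixed-mode ERR, Eq. (2): the bond is fully
    released when the returned value reaches 1 (with the critical
    mode II and mode III values presumed equal in this study)."""
    return sum(g / gc for g, gc in zip(G, G_cr))

# Hypothetical tractions in MPa and ERRs in J/m^2, for illustration:
f1 = stress_onset((3.0, 4.0, 0.0), (5.0, 5.0, 5.0))           # -> 1.0
f2 = err_release((100.0, 200.0, 0.0), (200.0, 400.0, 400.0))  # -> 1.0
```

Both criteria treat the modes symmetrically; the quadratic form of Eq. (1) makes the onset surface an ellipsoid in traction space, while Eq. (2) is linear in the ERR contributions.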
Table 2 Input values for the different crack models of crack nucleation and propagation [15]

(unit)               Combined method    Pure CZM    Pure VCCT
τ1 (MPa)
τ2, τ3 (MPa)
GIc (J/m^2)
GIIc, GIIIc (J/m^2)

3.1 Overall Simulation Response

The simulation of a brittle fracture is convenient for methods that primarily rely on linear elastic fracture mechanics. For example, the CFRP-W hybrid laminate involves CFRP-W layer interfaces where mode II dominated fracture has a brittle response upon loading and the crack propagates in an unstable manner. Hence, the failure of the interface can be estimated to have a linear response. The drawback of brittle failure is that the simulation of crack growth becomes computationally challenging due to convergence problems. Therefore, we compare different crack modelling methods to assess their capability to simulate unstable crack propagation. The overall strain-force (ε-F) response of the three crack models is shown in Fig. 5. It can be seen that the slope \(dF/d\varepsilon \) is essentially constant until the crack onsets and passes the strain measuring point. After passing the strain measuring point, \(dF/d\varepsilon \) increases significantly and the overall strain energy continues to grow in the specimen. This behaviour is typical for unstable crack propagation, where the crack is unable to extensively release strain energy and the surplus energy speeds up the crack propagation. The behaviour of the crack onset and propagation is clearly dependent on the residual stress state [15]. When the thermal load and the respective residual strains are omitted, the crack onset is postponed and the development of the crack into a delamination occurs clearly more intensively, i.e. the force range \({\Delta } F\) needed for the crack to pass the strain measuring point decreases (corresponding to \({\Delta } time\) in a real test).

Fig. 5 Strain-force response of simulated CLS testing.
Strains are recorded on the strap side of the specimen model and at the bonded section (average over 10 × 2 elements, corresponding to a 5 mm strain gauge in reality)

To evaluate the performance of the crack models at the moment of crack onset, the crack-tip loading in terms of the mode mixity must be known for different crack lengths. The \(G_{I}\) and \(G_{II}\) distributions over the specimen width are shown in Fig. 6 for pre-crack lengths of 20, 30, 45 and 60 mm. It can be seen that the crack-tip loading is essentially constant over the 0...60 mm range of crack length. Finally, near the test machine gripping, the crack-tip loading turns into mode II (dominated) shearing. The mode mixity is given by \(\phi \):
$$ \phi = \tan^{-1} \left( \frac{G_{II}}{G_{I}} \right) . $$
The mode mixity remains nearly constant over the specimen width and turns into mode I dominated loading only at the free edges, mostly due to the very high residual stresses, as shown in Fig. 7. When residual strains are not accounted for, the dominance of mode II at the specimen free edges increases (Fig. 7b).

Fig. 6 a \(G_I\) distributions in the specimen width direction for four different pre-crack lengths; b \(G_{II}\) distributions in the specimen width direction for four different crack lengths (computed using pure VCCT and ΔT = 86 ℃)

Fig. 7 Mode-mixity distribution at the crack tip: a for four different crack lengths (computed using pure VCCT and ΔT = 86 ℃); b for a case without any residual strains

3.2 Crack Onset Computation

The differences in the crack-tip stresses affect the formation of the 3-D crack front, i.e. the delamination shape after the onset of crack propagation. The initial phase of the crack onset represents the crack nucleation process at a real crack tip. Basically, the different crack models should produce a similar (realistic) crack-front shape in addition to a valid load-strain response of the specimen or structure.
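The mode-mixity angle defined above is a one-liner to compute; a small sketch follows (the `atan2` form is my choice to avoid division by zero at pure mode II, where \(G_I = 0\)).

```python
import math

def mode_mixity_deg(g_I, g_II):
    """Mode-mixity angle phi = atan(G_II / G_I) in degrees:
    0 deg means pure mode I opening, 90 deg pure mode II shearing."""
    return math.degrees(math.atan2(g_II, g_I))

# Equal mode I and mode II ERR contributions give phi = 45 deg:
phi = mode_mixity_deg(1.0, 1.0)
```

On this scale, the paper's observation reads as: the angle stays near the mode II end across most of the width and drops towards mode I only at the free edges.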
It should be noted that the raw crack-tip loading (mode mixity) cannot be directly compared between VCCT and CZM, since the homogenized region (i.e. the simulated process zone) is different for the two methods. CZM is based on the concept of a process zone, whereas VCCT differentiates only between a bonded and a fully released node. Figure 8 shows the opening stresses per crack model; it can be seen that the CZM and VCCT-CZM models produce essentially similar stress states around the crack tip. The combined method (with \(L =\) 1 mm) is slightly stiffer due to the VCCT zone and, thus, the stress peak is slightly higher and shifted towards the crack tip. If the crack-tip stress state is compared for a simulation case omitting residual strains (i.e. \({\Delta } T\) = 0 ℃), all the crack models produce exactly the same crack-tip stress state, as expected.

Fig. 8 Crack-opening stresses (σ33): a distributions when thermal residual strains are included; b distributions when thermal residual strains are omitted. Stresses are recorded at a tensile force level of F = 5 kN

The pure VCCT model faces severe convergence problems during the crack propagation simulation. The crack onset occurs already during the residual stress step (simulating the cool-down after cure in reality) and the crack propagation at the specimen sides (free edges) is extensive (see Fig. 9a). Finally, at the beginning of the tensile load step, either the simulation must be stabilized heavily or the tolerance on the critical G values must be increased enormously for the computation to continue. In turn, the pure CZM crack model is capable of propagating the crack throughout the simulated test. The main difference between these two types of simulations is the loading (stresses) around the crack tip. The VCCT model does not allow any relief of strain energy until the full release of the crack-tip nodes, while the CZM elements deteriorate (see Fig.
9b) and the bond stiffness decreases after the damage onset criterion is satisfied (Eq. 1). By using the combined VCCT-CZM method, specific nucleation can be simulated (see Fig. 10).

Fig. 9 Crack-front growth (delamination) for the pure VCCT and CZM crack models: a VCCT delamination at ΔT = 86 ℃ before any mechanical load; b CZM element damaging (SCRT) after the full thermal load (ΔT = 100 ℃) and an axial load of 5 kN

Fig. 10 Crack-front growth (delamination) for the combined VCCT-CZM model: a delamination after the thermal load step; b crack onset damage (SCRT) of the CZM elements after the thermal load step

To summarize, the VCCT model is validated (via its critical G values) for a tensile load level according to the experiments, i.e. residual strains are included. When the delamination is allowed to propagate (set by the operator in the software) along the specimen edges (in addition to the pre-crack tip), the simulated crack propagation begins already during the residual stress step. This means that:

- the simultaneous free-edge stresses and the pre-crack tip singular stresses are problematic for the VCCT model validated for the CFRP-tungsten laminate;
- for the real crack, a high stress state without a singularity point does not lead to crack propagation (since it was not observed at the edges);
- for the real free edge, even a slight deformation (opening along the layer interfaces) efficiently releases residual (thermal) strain energy.

The CZM model is validated (via its critical tractions) also by using the experimental data, in addition to its critical G values, which are based on the VCCT validation. Here, for the CFRP-tungsten laminate, the damage initiation criterion of the bi-linear traction-separation law prevents premature delamination: although the critical ERR level is achieved, the stress-based criterion is not yet fulfilled (only at the very corners, Fig. 9).
Also, the capability to relieve the stress peaking prior to full debond is shown in the lateral stress distributions when compared to VCCT with hard nodal ties. The VCCT simulation ends up releasing the nodes, and the delamination front halts only after a significant decrease of the stresses (σ33 and \(\sigma _{32}\)), as shown in Fig. 11.

Fig. 11 Lateral stress distributions comparing pure CZM and VCCT simulation results at delamination (i.e. along the original pre-crack line): a opening stress (σ33); b shearing stress (σ32). All the stresses are recorded at ΔT = 86 ℃

In the current literature, different means have been introduced to simultaneously model both the crack nucleation and the crack propagation after full development of the crack. In general, the question of constant material properties related to fracture (in most cases, the fracture toughness) depends on the length scale of interest [16]. In this study, the process zone is negligible compared to the other dimensions of the structure. For this type of simulation case, the development of the crack could be taken into account using a set of fracture toughness values (rising R-curve method) [17], coupled models of fracture and stress criteria [18], or alternatively by using the traditional CZM [19]. The fundamental challenge of CZM, which also offers possibilities, is the wide variety of parameters included in the damage onset criterion, the traction-separation law, and the criterion for mode mixity. The crack nucleation process in polymeric materials involves a complex set of micromechanical phenomena, such as crazing, cavitation, shear banding and mechanical interlocking [20, 21, 22]; thus, it is expected that a crack nucleation model involves several material parameters. In our previous study, we proposed a method to fit a CZM model for the simulation of CFRP-W interfacial delamination for design purposes.
In this study, we applied a combined method that uses critical G values for a VCCT zone according to an analysis of a non-propagating crack at the experimentally determined critical load level (Fc in a previous report [5]). The CZM zone of the combined method was given input parameter values according to the fitting procedure presented previously [15]. For the combined method, the CZM elements (zero thickness) ahead of the VCCT zone are governed by a bi-linear traction-separation law, and a power law is applied to account for the ERR per fracture mode [23]. Originally, the critical tractions (τ) were validated based on the local strain fluctuation at the strap and lap parts of the CLS specimen (due to the crack front propagating over the strain recording point). The comparison of the three methods revealed that the CZM and the combined VCCT-CZM model can simulate the crack onset correctly. The pure VCCT model results in an over-sized delamination due to severe propagation from the specimen edges towards the mid-line; the addition of an external tensile load easily shuts down the simulation for reasonable values of the control parameters. It should be noted that there are challenges in using the combined VCCT-CZM model to compute the entire delamination process through to completion, yet this is not necessary for analysing CLS testing. In the event that the simulated process zone is larger than, or of the order of, the spatial size of the stress gradients, CZM can dissipate strain energy and halt the crack in a realistic manner. In the combined method, the required process zone size is related also to the length of the transition zone between the methods (length L in Fig. 4). For the CFRP-W interface, the need for dissipation exists due to the high strain energy induced by the internal residual strains. The computational transition, after the crack reaches the VCCT zone in the combined method, tends to speed up the crack propagation because the mode II dominance and the residual stresses remain essentially constant after each nodal release.
At the VCCT zone, any crack opening is due to the deformation of the bulk material elements (CFRP/W), whereas the CZM zone allows crack opening through the deformation of the interface elements (traction-separation model) and the adjustment of their momentary stiffness. This study focused on analysing the application of pure VCCT, pure CZM, and a combined VCCT-CZM crack model in the simulation of highly brittle, high-energy, mode II dominated fracture. As a case study, a hybrid CFRP-W radiation shielding laminate was simulated in a CLS test setup. It is important to note that the crack onset modelling using VCCT-CZM for CFRP-W was initially given parameter values based on the procedure defined by Jokinen and Kanerva [15] for a pure CZM-based fitted CLS model. In order to simulate the entire delamination process over wide complex shapes, the effects of CZM element size and transition zone length must be studied in the future. The simulation results were analysed from the point of view of crack onset stresses and the simulated crack nucleation process, namely the delamination area in the FE model. The main outcomes of the simulations are summarized as follows: (1) the pure VCCT crack model is computationally challenging to apply to the CFRP-W laminate with its very high internal thermal residual strains; (2) the combined VCCT-CZM model can be used to simulate crack onset at the CFRP-W interface of the radiation protection laminate; (3) crack onset modelling using VCCT-CZM for CFRP-W requires a minimum of L = 1 mm process zone (CZM elements + transition zone). This investigation was funded by a grant from Business Finland related to the 'LuxTurrim5G' project and the related subtask (10098/31/2016) carried out by Tampere University of Technology. The authors gratefully acknowledge CSC IT Center for Science for their expertise in computation services. [1] Brander, T., Gantois, K., Katajisto, H., Wallin, M.: CFRP electronics housing for a satellite.
In: European Conference on Spacecraft Structures, Materials and Mechanical Testing (Proceedings). Noordwijk, The Netherlands, 10-12 May, ESA SP-581 (2005)
[2] Atxaga, G., Marcos, J., Jurado, M., Carapelle, A., Orava, R.: Radiation shielding of composite space enclosures. In: International Astronautical Congress (Proceedings). Naples, Italy, October 1–5 (2012)
[3] Bangerter, B., Talwar, S., Arefi, R., Stewart, K.: Networks and devices for the 5G era. IEEE Commun. 52(2), 90–96 (2014)
[4] Gaier, J.R., Hardebeck, W.C., Bunch, J.R., Davidson, M.L., Beery, D.B.: Effect of intercalation in graphite epoxy composites on the shielding of high energy radiation. J. Mater. Res. 13(8), 2297–2301 (1998)
[5] Kanerva, M., Johansson, L.-S., Campbell, J.M., Revitzer, H., Sarlin, E., Brander, T., Saarela, O.: Hydrofluoric-nitric-sulphuric-acid surface treatment of tungsten for carbon fibre-reinforced composite hybrids in space applications. Appl. Surf. Sci. 328, 418–427 (2015)
[6] Krueger, R.: Virtual crack closure technique: history, approach, and application. Appl. Mech. Rev. 57(2), 109–143 (2004)
[7] Alfano, G.: On the influence of the shape of the interface law on the application of cohesive-zone models. Compos. Sci. Technol. 66(6), 723–730 (2006)
[8] Orifici, A.C., Thomson, R.S., Degenhardt, R., Bisagni, C., Bayandor, J.: Development of a finite-element analysis methodology for the propagation of delaminations in composite structures. Mech. Compos. Mat. 43, 9–28 (2007)
[9] Jokinen, J., Wallin, M., Saarela, O.: Applicability of VCCT in mode I loading of yielding adhesively bonded joints - a case study. Int. J. Adhes. Adhes.
62, 85–91 (2015)
[10] Kanerva, M., Koerselman, J., Revitzer, H., Johansson, L.-S., Sarlin, E., Rautiainen, A., Brander, T., Saarela, O.: Structural assessment of tungsten-epoxy bonding in spacecraft composite enclosures with enhanced radiation protection. In: European Conference on Spacecraft Structures, Materials and Mechanical Testing: ESA SP-727 (Proceedings). Braunschweig, Germany, April 1-4 (2014)
[11] Kanerva, M., Sarlin, E., Hållbro, A., Jokinen, J.: Plastic deformation of powder metallurgy tungsten alloy foils for satellite enclosures. In: 30th Congress of the International Council of the Aeronautical Sciences ICAS (Proceedings). Daejeon, South Korea, September 23–30 (2016)
[12] Kanerva, M., Jokinen, J., Antunes, P., Wallin, M., Brander, T., Saarela, O.: Acceptance testing of tungsten-CFRP laminate interfaces for satellite enclosures. In: 20th International Conference on Composite Materials (Proceedings). Copenhagen, Denmark, July 19–24 (2015)
[13] Grady, J.E.: Fracture toughness testing of polymer matrix composites. National Aeronautics and Space Administration, USA, 1992, NASA-TP-3199
[14] Wilkins, D.J.: A comparison of the delamination and environmental resistance of a graphite epoxy and a graphite bismaleimide. Naval Air Systems Command, USA, 1981, NAV-GD-0037 AD-A1122474
[15] Jokinen, J., Kanerva, M.: Analysis of cracked lap shear testing of tungsten-CFRP hybrid laminates. Eng. Fract. Mech. 175, 184–200 (2017)
[16] Davila, C., Rose, C., Camanho, P.: A procedure for superposing linear cohesive laws to represent multiple damage mechanisms in the fracture of composites. Int. J. Fract. 158, 211–223 (2009)
[17] Shokrieh, M., Rajabpour-Shirazi, H., Heidari-Rarani, M., Haghpanahi, M.: Simulation of mode I delamination propagation in multidirectional composites with R-curve effects using VCCT method. Comput. Mater. Sci.
65, 66–73 (2012)
[18] Hell, S., Weigraeber, P., Felger, J., Becker, W.: A coupled stress and energy criterion for the assessment of crack initiation in single lap joints: a numerical approach. Eng. Fract. Mech. 117, 112–126 (2014)
[19] Xie, D., Waas, A.M.: Discrete cohesive zone model for mixed-mode fracture using finite element analysis. Eng. Fract. Mech. 73(13), 1783–1796 (2006)
[20] Basu, S., Mahajan, D.K., Van Der Giessen, E.: Micromechanics of the growth of a craze fibril in glassy polymers. Polymer 46(18), 7504–7518 (2005)
[21] Sharma, P., Ganti, S.: Size-dependent Eshelby's tensor for embedded nano-inclusions incorporating surface/interface energies. J. Appl. Mech. Trans. ASME 71(5), 663–671 (2004)
[22] Kanerva, M., Sarlin, E., Hoikkanen, M., Rämö, K., Saarela, O., Vuorinen, J.: Interface modification of glass fibre-polyester composite-composite joints using peel plies. Int. J. Adhes. Adhes. 59, 40–52 (2015)
[23] Jokinen, J., Kanerva, M.: Crack onset analysis of adhesives for the CZM-VCCT method. Complas (Proceedings), Barcelona, Spain, September 5–7 (2017)

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Laboratory of Materials Science, Tampere University of Technology, Tampere, Finland

Jokinen, J. & Kanerva, M. Appl Compos Mater (2019) 26: 709. https://doi.org/10.1007/s10443-018-9746-5. Received 04 September 2018; accepted 01 November 2018.
Title: Measurement of inclusive forward neutron production cross section in proton-proton collisions at $\mathrm{\sqrt{s} = 13~TeV}$ with the LHCf Arm2 detector
Authors: O. Adriani, E. Berti, L. Bonechi, M. Bongi, R. D'Alessandro, S. Detti, M. Haguenauer, Y. Itow, K. Kasahara, Y. Makino, K. Masuda, H. Menjo, Y. Muraki, K. Ohashi, P. Papini, S. Ricciarini, T. Sako, N. Sakurai, K. Sato, M. Shinoda, T. Suzuki, T. Tamura, A. Tiberio, S. Torii, A. Tricomi, W.C. Turner, M. Ueno, Q.D. Zhou
(Submitted on 29 Aug 2018 (v1), last revised 29 Nov 2018 (this version, v2))
Abstract: In this paper, we report the measurement of forward neutron production in proton-proton collisions at $\mathrm{\sqrt{s} = 13~TeV}$ obtained using the LHCf Arm2 detector at the Large Hadron Collider. The results for the inclusive differential production cross section are presented as a function of energy in three different pseudorapidity regions: $\eta > 10.76$, $8.99 < \eta < 9.22$ and $8.81 < \eta < 8.99$. The analysis was performed using a data set acquired in June 2015 that corresponds to an integrated luminosity of $\mathrm{0.194~nb^{-1}}$. The measurements were compared with the predictions of several hadronic interaction models used to simulate air showers generated by Ultra High Energy Cosmic Rays. None of these generators showed good agreement with the data for all pseudorapidity intervals. For $\eta > 10.76$, no model is able to reproduce the observed peak structure at around $\mathrm{5~TeV}$ and all models underestimate the total production cross section; among them, QGSJET II-04 shows the smallest deficit with respect to data over the whole energy range.
For $8.99 < \eta < 9.22$ and $8.81 < \eta < 8.99$, the models having the best overall agreement with data are SIBYLL 2.3 and EPOS-LHC, respectively; in particular, in both regions SIBYLL 2.3 is able to reproduce the observed peak structure at around $\mathrm{1.5-2.5~TeV}$.
Subjects: High Energy Physics - Experiment (hep-ex)
Journal reference: The LHCf collaboration, Adriani, O., Berti, E. et al. J. High Energ. Phys. (2018) 2018: 73
DOI: 10.1007/JHEP11(2018)073
Report number: CERN-EP-2018-239
Cite as: arXiv:1808.09877 [hep-ex] (or arXiv:1808.09877v2 for this version)
From: Eugenio Berti
[v1] Wed, 29 Aug 2018 15:18:27 UTC (576 KB)
[v2] Thu, 29 Nov 2018 13:37:57 UTC (1,160 KB)
Characteristics of hyperbolas

The hyperbola is one type of conic section, along with the circle, ellipse and parabola. We can construct a hyperbola given two points, $F_1$ and $F_2$, which we call the foci (singular focus). The midpoint between the two foci is called the centre. We label the distance from the centre to each focus as $c$. Our next step is to select some value $a$, which will represent the distance from the centre to each vertex $V_1$ and $V_2$ of the hyperbola. Once we have the two foci and the value for $a$, we define the hyperbola to be the curve consisting of all the points $P$ that satisfy the relationship $\left|PF_1-PF_2\right|=2a$.

Figure: $P$ is a point that satisfies $\left|PF_1-PF_2\right|=2a$.

The absolute value in the definition reflects the fact that both foci play an identical role in the construction of the curve. We use the labels "$F_1$" and "$F_2$", but this doesn't mean that $F_1$ has any more significance than $F_2$. Finally, let's draw a circle centred at $C$, with radius $c$. This circle will pass through both foci. If we draw a tangent to the hyperbola at a vertex, we will have constructed a right-angled triangle within the circle. This triangle has a hypotenuse of length $c$ and shorter sides $a$ and $b$.

Figure: A right-angled triangle with hypotenuse $c$ and shorter sides $a$ and $b$.

If we label the remaining side $b$, then we arrive at the relationship $c^2=a^2+b^2$. This will be useful when we try to understand the properties of a hyperbola using only its equation, which will be given in terms of $a$ and $b$. Now we can explore some of the characteristics of hyperbolas.

Key characteristics

The midpoint between the two foci is called the centre.
The hyperbola also has two vertices, which are the points on the graph that are closest to the centre. For any hyperbola, the two foci, the two vertices, and the centre will all lie on the same straight line, called the transverse axis. At this stage we will concentrate on hyperbolas whose transverse axis is a horizontal line or a vertical line in the $xy$-plane. A hyperbola with a horizontal transverse axis can be said to open to the left and right, while a hyperbola with a vertical transverse axis opens in the up and down direction.

Figure: The key components of a hyperbola: centre ($C$), two vertices ($V$), two foci ($F$), transverse and conjugate axes, asymptotes.

The behaviour of the graph far from the centre is described by the two asymptotes, which are the lines that the graph approaches but never reaches. In general the two asymptotes are not perpendicular. All of this information is captured in the equation for the hyperbola, and is summarised below for the two main orientations we will be considering.

Graph of $\frac{\left(x-h\right)^2}{a^2}-\frac{\left(y-k\right)^2}{b^2}=1$:
Centre: $\left(h,k\right)$
Vertices: $\left(h\pm a,k\right)$
$c^2=a^2+b^2$, or $c=\sqrt{a^2+b^2}$
Foci: $\left(h\pm c,k\right)$
Transverse axis: $y=k$
Conjugate axis: $x=h$
Asymptotes: $y-k=\pm\frac{b}{a}\left(x-h\right)$

Graph of $\frac{\left(y-k\right)^2}{a^2}-\frac{\left(x-h\right)^2}{b^2}=1$:
Vertices: $\left(h,k\pm a\right)$
Foci: $\left(h,k\pm c\right)$
Transverse axis: $x=h$
Conjugate axis: $y=k$
Asymptotes: $y-k=\pm\frac{a}{b}\left(x-h\right)$

We can make a few more observations from this table. Firstly, consider a hyperbola centred at the origin with a horizontal transverse axis.
It has the equation $\frac{\left(x-0\right)^2}{a^2}-\frac{\left(y-0\right)^2}{b^2}=1$, which simplifies to $\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$. The expressions for the other key characteristics will simplify in a similar way. Secondly, the transverse axis is perpendicular to the conjugate axis. Both of these lines act as axes of symmetry of the graph. Finally, given the values of $a$ and $b$ from the equation we can define a third variable $c=\sqrt{a^2+b^2}$, and we can see that this plays an important role in the coordinates of the foci. We can see that the values of $h$ and $k$ influence the location of the graph, while the values of $a$ and $b$ (and so $c$) influence the shape of the graph.

Let's determine the key characteristics of the hyperbola given by the equation $\frac{\left(x+1\right)^2}{16}-\frac{\left(y-6\right)^2}{9}=1$. Looking at the equation, the term involving $y$ is being subtracted from the term involving $x$, which means this hyperbola has a horizontal transverse axis and opens to the left and right.

Figure: Graph of $\frac{\left(x+1\right)^2}{16}-\frac{\left(y-6\right)^2}{9}=1$.

The equation has the form $\frac{\left(x-h\right)^2}{a^2}-\frac{\left(y-k\right)^2}{b^2}=1$, where $h=-1$, $k=6$, $a=4$ and $b=3$. With $c=\sqrt{4^2+3^2}=5$, we can use these five values to find the key characteristics of the hyperbola as follows.
Orientation: horizontal transverse axis
Centre: $\left(h,k\right)$ → $\left(-1,6\right)$
Vertices: $\left(h\pm a,k\right)$ → $\left(-1\pm4,6\right)$, which gives $\left(3,6\right)$ and $\left(-5,6\right)$
Foci: $\left(h\pm c,k\right)$ → $\left(-1\pm5,6\right)$, which gives $\left(4,6\right)$ and $\left(-6,6\right)$
Transverse axis: $y=k$ → $y=6$
Conjugate axis: $x=h$ → $x=-1$
Asymptotes: $y-k=\pm\frac{b}{a}\left(x-h\right)$ → $y-6=\pm\frac{3}{4}\left(x+1\right)$, which gives $y=\frac{3}{4}x+\frac{27}{4}$ and $y=-\frac{3}{4}x+\frac{21}{4}$

The graph of $\frac{x^2}{16}-\frac{y^2}{9}=1$ is shown below.
What are the coordinates of the centre of the hyperbola?
What are the coordinates of the vertices? Write each pair of coordinates on the same line, separated by a comma.
What are the coordinates of the foci? Write each pair of coordinates on the same line, separated by a comma.
What is the orientation of the graph? (Horizontal transverse axis / vertical transverse axis)
What is the equation of the transverse axis?
What is the equation of the conjugate axis?
What are the equations of the asymptotes? Write each equation on the same line, separated by a comma.

Consider the hyperbola with the equation $\frac{y^2}{9}-\frac{x^2}{25}=1$. Based on the equation, what is the orientation of the graph of this hyperbola?

Consider the hyperbola with the equation $\frac{\left(x-1\right)^2}{4}-\frac{\left(y-2\right)^2}{9}=1$.
What are the coordinates of the centre?
Find the distance $c$ between the centre and a focus of the hyperbola.
What are the equations of the asymptotes? Write each equation in the form $y=mx+c$, separated by a comma.
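The tabulated computations above can be reproduced mechanically. The following sketch (the function name is ours) computes the key characteristics for a hyperbola with a horizontal transverse axis, and is checked against the worked example:

```python
from math import sqrt

def hyperbola_characteristics(h, k, a, b):
    """Key characteristics of (x-h)^2/a^2 - (y-k)^2/b^2 = 1
    (horizontal transverse axis), following the table above."""
    c = sqrt(a**2 + b**2)  # distance from the centre to each focus
    return {
        "centre": (h, k),
        "vertices": [(h - a, k), (h + a, k)],
        "foci": [(h - c, k), (h + c, k)],
        "transverse_axis": f"y = {k}",
        "conjugate_axis": f"x = {h}",
        "asymptote_slopes": (b / a, -b / a),
    }

# The worked example: (x+1)^2/16 - (y-6)^2/9 = 1
info = hyperbola_characteristics(h=-1, k=6, a=4, b=3)
print(info["vertices"])  # [(-5, 6), (3, 6)]
print(info["foci"])      # [(-6.0, 6), (4.0, 6)]
```

For a vertical transverse axis, the roles of the coordinates swap and the asymptote slopes become ±a/b, exactly as in the second table.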
M8-1: Apply the geometry of conic sections in solving problems.
Non-contiguous memory allocation does not require that a file's size be declared at the start; the file grows as needed over time. A major advantage is the reduced waste of disk space and the flexibility of memory allocation: the operating system allocates memory to the file when needed. Non-contiguous memory allocation offers the following advantages over contiguous memory allocation: it allows the sharing of code and data among processes; external fragmentation is non-existent; and virtual memory allocation is strongly supported. Non-contiguous memory allocation methods include paging and segmentation.

Paging is a non-contiguous memory allocation method in which physical memory is divided into fixed-size blocks called frames, with a size that is a power of 2, ranging from 512 to 8192 bytes. Logical memory is divided into blocks of the same size, called pages. For a program of size n pages to be executed, n free frames are needed to load the program. Some of the advantages and disadvantages of paging, as noted by Dhotre, include the following. Advantages: paging eliminates external fragmentation; multiprogramming is supported; the overheads that come with compaction during relocation are eliminated. Disadvantages: paging increases the cost of computer hardware, as page addresses are mapped in hardware; memory is forced to store structures such as page tables; some memory space stays unused when the available blocks are not sufficient for the address space of the jobs to run.

Segmentation is a non-contiguous memory allocation technique that supports a user view of memory. A program is seen as a collection of segments such as the main program, procedures, functions, methods, the stack, objects, etc.
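The paging scheme described above translates a logical address by splitting it into a page number and an offset within the page. A minimal sketch (the page-table contents here are invented for illustration):

```python
PAGE_SIZE = 4096  # bytes; a power of two, as in the scheme above

# Hypothetical page table mapping page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    """Translate a logical address to a physical address via the page table."""
    page = logical_address // PAGE_SIZE    # high-order part selects the page
    offset = logical_address % PAGE_SIZE   # low-order part is kept as-is
    frame = page_table[page]               # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

Because the page size is a power of two, real hardware extracts the page number and offset with a shift and a mask rather than division, which is why power-of-two block sizes are used.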
Some of the advantages and disadvantages of segmentation, as noted by Godse et al., include the following. Advantages: internal fragmentation is eliminated; segmentation fully supports virtual memory; dynamic segment growth is fully supported; segmentation supports dynamic linking; and segmentation allows the user to view memory in a logical sense. Disadvantages: main memory will always limit the size of segments, that is, segmentation is bound by the size limit of memory; it is difficult to manage segments on secondary storage; segmentation is slower than paging; and segmentation falls victim to external fragmentation even though it eliminates internal fragmentation.

A typical HDD represents information as either 1 (e.g. spin up) or 0 (e.g. spin down). Let's assume you want to represent the information physically in a hex system with 16 states, and assume this is possible using some physical form (maybe the same spin). What is the minimum physical size of a memory element in this new system in units of binary bits? It seems to me that the minimum is 8 bits = 1 byte. Therefore, going from a binary representation to a higher representation will, everything else being equal, make the minimum variable size equal to 1 byte instead of 1 bit. Is this logic correct?
Asked By : student1
Answered By : Yuval Filmus
One hexadecimal digit contains 4 binary digits. You can compute this as follows: $\log_2 16 = 4$. Alternatively, $2^4 = 16$. So the minimal memory element will contain 4 bits' worth of information. This also works when the number of states is not a power of 2, but you have to be more flexible in your interpretation.

To avoid noise as much as possible, I'm planning to take multiple scenes from RGB-D and then try to merge them... so are there any research papers, thoughts, ideas, algorithms or anything to help?
Asked By : Mohammad Eliass Alhusain
Answered By : D.W.
Yes, one technique is known as super-resolution imaging.
There's a rich literature on the subject, at least for RGB images. You could check Google Scholar to see if there has been any research on super-resolution for RGB-D images (e.g., from 3D cameras such as Kinect, Intel RealSense, etc.).

There are some questions arising from the proof of Lemma 5.8.1 of Cover's book on information theory that confuse me. The first question is why he assumes that we can "consider an optimal code $C_m$". Is he assuming that we are encoding a finite number of words, so that $\sum p_i l_i$ must have a minimum value? Here I give you the relevant snapshot. Second, there is an observation made in these notes that was also made in my class before proving the optimality of Huffman codes, namely: observe that earlier results allow us to restrict our attention to instantaneously decodable codes. I don't really understand why this observation is necessary.
Asked By : Rodrigo
To answer your first question, the index $i$ goes over the range $1,\ldots,m$. The assumption is that there are finitely many symbols. While some theoretical papers consider encodings of countably infinite domains (such as universal codes), usually the set of symbols is assumed to be finite. To answer your second question, the claim is that a Huffman code has minimum redundancy among the class of uniquely decodable codes. The proof of Theorem 10 in your notes, however, only directly proves that a Huffman code has minimum redundancy among the class of instantaneously decodable codes. It does so when it takes an optimal encoding for $p_1,\ldots,p_{n-2},p_{n-1}+p_n$ and produces an optimal encoding for $p_1,\ldots,p_n$ by adding a disambiguating bit to the codeword corresponding to $p_{n-1}+p_n$; it's not clear how to carry out a similar construction for an arbitrary uniquely decodable code.

Assume that we have $l \leq \frac{u}{v}$ and assume that $u=O(x^2)$ and $v=\Omega(x)$. Can we say that $l=O(x)$?
Asked By : user7060
Since $u = O(x^2)$, there exist $N_1,C_1>0$ such that $u \leq C_1x^2$ for all $x \geq N_1$. Since $v = \Omega(x)$, there exist $N_2,C_2>0$ such that $v \geq C_2x$ for all $x \geq N_2$. Therefore for all $x \geq \max(N_1,N_2)$ we have $$ l \leq \frac{u}{v} \leq \frac{C_1x^2}{C_2x} = \frac{C_1}{C_2} x. $$ So $l = O(x)$.

While doing some digging around in the GNU implementation of the C++ standard library, I came across a section in bits/hashtable.h that refers to a hash function "in the terminology of Tavori and Dreizin" (see below). I have tried without success to find information on these people, in the hope of learning about their hash function -- everything points to online versions of the file that the following extract is from. Can anyone give me some information on this?

 * @tparam _H1 The hash function. A unary function object with
 * argument type _Key and result type size_t. Return values should
 * be distributed over the entire range [0, numeric_limits<size_t>::max()].
 *
 * @tparam _H2 The range-hashing function (in the terminology of
 * Tavori and Dreizin). A binary function object whose argument
 * types and result type are all size_t. Given arguments r and N,
 * the return value is in the range [0, N).
 *
 * @tparam _Hash The ranged hash function (Tavori and Dreizin). A
 * binary function whose argument types are _Key and size_t and
 * whose result type is size_t. Given arguments k and N, the
 * return value is in the range [0, N). Default: hash(k, N) =
 * h2(h1(k), N). If _Hash is anything other than the default, _H1
 * and _H2 are ignored.

Asked By : moarCoffee
I read that passage as saying that Tavori and Dreizin introduced the terminology/concept of a "range-hashing function". Presumably, that's a name they use for a hash function with some special properties. In other words, I read it as implying not that Tavori and Dreizin introduced a specific hash function, but that they talk about a category of hash functions and gave it a name.
I don't know if that is what the authors actually meant; that's just how I would interpret it. I tried searching on Google Scholar for these names and found nothing that seemed relevant. A quick search turns up a reference to Ami Tavori at IBM (a past student of Prof. Meir Feder, working on computer science), but I don't know if that's who this is referring to.

In unification there is an "occur-check". For example, $X = a \, X$ fails to find a substitution for $X$ since $X$ appears on the right-hand side too. First-order unification and higher-order unification both have an occur-check. The paper on nominal unification describes a kind of unification based on nominal concepts, but it does not mention the "occur-check" at all. So I am wondering why — does it have an occur-check?
Asked By : alim
Answered By : alim
Yes, it has the occur-check. The ~variable transformation rule of nominal unification has a condition which states "provided X does not occur in t"; what it is saying is exactly the occur-check.

In Sipser's text, he writes: "When a probabilistic Turing machine recognizes a language, it must accept all strings in the language and reject all strings not in the language as usual, except that now we allow the machine a small probability of error." Why is he using "recognizes" instead of "decides"? If the machine rejects all strings that are not in the language, then it always halts, so aren't we restricted to deciders in this case? The definition goes on: for $0 < \epsilon < 1/2$ we say that $M$ recognizes language $A$ with error probability $\epsilon$ if 1) $w \in A$ implies $P(M \text{ accepts } w) \ge 1 - \epsilon$, and 2) $w \notin A$ implies $P(M \text{ rejects } w) \ge 1 - \epsilon$. So it seems like the case of $M$ looping is simply not allowed for probabilistic Turing machines?
Asked By : theQman
Complexity theory makes no distinction between "deciding" and "recognizing". The two words are used interchangeably.
Turing machines considered in complexity theory are usually assumed to always halt. Indeed, usually only time-bounded machines are considered (such as polytime Turing machines), and these halt by definition. In your particular case, you can interpret accept as halting in an accepting state, and reject as halting in a rejecting state. The Turing machine is thus allowed not to halt. However, the class BPP also requires the machine to run in polynomial time, that is, to halt in polynomial time. In particular, the machine must always halt.

In Johnson's 1975 paper 'Finding All the Elementary Circuits of a Directed Graph', his pseudocode refers to two separate data structures, the logical array blocked and the list array B. What is the difference between them and what do they represent? Moreover, what does '$V_K$' mean?
Asked By : Danish Amjad Alvi
In the pseudocode, "T array" means an array where each element has type T. Logical is the type of a boolean (i.e., it can hold the value true or false). Integer list is the type of a list of integers. Thus, in the pseudocode, "logical array blocked(n)" is the declaration of an array called blocked containing n elements, where each element is a boolean, and "integer list array B(n)" is the declaration of an array called B containing n elements, where each element is a list of integers. $V_K$ isn't clearly defined, but from context, I'd guess it is the set of vertices in $A_K$.

I ran the following grammar (pulled from the dragon book) in the Java Cup Eclipse plugin:

S' ::= S
S ::= L = R | R
L ::= * R | id
R ::= L

The items associated with state 0 given in the Automaton View are as follows:

S' ::= ⋅S, EOF
S ::= ⋅L = R, EOF
S ::= ⋅R, EOF
L ::= ⋅* R, {=, EOF}
L ::= ⋅id, {=, EOF}
R ::= ⋅L, EOF

Shouldn't the last item's lookahead set be {=, EOF}? This item could be derived from S ::= ⋅R (in which case the lookahead set is {EOF}) or from L ::= ⋅* R (in which case the lookahead set is {=, EOF}).
Asked By : npCompleteNoob
Answered By : rici
In state 0, R ::= ⋅L can only be generated by S ::= ⋅R. In L ::= ⋅* R, the dot precedes *, not R, so no further items are generated by it. The dragon book uses this grammar as an example of the inadequacy of SLR, and the correct computation of the lookahead in this case is an instance of that: the SLR algorithm bases lookahead decisions on the FOLLOW set rather than the actual lookahead possibilities in the state, which will eventually lead to a shift/reduce conflict on the lookahead symbol =.

Would it be possible to form such a group using the ADD instruction and the NOT instruction?
Asked By : VermillionAzure
Sure. The integers modulo $2^{32}$ form a group. The group operation is addition modulo $2^{32}$, which can be implemented by the ADD instruction. You don't need the NOT instruction. There are other groups you could form, such as the integers modulo 2, and many more. I recommend you read the definition of a group and play around with some examples.

Consider the generator of the selections:

for (i1 = 0; i1 < 10; i1++)
  for (i2 = i1; i2 < 10; i2++)
    for (i3 = i2; i3 < 10; i3++)
      for (i4 = i3; i4 < 10; i4++)
        printf("%d%d%d%d\n", i1, i2, i3, i4);

It prints 0000, 0001, 0002, ..., 0009, 0011 (0010 is skipped), 0012, ..., 0019, 0022 (0020 and 0021 are skipped), 0023, ..., 8999, 9999. The generated selections have the following property: the order within the selection does not matter, i.e. 0011, 0101, 1001, 1010, 1100 are all the same. Basically it is a 4-combination of the multiset {0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, ..., 9, 9, 9, 9}. What would you name this type of selection? I always get stuck when I say: 4-xxxxxxxx of the set {0, 1, 2, ..., 9}, where xxxxxxxx is the name of this selection.
Asked By : Alvin
Answered By : David Richerby
The sequences your program generates are non-decreasing sequences.

I have been hearing the phrases quasipolynomial, superpolynomial and subexponential. I think I know what quasipolynomial and subexponential are.
I believe these are functions respectively of the form $n^{\log^c n}$ and $n^{n^{1/c}}$ for some $c>1$. What does superpolynomial mean?
Asked By : Turbo
Superpolynomial means $n^{\omega(1)}$, that is, growing faster than any polynomial. More explicitly, $n^{f(n)}$ for some function $f$ satisfying $\lim_{n\to\infty} f(n) = \infty$.

So for these 5 conditions, I am trying to find the solution/formula for them. What would $a_n$ equal, basically? If it helps, the recurrence relation these 5 conditions were generated from was $a_n = a_{n - 1} + 2n$. Any help would be greatly appreciated. $$ \begin{align*} a_0 &= 4 \\ a_1 &= 6 \\ a_2 &= 10 \\ a_3 &= 16 \\ a_4 &= 24 \end{align*} $$
Asked By : CMcorpse
There are infinitely many sequences starting $4,6,10,16,24$. If I understand you correctly, this sequence was generated according to the rule $a_n = a_{n-1} + 2n$ with the initial value $a_0 = 4$. In that case, you have $$ a_n = a_0 + \sum_{k=1}^n (2k) = a_0 + 2 \frac{n(n+1)}{2} = n^2 + n + 4. $$

I am developing an algorithm for solving the following problem: given a set of items with unknown feature(s) (but a known distribution of the feature(s)), the algorithm must choose which items to measure (every measurement has some cost). I use value of information theory to find these measurements. After the measurements are done, the algorithm chooses the K best items from the set, using some utility function that depends on the feature values of the items. I have crafted a few synthetic data sets, but perhaps there are benchmark data sets that are used for this kind of problem?
Asked By : farseer
In this sort of situation, two standard answers are: (1) figure out what the practical applications of your algorithm are, find a dataset associated with that particular application, try your algorithm on it and see how well it works, measuring success with some metric appropriate to that application; or (2) look through the research literature to find previously published papers that try to solve the same problem.
Look at what benchmarks they used. Use the same benchmarks, so that you can compare how well your algorithm does to previously published algorithms.

I have a deterministic function $f(x_1, x_2, ..., x_n)$ that takes $n$ arguments. Given a set of arguments $X = (x_i)$, I can compute $U_X = \{ i \in [1, n] : x_i \text{ was read during the evaluation of } f(X) \}$. Would it be valid to use the set $K_X = \{(i, x_i): i \in U_X\}$ as a memoization key for $f(X)$? In particular, I am worried that there may exist $X=(x_i)$ and $Y=(y_i)$ such that:
$$ \tag1 U_X \subset U_Y $$
$$ \tag2 \forall i \in U_X, x_i = y_i $$
$$ \tag3 f(X) \neq f(Y) $$
In my case, the consequence of the existence of such $X$ and $Y$ would be that $K_X$ would be used as a memoization key for $f(Y)$, and would thus return the wrong result. My intuition says that, with $f$ being a deterministic function of its arguments, there should not even exist $X$ and $Y$ (with $U_X$ a strict subset of $U_Y$) such that both $(1)$ and $(2)$ hold (much less all three!), but I would like a demonstration of it (and, if it turns out to be trivial, at least pointers to the formalism that makes it trivial).

Asked By : Jean Hominal

As long as $f$ is deterministic and depends only on its arguments (not on other global variables), yes, you can safely use that as a memoization key. The bad case cannot happen. You can see this by thinking about what happens as it reads the $t$th argument and using induction on $t$. Let $i_1,i_2,i_3,\dots,i_t$ be the sequence of argument indices read (i.e., $f$ first reads $i_1$ when running on input $x$, then reads $i_2$, etc.). For each $t$, if $f(x)$ and $f(y)$ have behaved the same up until just before they read the $t$th of these, then the next index examined by $f(x)$ and $f(y)$ will be the same, say $i_t$; if additionally $x_{i_t} = y_{i_t}$, then it follows that $f(x)$ and $f(y)$ will behave the same up until just before they read the $(t+1)$st of their arguments.
Since $i_t \in U_X$, $x_{i_t} = y_{i_t}$ is guaranteed by your assumptions. Now use induction. This assumes everything is deterministic. For instance, $f$ had better not be allowed to read a random-number generator, the current time of day, or other global variables. Beware that on some platforms there can be surprising sources of non-determinism, e.g., due to aspects of behavior that are unspecified and might depend on external factors. For instance, I have a recollection that floating-point math in Java can introduce non-determinism in some cases, and so if you want to write code that is verifiably deterministic, you probably want to avoid all floating-point math.

I'm studying program verification and came across the following triple: $$ \{\top\} \;P \; \{y=(x+1)\} $$ What's the meaning of the $\top$ symbol in the precondition? Does it mean $P$ can take any input?

Asked By : Fratel

The symbol $\top$, known as top, stands for "True". There is also a symbol $\bot$, known as bottom, which stands for "False". Top is always true, and bottom is always false. In your case, having a precondition that always holds is the same as having no precondition.

I'm describing the semantics of a new optimization for Java using operational semantics, but I'm not sure how to define the transition system. I found this link: http://www.irisa.fr/celtique/teaching/PAS/opsem-2016.pdf where on slide 20 the transition system is defined, but I don't get it.

Asked By : El Marce
Answered By : gardenhead

Operational semantics utilizes the tools of logic, so as a prerequisite we must understand judgements and inference rules. A judgement is like a proposition, but more general. It asserts a relation between two entities of our language. For example, in programming, we often employ the judgement $e: \tau$, asserting that expression $e$ has type $\tau$. Inference rules are used to define judgements.
They have the general form
$$ \frac{J_1 \dots J_n}{J} $$
which reads: if we know judgements $J_1$ through $J_n$, we can infer judgement $J$. For example, we may have the following self-explanatory inference rule for the previously defined typing judgement:
$$ \frac{n: \text{int} \quad m: \text{int}}{n+m: \text{int}}. $$
A structural operational semantics is defined using a transition system between states. In a programming language, the states are all closed expressions in the language, and the final states are values. Formally, we make use of two judgements: $e_1 \to e_2$, stating that expression $e_1$ transitions to state $e_2$ in one step; $e \space \text{final}$, stating that expression $e$ is a final state of the system. Here are some example inference rules in a language with arithmetic and function abstraction:
$$ \frac{}{n \text{ final}} $$
$$ \frac{}{n+0 \to n} $$
$$ \frac{n + m \to k}{s(n) + m \to s(k)} $$
$$ \frac{}{\lambda x. e \ \text{final}} $$
$$ \frac{}{(\lambda x.\, e_1)\, e_2 \to [e_2/x]e_1} $$
Most languages do not have a formal definition, including Java (as far as I know). There are also other methods for defining the semantics of a language, but for describing an optimization, I believe structural dynamics is a wise choice, as it has a natural notion of time complexity (the number of transitions).

A CFG is in strong GNF when all rewrite rules are in the following form: $A \rightarrow aA_1...A_n$ where $n \leq 2$.

Asked By : user393454

Such an algorithm is described in Koch and Blum, Greibach Normal Form Transformation, Revisited.

The definition of the variable <number> ::= <digit> | <digit><number>, where <digit> is defined as <digit> ::= 1|2|3|4|5, apparently reflects the following syntax diagram (image not shown). Please explain why this is the case. I am particularly confused by the clause after the vertical OR line (i.e. <digit><number>) and whether it has something to do with the fact that a number can consist of many digits.
Asked By : user98937
Answered By : adrianN

<number> ::= <digit> | <digit><number> means that a number is either just a digit, or a digit followed by a number. So 1 is a number (just a single digit), and 12 is also a number (a single digit, followed by a number that consists of just the single digit '2'). Similarly 123 is a number that consists of a single digit '1' and a number (that itself consists of a single digit '2' followed by a number (that consists of the single digit '3')).

Given a digraph, determine if the graph has any vertex-disjoint cycle cover. I understand that the permanent of the adjacency matrix will give me the number of cycle covers for the graph, which is 0 if there are no cycle covers. But is it possible to check the above condition directly?

Asked By : Shadow

Your problem is answered in the Wikipedia article on vertex-disjoint cycle covers. According to the article, you can reduce this problem to that of finding whether a related graph contains a perfect matching. Details can be found in a paper of Tutte or in recitation notes of a course given by Avrim Blum. As a comment, in the graph-theoretic literature a vertex-disjoint cycle cover is known as a 2-factor.

The CS188 course from Berkeley goes to great lengths in explaining why the optimality of the $A^*$ algorithm is conditioned on the admissibility of the heuristic. Note: admissibility of the heuristic means that: $$\forall n, 0 \leq h(n) \leq h^*(n) $$ with $h^*(n)$ being the true optimal distance from $n$ to the goal. The course's proof reasoning is sound and makes perfect sense. However, this example uses a heuristic that is not admissible. For instance, the distance between the start state and the goal state is only $h^* = 2$ while the heuristic of the start state is $h = 17$ (see the example with the 8-puzzle, using the Nilsson heuristic). Obviously, $17 \gt 2$ and therefore the heuristic is not admissible.
However, after having implemented the algorithm and tested it, it seems that it is able to find the optimal solution each time. Trying to alter the heuristic function only makes it worse than optimal. So, if the admissibility of the heuristic is a necessary condition for the guaranteed optimality of $A^*$, how come this example seems to refute it? Did I miss something? Or perhaps, does the condition on admissibility mean that sometimes the algorithm will be optimal even with an inadmissible heuristic, but only an admissible one guarantees optimality? I'd be curious to hear any thoughts about that.

Asked By : Jivan

It's exactly as you suspected: sometimes, the algorithm will find an optimal solution even with an inadmissible heuristic... only an admissible heuristic guarantees the optimality of the solution returned. If you use an inadmissible heuristic, there's no guarantee what will happen. The solution you get back might be the best one; or it might not be. And, you probably won't have any way of telling whether the solution you got back was optimal or not, so you won't even know for sure when this is a problem and when it isn't. If you use an inadmissible heuristic, you'll often (but not always) get sub-optimal solutions.

Question: How many 32-bit integers can be stored in a 16-bit cache line? Answer: 4. Can somebody please explain to me why the answer is 4? I did not understand the reason, and I think they should have given us more information, such as the number of blocks in the cache.

Asked By : Sara Chatila

There's probably a typo there: it's not 16 bits but rather 16 bytes. Each 32-bit integer takes 4 bytes, so 16/4 = 4.

I was reading about the Multi-Layer Perceptron (MLP) and how we can learn patterns using it. The algorithm was stated as:

Initialize all weights to small values.

Compute the activation of each neuron using the sigmoid function.
Compute the error at the output layer using $\delta_{ok} = (t_{k} - y_{k})y_{k}(1-y_{k})$.

Compute the error in the hidden layer(s) using $\delta_{hj} = a_{j}(1 -a_{j})\sum_{k}w_{jk}\delta_{ok}$.

Update the output layer weights using $w_{jk} := w_{jk} + \eta\delta_{ok}a_{j}^{hidden}$ and the hidden layer weights using $v_{ij} := v_{ij} + \eta\delta_{hj}x_{i}$.

Here $a_{j}$ is the activation of neuron $j$, $t_{k}$ is the target output, $y_{k}$ is the actual output, and $w_{jk}$ is the weight of the connection between neurons $j$ and $k$. My question is: how do we get that $\delta_{ok}$? And from where do we get $\delta_{hj}$? How do we know this is the error? Where does the chain rule from differential calculus play a role here?

Asked By : Sigma
Answered By : Martin Thoma

How do we get that $\delta_{ok}$? You calculate the gradient of the network. Have a look at "Tom Mitchell: Machine Learning" if you want to see it in detail. In short, your weight update rule is $$w \gets w + \Delta w$$ with the $j$-th component of the weight vector update being $$\Delta w^{(j)} = - \eta \frac{\partial E}{\partial w^{(j)}}$$ where $\eta \in \mathbb{R}_+$ is the learning rate and $E$ is the error of your network. $\delta_{ok}$ is just $\frac{\partial E}{\partial w^{(o,k)}}$. So I guess $o$ is an output neuron and $k$ a neuron of the last hidden layer.

Where does the chain rule play a role? The chain rule is applied to compute the gradient. This is where the "backpropagation" comes from. You first calculate the gradient of the last weights, then the weights before, ... This is done by applying the chain rule to the error of the network.

The weight initialization is not only small, but it also has to be different for the different weights of one layer. Typically one chooses (pseudo)random weights.
See: Glorot, Bengio: Understanding the difficulty of training deep feedforward neural networks.

I have an exercise about cache memory; at first the cache is empty. I have a cache memory with 16 lines, each line holds 16 bytes, and the address is 16 bits. So I know that the INDEX will be composed of 4 bits and the offset will be composed of 4 bits too. So I will have this:

bit numbers: (15 ... TAG ... 8)(7 ... INDEX ... 4)(3 ... OFFSET ... 0)

Now I have to say whether there is a cache miss or not, and whether it loads or not. Addresses that I have:

3000 : tag = 30 / index = 0 , offset = 0
2040 : tag = 20 / index = 4 , offset = 0
2404 : tag = 24 / index = 0 , offset = 4
3002 : tag = 30 / index = 0 , offset = 2
20C4 : tag = 20 / index = 12 , offset = 4
3003 : tag = 30 / index = 0 , offset = 3
24C4 : tag = 24 / index = 12 , offset = 4

If someone can explain to me how I know if it loads and if I have a cache miss, I would be very happy. Thanks!

Asked By : Foushi
Answered By : TEMLIB

Starting with an empty cache, and assuming direct mapping, the tag memory will contain, for each of the 16 indexes, address bits [15:8] and a "valid" bit.

3000 : Miss -> Load 30 into tag 0 and fill line (3000..300F)
2040 : Miss -> Load 20 into tag 4 and fill line (2040..204F)
3001 : Hit : Already in the cache
2404 : Miss -> Load 24 into tag 0 and fill line (replaces 3000..300F -> 2400..240F)
3002 : Miss -> Load 30 into tag 0 and fill line (replaces 2400..240F -> 3000..300F)
20C4 : Miss -> Load 20 into tag C and fill line (20C0..20CF)
3003 : Hit : Already in the cache
24C4 : Miss -> Load 24 into tag C and fill line (24C0..24CF)

Very few hits here; it would greatly benefit from a two-way set-associative cache.
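The walk-through above can be checked mechanically. Here is a small sketch (my own illustration, not part of the original answer) that simulates a direct-mapped cache with 16 lines of 16 bytes, using the tag/index/offset split from the question:

```python
# Sketch: simulate the direct-mapped cache from the exercise above.
# 16-bit addresses: offset = bits [3:0], index = bits [7:4], tag = bits [15:8].

def simulate(addresses):
    cache = {}  # index -> tag; an absent index models an invalid line
    results = []
    for addr in addresses:
        index = (addr >> 4) & 0xF
        tag = addr >> 8
        if cache.get(index) == tag:
            results.append((hex(addr), "hit"))
        else:
            cache[index] = tag  # miss: load the line, evicting any old tag
            results.append((hex(addr), "miss"))
    return results

trace = [0x3000, 0x2040, 0x3001, 0x2404, 0x3002, 0x20C4, 0x3003, 0x24C4]
for addr, outcome in simulate(trace):
    print(addr, outcome)
```

Running it reproduces the miss/hit pattern listed in the answer (only 3001 and 3003 hit), since addresses 3000, 2404 and 3002 all map to index 0 and keep evicting each other.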
I encountered a system of ~5000 random nodes connected by ~8000 non-Hookean springs, with ~1300 nodes at the boundary fixed as the "wall". The potential of the springs is of the form $dx \cdot e^{dx/a}$, where $a$ is a constant and $dx$ is the strain (displacement/original length) of the spring. I am using a Monte Carlo method to find the energy-minimized configuration after I perform some "perturbation", say, a simple shear or an isotropic expansion of the whole system. It seems that conventional energy minimization schemes such as "steepest descent" or "simulated annealing" are not working as efficiently here as in linear situations; they always fail to converge to a satisfactorily balanced state. Could someone share your experiences in dealing with such non-linear situations?

Asked By : Long Liang
Answered By : Long Liang

OK, I finally fixed this issue; the right thing to do in such a non-linear situation is to use simulated annealing. I am implementing a gradient-guided simulated annealing, which works pretty efficiently. Thanks to everyone who gave me suggestions and guidance towards the right path! Have fun (mixed with a lot of frustrations) with modeling!

I'm writing a computer game and one of my game's objects must follow a movement path that is very similar to the following graph (image not shown). The bold lines are the Y and X axes. In order to be able to code this movement though, I need the algebraic equation that translates to this graph, for example f(x) = x^2. Is there any mathematical technique I could use to obtain the algebraic form of this function?

Asked By : exophrenik
Answered By : Bakuriu

We can start from $\sin(x)$, which has a nice regular graph. To avoid negative values you can simply use the absolute value $|\sin(x)|$. This produces a graph similar to yours but with constant height. To decrease the height as $x \to \infty$ we want to multiply that function by something that decays. For example $\frac{1}{1+|x|}$ goes to $0$ as $x \to \infty$ and is symmetric.
So $\frac{1}{1+|x|}|\sin(x)|$ is something similar to what you want, with a maximum height of $1$. If you multiply by a constant $A$ you "set" the maximum height to $A$. If you want to change the width of the bumps you can simply multiply the argument of the $\sin$ function; for example $\frac{A}{1+|x|}|\sin(2x)|$ will have maximum height $A$ and a width that is half the width of the normal $\sin$, while using $\sin(x/2)$ would produce a bump that is twice the width of the normal $\sin$. See this Wolfram Alpha graph for an example result. You can also change the decay of the height by using a different power of $|x|$. For example using $\frac{1}{1+x^2}$ you'd have a faster drop to $0$, while using $\frac{1}{1+\sqrt{|x|}}$ would produce a slower change in height.

Consider the problem of representing in memory numbers in the range $\{1,\ldots,n\}$. Obviously, exact representation of such a number requires $\lceil\log_2(n)\rceil$ bits. In contrast, assume we are allowed to have compressed representations such that when reading the number we get a 2-approximation of the original number. Now we can encode each number using $O(\log\log n)$ bits. For example, given an integer $x\in\{1,\ldots,n\}$, we can store $z = \text{Round}(\log_2 x)$. When asked to reconstruct an approximation for $x$, we compute $\widetilde{x} = 2^z$. Obviously, $z$ is in the range $\{0,1,\ldots,\log_2 n\}$ and only requires $O(\log\log n)$ bits to represent. In general, given a parameter $\epsilon>0$, what is the minimal number of bits required for saving an approximate representation of an integer in the above set, so that we can reconstruct its value up to a multiplicative error of $(1+\epsilon)$? The above example shows that for $\epsilon=1$, approximately $\log\log n$ bits are enough. This probably holds asymptotically for every constant $\epsilon$. But how does $\epsilon$ affect the memory (e.g., does it require $\Theta(\frac{1}{\epsilon^2}\log\log n)$ bits?).
Asked By : R B

Storing $x$ to within a $1+\epsilon$ approximation can be done with $\lg \lg n - \lg(\epsilon) + O(1)$ bits. Given an integer $x$, you can store $z = \text{Round}(\log_{1+\epsilon} x)$. $z$ is in the range $\{0,1,\dots,\log_{1+\epsilon} n\}$, so it requires about $b = \lg \log_{1+\epsilon} n$ bits. Doing a bit of math, we find $$b = \lg \frac{\lg n}{\lg(1+\epsilon)} = \lg \lg n - \lg \lg (1+\epsilon).$$ Now $\lg(1+\epsilon) = \log(1+\epsilon)/\log(2) \approx \epsilon/\log(2)$, by a Taylor series approximation. Plugging in, we get $$b = \lg \lg n - \lg(\epsilon) + \lg(\log(2)),$$ which yields the claimed answer.

I'm studying with "Numerical Solution of Partial Differential Equations" by K.W. Morton and D.F. Mayers. On page 25, it says "2(add) + 2(multiply) operations per mesh point for the explicit algorithm (2.19)", but it seems like 3(add) + 2(multiply) to me; where did I go wrong?

(2.19) $U_j^{n+1}=U_j^{n}+\mu(U_{j+1}^{n}-2U_j^{n}+U_{j-1}^{n})$

My counting is:
2(add) and 1(multiply) inside the bracket,
1(multiply) for $\mu$ and the brackets,
1(add) for $U_j^{n}$ and the rest.

Answered By : Tom van der Zanden

Note that $U^n_j + \mu (U^n_{j+1}-2U^n_j+U^n_{j-1}) = (1-2\mu)U^n_j+\mu(U^n_{j+1}+U^n_{j-1})$, and $1-2\mu$ may be precomputed.

For a certain maximization problem, a "constant-factor approximation algorithm" is an algorithm that returns a solution with value at least $F\cdot \textrm{Max}$, where $F<1$ is some constant and $\textrm{Max}$ is the exact maximal value. What term describes an algorithm in which the approximation factor is a function $F(n)$ of the problem size, and $F(n)\to 1$ as $n\to \infty$?

Asked By : Erel Segal-Halevi

You can say your algorithm is asymptotically optimal. One example is universal codes, which are a certain type of code for the natural numbers. They satisfy the following property. Let $D$ be a monotone probability distribution over the natural numbers, that is, $\Pr[D=n] > \Pr[D=n+1]$.
The average codeword length under a universal code is $H(D)(1 + o(1))$. Since $H(D)$ is the optimal length, universal codes are asymptotically optimal, in exactly the same sense as the one you're after.

Good evening! I am studying scheduling problems and I have some difficulty understanding constraints of potentials. Let $t_j$ be the time when task $j$ starts, and $t_i$ the time when task $i$ starts. I'm assuming that $a_{ij}$ is the length of task $i$, but I'm not sure. Why is a "constraint of potential" mathematically expressed by $$t_j-t_i \le a_{ij}?$$ Shouldn't it be the reverse, $t_j-t_i \ge a_{ij}$? If we know that the length of task $i$ is $a_{ij}$, isn't it impossible to do something in less than the necessary allocated time?

Asked By : Marine1

I suspect there's some misunderstanding about the definition/meaning of $a_{ij}$. The time to complete task $i$ depends only on $i$ (not on $j$), so it wouldn't make sense to use notation like $a_{ij}$ for the length of task $i$: we'd instead expect to see only the index $i$, but not $j$, appear in that notation. I suspect you'll probably need to go back to your textbook or other source on scheduling and look for a precise definition of the notation that it uses. If your textbook doesn't define its notation, look for a better textbook. As for the constraint $t_j - t_i \le a_{ij}$, it expresses that task $j$ should start at most $a_{ij}$ seconds after task $i$ starts. So, it'd make more sense for $a_{ij}$ to be a permissible delay that expresses the maximum time you can wait to start task $j$ once task $i$ has been started.

Hello, I am a layman trying to analyze game data from League of Legends, specifically looking at predicting the win rate for a given champion given an item build. A player can own up to 6 items at the end of a game. They could have purchased these items in different orders or adjusted their inventory position during the course of the game.
In this fashion the dataset may contain the following rows:

champion id | item ids                             | win(1)/loss(0)
------------------------------------------------------------------
45          | [3089, 3135, 3151, 3157, 3165, 3285] | 1
45          | [3151, 3285, 3135, 3089, 3157, 3165] | 1
45          | [3165, 3285, 3089, 3135, 3157, 3151] | 0

While the items are in a different order, the build is the same. My initial thought would be to simply multiply the item ids, as this would give me an integer value representing that combination of 6 items. While there are hundreds of items, in reality a champion draws off a small subset (~20) of those to form the core (3 items) of their build. A game may also finish before players have had time to purchase 6 items:

item ids
------------------------------------
[3089, XXXX, 3151, 3285, 3165, 0000]
[XXXX, 3285, XXXX, 3165, 3151, 0000]
[3165, 3285, 3089, XXXX, 0000, 0000]

XXXX = item from outside the core subset
0000 = empty inventory slot

As item 3089 complements champion 45, core builds that include item 3089 have a higher win rate than core builds which are missing it. The size of the data set available for each champion varies between 10000 and 100000; the mean is probably around 35000. Is this a suitable problem for supervised classification? How should I approach finding groups of core items and their win rates?

Asked By : Justin

Yes. If you have a non-trivial data set of this form, this would be a reasonable fit for statistical analysis. For independent variables, you have one binary feature for each core item (indicating whether that item was purchased or not); the outcome is a binary variable (win or loss). Accordingly, one reasonable approach would be to try logistic regression. You'll have one independent variable $X_i$ for each item; $X_i=1$ means that item $i$ is one of the 6 items that the champion purchased in this game, $X_i=0$ means item $i$ was not purchased.
You'll have a dependent variable $Y$; $Y=1$ means that the champion won this game, $Y=0$ means the champion lost. Then logistic regression will tell you which items tend to be associated with winning games. There are other methods you could try as well: pretty much any method for supervised classification that works well with binary/categorical variables. The one thing I don't recommend you do is multiply the item codes. That's not going to help. Instead, just have 20 features, where each feature indicates whether a particular item was purchased or wasn't.

I am reading a somewhat old article entitled "Adaptive image region-growing" by Y. Chang and X. Li, published in IEEE Transactions on Image Processing, DOI: 10.1109/83.336259. Well, the problem is that the method is not well explained. For example, in equations (2) and (3) I don't get the meaning of Pr(). Any idea about that?

Asked By : ALJI Mohamed

$\Pr(E)$ normally refers to the probability of the event $E$. Consult a textbook on probability to learn more about standard notation for probability and statistics.

If I have the integer "1234" stored in a file, the size of the file when I use the command "ls -l" or "wc -c" is 5. If I have "12345", the size of the file is 6. Basically, the number of bytes in a file = (number of individual digits) + 1. Now, I am assuming the "+1" comes from the space it takes to create the name of the file. But why is the rest of the byte count equal to the number of individual characters? Does the file system read the file character by character even when we "write an integer" to the file? Elaboration on this would be very helpful.

Asked By : Jonathan

The +1 is for the newline character. The byte count equals the number of characters, in this case, because each character takes exactly one byte. That need not be true in general (see, e.g., UTF-8), but it is for the example you listed. You didn't write an integer to the file.
You wrote a string -- a sequence of characters, namely the character 1, then the character 2, then 3, then 4, then a newline. Files don't store integers or other types. They store a sequence of bytes. Each character is converted to one or more bytes using an encoding, and then the sequence of bytes is written.

(This question was originally posted at http://academia.stackexchange.com/questions/68675 but was considered too specific for Academia.) In computer science/engineering research papers relating to performance improvement wherein execution time is normalized to control benchmarks (i.e. speedup plots), how is that normalization generally implemented when the control(s) is/are executed multiple times in order to avoid the possibility of other processes/interrupts/etc. taking processor time and skewing the results? My first thought would be to just use the minimum time achieved for each benchmark, but as that would probably not be the minimum possible execution time in most cases and, depending on the situation, overall execution time rather than best-case execution time may be more important, is it better to just accept the skewing and go off the means? Or is it better to use the median for each set of results?

Asked By : JAB

One simple approach is to use the median. It is less sensitive to outliers than the mean, but still serves as a good summary of typical running time. Alternatively, you could use the mean, but in that case you should inspect all the results yourself to check for the possibility of outliers. I personally would recommend the median. Papers often show confidence intervals as well, so that you can assess the effect of statistical noise. Normally, benchmarks are run on an isolated system to minimize interference from other tasks. Note that there are many challenges with benchmarking.
Even simple irrelevant changes can cause significant changes to performance, e.g., because they happened to change cache alignment to something that is randomly better or worse. Therefore, it can be difficult to separate out whether your optimization led to a 3% performance improvement because your optimization is an improvement, or if that's just randomness (e.g., just recompiling with slightly different settings can change the running time by +/- 3%, and you happened to get unlucky once and lucky the other time).

So I'm reading "Search Through Systematic Set Enumeration" by Ron Rymon (currently available online for free). I'm having a problem with the notation in the definition presented below:

The Set-Enumeration (SE)-tree is a vehicle for representing and/or enumerating sets in a best-first fashion. The complete SE-tree systematically enumerates elements of a power-set using a pre-imposed order on the underlying set of elements. In problems where the search space is a subset of that power-set that is (or can be) closed under set-inclusion, the SE-tree induces a complete irredundant search technique. Let E be the underlying set of elements. We first index E's elements using a one-to-one function ind: E -> $\mathbb{N}$. Then, given any subset S $\subseteq$ E, we define its SE-tree view:

Definition 2.1 A Node's View $View(ind,S) \stackrel{def}{=} \{e \in E \mid ind(e) \gt \max_{e' \in S} ind(e')\}$

In the paper there is an example of a tree made with what appears to be E={1,2,3,4}. I have some familiarity with set-builder notation, but much of the other parts of the "node's view" are confusing me. I skimmed ahead to see if there were clarifications, but I didn't manage to find them, so either: a) the author is assuming a competency I do not have, b) the explanation is there and I couldn't find it, or c) the author is doing a horrible job as an author.
So with the hope that it is one of the first two: I'm assuming that the prime in $e'$ is for the complement of the set $e$, so if e = {1}, then e' = {2,3,4}. Is this correct? What is this ind function? What would ind({3,4}) be, for example? $\max_{e' \in S}$? Is this the maximum height of the subtree of the complement of e? Any assistance on this would be most appreciated.

Asked By : BrotherJack
Answered By : Pål GD

No, the prime is not complement; $e'$ is just a different variable than $e$. In words, $\text{View}(\text{ind},S)$ is the set of all elements whose index is higher than the indices of the elements in $S$. The function $\text{ind}: E \to \mathbb{N}$ gives a number to each element, and $\max_{e' \in S}\text{ind}(e')$ is simply the highest index in $S$. Then $\text{View}(\text{ind},S)$ is the set of all elements whose index is higher than $\max_{e' \in S}\text{ind}(e')$.

When we perform average-case analysis of algorithms, we assume that the inputs to the algorithm are sampled uniformly from some underlying space. For example, the average-case analysis of quicksort assumes that the unsorted array is uniformly sampled from the $n!$ permutations of $\{1,\ldots,n\}$. Suppose instead that the inputs to an algorithm are chosen non-uniformly over the input space. Is the resulting analysis still "average case" analysis? If the distribution causes the algorithm to perform at its worst (resp. best), is there a standard name for it? E.g. "adversarial (resp. favourable) distribution of inputs".

Asked By : PKG

Yes, the expected running time under some other distribution would still count as an example of average-case analysis. However, when you describe it to someone, make sure you explain what distribution you're using. I wouldn't recommend that you just call it "average-case analysis" without also explaining that you're using a non-standard probability distribution. The worst-case distribution is always a point distribution that assigns all its probability to a single input.
In other words, the worst-case probability distribution that makes the expected running time as large as possible is in fact just a single input: the input that makes the algorithm run as long as possible. Consequently, this kind of worst-case analysis coincides with the "standard" notion of worst-case running time. For this reason, there is no need for a special "name" for this; worst-case running time already covers it. The same is true for best-case running time and the probability distribution that makes the expected running time as small as possible.

I am getting properly stuck into reinforcement learning and I am currently reading the review paper by Kober et al. (2013). And there is one constant feature that I cannot get my head around, but which is mentioned a lot, not just in this paper, but in others too; namely the existence of gradients. In section 2.2.2 they say, with respect to finite-difference gradients: The approach is very straightforward and even applicable to policies that are not differentiable. What does it mean to say that the gradients exist and, indeed, how do we know that they exist? When wouldn't they exist?

Asked By : Astrid

The gradient doesn't exist / isn't well-defined for non-differentiable functions. What they mean by that statement is that there is an analogous version of gradients that can be used instead of the gradient.

Discrete functions. In the discrete case, finite differences are the discrete version of derivatives. The derivative of a single-variable continuous function $f:\mathbb{R} \to \mathbb{R}$ is $df/dx$; the partial difference (a discrete derivative) of a single-variable discrete function $f:\mathbb{Z} \to \mathbb{Z}$ is $\Delta f : \mathbb{Z} \to \mathbb{Z}$ given by $$\Delta f(x) = f(x+1)-f(x).$$ There's a similar thing that's analogous to a gradient. If we have a function $f(x,y)$ of two variables, the gradient consists of the partial derivatives $\partial f / \partial x$ and $\partial f / \partial y$.
In the discrete case $g(x,y)$, we have partial differences $\Delta_x g$ and $\Delta_y g$, where $$\Delta_x g(x,y) = g(x+1,y) - g(x,y)$$ and similarly for $\Delta_y g$.

Continuous functions

If you have a continuous function that is not differentiable at some points (so the gradient does not exist there), sometimes you can use the subgradient in lieu of the gradient.
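The finite-difference idea discussed above can be made concrete: when a function (such as a policy's expected return) can only be evaluated pointwise, its gradient can still be estimated numerically. A minimal sketch (the example function and the step size `h` are illustrative choices, not from the paper under discussion):

```python
import numpy as np

def fd_gradient(f, x, h=1e-2):
    """Central finite-difference estimate of the gradient of f at x.

    Usable even when no closed-form derivative is available, as long
    as f can be evaluated at nearby points.
    """
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h          # perturb one coordinate at a time
        grad[i] = (f(x + step) - f(x - step)) / (2.0 * h)
    return grad

# f(x, y) = x^2 + 3y has gradient (2x, 3); at (1, 2) that is (2, 3)
g = fd_gradient(lambda v: v[0] ** 2 + 3.0 * v[1], [1.0, 2.0])
```

Note that the central difference is exact for polynomials of degree at most two, which is why the toy example above recovers the gradient to machine precision.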
Transactions of the American Mathematical Society

Published by the American Mathematical Society, the Transactions of the American Mathematical Society (TRAN) is devoted to research articles of the highest quality in all areas of pure and applied mathematics. The 2020 MCQ for Transactions of the American Mathematical Society is 1.43.

New range theorems for the dual Radon transform
by Alexander Katsevich
Trans. Amer. Math. Soc. 353 (2001), 1089-1102

Three new range theorems are established for the dual Radon transform $R^*$: on $C^\infty$ functions that do not decay fast at infinity (and admit an asymptotic expansion), on $\mathcal {S}(Z_n)$, and on $C_0^\infty (Z_n)$. Here $Z_n:=S^{n-1}\times \mathbb {R}$, and $R^*$ acts on even functions $\mu (\alpha ,p)=\mu (-\alpha ,-p), (\alpha ,p)\in Z_n$.
Alexander Katsevich
Affiliation: Department of Mathematics, University of Central Florida, Orlando, Florida 32816
MR Author ID: 320907
Email: [email protected]
Received by editor(s): January 20, 1998
Received by editor(s) in revised form: June 24, 1999
Published electronically: October 11, 2000
Additional Notes: This research was supported in part by NSF grant DMS-9704285
Journal: Trans. Amer. Math. Soc. 353 (2001), 1089-1102
MSC (2000): Primary 44A12
DOI: https://doi.org/10.1090/S0002-9947-00-02641-6
Title: PENELLOPE II. CVSO 104: a pre-main sequence close binary with an optical companion in Ori OB1 Authors: FRASCA, Antonio Boffin, H. M. J. Manara, C. F. ALCALA', JUAN MANUEL Ábrahám, P. COVINO, Elvira Fang, M. Gangi, Manuele Ettore Herczeg, G. J. Kóspál, Á. Venuti, L. Walter, F. M. ALONSO SANTIAGO, JAVIER Grankin, K. Siwak, M. Alecian, E. Cabrit, S. Journal: ASTRONOMY & ASTROPHYSICS First Page: A138 Abstract: We present results of our study of the close pre-main sequence spectroscopic binary CVSO 104 in Ori OB1, based on data obtained within the PENELLOPE legacy program. We derive, for the first time, the orbital elements of the system and the stellar parameters of the two components. The system is composed of two early M-type stars and has an orbital period of about 5 days and a mass ratio of 0.92, but, contrary to expectations, does not appear to have a tertiary companion. Both components have been (quasi-)synchronized, but the orbit is still very eccentric. The spectral energy distribution clearly displays a significant infrared excess compatible with a circumbinary disk. The analysis of HeI and Balmer line profiles, after the removal of the composite photospheric spectrum, reveals that both components are accreting at a similar level. We also observe excess emission in H$\alpha$ and H$\beta$, which appears redshifted or blueshifted by more than 100 km/s with respect to the mass center of the system, depending on the orbital phase. This additional emission could be connected with accretion structures, such as funnels of matter from the circumbinary disk. We also analyze the optical companion located at about 2".4 from the spectroscopic binary. This companion, which we named CVSO 104B, turns out to be a background Sun-like star not physically associated with the PMS system and not belonging to Ori OB1.
URL: https://www.aanda.org/articles/aa/full_html/2021/12/aa41686-21/aa41686-21.html https://www.aanda.org/articles/aa/pdf/2021/12/aa41686-21.pdf http://arxiv.org/abs/2109.06305v1 DOI: 10.1051/0004-6361/202141686
Robust feature space separation for deep convolutional neural network training

Ali Sekmen, Mustafa Parlaktuna, Ayad Abdul-Malek, Erdem Erdemir & Ahmet Bugra Koku

Discover Artificial Intelligence volume 1, Article number: 12 (2021)

This paper introduces two deep convolutional neural network training techniques that lead to more robust feature subspace separation in comparison to traditional training. Assume that the dataset has M labels. The first method creates M deep convolutional neural networks called \(\{\text {DCNN}_i\}_{i=1}^{M}\). Each of the networks \(\text {DCNN}_i\) is composed of a convolutional neural network (\(\text {CNN}_i\)) and a fully connected neural network (\(\text {FCNN}_i\)). In training, a set of projection matrices \(\{\mathbf {P}_i\}_{i=1}^M\) is created and adaptively updated as representations for the feature subspaces \(\{\mathcal {S}_i\}_{i=1}^M\). A rejection value is computed for each training input based on its projections on the feature subspaces. Each \(\text {FCNN}_i\) acts as a binary classifier with a cost function whose main parameter is the rejection value. A threshold value \(t_i\) is determined for the \(i^{th}\) network \(\text {DCNN}_i\). A testing strategy utilizing \(\{t_i\}_{i=1}^M\) is also introduced. The second method creates a single DCNN and computes a cost function whose parameters depend on subspace separations, using the geodesic distance on the Grassmannian manifold between a subspace \(\mathcal {S}_i\) and the sum of all remaining subspaces \(\{\mathcal {S}_j\}_{j=1,j\ne i}^M\). The proposed methods are tested using multiple network topologies. It is shown that while the first method works better for smaller networks, the second method performs better for complex architectures.

There has been an explosion of deep learning applications since the reintroduction of Convolutional Neural Networks (CNNs) for image classification [1] and their application to the ImageNet dataset in 2012 and the successive years [2,3,4].
Deep learning has been successfully applied in self-driving cars for traffic-related object and person detection and classification [5], in face recognition for social media platforms [6], in natural language processing [7], and in symbolic mathematics [8]. A deep convolutional neural network architecture was developed in [9] for the classification of skin lesions using a large set of training images. Another architecture was developed in [10] for the detection of diabetic retinopathy in retinal fundus photographs. Predicting the 3-D structure of a protein using only amino acid sequences is a challenging task. DeepMind recently announced that their algorithm, called AlphaFold, can predict protein structures with atomic accuracy using deep learning [11, 12].

There is growing interest in explaining the mathematical foundation of CNNs. The work in [13] uses wavelet theory to explain computational invariants in convolutional layers. It attempts to predict kernel parameters without the need for training. There are other attempts to establish the mathematical foundation of deep convolutional neural networks [14, 15]. According to the manifold hypothesis, in many real-world problems high-dimensional data approximately lies on lower-dimensional manifolds [15]. It has been shown that the trajectories observed in F video frames of M independently moving rigid bodies come from a union of M 4-dimensional subspaces of \(\mathbb {R}^{2F}\) [16]. It has been experimentally shown that face images of a person with the same facial expression under different illumination approximately lie in a 9-dimensional subspace [17]. A general framework for clustering of data that comes from a union of independent subspaces is given in [18, 19] and a practical algorithm is given in [20]. A detailed treatment of the subspace segmentation problem can be found in [21]. Auto-encoder based deep learning has also been applied to subspace clustering [22, 23].
The popularity of the CNN stems from the fact that it acts as an automatic feature extractor: its cascaded layers generate increasingly complex features from the input data. As opposed to hand-crafted feature extractors, such as SIFT [24] for vision data and MFCC [25] for audio data, the final layer of a CNN typically generates feature vectors that are linearly separable by a Fully Connected Neural Network (FCNN). A typical Deep Convolutional Neural Network (DCNN) is trained using a Stochastic Gradient Descent (SGD) based algorithm such as Adam [26]. The work in [27] uses manifold learning to improve the feature representations of a transfer learning model. The work in [28] applies local linear embedding to the output of each convolutional layer, particularly for action recognition. Some other related works are [29, 30].

This research proposes two novel DCNN architectures and associated training methods with the main goal of converting input data into feature vectors on more separable manifolds (or subspaces) at the CNN output. The first method creates multiple DCNNs, each of which adaptively generates a projection matrix for its feature subspace [31, 32]. A rejection value is computed for each label based on its projections on the feature subspaces. The second method is based on the idea of maximizing the geodesic distance between a feature subspace and the sum of the remaining feature subspaces [33].

Paper contributions

This work develops a classification method using M deep convolutional neural networks in parallel. In training, a set of projection matrices is created and adaptively updated as representations for the feature subspaces. A rejection value is computed for each training input based on its projections on the feature subspaces. A threshold value is determined for each network, and a testing strategy utilizing all thresholds is also introduced.
This work also develops another classification method using a single DCNN that minimizes a cost function whose parameters depend on subspace separations, using the geodesic distance on the Grassmannian manifold between a feature subspace and the sum of all remaining feature subspaces. Experiments on real data (datasets of digits, letters, and fashion products) are performed to justify the proposed architectures. The proposed methods are tested using five different deep convolutional network topologies. It is shown that while the first method works better for smaller networks, the second method performs better for complex architectures. A new matrix rank estimation technique is introduced.

Section gives a detailed treatment of the first novel deep convolutional network with multiple CNNs. Section introduces the second novel network with maximum subspace separation based on principal angles between subspaces. The numerical experiments are presented in Section and some future work is motivated in Section .

Feature space separation—first approach

Figure 1 shows the network architecture. Let \(\{\mathcal {C}_i\}_{i=1}^M\) be the sets of M input classes. For each input class, a DCNN is constructed; let \(\{\text {DCNN}_i\}_{i=1}^{M}\) denote these DCNNs. The kernel (filter) parameters of \(\{\text {CNN}_i\}_{i=1}^{M}\) and the weights and biases of \(\{\text {FCNN}_i\}_{i=1}^{M}\) are randomly initialized. During training, a set of projection matrices \(\{\mathbf {P}_i\}_{i=1}^M\) is created and iteratively updated using the feature subspaces \(\{\mathcal {S}_i\}_{i=1}^M\). Let \(\mathbf {x}_j^n\) be the \(n^{th}\) input in class \(\mathcal {C}_j\) and let \(\mathbf {f}_j^n\) be the corresponding feature vector at the CNN output. Let \(d_{j,i}^n\) be the distance between \(\mathbf {f}_j^n\) and \(\mathcal {S}_i\), which is computed at each iteration as in Equation (1).
$$\begin{aligned} d_{j,i}^n=||\left( \mathbf {I} -\mathbf {P}_i\right) \mathbf {f}_j^n||_2. \end{aligned}$$

The objective of training is to minimize \(d_{j,i}^n\) if \(j=i\) and to maximize it otherwise. A set of threshold values \(\{t_i\}_{i=1}^M\) is generated as a result of the trained DCNNs, and these thresholds are used for testing when an unknown input is provided. Depending on the test topology (as used in Section ), \(\text {FCNN}_i\) may be omitted.

System architecture for first approach

Algorithm 1 summarizes the high-level overall training and the generation of threshold values. All network parameters are randomly initialized and training is performed using the steps described in the subsequent subsections.

Singular value decomposition for subspace approximation

A set of feature subspaces \(\{\mathcal {S}_i\}_{i=1}^M\) is created by forward feeding \(\{\mathcal {C}_i\}_{i=1}^M\) into \(\text {CNN}_i\). Since all filter parameters are randomly initialized, those subspaces are not expected to be very good to start with. In order to fit a subspace to the \(i^{th}\) feature space, all input data in \(\mathcal {C}_i\) is passed through \(\text {CNN}_i\) and a data matrix \(\mathbf {W}_i\), whose columns are the feature vectors at the CNN output for \(\mathcal {C}_i\), is created. The Singular Value Decomposition (SVD) of \(\mathbf {W}_i\) is taken and its rank \(r_i\) is estimated. In this work, a new rank estimation algorithm was developed, as described in Algorithm 2. Some other rank estimation techniques can be found in [34, 35]. Let the effective rank of \(\mathbf {W}_i\) be \(r_i\). Then,

$$\begin{aligned} \mathbf {W}_i = \mathbf {U}_i\pmb {\Sigma }_i\mathbf {V}_i^T \end{aligned}$$

The subspace is \(\mathcal {S}_i = {{\,\mathrm{span}\,}}\{\mathbf {U}_i[1:r_i] \}\) and the projection matrix is given as

$$\begin{aligned} \mathbf {P}_i = {\hat{\mathbf {U}}}_i {\hat{\mathbf {U}}}_i^T. \end{aligned}$$

where \({\hat{\mathbf {U}}}_i = \mathbf {U}_i[1:r_i]\).
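The subspace fit and the rejection distance above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the rank r is taken as given (the paper's Algorithm 2 estimates it, and its details are not reproduced here).

```python
import numpy as np

def fit_projection(W, r):
    """Fit an r-dimensional subspace to the columns of the data matrix W
    via SVD and return the projection matrix P = U_hat U_hat^T."""
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    U_hat = U[:, :r]              # leading r left singular vectors
    return U_hat @ U_hat.T

def rejection_distance(P, f):
    """Distance d = ||(I - P) f||_2 from feature vector f to the subspace."""
    return float(np.linalg.norm(f - P @ f))

# Toy example: columns of W span the xy-plane in R^3
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
P = fit_projection(W, 2)
d_in = rejection_distance(P, np.array([2.0, 3.0, 0.0]))   # lies in the subspace
d_out = rejection_distance(P, np.array([0.0, 0.0, 1.0]))  # orthogonal to it
```

In training, `d_in`-type distances (own class) are driven toward zero while `d_out`-type distances (other classes) are driven up, matching the stated objective for \(d_{j,i}^n\).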
\(\text {DCNN}_i\) training

Algorithm 3 gives the details of training for each \(\text {DCNN}_i\). Note that the rejection is defined as \(\mathbf {r}_{j,i}^n=(\mathbf {I} -\mathbf {P}_i)\mathbf {f}_j^n\) and it is fed into \(\text {FCNN}_i\).

Computing class separation

Assume the entire input set \(\mathcal {C}_j\) is fed forward through \(\text {CNN}_i\). Let \(\mathbf {r}_{j,i}\) be a vector whose entries are the norms of the rejection values, i.e., the distances \(d_{j,i}^n\) for all n input vectors in \(\mathcal {C}_j\). Let \(\min (\mathbf {r}_{j,i})\), \(\max (\mathbf {r}_{j,i})\), and \(\text {mean}(\mathbf {r}_{j,i})\) be the minimum, maximum, and mean values of \(\mathbf {r}_{j,i}\). Then, \(\max (\mathbf {r}_{i,i})\) is compared with \(\min (\min {(\mathbf {r}_{j,i})}_{j=1,j\ne i}^M)\) to assess whether full class separation was achieved. Algorithm 4 summarizes the steps.

Training algorithm—faster

In order to speed up the training process, it is possible to use only the non-separated inputs for the next training iteration. The class separation values are computed every K iterations and the input sets are updated accordingly. The entire dataset of the corresponding class is still used for computing \(\mathbf {P}_i\). Algorithm 5 presents the steps.

Computing thresholds

The set of threshold values \(\{t_i\}_{i=1}^M\) is computed after completion of training. Even though there are multiple ways that can be considered, this work uses the three approaches listed below. Each threshold is used to determine whether an unknown input belongs to a particular class. Let \(\mathbf {x}\) be an unknown input that is passed through \(\text {DCNN}_i\). If the rejection value of \(\mathbf {x}\) is less than \(t_i\), then \(\mathbf {x}\) belongs to the \(i^{th}\) class.
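The separation test and the threshold rule above can be sketched as follows; a minimal illustration in which `rej[j]` stands for the vector \(\mathbf{r}_{j,i}\) of rejection norms of class j under \(\mathbf{P}_i\):

```python
import numpy as np

def fully_separated(rej, i):
    """Class i is fully separated under P_i when its largest in-class
    rejection norm is below every out-of-class rejection norm."""
    max_in = np.max(rej[i])
    min_out = min(np.min(rej[j]) for j in range(len(rej)) if j != i)
    return bool(max_in < min_out)

def belongs_to_class(d, t_i):
    """Threshold rule: an input with rejection value d is assigned to
    class i when d is less than the threshold t_i."""
    return d < t_i

rej = [np.array([0.10, 0.20]),   # class 0 rejections under P_0: small
       np.array([0.90, 1.10])]   # class 1 rejections under P_0: large
ok = fully_separated(rej, 0)     # 0.20 < 0.90, so separation holds
```

The same `fully_separated` check, run every K iterations, is what drives the faster training variant: classes that already pass it can be dropped from the next training batch.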
\(t_i = \max (\mathbf{r}_{i,i})\)

\(t_i = \min (\mathbf{r}_{j,i})\); \(j:1\rightarrow M\), \(j\ne i\)

\(t_i = \text {mean}(\max (\mathbf{r}_{i,i}) + [\min (\mathbf{r}_{j,i})\); \(j:1\rightarrow M\), \(j\ne i])\), i.e., the mean of the two values above

In order to find the label of an input image x, it is fed forward through all DCNNs, the associated rejection values are computed, and they are compared with the corresponding threshold values. A binary indicator (0 or 1) records x's connection to each class. If there is exactly one network connection, x is labeled with that class. If no connection exists, the rejection values are ranked and x is labeled with the class of minimum rejection value. Algorithm 6 summarizes the steps to determine the class label of an unknown image.

Feature space separation—second approach

A CNN, after training, transfers the input classes \(\{\mathcal{C}_i\}_{i=1}^M\) into feature spaces, typically manifolds or subspaces \(\{\mathcal {S}_i\}_{i=1}^M\), at its output layer. The goal of our second approach is to maximize the separation of each feature subspace \(\mathcal {S}_i\) from the sum of the remaining feature subspaces \(\{\mathcal {S}_j\}_{j=1,j\ne i}^M\). It creates a single DCNN and computes a cost function whose parameters depend on subspace separations, using the geodesic distance on the Grassmannian manifold between the subspace \(\mathcal {S}_i\) and \(\underset{j=1, j\ne i}{\overset{M}{\sum }}\mathcal{S}_j\). Figure 2 illustrates the approach. We consider a single DCNN and train it using all available data; this is called pre-training. After the pre-training, each input class \(\mathcal {C}_i\) is passed through the CNN and a data matrix \(\mathbf {W}_i\), whose columns are the features corresponding to each input in \(\mathcal {C}_i\), is constructed. Using the SVD of \(\mathbf {W}_i\), a subspace is fitted to \(\mathbf {W}_i\); it is called \(\mathcal {S}_i\).
Then, the separation of \(\mathcal {S}_i\) from the sum of the remaining subspaces is computed and the network parameters are updated to maximize the separation. Algorithm 7 gives the steps for training the DCNN.

System architecture for second approach

There are different measures for the separation of subspaces. Each subspace can be represented as a point on a Grassmannian manifold [36], and various distances such as the geodesic arc length, the chordal distance, or the projection distance can be considered. In this work, the projection distance is used:

$$\begin{aligned} d^2\left( \mathcal {S}_i, \mathcal {U}\right) = \sum _{k=1}^p\sin ^2\left( \theta _k\right) \end{aligned}$$

where \(\theta _1, \theta _2, \ldots , \theta _p\) are the principal angles between \(\mathcal {S}_i\) and \(\mathcal {U}\). The principal angles are calculated as follows (using concepts from Algorithm 7). Let \(\mathbf {U}\) and \(\mathbf {U}_i[1:r_i]\) be orthonormal basis matrices for \(\mathcal {U}=\underset{j=1, j\ne i}{\overset{M}{\sum }}\mathcal {S}_j\) and \(\mathcal {S}_i\), respectively. If \(1\ge \sigma _1\ge \sigma _2\ge \dots \ge \sigma _{r_i}\ge 0\) are the singular values of \(\mathbf {U}^T \mathbf {U}_i[1:r_i]\), then the principal angles are given by

$$\begin{aligned} \theta _k = \arccos \left( \sigma _k\right) \;\;\;\; k=1,\dots , r_i. \end{aligned}$$

Results for first method

The MNIST [37] (handwritten digits, Figure 3) and Fashion-MNIST [38] (fashion products, Figure 4) datasets are used for testing. The MNIST data that support the findings of this study are available from the MNIST Database [http://yann.lecun.com/exdb/mnist/]. The Fashion-MNIST data that support the findings of this study are available in the github repository [https://github.com/zalandoresearch/fashion-mnist/tree/master/data/fashion].

MNIST sample images

Fashion MNIST sample images

In order to measure the impact of network size on performance, we constructed three network topologies.
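The projection distance and principal angles defined above reduce to one SVD of the product of orthonormal bases. A minimal sketch (the basis matrices are orthonormalized here by QR, which is an illustrative choice, not necessarily the paper's procedure):

```python
import numpy as np

def projection_distance_sq(A, B):
    """Squared projection distance sum_k sin^2(theta_k) between the
    subspaces spanned by the columns of A and B.  The cosines of the
    principal angles theta_k are the singular values of Qa^T Qb."""
    Qa, _ = np.linalg.qr(A)       # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)       # orthonormal basis for span(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    cos_t = np.clip(s, 0.0, 1.0)  # guard against rounding above 1
    return float(np.sum(1.0 - cos_t ** 2))   # sin^2 = 1 - cos^2

# Orthogonal lines in R^3: a single principal angle of 90 degrees
x_axis = np.array([[1.0], [0.0], [0.0]])
y_axis = np.array([[0.0], [1.0], [0.0]])
d2 = projection_distance_sq(x_axis, y_axis)   # distance squared = 1
```

Maximizing this quantity between \(\mathcal {S}_i\) and the sum of the remaining subspaces is exactly what the second method's cost function rewards.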
Another topology that adds a dropout layer to the FCNN is also considered. Finally, a network topology with only a CNN and no FCNN is tested. Table 1 provides more details. The proposed method is tested with five different topologies (as shown in Table 1) and compared to the results obtained with the traditional DCNN approach. Out of the five, three topologies assess the effect of size on performance, one topology adds dropout layers to the fully connected network, and one topology has only a CNN but no FCNN. Since the last topology does not have a label-category layer, it cannot be tested in the traditional way. The results are shown in Table 2.

Our first method performs better with small-sized networks; this can be observed with Topologies 2 and 3. Our method performs better with a smaller number of filters. The overfitting problem of traditional DCNN training is addressed with the introduction of a dropout layer in Topology 4. This greatly improves the traditional performance while having no effect on the new approach. Because each class is described by an iteratively refined subspace, the subspace overfits to the features that occur more frequently. In Topology 5, the CNN achieves a high accuracy, close to that of the CNN with an FCNN in Topology 3. In other words, the CNN is able to generate features that can be separated without an FCNN.

Table 1 Network topologies—first method

Table 2 Performances—first method

Table 3 Network topologies—second method

Results for second method

In this part, the EMNIST dataset [39], which includes digits and letters, is used for testing purposes. The EMNIST (Extended MNIST) data that support the findings of this study are available from the Western Sydney University repository [https://rds.westernsydney.edu.au/Institutes/MARCS/BENS/EMNIST/emnist-gzip.zip [39]]. We considered five network topologies to reflect different complexities and the impact of the dropout layer.
The network topologies and the performance results are shown in Table 3 and Table 4, respectively. The experimental results show that the angle-based approach performs better for all topologies. It should be noted that, in order to rule out that the improvements were due to retraining of the FCNN, the same extra training was applied to Network-2 and a performance of 96.02% was obtained. In other words, traditional training (CNN + FCNN) yields 94.24%, traditional training with additional FCNN training yields 96.02%, and the new architecture yields 98.37%.

Table 4 Performances—second method

This paper introduced two methods that aim at enhancing feature subspace separation during the training process. The first method creates multiple deep convolutional neural networks, and the network parameters are optimized based on the projection of data onto subspaces. In the second method, the geodesic distance on the Grassmannian manifold between a particular feature subspace and the sum of all remaining feature subspaces is maximized. As future work, other subspace-based methods, such as representing each feature subspace by orthogonal subspaces and training to preserve orthogonality as new training data arrives, should also improve accuracy. Such network training may be more robust, especially against adversarial effects.

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. In: Proceedings of the 25th international conference on neural information processing systems, vol 1. Curran Associates Inc.; 2012. p. 1097-1105. He K, Zhang X, Ren S, Sun J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: 2015 IEEE international conference on computer vision (ICCV). He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR). 2016. p. 770–778. Litjens GJ, Kooi T, Bejnordi BE, Setio AA, Ciompi F, Ghafoorian M, van der Laak JA, van Ginneken B, Sánchez CI A survey on deep learning in medical image analysis. CoRR, arXiv:abs/1702.05747. 2017. Angelova A, Krizhevsky A, Vanhoucke V, Ogale A, Ferguson D. Real-time pedestrian detection with deep network cascades. In: Proceedings of BMVC 2015. 2015. Parkhi OM, Vedaldi A, Zisserman A. Deep face recognition. In: Proceedings of the British machine vision conference (BMVC). 2015. Young T, Hazarika D, Poria S, Cambria E. Recent trends in deep learning based natural language processing. CoRR, arXiv:abs/1708.02709. 2017. Lample G, Charton F. Deep learning for symbolic mathematics. CoRR, arXiv:abs/1912.01412. 2019. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–8. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, Kim R, Raman R, Nelson PQ, Mega J, Webster D. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402–10. Jumper J, Evans R, Pritzel A, et al. Highly accurate protein structure prediction with alphafold. Nature. 2021;596:583–9. Yang J, Anishchenko I, Park H, Peng Z, Ovchinnikov S, Baker D. 
Improved protein structure prediction using predicted interresidue orientations. Proc Natl Acad Sci. 2020;117(3):1496–503. Mallat S. Understanding deep convolutional networks. Philos Trans R Soc Lond A Math Phys Eng Sci. 2016;374(2065):20150203. Zhou D-X. Theory of deep convolutional neural networks: Downsampling. Neural Netw. 2020;124:319–27. Berner J, Grohs P, Kutyniok G, Petersen P. The modern mathematics of deep learning. CoRR, arXiv:abs/2105.04026. 2021. Kanatani K, Matsunaga C. Estimating the number of independent motions for multibody motion segmentation. In: 5th Asian conference on computer vision. 2002. p. 7–9. Georghiades AS, Belhumeur PN, Kriegman DJ. From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Trans Pattern Anal Mach Intell. 2001;23(6):643–60. Aldroubi A, Sekmen A, Koku AB, Cakmak AF. Similarity matrix framework for data from union of subspaces. Appl Comput Harmon Anal. 2018;45(2):425–35. Aldroubi A, Hamm K, Koku AB, Sekmen A. CUR decompositions, similarity matrices, and subspace clustering. Front Appl Math Stat. 2019;4:65. Aldroubi A, Sekmen A. Nearness to local subspace algorithm for subspace and motion segmentation. IEEE Signal Process Lett. 2012;19(10):704–7. Vidal R. A tutorial on subspace clustering. IEEE Signal Process Mag. 2010;28:52–68. Huang Q, Zhang Y, Peng H, Dan T, Weng W, Cai H. Deep subspace clustering to achieve jointly latent feature extraction and discriminative learning. Neurocomputing. 2020;404:340–50. Lv J, Kang Z, Lu X, Xu Z. Pseudo-supervised deep subspace clustering. CoRR, arXiv:abs/2104.03531. 2021. Lowe DG. Distinctive image features from scale-invariant keypoints. Int J Comput Vis. 2004;60(2):91–110. Davis SB, Mermelstein P. Readings in speech recognition. Chapter: comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. San Francisco, CA: Morgan Kaufmann Publishers Inc.; 1990. p.
65–74. Kingma D, Ba J. Adam: a method for stochastic optimization. In: International conference on learning representations. 2014. Zhu R, Dornaika F, Ruichek Y. Semi-supervised elastic manifold embedding with deep learning architecture. Pattern Recognit. 2020;107:107425. Chen X, Weng J, Wei L, Jiaming X, Weng J-S. Deep manifold learning combined with convolutional neural networks for action recognition. IEEE Trans Neural Netw Learn Syst. 2018;29(9):3938–52. Dorfer M, Kelz R, Widmer G. Deep linear discriminant analysis. CoRR, arXiv:abs/1511.04707. 2015. Chan T-H, Jia K, Gao S, Lu J, Zeng Z, Ma Y. PCANet: a simple deep learning baseline for image classification? CoRR, arXiv:abs/1404.3606. 2014. Parlaktuna M, Sekmen A, Koku AB, Abdul-Malek A. Enhanced deep learning with improved feature subspace separation. In: 2018 international conference on artificial intelligence and data processing (IDAP). 2018. p. 1–5. Parlaktuna M. Enhanced deep learning with improved feature subspace separation. Master's thesis, Tennessee State University, 2018. Abdul-Malek A. Deep learning and subspace segmentation: theory and applications. PhD thesis, Tennessee State University, 2019. Vidal R, Ma Y, Sastry S. Generalized principal component analysis (GPCA). IEEE Trans Pattern Anal Mach Intell. 2005;27(12):1945–59. Roy O, Vetterli M. The effective rank: a measure of effective dimensionality. In: 2007 15th European signal processing conference. 2007. p. 606–610. Zhang J, Zhu G, Heath Jr. RW, Huang K. Grassmannian learning: embedding geometry awareness in shallow and deep learning. CoRR, arXiv:abs/1808.02229. 2018. Lecun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE. 1998;86(11):2278–324. Xiao H, Rasul K, Vollgraf R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. 2017. Cohen G, Afshar S, Tapson J, van Schaik A. EMNIST: an extension of MNIST to handwritten letters. CoRR, arXiv:abs/1702.05373. 2017.
Department of Computer Science, Tennessee State University, Nashville, TN, 37209, USA: Ali Sekmen, Mustafa Parlaktuna, Ayad Abdul-Malek & Erdem Erdemir. Department of Mechanical Engineering, Middle East Technical University, Ankara, Turkey: Ahmet Bugra Koku. Correspondence to Erdem Erdemir. This research is supported by DoD Grants W911NF-15-1-0495 and W911NF-20-1-0284.

Sekmen, A., Parlaktuna, M., Abdul-Malek, A. et al. Robust feature space separation for deep convolutional neural network training. Discov Artif Intell 1, 12 (2021). https://doi.org/10.1007/s44163-021-00013-1

Keywords: Deep Convolutional Neural Networks, Subspace Separation, Robust Deep Learning
by cfd.ninja | Mar 19, 2020 | Ansys CFX

The NACA four-digit wing sections define the profile by:[1]
1. First digit describing maximum camber as percentage of the chord.
2. Second digit describing the distance of maximum camber from the airfoil leading edge in tenths of the chord.
3. Last two digits describing maximum thickness of the airfoil as percent of the chord.[2]

For example, the NACA 2412 airfoil has a maximum camber of 2% located 40% (0.4 chords) from the leading edge with a maximum thickness of 12% of the chord. The NACA 0015 airfoil is symmetrical, the 00 indicating that it has no camber. The 15 indicates that the airfoil has a 15% thickness to chord length ratio: it is 15% as thick as it is long.

Equation for a symmetrical 4-digit NACA airfoil

Plot of a NACA 0015 foil generated from the formula

The formula for the shape of a NACA 00xx foil, with "xx" being replaced by the percentage of thickness to chord, is

$$y_t = 5t\left[0.2969\sqrt{x} - 0.1260x - 0.3516x^2 + 0.2843x^3 - 0.1015x^4\right],$$

where:
x is the position along the chord from 0 to 1.00 (0 to 100%),
$y_t$ is the half thickness at a given value of x (centerline to surface),
t is the maximum thickness as a fraction of the chord (so t gives the last two digits in the NACA 4-digit denomination divided by 100).

Note that in this equation, at x/c = 1 (the trailing edge of the airfoil), the thickness is not quite zero. If a zero-thickness trailing edge is required, for example for computational work, one of the coefficients should be modified such that they sum to zero. Modifying the last coefficient (i.e. to −0.1036) will result in the smallest change to the overall shape of the airfoil. The leading edge approximates a cylinder with a radius of

$$r = 1.1019\,\frac{t^2}{c}.$$

Now the coordinates $(x_U, y_U)$ of the upper airfoil surface and $(x_L, y_L)$ of the lower airfoil surface are

$$x_U = x_L = x, \quad y_U = +y_t, \quad y_L = -y_t.$$

Symmetrical 4-digit series airfoils by default have maximum thickness at 30% of the chord from the leading edge.
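The half-thickness formula above is straightforward to evaluate; the sketch below is our own (the function name is ours, not part of the tutorial files), and includes the −0.1036 closed-trailing-edge variant mentioned in the text.

```python
def naca00xx_half_thickness(x, t, closed_te=False):
    """Half-thickness y_t of a NACA 00xx foil at chordwise position x in [0, 1].
    t is the maximum thickness as a fraction of the chord (0.15 for NACA 0015)."""
    # -0.1036 makes the five coefficients sum to zero -> zero-thickness trailing edge
    a4 = -0.1036 if closed_te else -0.1015
    return 5 * t * (0.2969 * x**0.5 - 0.1260 * x - 0.3516 * x**2
                    + 0.2843 * x**3 + a4 * x**4)

# NACA 0015: maximum half-thickness is t/2 = 0.075, reached near x = 0.30
ys = [naca00xx_half_thickness(i / 1000, 0.15) for i in range(1001)]
print(max(ys))                                             # ≈ 0.075
print(naca00xx_half_thickness(1.0, 0.15))                  # small but nonzero
print(naca00xx_half_thickness(1.0, 0.15, closed_te=True))  # effectively 0
```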
Equation for a cambered 4-digit NACA airfoil

Plot of a NACA 2412 foil. The camber line is shown in red, and the thickness – or the symmetrical airfoil 0012 – is shown in purple.

The simplest asymmetric foils are the NACA 4-digit series foils, which use the same formula as that used to generate the 00xx symmetric foils, but with the line of mean camber bent. The formula used to calculate the mean camber line is

$$y_c = \begin{cases} \dfrac{m}{p^2}\left(2p\left(\dfrac{x}{c}\right) - \left(\dfrac{x}{c}\right)^2\right), & 0 \leq x \leq pc, \\ \dfrac{m}{(1-p)^2}\left((1-2p) + 2p\left(\dfrac{x}{c}\right) - \left(\dfrac{x}{c}\right)^2\right), & pc \leq x \leq c, \end{cases}$$

where:
m is the maximum camber (100m is the first of the four digits),
p is the location of maximum camber (10p is the second digit in the NACA xxxx description).

For this cambered airfoil, because the thickness needs to be applied perpendicular to the camber line, the coordinates $(x_U, y_U)$ and $(x_L, y_L)$, of respectively the upper and lower airfoil surface, become

$$\begin{aligned} x_U &= x - y_t\,\sin\theta, & y_U &= y_c + y_t\,\cos\theta, \\ x_L &= x + y_t\,\sin\theta, & y_L &= y_c - y_t\,\cos\theta, \end{aligned}$$

where

$$\theta = \arctan\frac{dy_c}{dx},$$

$$\frac{dy_c}{dx} = \begin{cases} \dfrac{2m}{p^2}\left(p - \dfrac{x}{c}\right), & 0 \leq x \leq pc, \\ \dfrac{2m}{(1-p)^2}\left(p - \dfrac{x}{c}\right), & pc \leq x \leq c. \end{cases}$$

In this tutorial you will learn to simulate a NACA Airfoil (4412) using ANSYS CFX. First, we will import the points of the NACA profile, then we will generate the geometry using DesignModeler and SpaceClaim, and an unstructured mesh in Ansys Meshing. You can download the file in the following link.
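The camber-line, slope, and surface-coordinate equations above can be combined into a single routine. The sketch below is ours (not part of the cfd.ninja tutorial files), for a chord normalized to c = 1; NACA 2412 corresponds to m = 0.02, p = 0.4, t = 0.12.

```python
import math

def naca4(x, m, p, t):
    """Upper and lower surface points of a cambered NACA 4-digit foil
    at chordwise position x in [0, 1] (chord c = 1)."""
    # Half thickness (open trailing edge, same coefficients as the 00xx formula)
    yt = 5 * t * (0.2969 * math.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                  + 0.2843 * x**3 - 0.1015 * x**4)
    # Mean camber line and its slope, piecewise around x = p
    if x < p:
        yc = m / p**2 * (2 * p * x - x**2)
        dyc = 2 * m / p**2 * (p - x)
    else:
        yc = m / (1 - p)**2 * ((1 - 2 * p) + 2 * p * x - x**2)
        dyc = 2 * m / (1 - p)**2 * (p - x)
    th = math.atan(dyc)
    # Thickness is applied perpendicular to the camber line
    upper = (x - yt * math.sin(th), yc + yt * math.cos(th))
    lower = (x + yt * math.sin(th), yc - yt * math.cos(th))
    return upper, lower

# NACA 2412: maximum camber 0.02 is reached at x = 0.4, where the slope is zero,
# so the upper and lower points straddle y = 0.02 symmetrically.
up, lo = naca4(0.4, 0.02, 0.4, 0.12)
print(up, lo)
```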
by cfd.ninja | Dec 1, 2020 | Ansys CFX, Ansys for Beginners

OpenFOAM vs ANSYS CFX
by cfd.ninja | Mar 19, 2020 | Ansys CFX, OpenFOAM

OpenFOAM is the free, open source CFD software developed primarily by OpenCFD Ltd since 2004. It has a large user base across most areas of engineering and science, from both commercial and academic organisations. We share the same tutorial using ANSYS Fluent.

Ansys CFX – Compressible Flow

Compressibility effects are encountered in gas flows at high velocity and/or in which there are large pressure variations. When the flow velocity approaches or exceeds the speed of sound of the gas, or when the pressure change in the system ($\Delta p/p$) is large, the variation of the gas density with pressure has a significant impact on the flow velocity, pressure, and temperature.
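As a rough check of when these compressibility effects matter, one can compute the Mach number of an ideal-gas flow. The sketch below is ours, not part of the tutorial: the gas constants default to air, and M ≈ 0.3 is a common rule-of-thumb threshold rather than a statement from the text.

```python
import math

def mach_number(v, T, gamma=1.4, R=287.05):
    """Mach number of an ideal-gas flow: v / sqrt(gamma * R * T).
    Defaults are for air: gamma = 1.4, R = 287.05 J/(kg K); T in kelvin, v in m/s."""
    a = math.sqrt(gamma * R * T)  # speed of sound
    return v / a

# 100 m/s in standard-temperature air (288.15 K) is still subsonic:
M = mach_number(100.0, 288.15)
print(M)  # ≈ 0.29
print("compressible" if M > 0.3 else "roughly incompressible")
```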
Heuristically false conjectures

I was very surprised when I first encountered the Mertens conjecture. Define $$ M(n) = \sum_{k=1}^n \mu(k) $$ The Mertens conjecture was that $|M(n)| < \sqrt{n}$ for $n>1$, in contrast to the Riemann Hypothesis, which is equivalent to $M(n) = O(n^{\frac12 + \epsilon})$. The reason I found this conjecture surprising is that it fails heuristically if you assume the Möbius function is randomly $\pm1$ or $0$. The analogue fails with probability $1$ for a random $-1,0,1$ sequence where the nonzero terms have positive density. The law of the iterated logarithm suggests that counterexamples are large but occur with probability 1. So, it doesn't seem surprising that it's false, and that the first counterexamples are uncomfortably large. There are many heuristics you can use to conjecture that the digits of $\pi$, the distribution of primes, zeros of $\zeta$ etc. seem random. I believe random matrix theory in physics started when people asked whether the properties of particular high-dimensional matrices were special or just what you would expect of random matrices. Sometimes the right random model isn't obvious, and it's not clear to me when to say that an heuristic is reasonable. On the other hand, if you conjecture that all naturally arising transcendentals have simple continued fractions which appear random, then you would be wrong, since $e = [2;1,2,1,1,4,1,1,6,...,1,1,2n,...]$, and a few numbers algebraically related to $e$ have similar simple continued fraction expansions. What other plausible conjectures or proven results can be framed as heuristically false according to a reasonable probability model? nt.number-theory pr.probability soft-question Douglas Zare $\begingroup$ I think that for someone armed with a knowledge of modern probability theory -- especially, Kolmogorov's Law of the Iterated Logarithm -- Mertens' Conjecture is wildly _im_plausible, and thus I too am not surprised that it is false.
I think it's a good example of how "probabilistically naive" a good 19th century mathematician could be. As for the continued fraction expansion for $e$, that's a shocking result -- the first time someone told me this, I thought they were putting me on -- but I don't see how it contradicts any probabilistic model. ... $\endgroup$ – Pete L. Clark Jan 16 '10 at 18:11 $\begingroup$ ... In fact, in the branches of mathematics that I know best (number theory, especially), the biggest tool for making plausible conjectures is finding some reasonable probabilistic model. For instance, the idea that the values of the Möbius function should be (very close to) IID random variables is the most persuasive argument I know for the Riemann hypothesis. $\endgroup$ – Pete L. Clark Jan 16 '10 at 18:17 $\begingroup$ The situation with $M(x)$ is a bit more subtle than that. Reliance on Khinchine's Law of the iterated logarithm for random walks (which is what is relevant here) would lead to the conclusion that $M(x) = o(\sqrt{x}\log\log(x))$ is false. But those who have considered this question recently (Kotnik and van de Lune, and Ng) offer evidence that $M(x) = O(\sqrt{x}(\log\log\log(x))^b)$ for some positive $b$. On probabilistic grounds this is just as "wildly implausible" as the Mertens Conjecture. $\endgroup$ – engelbrekt Jan 16 '10 at 19:53 $\begingroup$ Mertens did hand calculations, and made his conjecture on the basis of that. He would have known of Stieltjes' claim in the Comptes Rendus to have proved $M(x)/\sqrt{x}$ bounded, though I doubt he would have known of Stieltjes' opinion expressed in a letter to Hermite that $|M(x)| \leq \sqrt{x}$. Shortly after Mertens, von Sterneck developed formulas that enabled him to push hand calculations up to 150,000. Von Sterneck was actually the first to see the growth of $M(x)$ from a probabilistic angle, though he compared with the Central Limit Theorem rather than the Law of the Iterated Logarithm.
$\endgroup$ – engelbrekt Jan 16 '10 at 23:16 $\begingroup$ $\log\log(x)$ is exponentially large compared with $\log\log\log(x)$ ... I don't think that Mertens was unreasonable in making his conjecture. He was a very competent analytic number theorist within the limitations of his time, when the unreliability of computational evidence in analytic number theory was as yet unsuspected. Only with Littlewood's 1914 disproof of $\pi(x) < \mathrm{li}(x)$ did the pitfalls become clear. This inequality had, for the time, superb computational support. $\endgroup$ – engelbrekt Jan 16 '10 at 23:30 I think this example fits: in 1985 H. Maier disproved a very reasonable conjecture on the distribution of prime numbers in short intervals. The probabilistic approach had been thoroughly examined by Harald Cramér. Nice paper by Andrew Granville including this episode in (mathematical) detail, page 23 (or 13 out of 18 in the pdf): www.dms.umontreal.ca/~andrew/PDF/cramer.pdf Will Jagy $\begingroup$ That's a nice exposition by Granville of subtly differing heuristics on the primes. I'd love to see more examples. $\endgroup$ – Douglas Zare Jan 19 '10 at 15:33 $\begingroup$ The second Hardy-Littlewood conjecture (widely believed to be false) is in a similar spirit: en.wikipedia.org/wiki/… $\endgroup$ – Terry Tao Jan 23 '10 at 5:44 $\begingroup$ Interesting, thanks. It's surprising that there are admissible prime constellations that dense. $\endgroup$ – Douglas Zare Jan 30 '10 at 20:08 Just run across this question, and am surprised that the first example that came to mind was not mentioned: Fermat's "Last Theorem" is heuristically true for $n > 3$, but heuristically false for $n=3$, which is one of the easier cases to prove. The heuristic: if $0 < x \leq y < z \in (M/2,M]$ then $|x^n + y^n - z^n| < M^n$.
There are about $cM^3$ candidates $(x,y,z)$ in this range for some $c>0$ (as it happens $c=7/48$), producing values of $\Delta := x^n+y^n-z^n$ spread out on the interval $(-M^n,M^n)$ according to some fixed distribution $w_n(r) dr$ on $(-1,1)$ scaled by a factor $M^n$ (i.e., for any $r_1,r_2$ with $-1 \leq r_1 \leq r_2 \leq 1$ the fraction of $\Delta$ values in $(r_1 M^n, r_2 M^n)$ approaches $\int_{r_1}^{r_2} w_n(r) dr$ as $M \rightarrow \infty$). This suggests that any given value of $\Delta$, such as $0$, will arise about $c w_n(0) M^{3-n}$ times. Taking $M=2^k=2,4,8,16,\ldots$ and summing over positive integers $k$ yields a rapidly divergent sum for $n<3$, a barely divergent one for $n=3$, and a rapidly convergent sum for $n>3$. Specifically, we expect the number of solutions of $x^n+y^n=z^n$ with $z \leq M$ to grow as $M^{3-n}$ for $n<3$ (which is true and easy), to grow as $\log M$ for $n=3$ (which is false), and to be finite for $n>3$ (which is true for relatively prime $x,y,z$ and very hard to prove [Faltings]). More generally, this kind of analysis suggests that for $m \geq 3$ the equation $x_1^n + x_2^n + \cdots + x_{m-1}^n = x_m^n$ should have lots of solutions for $n<m$, infinitely but only logarithmically many for $n=m$, and finitely many for $n>m$. In particular, Euler's conjecture that there are no solutions for $m=n$ is heuristically false for all $m$. So far it is known to be false only for $m=4$ and $m=5$. Generalization in a different direction suggests that any cubic plane curve $C: P(x,y,z)=0$ should have infinitely many rational points. This is known to be true for some $C$ and false for others; and when true the number of points of height up to $M$ grows as $\log^{r/2} M$ for some integer $r>0$ (the rank of the elliptic curve), which may equal $2$ as the heuristic predicts but doesn't have to. 
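The constant $c = 7/48$ in the candidate count above is easy to check numerically; the sketch below is ours, counting integer triples $0 < x \leq y < z$ with $z \in (M/2, M]$.

```python
def candidate_count(M):
    """Number of integer triples 0 < x <= y < z with z in (M/2, M]."""
    # For each z there are z*(z-1)//2 pairs (x, y) with 1 <= x <= y < z.
    return sum(z * (z - 1) // 2 for z in range(M // 2 + 1, M + 1))

# The heuristic says the count is ~ c * M^3 with c = 7/48 ≈ 0.14583.
M = 200
print(candidate_count(M) / M**3)  # ≈ 0.1458
```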
The rank is predicted by the celebrated conjecture of Birch and Swinnerton-Dyer, which in effect refines the heuristic by accounting for the distribution of values of $P(x,y,z)$ not just "at the archimedean place" (how big is it?) but also "at finite places" (is $P$ a multiple of $p^e$?). The same refinement is available for equations in more variables, such as Euler's generalization of the Fermat equation; but this does not change the conclusion (except for equations such as $x_1^4 + 3 x_2^4 + 9 x_3^4 = 27 x_4^4$, which have no solutions at all for congruence reasons), though in the borderline case $m=n$ the expected power of $\log M$ might rise. Warning: there are subtler obstructions that may prevent a surface from having rational points even when the heuristic leads us to expect plentiful solutions and there are no congruence conditions that contradict this guess. An example is the Cassels-Guy cubic $5x^3 + 9y^3 + 10z^3 + 12w^3 = 0$, with no nonzero rational solutions $(x,y,z,w)$: Cassels, J.W.S, and Guy, M.J.T.: On the Hasse principle for cubic surfaces, Mathematika 13 (1966), 111--120. Noam D. Elkies This is quite elementary, but surprised me when I first saw it, and I still think it's remarkable. The number of pairs of integers $(x, y)$ such that $x^2 + y^2 \leq n$ is asymptotically $\pi n$, since they are the lattice points inside a circle of radius $\sqrt{n}$. Therefore the average number of ways of writing a positive integer as a sum of two squares is $\pi$. Or $\pi/8$ if we regard solutions as the same when they differ only in signs or the order of the terms. One would therefore expect a positive proportion of the natural numbers to have a representation as a sum of two squares. Not a $\pi/8$-fraction, since some integers have several representations, but some slightly smaller positive density, since identities like $4^2 + 7^2 = 1^2 + 8^2$ look pretty much like random coincidences. But actually almost no numbers are sums of two squares. 
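Both halves of this observation are easy to check numerically; the sketch below is ours: the lattice-point count inside the circle grows like $\pi n$, while the cumulative density of integers that are sums of two squares keeps falling.

```python
import math

def lattice_points_in_disk(n):
    """Number of integer pairs (x, y) with x^2 + y^2 <= n."""
    r = math.isqrt(n)
    return sum(2 * math.isqrt(n - x * x) + 1 for x in range(-r, r + 1))

def sum_of_two_squares_density(N):
    """Fraction of the integers 1..N expressible as x^2 + y^2."""
    reachable = set()
    for x in range(math.isqrt(N) + 1):
        for y in range(x, math.isqrt(N - x * x) + 1):
            if x * x + y * y > 0:  # exclude (0, 0)
                reachable.add(x * x + y * y)
    return len(reachable) / N

print(lattice_points_in_disk(10**6) / 10**6)  # ≈ pi
# The density falls as N grows:
print(sum_of_two_squares_density(10**2), sum_of_two_squares_density(10**4))
```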
Whenever the prime factorization of $n$ contains some prime $p\equiv 3$ (mod 4) to an odd power, $n$ cannot be a sum of two squares, as is easily seen by considering the equation modulo powers of $p$. And by Dirichlet's theorem, almost all numbers have some such prime to power 1 in their factorization. Johan Wästlund $\begingroup$ Interesting example. It can be explained by refining the heuristic to account for the distribution of sums-of-two-squares at finite places as well as the archimedean place. (I'm not sure how Dirichlet's theorem applies: there are infinite sets $S$ of primes for which $\prod_{p\in S} ((p-1)/p)$ converges to a positive number. But one can give an analytic argument using the positivity of $L(1,\chi_4) = 1 - \frac13 + \frac15 - \frac17 + - \cdots$.) $\endgroup$ – Noam D. Elkies Jul 30 '12 at 14:46 $\begingroup$ If you'll accept another minor correction, you want to consider $x^2+y^2=n$ modulo $p$, not modulo $4$, to get a contradiction. I agree that this is a very nice example. It is sort of a minimal version of Noam's example, in that an archimedean estimate predicts infinitely many solutions when there are actually finitely many for non-archimedean reasons, but it is simpler in that the non-archimedean reasons are elementary number theory rather than BSD. $\endgroup$ – David E Speyer Jul 30 '12 at 15:07 $\begingroup$ Regarding Dirichlet: The "probability" that $n$ has an even number of factors $p$ (for instance none) is $p/(p+1)$. So the probability that every prime $p\equiv 3$ (mod 4) occurs to an even power is $\Pi_{p\equiv 3\,(4)} (p/(p+1))$, which is zero. This is a particularly simple case of Dirichlet's theorem that follows, as pointed out by Noam, from the fact that $1-1/3+1/5-1/7+\dots$ is nonzero. $\endgroup$ – Johan Wästlund Jul 30 '12 at 15:41 $\begingroup$ Dirichlet proved that there are infinitely many primes in such a progression. Did he prove that the sum of their reciprocals diverges?
That's not too hard given what he did (and yes, the $3 \bmod 4$ case is particularly easy) but I don't know whether he stated the result. $\endgroup$ – Noam D. Elkies Jul 30 '12 at 19:28 $\begingroup$ [I took the liberty of carrying out D.Speyer's suggestion; since the valuation of $n$ at $p$ is allowed to be any odd number, I made it "modulo powers of $p$"] $\endgroup$ – Noam D. Elkies Aug 2 '12 at 8:51 CS theory has a slew of these examples. In particular, take any problem which is known to be in $RP$, but its membership in $P$ is (currently) unknown. Example: is it possible, using walks consisting of polynomially many steps, to estimate the volume of a convex body? In the terminology of your question, the answer is 'yes' if you say that random steps are a reasonable model of the steps made by a smart algorithm. On the other hand, a deterministic method of choosing the steps is unknown. (PS the reference on this particular problem is "A random polynomial-time algorithm for approximating the volume of convex bodies" by Dyer, Frieze, Kannan.) $\begingroup$ Re "take any problem which is known to be in RP, but its membership in NP is (currently) unknown": as RP is (known to be) contained in NP, you probably mean P? $\endgroup$ – aorq Jan 16 '10 at 16:04 $\begingroup$ @Anonymous Rex, YES, thank you, also have you thought about using the dinosaur from qwantz.com as your avatar? hopefully i don't get banned from mathoverflow for the non-mathematical nature of this comment. i'll point out that maybe the problem i should have given is the primality testing one, where the deterministic solution was found apparently by derandomizing some other randomized strategy for it (but not the miller-rabin primality test). $\endgroup$ – Matus Telgarsky Jan 16 '10 at 16:39 $\begingroup$ Re qwantz: Although I like dinosaur comics, I tried to find a T. Rex that was in the public domain, as best as I could ascertain. 
$\endgroup$ – aorq Jan 16 '10 at 17:09 $\begingroup$ How is this an answer to the original question? As far as I can see, here the probabilistic model suggests the answer is "yes", and the most plausible conjecture is that the true answer is also "yes" (and more generally, P = RP, using the Nisan–Wigderson generator). So, here the plausible conjecture is heuristically true, or am I missing something? $\endgroup$ – Emil Jeřábek supports Monica Jun 7 '11 at 10:32 The Alon-Tarsi Conjecture states that the number of even Latin squares is not equal to the number of odd Latin squares for even $n$. Although, it can be shown that the gcd of these two numbers grows super-exponentially with $n$ (i.e. these two numbers have many common divisors). Moreover, it seems that they're asymptotic (using an heuristic argument). Douglas S. Stones $\begingroup$ Could you elaborate on the heuristic argument? If f(n) and g(n) are two random functions which grow rapidly, then f(n)-g(n) should not often equal 0. So, it's not yet clear to me how this conjecture is heuristically false. $\endgroup$ – Douglas Zare Jan 23 '10 at 5:24 $\begingroup$ Well, I think it's peculiar. Ln(even)-Ln(odd) share a super-exponential divisor, are likely to be asymptotic and are actually equal for odd n. These are properties of sequences that are equal. The heuristic argument is basically trying to find a trade in a Latin square (i.e. edit only a certain small section of the matrix, while preserving the Latin property) -- typically, there will be lots of switches available (but trying to show that almost all Latin squares admit such a switch is difficult). See Wanless "Cycle switches in Latin squares" 2004 for more details. $\endgroup$ – Douglas S. Stones Feb 25 '10 at 22:05
How to figure out the temperature of a lake, using x for the depth in feet and (3900x+1000)/(3x^2+100) for the temperature in degrees Celsius, where the temperature is 8.44 degrees Celsius, i.e. 8.44 = (3900x+1000)/(3x^2+100). I need the depth.

The table that follows shows the relationship between the boiling temperature of water (°F) and the altitude (per thousand feet):

Boiling Point of Water
Altitude (1000 ft.)   Temperature (°F)
8                     197.6
4.5                   203.9
3                     206.6
2.5                   207.5

Gemma and Tessa need to find
asked by nick on December 1, 2013

college/ pre-cal
A square fence 9 feet high enclosing a radio tower is 3 feet from the tower on all 4 sides. Guy wires used to hold the tower in position are attached to the tower 30 feet above the ground and 13 feet from the fence on all 4 sides. How long are the guy
What was the initial temperature of the water when it was asked by David on April 3, 2018 There are five questions on the section. These questions are for you to demonstrate your understanding of the steps related to the contents you studied this week. To earn any points in part two, you must show the steps you have used to find the answer in asked by harmony on August 16, 2010 a large vertical rectangular plate of glass is to be inserted in the wall of an aquarium underwater so visitors can see into th tank of fish. the glass is 10 feet high, 25 feet long and the top of the glass is 3 feet below the top of the water. the asked by student on February 12, 2008 a machine can produce 12 clay figure per hour.it costs $750 to set up the machine and $6 per hour to run the machine.each clay figure requires $2 of material to produce.if each clay figure will sell for $10 charged,express the revenue cost,and profit in asked by natasha on August 26, 2011 Danny needs to buy sand for this box. He wants to nearly fill the box leaving only 6 inches empty at the top. How much sand does Danny need if the length is 12 feet the width is 2 feet and the height is 3 feet. I first found the volume which was 72 feet asked by Jerald on February 4, 2014 Suppose equal volumes of iron and water interact thermally. Which will change temperature more? If they are initially 30 Celsius degrees apart in temperature, how much will the temperature of each change? Explain your reasoning. Thanks! asked by A on March 1, 2011 The temperature in your house is controlled by a thermostat. The temperatures will vary according to the sinusoidal function: f(x)=6sin(Pi/12(x-15))+19 , where f(x) represents the temperature in degrees Celsius (°C) and x is hours since midnight. What is asked by Taliyah on December 6, 2018 the clarke family went sailing on a lake. Their boat averaged 6 km per hour. the Rourke family took their outboard runabout for a trip on the lake for the same amount of time. 
Their boat averaged 14 km per hour. The Rourke family traveled 20 km farther asked by Missy on August 5, 2009 You are cooking chili. When you take it off the stove, it has a temperature of 205°F. The room temperature is 68°F and the cooling rate of the chili is r = 0.03. How long will it take to cool to a serving temperature of 95°F asked by Me on February 20, 2012 asked by Erin on February 20, 2012 Pre-Calc You are taking a road trip in a car without A/C. The temperture in the car is 93 degrees F. You buy a cold pop at a gas station. Its initial temperature is 45 degrees F. The pop's temperature reaches 60 degrees F after 23 minutes. Given that \frac{T-A}{T_0 asked by Anonymous on October 20, 2014 A 3.90×10^2kg cube of ice at an initial temperature of -18.0degreesC is placed in 0.480kg of water at 41.0degreesC in an insulated container of negligible mass. Calculate the change in entropy of the system. I found the equilibrium temperature to be at asked by Melissa on December 7, 2010 A radio receiver is set up on a mast in the middle of a calm lake to track the radio signal from a satellite orbiting the Earth. As the satellite rises above the horizon, the intensity of the signal varies periodically. The intensity is at a maximum when asked by ryan on May 21, 2011 asked by Anonymous on May 21, 2011 Given the equilibrium: CO + 2H2 = CH3OH H=-18.0kJ How will the concentration of CO at equilibrium be affected by the following? a) Adding more CH3OH b) Removing some H2 c) Reducing the temperature. I don't understand how i'm supposed to figure these out.. asked by Amber on May 31, 2011 Mr. Suarez want to paint his storage shed. He needs to calculate the surface area of the shed so that he will know how mush paint to buy. The shed is in the shape of a rectangular prism.Lenght of 14 feet width of 7 feet and height of 8 feet.CAN U HELP ME asked by Chasity mcclennon on January 7, 2016 a ball is thrown upward at 64 feet per second from the top of an 80 feet high building. 
The height of the ball can be modeled by S(t) = -16t^2 + 64t + 80 (feet), where t is the number of seconds after the ball is thrown. Describe the graph of the model.
asked by nina on October 7, 2012

I just need help setting up the equation - the rest I can do on my own: A man built a walk of uniform width around a rectangular pool. If the area of the walk is 253 square feet and the dimensions of the pool are 9 feet by 3 feet, how wide is the walk?
asked by Katie on January 27, 2016

sorry i have about 5 questions on this homework packet that i just can't figure out.... hope you can help me A bowling ball (mass = 7.2 kg, radius = 0.12 m) and a billiard ball (mass = 0.41 kg, radius = 0.028 m) may each be treated as uniform spheres. What
asked by kelly on September 12, 2010

MATH: Pre-Cal/Trig
The length of a rectangular flower garden is 6 more feet than its width. A walkway 3 feet wide surrounds the outside of the garden. The total area of the walkway itself is 288 square feet. Find the dimensions of the garden.
asked by Melody on January 29, 2011

Mary wants to hang a mirror in her room. The mirror and frame must have an area of 7 square feet. The mirror is 2 feet wide and 3 feet long. Which quadratic equation can be used to determine the thickness of the frame, x? (4 points)
asked by Alan on February 7, 2015

The dimensions of a rectangular dining room are 18 feet by 16 feet. If a scale factor of 1/4 is used to make a scale model of the dining room, what is the area of the dining room in the scale model? A. 18 square feet B. 36 square feet C. 48 square feet D. 72
At 4:05 pm, the temperature was 23°C. At 4:10 pm, the temperature was 16°C. The pattern continues; when will the temperature be below 0°C?
asked by jake on November 5, 2007

The temperature in your house is controlled by a thermostat. The temperatures will vary according to the sinusoidal function f(x) = 6sin(pi/12(x - 11)) + 19, where f(x) represents the temperature in degrees Celsius (°C) and x is hours since midnight. What is the
asked by Anonymous on November 21, 2014

Meredith wants to paint 3 of the walls in her room. Each wall has the shape of a rectangle with dimensions 10 feet by 12 feet. She has 1 can of paint that will cover approximately 420 square feet. Which method can Meredith use to determine whether or not she
asked by sam on September 12, 2017

If Amy stands 36 feet from a flagpole and the angle formed from the top of the flagpole to her feet is 38 degrees, find the distance from her feet to the top of the flagpole. Include correct units in your answer and round to 2 decimal places.
asked by joey on February 16, 2011

I wandered lonely as a cloud That floats on high o'er vales and hills, When all at once I saw a crowd, A host, of golden daffodils; Beside the lake, beneath the trees, Fluttering and dancing in the breeze I wandered lonely as a cloud When all at once I saw
asked by 2phoneeeeee on December 3, 2014

Math word gr 8
For the month of January, the average afternoon temperature in Calgard is 1/4 the average morning temperature. The average afternoon temperature is -4°C. What is the average morning temperature? a.) If m represents the average morning temperature, what
asked by Tyler on January 26, 2009

Write an expression in factored form for the area of the shaded portion of the figure. (In the figure, a = 3 and b = 5.) The figure is in this website below: www.webassign.net/laratrmrp6/p-3-162-alt.gif The answer I got: (5(x+5)^2)/6 - 15/2, but online
asked by Hi😱 on February 7, 2019

An architect is designing a house. The scale on the plan is 1 inch = 6 feet.
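Joe's freezer readings above (30, 23, 16, ...) drop a constant 7°C every 5 minutes, so the first below-zero reading can be found by iterating the pattern:

```python
temp, minute = 30, 0      # 30°C at 4:00 pm
while temp >= 0:
    temp -= 7             # the readings drop 7°C every 5 minutes
    minute += 5
print(minute, temp)       # 25 -5  -> first below 0°C at 4:25 pm
```

At 4:20 pm the reading is 2°C, still above zero, so 4:25 pm (reading -5°C) is the first time below 0°C.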
If the house is to have a length of 90 feet and a width of 30 feet, how long will the line representing the house's length on the blueprint
asked by Kayla on May 11, 2014

Sheets of canvas have been hung over the waiting lines to provide shade. Each canvas sheet is in the shape of a triangle with side lengths of 50 feet, 75 feet, and 75 feet. What is the area of each canvas sheet?
asked by jada on February 26, 2018

The cable supporting a ski lift rises 2 feet for each 5 feet of horizontal length. The top of the cable is fastened 1320 feet above the cable's lowest point. Find the lengths b and c, and find the measure of the angle theta.
asked by Anonymous on January 9, 2014

Neil gets in an elevator at the 30th floor and it begins to move downward at a speed of 8 feet per second. After 12 seconds the elevator is 240 feet above ground. a. Let y = the height in feet of the elevator x seconds after Neil got in. Write an equation
asked by Natalie on March 4, 2014

A reservoir has the shape of a right-circular cone. The altitude is 10 feet, and the radius of the base is 4 ft. Water is poured into the reservoir at a constant rate of 5 cubic feet per minute. How fast is the water level rising when the depth of the water
asked by Mohammed on November 26, 2016

A cold front moves in at noon and in 5 hours, the temperature had dropped 23 degrees. The temperature at 5:00 p.m. was 14 degrees. Let x be the temperature at noon. Write an equation to find x.
asked by anne on November 30, 2010

Suppose the temperature outside is dropping at a constant rate. At noon, the temperature is 70 degrees F and it drops to 58 degrees F at 5:00 P.M. How much did the temperature change each hour?
asked by JO on July 20, 2008

At noon the temperature was 38 degrees Celsius. The temperature decreases at a rate of 6 degrees Celsius per hour. What will be the temperature of the room after ....... hours?
asked by Manak on June 8, 2017

The temperature at noon is 75°F. The temperature drops 3 degrees every half hour.
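For the elevator question above, the two given facts (8 ft/s downward, 240 ft above ground after 12 s) pin down the linear model. A sketch of the arithmetic, with variable names of my own choosing:

```python
rate = -8                   # feet per second, moving downward
t_known, y_known = 12, 240  # 240 ft above ground after 12 seconds
y0 = y_known - rate * t_known   # height when Neil got in

def y(x):
    # Height in feet, x seconds after Neil got in: y = 336 - 8x
    return y0 + rate * x

print(y0, y(12))            # 336 240
```

So the equation is y = 336 - 8x, and it reproduces the known point y(12) = 240.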
What is the temperature at 4 p.m.? I honestly don't know what to do; all I know is that I have these numbers: 75, 3, 4.
asked by Giovanni on September 6, 2016

algebra1
Friday's temperature was 20 degrees warmer than Monday's temperature t. Write an expression for Friday's temperature. My answer is 20 degrees + t. Is that correct?
asked by alanaR. on August 26, 2010

Calculus 2
Tom and Mike have a bet as to who will do the most work today. Mike has to compress a coil 200 feet. It takes Mike 250 lbs to compress the coil 10 feet. Tom needs to pump water through the top of a cylindrical tank sitting on the ground. The
asked by Laura on September 27, 2012

3) Find the capacity in gallons of a drum that is cone-shaped if the radius is 9 feet and the slant height is 41 feet. (First find the volume.) Volume in cubic feet = ___ cubic feet. Capacity in gallons = ___ gallons (round answer to the nearest gallon). (Use 1 cu
asked by Mike on May 15, 2008

If Sally places a rocket that is 3 feet 6 inches tall atop a launch pad that is 1 foot 8 inches tall, how tall will the entire unit, rocket and launch pad, be when she is done? A. 5 feet 4 inches B. 5 feet 2 inches C. 1 foot 8 inches D. 4 feet 2 inches B? 3
asked by Rezu on May 17, 2012

The temperature in Minot, North Dakota registered 18 degrees F. The wind chill factor made it feel negative 10 degrees. What is the difference between the actual temperature and the perceived temperature? Help solve.
asked by Joel on April 19, 2018

college (physics thermo)
A 1.20 mol sample of an ideal diatomic gas at a pressure of 1.20 atm and temperature 380 K undergoes a process in which its pressure increases linearly with temperature. The final temperature and pressure are 680 K and 1.83 atm. (Assume 5 active degrees of
asked by dennis on February 2, 2009

Don decided to do some landscaping in his backyard. He began by watering the lawn. The rotating sprinkler that he installed can spray an area with a radius of 7 feet.
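The 75°F question above needs exactly the three given numbers: noon to 4 p.m. is 4 hours, i.e. 8 half-hour steps of 3 degrees each:

```python
steps = 4 * 2          # noon to 4 p.m. = 4 hours = 8 half-hours
temp = 75 - 3 * steps  # each half-hour step drops 3 degrees
print(temp)            # 51 (degrees F at 4 p.m.)
```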
What is the maximum area the sprinkler can cover? a. 21.98 ft² b. 28.26 ft² c. 43.96
asked by Ty on March 1, 2012

Use the image below to answer the following question: An image of a man with his hands and feet nailed to a large cross. © Godong\UIG\Image Quest 2016. Which key figure in Christianity does this image represent? Peter / Paul / Jesus / God
asked by Amber on February 18, 2018

Jenny is building a rectangular garden around a tree in her yard. The southwest corner of the garden will be 2 feet south and 3 feet west of the tree. The northeast corner of the garden will be 1 foot north and 4 feet east of the tree. If Jenny paves a
asked by katy on September 14, 2015

Write and solve the number sentences she could use. Ms. Cloudcover, the TV weatherperson, noted that Sunday's temperature was 12, much warmer than the -7 on Friday and the -2 on Saturday. She remembered that Thursday's temperature was 0 and checked to
asked by Anonymous on February 21, 2011

A rectangular field is to be subdivided into 6 equal fields. There is 1200 feet of fencing available. Find the dimensions of the field that maximize the total area. (List the longer side first.) Width = ___ feet. Length = ___ feet. What is the maximum area? Area =
asked by irina on September 22, 2014

The scale factor between figure A and figure B is 0.25. If one side length of figure B is 56 yards, what is the corresponding side length of figure A?
asked by linda on October 3, 2010

Water is flowing freely from the bottom of a conical tank which is 12 feet deep and 6 feet in radius at the top. If the water is flowing at a rate of 2 cubic feet per hour, at what rate is the depth of the water in the tank going down when the depth is 3
asked by Adam on November 6, 2011

How many square feet of grass are there on a trapezoidal field with a height of 75 feet and bases of 125 feet and 81 feet?
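The sprinkler question above is just the area of a circle of radius 7 ft. None of the visible choices a-c matches, so the correct option is presumably among those cut off in the excerpt:

```python
import math

r = 7                  # spray radius in feet
area = math.pi * r**2  # area of the full circle the sprinkler sweeps
print(round(area, 2))  # 153.94 square feet (153.86 if pi is taken as 3.14)
```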
I know the formula is A = h(b1 + b2)/2: A = 75(125 + 81)/2 = 75(206)/2. OK, my question is, do you divide the 206 by 2 and
asked by Cole on August 13, 2014

Suppose a square garden has an area represented by 9x^2 square feet. If one side is made 7 feet longer and the other side is made 2 feet shorter, then the trinomial that models the area of the larger garden is 9x^2 + 15x - 14 square feet.
asked by Stacey on September 13, 2013

A rectangular swimming pool 20 feet long and 10 feet wide is enclosed by a wooden deck that is 3 feet wide. What is the area of the deck? How much fence is required to enclose the deck?
asked by Dee on September 19, 2013

An object moving vertically is at the given heights at the specified times. Find the position equation s = at^2 + v0t + s0 for the object given the following conditions: at t = 1 second, s = 48 feet; at t = 2 seconds, s = 64 feet; at t = 3

An interior painter is painting a wall that is 8 feet by 14 feet. He needs to determine how much paint he will need for the job. If a pint will cover 40 square feet, how many pints of paint are needed to paint the wall?
asked by Anonymous on August 30, 2016

A man built a walk of uniform width around a rectangular pool. If the area of the walk is 429 square feet and the dimensions of the pool are 10 feet by 18 feet, how wide is the walk?
asked by Aaron on September 22, 2012

1) You have a glass of pure water. Are chemical reactions occurring in the water? If so, what are they? If not, explain why not. 2) While investigating the effects of acid rain in your area, you discover a lake that is surprisingly resistant to changes in pH
asked by sam on August 23, 2011

A manatee swimming 3 feet below the water surface ascends 2 feet, descends another 5 feet, then ascends another 2 feet. Write an expression to find his position relative to the water surface.
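The 429-square-foot walkway question above sets up cleanly: with a walk of uniform width w on all four sides of the 10 ft by 18 ft pool, the walk area is (10 + 2w)(18 + 2w) - 10*18 = 429, which simplifies to 4w^2 + 56w - 429 = 0:

```python
import math

# (10 + 2w)(18 + 2w) - 180 = 429  ->  4w^2 + 56w - 429 = 0
a, b, c = 4, 56, -429
w = (-b + math.sqrt(b**2 - 4 * a * c)) / (2 * a)  # positive root only
print(w)  # 5.5 (feet)
```

The discriminant is 56^2 + 16*429 = 10000, a perfect square, so the walk is exactly 5.5 feet wide.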
Then find his position.
asked by red on September 7, 2009

math help DX
A scientist theorized that you can estimate the temperature by counting how often crickets chirp. The scientist gathers the data in the table shown. A) How many cricket chirps would you expect to indicate a temperature of 85 degrees? Include a graph and an
asked by unicorns<3 on January 2, 2015

math-algebra HELP!!!!
The temperature of a pan of hot water varies according to Newton's Law of Cooling: dT/dt = -k(T - A), where T is the water temperature, A is the room temperature, and k is a positive constant. If the water cools from
asked by Anonymous on April 28, 2017

Chem. Again
28: You have a glass of pure water. Are chemical reactions occurring in the water? If so, what are they? If not, explain why not. 29: While investigating the effects of acid rain in your area, you discover a lake that is surprisingly resistant to changes
asked by Rachel on May 12, 2009

For Questions 12-12, during halftime of a football game, a slingshot launches T-shirts at the crowd. A T-shirt is launched from a height of 3 feet with an initial upward velocity of 72 feet per second. The T-shirt is caught 50 feet above the field. What is
asked by Sally on May 14, 2019

Which of the following describes feedback in an oven? a) setting the temperature b) burning gas releases heat c) a thermostat monitors the temperature and increases the gas flow when the temperature falls too low d) a cake is baked
asked by victoria on September 24, 2013

Two similar triangles have perimeters of 12 feet and 36 feet, respectively. If the area of the smaller triangle is 6 square feet, what is the area of the larger triangle?
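The Newton's-law equation quoted in the algebra question above, "dT/dt equals negative k times the quantity T minus A", is the separable ODE dT/dt = -k(T - A). A standard derivation (with T_0 denoting the initial water temperature) gives its closed form:

```latex
\frac{dT}{T - A} = -k\,dt
\quad\Longrightarrow\quad
\ln\lvert T - A\rvert = -kt + C
\quad\Longrightarrow\quad
T(t) = A + (T_0 - A)\,e^{-kt}
```

So the water temperature decays exponentially toward the room temperature A, which is the form usually used to finish cooling problems like this one.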
asked by Jeana on August 2, 2015

Given a room with dimensions 12 feet by 10 feet that is 8 feet tall, calculate total wall space and total floor space.
asked by ayeshhs on February 8, 2016

Kelso created the scale drawing below. What is the scale factor of his drawing? 1 inch equals 8 feet; the rectangle is 4 feet by 12 feet.
asked by ezzy on September 22, 2016

A picture 3 feet across is hung in the center of a wall that is 19 feet wide. How many feet from the end of the wall is the nearest edge of the picture?
asked by TONI on May 20, 2015

A wall in Marcus's bedroom is 8 1/3 feet high and 16 1/5 feet long. If he paints 1/3 of the wall blue, how many square feet will be blue?
asked by luis on December 21, 2016

A septic tank is 10 feet long by 8 feet wide by 3.5 feet deep. What is the capacity of the tank in gallons? I did V = L*W*H and got 280 gallons; is this right?

A board is 12.5 feet long. I cut 2 pieces that are 1.875 feet long and make 6 ramps with it. How much board is left over?
asked by Corrine on November 19, 2015

A shipping crate carrying e-readers has a length of 19 1/3 feet, a width of 7 2/3 feet, and a height of 7 5/6 feet. What is the volume of the shipping crate? Please help.
asked by Lucy on February 26, 2016

A girl is 6 feet tall and a pine tree next to her is 42 feet tall and casts a shadow that is 28 feet. How long is the girl's shadow?
asked by cole on October 30, 2012

Math - word problem
The lowest recorded temperature in Sweden is -38 degrees and the lowest recorded temperature in Russia is -68 degrees. How much higher is the low temperature in Sweden than the low temperature in Russia? -68 + -38 = -106 degrees. Did I solve this right?
asked by Stephen on March 2, 2007

The figure shown is a rectangle. The green shape in the figure is a square. The blue and white shapes are rectangles, and the area of the blue rectangle is 40 square inches. Enter an expression for the area of the entire figure that includes an exponent.
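On the septic-tank question above: 10 × 8 × 3.5 = 280 is the volume in cubic feet, not gallons. Converting at roughly 7.48 US gallons per cubic foot (a standard conversion factor) gives the capacity:

```python
volume_ft3 = 10 * 8 * 3.5          # 280 cubic feet
GAL_PER_FT3 = 7.48                 # about 7.48 US gallons per cubic foot
gallons = volume_ft3 * GAL_PER_FT3
print(volume_ft3, round(gallons))  # 280.0 2094
```

So the tank holds about 2,094 gallons; "280 gallons" conflates cubic feet with gallons.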
asked by Roopsy Ghosh on April 13, 2017

Like all equilibrium constants, the value of Kw depends on temperature. At body temperature (37°C), Kw = 2.4 x 10^-14. What are the [H3O+] and pH of pure water at body temperature?
asked by Nevaeh on April 20, 2016

Physics
Is an object with a temperature of 273.2 K hotter than, colder than, or at the same temperature as an object with a temperature of 0°C? I don't know how to convert kelvin to Celsius.
asked by Abida on January 17, 2011

The temperature one spring day was 78 degrees. Starting at 1:15 pm, the temperature drops 0.25 degrees per minute. What temperature will it be at 1:25 pm? Do I divide 0.25/1:25 to get the answer?
asked by George on January 22, 2015

What is the temperature of the week if the temperature on Monday is 33, Tuesday is 32, Wednesday is 32, Thursday is 34, Friday is 31, Saturday is 30, and Sunday is 31? What is the temperature of a week?
asked by Anonymous on January 21, 2015

Chemistry-Dr.Bob222
The temperature of a body falls from 30°C to 20°C in 5 minutes. The air temperature is 13°C. Find the temperature after a further 5 minutes.
asked by frank fred on April 16, 2012

What is the temperature at the top of the stratosphere if the sea level temperature was 22 degrees and air temperature decreases 6 degrees every 1 km?
asked by hailey on September 16, 2014

Water freezes at 0 Celsius. If a temperature drop is causing a glass of water to freeze, how would a thermometer in the water most likely change as the water freezes? A. The temperature would keep rising steadily. B. The temperature would keep falling
asked by Anne on October 20, 2013

In Figure (a), a block of mass m lies on a horizontal frictionless surface and is attached to one end of a horizontal spring (spring constant k) whose other end is fixed. The block is initially at rest at the position where the spring is unstretched (x =
asked by Jin on October 24, 2009

A diver descended at a constant rate of 12.24 feet every 3 minutes.
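For the Kw question above: in pure water [H3O+] = [OH-], so [H3O+] = sqrt(Kw) and pH = -log10[H3O+]:

```python
import math

Kw = 2.4e-14                       # ion product of water at 37 °C (given)
H3O = math.sqrt(Kw)                # [H3O+] = [OH-] in pure water
pH = -math.log10(H3O)
print(f"{H3O:.2e}", round(pH, 2))  # 1.55e-07 6.81
```

Pure water at 37 °C is still neutral even though its pH is below 7, because [H3O+] and [OH-] remain equal.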
Which of the following is true? A. After one minute, the diver was at -4.08 feet. B. After one minute, the diver was at 4.08 feet. C. After one minute, the diver was at -6.62. D. After one
asked by MJ on September 9, 2015

A wire is stretched from the ground to the top of an antenna tower. The wire is 20 feet long. The height of the tower is 4 feet greater than the distance from the tower's base to the end of the wire. Find the height of the tower. 20^2 = H^2 + (H-4)^2; solve
asked by jennifer on September 27, 2008

For this problem my equation is Y = 70 + 110e^(kt) but I keep getting it wrong; any suggestions? A roasted turkey is taken from an oven when its temperature has reached 185 Fahrenheit and is placed on a table in a room where the temperature is 75 Fahrenheit. (a)
asked by someone on May 12, 2012

A cold front caused the temperature to drop to 21°F. The temperature after the drop was 36°F. Write and solve an equation to find the temperature before the temperature dropped. My answer is 57°F; my equation is 36f = 21f - f.
asked by Americal on February 4, 2013
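The diver question above reduces to a unit rate: 12.24 feet of descent per 3 minutes is 4.08 feet per minute, so after one minute the diver is at -4.08 feet, matching choice A:

```python
rate_per_min = -12.24 / 3        # descent expressed as feet per minute
depth_after_1_min = rate_per_min * 1
print(depth_after_1_min)         # -4.08  -> choice A
```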
Weak stability in the global $L^1$-norm for systems of hyperbolic conservation laws
by Blake Temple
Trans. Amer. Math. Soc. 317 (1990), 673-685

We prove that solutions for systems of two conservation laws which are generated by Glimm's method are weakly stable in the global ${L^1}$-norm. The method relies on a previous decay result of the author, together with a new estimate for the ${L^1}$ Lipschitz constant that relates solutions at different times. The estimate shows that this constant can be bounded by the sup-norm of the solution, and is proved for any number of equations. The techniques do not rely on the existence of a family of entropies; moreover, the results would generalize immediately to more than two equations if one were to establish the stability of solutions in the sup-norm for more than two equations.

References
R. Courant and K. O. Friedrichs, Supersonic Flow and Shock Waves, Interscience Publishers, Inc., New York, N.Y., 1948. MR 0029615
Ronald J. DiPerna, Decay and asymptotic behavior of solutions to nonlinear hyperbolic systems of conservation laws, Indiana Univ. Math. J. 24 (1974/75), no. 11, 1047–1071. MR 410110, DOI 10.1512/iumj.1975.24.24088
James Glimm, Solutions in the large for nonlinear hyperbolic systems of equations, Comm. Pure Appl. Math. 18 (1965), 697–715. MR 194770, DOI 10.1002/cpa.3160180408
E. Isaacson, Global solution of a Riemann problem for a non-strictly hyperbolic system of conservation laws arising in enhanced oil recovery, J. Comput. Phys. (to appear).
Eli Isaacson and Blake Temple, The structure of asymptotic states in a singular system of conservation laws, Adv. in Appl. Math. 11 (1990), no. 2, 205–219. MR 1053229, DOI 10.1016/0196-8858(90)90009-N
E. Isaacson, D. Marchesin, D. Plohr, and B. Temple, The classification of solutions of quadratic Riemann problems (I), Joint MRC, PUC/RJ report, 1985.
E. Isaacson and B. Temple, Examples and classification of non-strictly hyperbolic systems of conservation laws, Abstracts Amer. Math. Soc. 6 (1985).
Barbara L. Keyfitz and Herbert C. Kranzer, A system of nonstrictly hyperbolic conservation laws arising in elasticity theory, Arch. Rational Mech. Anal. 72 (1979/80), no. 3, 219–241. MR 549642, DOI 10.1007/BF00281590
P. D. Lax, Hyperbolic systems of conservation laws. II, Comm. Pure Appl. Math. 10 (1957), 537–566. MR 93653, DOI 10.1002/cpa.3160100406
Eduardo H. Zarantonello (ed.), Contributions to Nonlinear Functional Analysis, Academic Press, New York-London, 1971. Mathematics Research Center, Publ. No. 27. MR 0366576
Tai Ping Liu, Invariants and asymptotic behavior of solutions of a conservation law, Proc. Amer. Math. Soc. 71 (1978), no. 2, 227–231. MR 500495, DOI 10.1090/S0002-9939-1978-0500495-7
—, Asymptotic behavior of solutions of general systems of nonlinear hyperbolic conservation laws, Indiana Univ. Math. J. (to appear).
—, Decay to $N$-waves of solutions of general systems of nonlinear hyperbolic conservation laws, Comm. Pure Appl. Math. 30 (1977), 585-610.
—, Large-time behavior of solutions of initial and initial-boundary value problems of general systems of hyperbolic conservation laws, Comm. Math. Phys. 57 (1977), 163-177.
D. G. Schaeffer and M. Shearer, The classification of $2 \times 2$ systems of non-strictly hyperbolic conservation laws, with application to oil recovery, with Appendix by D. Marchesin, P. J. Paes-Leme, D. G. Schaeffer, and M. Shearer, Duke University, preprint.
D. Serre, Existence globale de solutions faibles sous une hypothèse unilatérale pour un système hyperbolique non linéaire, Équipe d'Analyse Numérique, Lyon, Saint-Étienne, July 1985.
—, Solutions à variation bornée pour certains systèmes hyperboliques de lois de conservation, Équipe d'Analyse Numérique, Lyon, Saint-Étienne, February 1985.
M. Shearer, D. G. Schaeffer, D. Marchesin, and P. J. Paes-Leme, Solution of the Riemann problem for a prototype $2 \times 2$ system of non-strictly hyperbolic conservation laws, Duke University, preprint.
J. A. Smoller, Shock Waves and Reaction-Diffusion Equations, Springer-Verlag, 1980.
Blake Temple, Global solution of the Cauchy problem for a class of $2\times 2$ nonstrictly hyperbolic conservation laws, Adv. in Appl. Math. 3 (1982), no. 3, 335–375. MR 673246, DOI 10.1016/S0196-8858(82)80010-9
—, Systems of conservation laws with coinciding shock and rarefaction curves, Contemp. Math., vol. 17, Amer. Math. Soc., Providence, R.I., 1983.
Blake Temple, Decay with a rate for noncompactly supported solutions of conservation laws, Trans. Amer. Math. Soc. 298 (1986), no. 1, 43–82. MR 857433, DOI 10.1090/S0002-9947-1986-0857433-6
Blake Temple, Degenerate systems of conservation laws, Nonstrictly Hyperbolic Conservation Laws (Anaheim, Calif., 1985), Contemp. Math., vol. 60, Amer. Math. Soc., Providence, RI, 1987, pp. 125–133. MR 873538, DOI 10.1090/conm/060/873538
—, Supnorm estimates in Glimm's method, preprint.
—, No ${L^1}$-contractive metrics for systems of conservation laws, Trans. Amer. Math. Soc. 288 (1985).

Journal: Trans. Amer. Math. Soc. 317 (1990), 673-685
MSC: Primary 35L65; Secondary 35B35
DOI: https://doi.org/10.1090/S0002-9947-1990-0948199-0
MathSciNet review: 948199
OSTI.GOV Journal Article: Nuclear Reactions in the Crusts of Accreting Neutron Stars
References (145); Figures / Tables (27)

Abstract: X-ray observations of transiently accreting neutron stars during quiescence provide information about the structure of neutron star crusts and the properties of dense matter. Interpretation of the observational data requires an understanding of the nuclear reactions that heat and cool the crust during accretion and define its non-equilibrium composition. We identify here in detail the typical nuclear reaction sequences down to a depth in the inner crust where the mass density is ρ = 2 × 10^12 g cm^-3, using a full nuclear reaction network for a range of initial compositions. The reaction sequences differ substantially from previous work. We find a robust reduction of crust impurity at the transition to the inner crust regardless of initial composition, though shell effects can delay the formation of a pure crust somewhat to densities beyond ρ = 2 × 10^12 g cm^-3. This naturally explains the small inner crust impurity inferred from observations of a broad range of systems. The exception is initial compositions with A > 102 nuclei, where the inner crust remains impure with an impurity parameter of Q_imp ≈ 20 owing to the N = 82 shell closure. In agreement with previous work, we find that nuclear heating is relatively robust and independent of initial composition, while cooling via nuclear Urca cycles in the outer crust depends strongly on initial composition. This work forms a basis for future studies of the sensitivity of crust models to nuclear physics and provides profiles of composition for realistic crust models.

Authors: Lau, Rita [1]; Beard, Mary [2]; Gupta, Sanjib S. [3]; Schatz, H. [4] (ORCID: 0000-0003-1674-4859); Afanasjev, A. V. [5]; Brown, Edward F. [4]; Deibel, A. T. [6]; Gasques, Leandro R. [7]; Hitt, George Wesley [8]; Hix, William Raphael [9]; Keek, Laurens [10]; Moller, Peter [11]; Shternin, Peter S. [12]; Steiner, Andrew W. [13]; Wiescher, Michael [2]; Xu, Yi [14]

Affiliations:
Michigan State Univ., East Lansing, MI (United States); Univ. of Notre Dame, Notre Dame, IN (United States); Technological and Higher Education Institute of Hong Kong (Hong Kong)
Univ. of Notre Dame, Notre Dame, IN (United States)
Indian Institute of Technology Ropar, Punjab (India)
Michigan State Univ., East Lansing, MI (United States); Univ. of Notre Dame, Notre Dame, IN (United States)
Mississippi State Univ., Mississippi State, MS (United States)
Michigan State Univ., East Lansing, MI (United States); Univ. of Notre Dame, Notre Dame, IN (United States); Indiana Univ., Bloomington, IN (United States)
Instituto de Fisica da Univ. de Sao Paulo, Sao Paulo (Brazil)
Coastal Carolina Univ., Conway, SC (United States)
Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Tennessee, Knoxville, TN (United States)
Michigan State Univ., East Lansing, MI (United States); Univ. of Notre Dame, Notre Dame, IN (United States); Univ. of Maryland, College Park, MD (United States)
Univ. of Notre Dame, Notre Dame, IN (United States); Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
Ioffe Institute, Saint Petersburg (Russia)
Univ. of Notre Dame, Notre Dame, IN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Tennessee, Knoxville, TN (United States)
Extreme Light Infrastructure-Nuclear Physics, Ilfov (Romania)
Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Mississippi State Univ., Mississippi State, MS (United States); Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

Sponsoring Org.: USDOE Office of Science (SC), Nuclear Physics (NP); USDOE National Nuclear Security Administration (NNSA)
Alternate Identifier(s): OSTI ID: 1459442; OSTI ID: 1463562
Report Number: LA-UR-18-22864
Journal ID: ISSN 1538-4357; TRN: US1901006
Grant/Contract Number: AC05-00OR22725; SC0013037; AC52-06NA25396
Type: Journal Article: Accepted Manuscript
Journal: The Astrophysical Journal (Online), Volume 859, Issue 1
Publisher: Institute of Physics (IOP)
Subjects: 79 ASTRONOMY AND ASTROPHYSICS; 73 NUCLEAR PHYSICS AND RADIATION PHYSICS; dense matter; nuclear reactions; nucleosynthesis; abundances; stars: neutron; X-rays: binaries; Atomic and Nuclear Physics; Astronomy and Astrophysics

Citation: Lau, Rita, Beard, Mary, Gupta, Sanjib S., Schatz, H., Afanasjev, A. V., Brown, Edward F., Deibel, A. T., Gasques, Leandro R., Hitt, George Wesley, Hix, William Raphael, Keek, Laurens, Moller, Peter, Shternin, Peter S., Steiner, Andrew W., Wiescher, Michael, and Xu, Yi. Nuclear Reactions in the Crusts of Accreting Neutron Stars. United States: N. p., 2018. Web. doi:10.3847/1538-4357/aabfe0.

DOI: https://doi.org/10.3847/1538-4357/aabfe0
Full text: https://www.osti.gov/servlets/purl/1454393
Citation Metrics: Cited by 4 works.

Figures / Tables: Figure 1: Column density as a function of mass density for extreme burst ashes. The change in slope around ρ = 6 × 10^11 g cm^-3 indicates the change of the dominant pressure source from electrons to neutrons. All figures and tables (27 total).

Works referenced in this record:
Databases and tools for nuclear astrophysics applications: BRUSsels Nuclear LIBrary (BRUSLIB), Nuclear Astrophysics Compilation of REactions II (NACRE II) and Nuclear NETwork GENerator (NETGEN). Xu, Y.; Goriely, S.; Jorissen, A. Astronomy & Astrophysics, Vol. 549 (January 2013). https://doi.org/10.1051/0004-6361/201220537
The Rapid Proton Process Ashes from Stable Nuclear Burning on an Accreting Neutron Star. Schatz, Hendrik; Bildsten, Lars; Cumming, Andrew. The Astrophysical Journal, Vol. 524, Issue 2 (October 1999).
rp-process nucleosynthesis at extreme temperature and density conditions. Schatz, H.; Aprahamian, A.; Görres, J. Physics Reports, Vol. 294, Issue 4 (February 1998).
Strong neutrino cooling by cycles of electron capture and β− decay in neutron star crusts. Schatz, H.; Gupta, S.; Möller, P. Nature, Vol. 505, Issue 7481 (December 2013). https://doi.org/10.1038/nature12757
Mass Measurements Demonstrate a Strong N = 28 Shell Gap in Argon. Meisel, Z.; George, S.; Ahn, S. Physical Review Letters, Vol. 114, Issue 2. https://doi.org/10.1103/physrevlett.114.022501
A Survey of Chemical Separation in Accreting Neutron Stars (May 2016). Mckinven, Ryan; Cumming, Andrew; Medin, Zach.
Nuclear Heating and Melted Layers in the Inner Crust of an Accreting Neutron Star (March 2000). Brown, Edward F.
Phase separation in the crust of accreting neutron stars (June 2007). Horowitz, C. J.; Berry, D. K.; Brown, E. F. Physical Review E, Vol. 75, Issue 6. https://doi.org/10.1103/physreve.75.066101
Compressible liquid drop nuclear model and mass formula (July 1977). Mackie, Frederick D.; Baym, Gordon. Nuclear Physics A, Vol. 285, Issue 2.
Disordered Nuclear Pasta, Magnetic Field Decay, and Crust Cooling in Neutron Stars. Horowitz, C. J.; Berry, D. K.; Briggs, C. M.
Dependence of X-Ray Burst Models on Nuclear Masses (August 2017). Schatz, H.; Ong, W.-J. https://doi.org/10.3847/1538-4357/aa7de9
Intruder Configurations in the A = 33 Isobars: 33Mg and 33Al. Tripathi, Vandana; Tabor, S. L.; Mantica, P. F. Physical Review Letters, Vol. 101, Issue 14.
Late-time Cooling of Neutron Star Transients and the Physics of the Inner Crust (April 2017). Deibel, Alex; Cumming, Andrew; Brown, Edward F. https://doi.org/10.3847/1538-4357/aa6a19
Deformations of accreting neutron star crusts and gravitational wave emission: Crustal quadrupole moments. Ushomirsky, Greg; Cutler, Curt; Bildsten, Lars. Monthly Notices of the Royal Astronomical Society, Vol. 319, Issue 3.
Physics of Neutron Star Crusts. Chamel, Nicolas; Haensel, Pawel. Living Reviews in Relativity, Vol. 11, Issue 1. https://doi.org/10.12942/lrr-2008-10
Thermonuclear Bursts with Short Recurrence Times from Neutron Stars Explained by Opacity-driven Convection. Keek, L.; Heger, A. https://doi.org/10.3847/1538-4357/aa7748
Neutron Reactions in Accreting Neutron Stars: A New Pathway to Efficient Crust Heating. Gupta, Sanjib S.; Kawano, Toshihiko; Möller, Peter.
Quiescent thermal emission from neutron stars in low-mass X-ray binaries. Turlione, A.; Aguilera, D. N.; Pons, J. A.
A Strong Shallow Heat Source in the Accreting Neutron Star MAXI J0556-332. https://doi.org/10.1088/2041-8205/809/2/l31
Fusion reactions in multicomponent dense matter (September 2006). Yakovlev, D. G.; Gasques, L. R.; Afanasjev, A. V. Physical Review C, Vol. 74, Issue 3. https://doi.org/10.1103/physrevc.74.035803
Discovery of X-ray burst triplets in EXO 0748-676. Boirin, L.; Keek, L.; Méndez, M. Astronomy & Astrophysics, Vol. 465, Issue 2. https://doi.org/10.1051/0004-6361:20066204
Nuclear Configurations in the Spin-Orbit Coupling Model. II. Theoretical Considerations. Mayer, Maria Goeppert. Physical Review, Vol. 78, Issue 1. https://doi.org/10.1103/physrev.78.22
Nuclear Compositions in the Inner Crust of Neutron Stars. Sato, K. Progress of Theoretical Physics, Vol. 62, Issue 4. https://doi.org/10.1143/ptp.62.957
Mapping Crustal Heating with the Cooling Light Curves of Quasi-Persistent Transients. Brown, Edward F.; Cumming, Andrew.
Collapse of the N = 28 Shell Closure in 42Si. Bastin, B.; Grévy, S.; Sohler, D. Physical Review Letters, Vol. 99, Issue 2. https://doi.org/10.1103/physrevlett.99.022503
Forecasting Neutron Star Temperatures: Predictability and Variability. Page, Dany; Reddy, Sanjay.
Nuclear Properties for Astrophysical and Radioactive-Ion-Beam Applications. Möller, P.; Nix, J. R.; Kratz, K.-L. Atomic Data and Nuclear Data Tables, Vol. 66, Issue 2. https://doi.org/10.1006/adnd.1997.0746
New developments in the calculation of β-strength functions. Möller, Peter; Randrup, Jørgen. https://doi.org/10.1016/0375-9474(90)90330-o
End Point of the rp Process on Accreting Neutron Stars. Schatz, H.; Aprahamian, A.; Barnard, V. Physical Review Letters, Vol. 86, Issue 16. https://doi.org/10.1103/physrevlett.86.3471
Deep crustal heating in a multicomponent accreted neutron star crust. Steiner, Andrew W.
X-ray binaries. Schatz, H.; Rehm, K. E. Nuclear Physics A, Vol. 777. https://doi.org/10.1016/j.nuclphysa.2005.05.200
Nuclear Configurations in the Spin-Orbit Coupling Model. I. Empirical Evidence.
Neutron star matter at sub-nuclear densities. Negele, J. W.; Vautherin, D.
Superburst Models for Neutron Stars with Hydrogen- and Helium-Rich Atmospheres. Keek, L.; Heger, A.; in 't Zand, J. J. M.
Comment on "Intruder Configurations in the A = 33 Isobars: 33Mg and 33Al". Yordanov, D. T.; Blaum, K.; De Rydt, M.
The limits of the nuclear landscape. Erler, Jochen; Birge, Noah; Kortelainen, Markus.
Nuclear composition and heating in accreting neutron-star crusts. Haensel, P.; Zdunik, J. L.
Measurements of Fusion Reactions of Low-Intensity Radioactive Carbon Beams on 12C and their Implications for the Understanding of X-Ray Bursts. Carnelli, P. F. F.; Almaraz-Calderon, S.; Rehm, K. E.
Nuclear magic numbers: New features far from stability. Sorlin, O.; Porquet, M.-G. Progress in Particle and Nuclear Physics, Vol. 61, Issue 2. https://doi.org/10.1016/j.ppnp.2008.05.001
Hydrodynamic Models of Type I X-Ray Bursts: Metallicity Effects. José, Jordi; Moreno, Fermín; Parikh, Anuj. The Astrophysical Journal Supplement Series, Vol. 189, Issue 1. https://doi.org/10.1088/0067-0049/189/1/204
Models of Type I X-Ray Bursts from GS 1826-24: A Probe of rp-Process Hydrogen Burning (November 2007). Heger, Alexander; Cumming, Andrew; Galloway, Duncan K.
Influence of shell-quenching far from stability on the astrophysical r-process. Chen, B.; Dobaczewski, J.; Kratz, K.-L. Physics Letters B, Vol. 355, Issue 1-2.
Daily multiwavelength Swift monitoring of the neutron star low-mass X-ray binary Cen X-4: evidence for accretion and reprocessing during quiescence. Bernardini, F.; Cackett, E. M.; Brown, E. F.
https://doi.org/10.1093/mnras/stt1741 Neutron star crust cooling in KS 1731−260: the influence of accretion outburst variability on the crustal temperature evolution Ootes, Laura S.; Page, Dany; Wijnands, Rudy https://doi.org/10.1093/mnras/stw1799 Urca Cooling Pairs in the Neutron star Ocean and Their Effect on Superbursts Deibel, Alex; Meisel, Zach; Schatz, Hendrik Collectivity in 44S Glasmacher, T.; Brown, B. A.; Chromik, M. J. Charting the Temperature of the Hot Neutron Star in a Soft X-Ray Transient Colpi, Monica; Geppert, Ulrich; Page, Dany On Closed Shells in Nuclei. II Physical Review, Vol. 75, Issue 12 https://doi.org/10.1103/physrev.75.1969 Neutron drip line: Single-particle degrees of freedom and pairing properties as sources of theoretical uncertainties Afanasjev, A. V.; Agbemava, S. E.; Ray, D. Nuclear Ground-State Masses and Deformations Moller, P.; Nix, J. R.; Myers, W. D. Atomic Data and Nuclear Data Tables, Vol. 59, Issue 2, p. 185-381 Models of crustal heating in accreting neutron stars Evidence for crust cooling in the transiently accreting 11-Hz X-ray pulsar in the globular cluster Terzan 5: Crust cooling of the Terzan 5 X-ray pulsar Degenaar, N.; Brown, E. F.; Wijnands, R. Monthly Notices of the Royal Astronomical Society: Letters, Vol. 418, Issue 1 Heating in the Accreted Neutron Star Ocean: Implications for Superburst Ignition Gupta, Sanjib; Brown, Edward F.; Schatz, Hendrik Photodisintegration-triggered Nuclear Energy Release in Superbursts Shell and shape evolution at N = 28 : The Mg 40 ground state Crawford, H. L.; Fallon, P.; Macchiavelli, A. O. 1 p 3 / 2 Proton-Hole State in Sn 132 and the Shell Structure Along N = 82 Taprogge, J.; Jungclaus, A.; Grawe, H. Probing the Crust of the Neutron star in exo 0748-676 Degenaar, N.; Medin, Z.; Cumming, A. β -Decay Half-Lives of Co 76 , 77 , Ni 79 , 80 , and Cu 81 : Experimental Indication of a Doubly Magic Ni 78 Xu, Z. Y.; Nishimura, S.; Lorusso, G. 
The Spin Distribution of Fast-spinning Neutron Stars in Low-mass X-Ray Binaries: Evidence for Two Subpopulations Patruno, A.; Haskell, B.; Andersson, N. Thermal conductivity and impurity scattering in the accreting neutron star crust Roggero, Alessandro; Reddy, Sanjay Large collection of astrophysical S factors and their compact representation Afanasjev, A. V.; Beard, M.; Chugunov, A. I. Discovery of 40Mg and 42Al suggests neutron drip-line slant towards heavier isotopes Baumann, T.; Amthor, A. M.; Bazin, D. Mass Measurement of Sc 56 Reveals a Small A = 56 Odd-Even Mass Staggering, Implying a Cooler Accreted Neutron Star Crust Crustal Heating and Quiescent Emission from Transiently Accreting Neutron Stars Brown, Edward F.; Bildsten, Lars; Rutledge, Robert E. A superburst candidate in EXO 1745−248 as a challenge to thermonuclear ignition models: Superburst in the NS EXO 1745−248 Altamirano, D.; Keek, L.; Cumming, A. Fusion of neutron-rich oxygen isotopes in the crust of accreting neutron stars Horowitz, C. J.; Dussan, H.; Berry, D. K. Crustal Emission and the Quiescent Spectrum of the Neutron Star in KS 1731−260 Rutledge, Robert E.; Bildsten, Lars; Brown, Edward F. Neutron star cooling after deep crustal heating in the X-ray transient KS 1731-260 Shternin, P. S.; Yakovlev, D. G.; Haensel, P. Lower limit on the heat capacity of the neutron star core Cumming, Andrew; Brown, Edward F.; Fattoyev, Farrukh J. Carbon production on accreting neutron stars in a new regime of stable nuclear burning https://doi.org/10.1093/mnrasl/slv167 A catalogue of low-mass X-ray binaries in the Galaxy, LMC, and SMC (Fourth edition) Liu, Q. Z.; van Paradijs, J.; van den Heuvel, E. P. J. Neutron degeneracy and plasma physics effects on radiative neutron captures in neutron star crust Shternin, P. S.; Beard, M.; Wiescher, M. Rapid Neutrino Cooling in the Neutron Star MXB 1659-29 Brown, Edward F.; Cumming, Andrew; Fattoyev, Farrukh J. Explosive hydrogen burning Wallace, R. K.; Woosley, S. E. 
The Astrophysical Journal Supplement Series, Vol. 45 β decay and isomeric properties of neutron-rich Ca and Sc isotopes Crawford, H. L.; Janssens, R. V. F.; Mantica, P. F. Compositionally Driven Convection in the Oceans of Accreting Neutron Stars Medin, Zach; Cumming, Andrew Explosive Hydrogen Burning during Type I X‐Ray Bursts Fisker, Jacob Lund; Schatz, Hendrik; Thielemann, Friedrich‐Karl Isotopic r-process abundances and nuclear structure far from stability - Implications for the r-process mechanism Kratz, Karl-Ludwig; Bitouzet, Jean-Philippe; Thielemann, Friedrich-Karl The Astrophysical Journal, Vol. 403 Thermonuclear Stability of Material Accreting onto a Neutron Star Narayan, Ramesh; Heyl, Jeremy S. Dependence of X-Ray Burst Models on Nuclear Reaction Rates Cyburt, R. H.; Amthor, A. M.; Heger, A. The Thermal State of ks 1731−260 After 14.5 Years in Quiescence Merritt, Rachael L.; Cackett, Edward M.; Brown, Edward F. Cooling of the quasi-persistent neutron star X-ray transients KS 1731-260 and MXB 1659-29 Cackett, E. M.; Wijnands, R.; Linares, M. Neutron star crust cooling in the Terzan 5 X-ray transient Swift J174805.3–244637 Degenaar, N.; Wijnands, R.; Bahramian, A. https://doi.org/10.1093/mnras/stv1054 N = 82 Shell Quenching of the Classical r -Process "Waiting-Point" Nucleus C d 130 Dillmann, I.; Kratz, K. -L.; Wöhr, A. Long Type I X‐Ray Bursts and Neutron Star Interior Physics Cumming, Andrew; Macbeth, Jared; Zand, J. J. M. in 't Beta decay of Na 3 1 , 3 2 and Mg 31 : Study of the N =20 shell closure Klotz, G.; Baumann, P.; Bounajma, M. 
https://doi.org/10.1103/physrevc.47.2502 Constraints on Bygone Nucleosynthesis of Accreting Neutron Stars Meisel, Zach; Deibel, Alex https://doi.org/10.3847/1538-4357/aa618d Calculation of Gamow-Teller β-strength functions in the rubidium region in the RPA approximation with Nilsson-model wave functions Krumlinde, Joachim; Möller, Peter Nonequilibrium shells of neutron stars and their role in sustaining x-ray emission and nucleosynthesis Bisnovatyĭ-Kogan, G. S.; Chechetkin, V. M. Soviet Physics Uspekhi, Vol. 22, Issue 2 https://doi.org/10.1070/pu1979v022n02abeh005418 Potential cooling of an accretion-heated neutron star crust in the low-mass X-ray binary 1RXS J180408.9−342058 Parikh, A. S.; Wijnands, R.; Degenaar, N. Astrophysical S factors for fusion reactions involving C, O, Ne, and Mg isotopes Beard, M.; Afanasjev, A. V.; Chamon, L. C. https://doi.org/10.1016/j.adt.2010.02.005 Gravitational Radiation and Rotation of Accreting Neutron Stars Bildsten, Lars CONTINUED NEUTRON STAR CRUST COOLING OF THE 11 Hz X-RAY PULSAR IN TERZAN 5: A CHALLENGE TO HEATING AND COOLING MODELS? Degenaar, N.; Wijnands, R.; Brown, E. F. β-Decay schemes of very neutron-rich sodium isotopes and their descendants Guillemaud-Mueller, D.; Detraz, C.; Langevin, M. A Strongly Heated Neutron star in the Transient z Source maxi J0556-332 Homan, Jeroen; Fridriksson, Joel K.; Wijnands, Rudy Thermal conductivity and phase separation of the crust of accreting neutron stars Horowitz, C. J.; Caballero, O. L.; Berry, D. K. Multi-Instrument X-Ray Observations of Thermonuclear Bursts with Short Recurrence Times Keek, L.; Galloway, D. K.; in't Zand, J. J. M. Multi-Zone Models of Superbursts from Accreting Neutron Stars Structure of 55Ti from relativistic one-neutron knockout Maierbeck, P.; Gernhäuser, R.; Krücken, R. Physics Letters B, Vol. 
675, Issue 1 https://doi.org/10.1016/j.physletb.2009.03.049 A Signature of Chemical Separation in the Cooling Light Curves of Transiently Accreting Neutron Stars https://doi.org/10.1088/2041-8205/783/1/L3 Isomers in Pd 128 and Pd 126 : Evidence for a Robust Shell Closure at the Neutron Magic Number 82 in Exotic Palladium Isotopes Watanabe, H.; Lorusso, G.; Nishimura, S. Constraining the properties of neutron star crusts with the transient low-mass X-ray binary Aql X-1 Waterhouse, A. C.; Degenaar, N.; Wijnands, R. Models for Type I X‐Ray Bursts with Improved Nuclear Physics Woosley, S. E.; Heger, A.; Cumming, A. [ × clear filter / sort ] Works referencing / citing this record: Continued cooling of the accretion-heated neutron star crust in the X-ray transient IGR J17480–2446 located in the globular cluster Terzan 5 Ootes, L. S.; Vats, S.; Page, D. https://doi.org/10.1093/mnras/stz1406 Nuclear physics of the outer layers of accreting neutron stars Meisel, Zach; Deibel, Alex; Keek, Laurens Journal of Physics G: Nuclear and Particle Physics, Vol. 45, Issue 9 https://doi.org/10.1088/1361-6471/aad171 Long-term temperature evolution of neutron stars undergoing episodic accretion outbursts Ootes, L. S.; Wijnands, R.; Page, D. Quiescent X-ray variability in the neutron star Be/X-ray transient GRO J1750−27 Rouco Escorial, A.; Wijnands, R.; Ootes, L. S. Spallation-altered Accreted Compositions for X-Ray Bursts: Impact on Ignition Conditions and Burst Ashes Randhawa, J. S.; Meisel, Z.; Giuliani, S. A. https://doi.org/10.3847/1538-4357/ab4f71 Crust of accreting neutron stars within simplified reaction network Shchechilin, N. N.; Chugunov, A. I. Neutron transfer reactions in accreting neutron stars Chugunov, A. I. https://doi.org/10.1093/mnrasl/sly218 Crust-cooling Models Are Insensitive to the Crust–Core Transition Pressure for Realistic Equations of State Lalit, Sudhanva; Meisel, Zach; Brown, Edward F. 
https://doi.org/10.3847/1538-4357/ab338c Thermal evolution and quiescent emission of transiently accreting neutron stars Potekhin, A. Y.; Chugunov, A. I.; Chabrier, G. All Cited By Figures / Tables found in this record: Figure 1(p. 3)figure Table 1(p. 5)table Figure 10(p. 10)figure Table 3(p. 13)table Sort by figure / table title Sort by page order Figures/Tables have been extracted from DOE-funded journal article accepted manuscripts. A DIRECT MEASUREMENT OF THE HEAT RELEASE IN THE OUTER CRUST OF THE TRANSIENTLY ACCRETING NEUTRON STAR XTE J1709-267 Journal Article Degenaar, N. ; Miller, J. M. ; Wijnands, R., E-mail: [email protected] - Astrophysical Journal Letters The heating and cooling of transiently accreting neutron stars provides a powerful probe of the structure and composition of their crust. Observations of superbursts and cooling of accretion-heated neutron stars require more heat release than is accounted for in current models. Obtaining firm constraints on the depth and magnitude of this extra heat is challenging and therefore its origin remains uncertain. We report on Swift and XMM-Newton observations of the transient neutron star low-mass X-ray binary XTE J1709-267, which were made in 2012 September-October when it transitioned to quiescence after a {approx_equal}10 week long accretion outburst. The source is detectedmore » with XMM-Newton at a 0.5-10 keV luminosity of L{sub X} {approx_equal} 2 Multiplication-Sign 10{sup 34}(D/8.5 kpc){sup 2} erg s{sup -1}. The X-ray spectrum consists of a thermal component that fits to a neutron star atmosphere model and a non-thermal emission tail, each of which contribute {approx_equal}50% to the total flux. The neutron star temperature decreases from {approx_equal}158 to {approx_equal}152 eV during the {approx_equal}8 hr long observation. 
This can be interpreted as cooling of a crustal layer located at a column density of y {approx_equal} 5 Multiplication-Sign 10{sup 12} g cm{sup -2} ({approx_equal}50 m inside the neutron star), which is just below the ignition depth of superbursts. The required heat generation in the layers on top would be {approx_equal}0.06-0.13 MeV per accreted nucleon. The magnitude and depth rule out electron captures and nuclear fusion reactions as the heat source, but it may be accounted for by chemical separation of light and heavy nuclei. Low-level accretion offers an alternative explanation for the observed variability.« less Mass Measurement of 56Sc Reveals a Small A=56 Odd-Even Mass Staggering, Implying a Cooler Accreted Neutron Star Crust Journal Article Meisel, Z. ; George, S. ; Ahn, S. ; ... - Physical Review Letters We present the mass excesses of 52-57Sc, obtained from recent time-of-flight nuclear mass measurements at the National Superconducting Cyclotron Laboratory at Michigan State University. The masses of 56Sc and 57Sc were determined for the first time with atomic mass excesses of -24.85(59)((+0)(-54)) MeV and -21.0(1.3) MeV, respectively, where the asymmetric uncertainty for 56Sc was included due to possible contamination from a long-lived isomer. The 56Sc mass indicates a small odd-even mass staggering in the A = 56 mass chain towards the neutron drip line, significantly deviating from trends predicted by the global FRDM mass model and favoring trends predicted bymore » the UNEDF0 and UNEDF1 density functional calculations. Together with new shell-model calculations of the electron-capture strength function of 56Sc, our results strongly reduce uncertainties in model calculations of the heating and cooling at the 56Ti electron-capture layer in the outer crust of accreting neutron stars. We find that, in contrast to previous studies, neither strong neutrino cooling nor strong heating occurs in this layer. 
We conclude that Urca cooling in the outer crusts of accreting neutron stars that exhibit superbursts or high temperature steady-state burning, which are predicted to be rich in A approximate to 56 nuclei, is considerably weaker than predicted. Urca cooling must instead be dominated by electron capture on the small amounts of adjacent odd-A nuclei contained in the superburst and high temperature steady-state burning ashes. This may explain the absence of strong crust Urca cooling inferred from the observed cooling light curve of the transiently accreting x-ray source MAXI J0556-332.« less Strong neutrino cooling by cycles of electron capture and decay in neutron star crusts Journal Article Schatz, Hendrik ; Gupta, Sanjib ; Moeller, Peter ; ... - Nature (London) The temperature in the crust of an accreting neutron star, which comprises its outermost kilometre, is set by heating from nuclear reactions at large densities, neutrino cooling and heat transport from the interior. The heated crust has been thought to affect observable phenomena at shallower depths, such as thermonuclear bursts in the accreted envelope. Here we report that cycles of electron capture and its inverse, decay, involving neutron-rich nuclei at a typical depth of about 150 metres, cool the outer neutron star crust by emitting neutrinos while also thermally decoupling the surface layers from the deeper crust. This Urca mechanismmore » has been studied in the context of white dwarfs13 and type Ia supernovae, but hitherto was not considered in neutron stars, because previous models1, 2 computed the crust reactions using a zero-temperature approximation and assumed that only a single nuclear species was present at any given depth. The thermal decoupling means that X-ray bursts and other surface phenomena are largely independent of the strength of deep crustal heating. 
The unexpectedly short recurrence times, of the order of years, observed for very energetic thermonuclear superbursts are therefore not an indicator of a hot crust, but may point instead to an unknown local heating mechanism near the neutron star surface.« less Journal Article Deibel, Alex ; Cumming, Andrew ; Brown, Edward F. ; ... - The Astrophysical Journal (Online) An accretion outburst onto a neutron star transient heats the neutron star's crust out of thermal equilibrium with the core. After the outburst, the crust thermally relaxes toward equilibrium with the neutron star core, and the surface thermal emission powers the quiescent X-ray light curve. Crust cooling models predict that thermal equilibrium of the crust will be established $$\approx 1000\,\mathrm{days}$$ into quiescence. Recent observations of the cooling neutron star transient MXB 1659-29, however, suggest that the crust did not reach thermal equilibrium with the core on the predicted timescale and continued to cool after $$\approx 2500\,\mathrm{days}$$ into quiescence. Because themore » quiescent light curve reveals successively deeper layers of the crust, the observed late-time cooling of MXB 1659-29 depends on the thermal transport in the inner crust. In particular, the observed late-time cooling is consistent with a low thermal conductivity layer near the depth predicted for nuclear pasta that maintains a temperature gradient between the neutron star's inner crust and core for thousands of days into quiescence. As a result, the temperature near the crust–core boundary remains above the critical temperature for neutron superfluidity, and a layer of normal neutrons forms in the inner crust. We find that the late-time cooling of MXB 1659-29 is consistent with heat release from a normal neutron layer near the crust–core boundary with a long thermal time. 
We also investigate the effect of inner crust physics on the predicted cooling curves of the accreting transient KS 1731-260 and the magnetar SGR 1627-41.« less Journal Article Deibel, Alex ; Brown, Edward F. ; Cumming, Andrew ; ... - Astrophysical Journal An accretion outburst onto a neutron star transient heats the neutron star's crust out of thermal equilibrium with the core. After the outburst, the crust thermally relaxes toward equilibrium with the neutron star core, and the surface thermal emission powers the quiescent X-ray light curve. Crust cooling models predict that thermal equilibrium of the crust will be established ≈1000 days into quiescence. Recent observations of the cooling neutron star transient MXB 1659-29, however, suggest that the crust did not reach thermal equilibrium with the core on the predicted timescale and continued to cool after ≈2500 days into quiescence. Because the quiescentmore » light curve reveals successively deeper layers of the crust, the observed late-time cooling of MXB 1659-29 depends on the thermal transport in the inner crust. In particular, the observed late-time cooling is consistent with a low thermal conductivity layer near the depth predicted for nuclear pasta that maintains a temperature gradient between the neutron star's inner crust and core for thousands of days into quiescence. As a result, the temperature near the crust–core boundary remains above the critical temperature for neutron superfluidity, and a layer of normal neutrons forms in the inner crust. We find that the late-time cooling of MXB 1659-29 is consistent with heat release from a normal neutron layer near the crust–core boundary with a long thermal time. We also investigate the effect of inner crust physics on the predicted cooling curves of the accreting transient KS 1731-260 and the magnetar SGR 1627-41.« less
CommonCrawl
Power series

Power series in one complex variable $ z $. A series (representing a function) of the form $$ \tag{1 } s(z) \ = \ \sum _ { k=0 } ^ \infty b _ {k} (z-a) ^ {k} , $$ where $ a $ is the centre, $ b _ {k} $ are the coefficients and $ b _ {k} (z-a) ^ {k} $ are the terms of the series. There exists a number $ r $, $ 0 \leq r \leq \infty $, called the radius of convergence of the power series (1) and determined by the Cauchy–Hadamard formula $$ \tag{2 } r \ = \ \frac{1}{\lim\limits _ {k \rightarrow \infty } \ \sup \ | b _ {k} | ^ {1/k} } , $$ such that if $ | z-a | < r $ the series (1) converges absolutely, while if $ | z-a | > r $, it diverges (the Cauchy–Hadamard theorem). Accordingly, the disc $ D = \{ {z \in \mathbf C } : {| z-a | < r } \} $ in the complex $ z $-plane $ \mathbf C $ is called the disc of convergence of the power series (Fig. a).

Figure: p074240a (the disc of convergence $ D $)

When $ r=0 $, the disc of convergence degenerates to the single point $ z=a $, for example for the power series $ \sum _ {k=0} ^ \infty k!(z-a) ^ {k} $ (this case is not of interest, and it is assumed from now on that $ r> 0 $). When $ r = \infty $, the disc of convergence coincides with the entire plane $ \mathbf C $, for example for the power series $ \sum _ {k=0} ^ \infty (z-a) ^ {k} /k! $.

The set of convergence, i.e. the set of all points of convergence of the series (1), when $ 0 < r < \infty $, consists of the points of the disc of convergence $ D $ plus all, some or none of the points of the circle of convergence $ S = \{ {z \in \mathbf C } : {| z-a | = r } \} $. The disc of convergence in this case is the interior of the set of points of absolute convergence of the power series. Within $ D $, i.e. on any compact set $ K \subset D $, the power series (1) converges absolutely and uniformly. Thus, the sum of the series, $ s(z) $, is defined, and is a regular analytic function at least inside $ D $.
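The Cauchy–Hadamard formula (2) can be illustrated numerically. The following sketch (not part of the original article; the function name is ours, and the limsup is approximated heuristically by the supremum of $ | b _ {k} | ^ {1/k} $ over a tail of large $ k $) recovers $ r = 1/2 $ for $ b _ {k} = 2 ^ {k} $ and $ r = \infty $ for $ b _ {k} = 1/k! $:

```python
import math

def radius_estimate(coeff, kmax=600):
    # Approximate r = 1 / limsup_k |b_k|^(1/k): take the supremum of
    # |b_k|^(1/k) over a tail of large k as a stand-in for the limsup.
    tail = [abs(coeff(k)) ** (1.0 / k) for k in range(kmax // 2, kmax)]
    sup = max(tail)
    return math.inf if sup == 0 else 1.0 / sup

# b_k = 2^k: |b_k|^(1/k) = 2 for every k, so r = 1/2
r_geometric = radius_estimate(lambda k: 2.0 ** k)

# b_k = 1/k!: |b_k|^(1/k) -> 0, so r = infinity (an entire function)
r_entire = radius_estimate(lambda k: 1 / math.factorial(k))
```

For $ b _ {k} = 1/k! $ the tail values underflow to zero in floating point, which here matches the true limit.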
It has at least one singular point on $ S $ to which the sum $ s(z) $ cannot be analytically continued. There exist power series with exactly one singular point on $ S $; there also exist power series for which the entire circle $ S $ consists of singular points. When $ r = \infty $, the series (1) either terminates, i.e. it is a polynomial, $$ s(z) \ = \ \sum _ { k=0 } ^ { m } b _ {k} (z-a) ^ {k} , $$ or its sum is an entire transcendental function, which is regular in the entire plane $ \mathbf C $ and which possesses an essential singular point at infinity.

Conversely, the very concept of analyticity of a function $ f(z) $ at a point $ a $ is based on the fact that in a neighbourhood of $ a $, $ f(z) $ can be expanded into a power series $$ f(z) \ = \ \sum _ { k=0 } ^ \infty b _ {k} (z-a) ^ {k} , $$ which is the Taylor series for $ f(z) $, i.e. its coefficients are defined by the formulas $$ b _ {k} \ = \ \frac{f ^ {\ (k) } (a) }{k!} . $$ Consequently, the uniqueness property of a power series is important: if the sum $ s(z) $ of the series (1) vanishes on an infinite set $ E \subset D $ with a limit point inside $ D $, then $ s(z) \equiv 0 $ and all $ b _ {k} = 0 $, $ k = 0,\ 1 ,\dots $. In particular, if $ s(z) = 0 $ in a neighbourhood of a certain point $ z _ {0} \in D $, then $ s(z) \equiv 0 $ and all $ b _ {k} = 0 $. Thus, every power series is the Taylor series for its own sum.

Let there be another power series apart from (1): $$ \tag{3 } \sigma (z) \ = \ \sum _ { k=0 } ^ \infty c _ {k} (z-a) ^ {k} $$ with the same centre $ a $ and with radius of convergence $ r _ {1} > 0 $. Then, at least inside the disc $ \Delta = \{ {z \in \mathbf C } : {| z-a | < \rho } \} $, where $ \rho = \mathop{\rm min} \{ r,\ r _ {1} \} $, the series (1) and (3) can be added, subtracted and multiplied according to the following formulas: $$ \tag{4 } \left .
\begin{array}{c} s(z) \pm \sigma (z) \ = \ \sum _ { k=0 } ^ \infty (b _ {k} \pm c _ {k} )(z-a) ^ {k} , \\ s(z) \sigma (z) \ = \ \sum _ { k=0 } ^ \infty \left ( \sum _ { n=0 } ^ { k } b _ {n} c _ {k-n} \right ) (z-a) ^ {k} \end{array} \right \} . $$ The laws of commutativity, associativity and distributivity hold, and subtraction is the inverse operation of addition. Thus, the set of power series with positive radii of convergence and a fixed centre is a ring over the field $ \mathbf C $.

If $ c _ {0} \neq 0 $, then division of power series is possible: $$ \tag{5 } \frac{s(z)}{\sigma (z) } \ = \ \sum _ { k=0 } ^ \infty d _ {k} (z-a) ^ {k} , $$ where the coefficients $ d _ {k} $ are uniquely determined from the infinite system of equations $$ \sum _ { n=0 } ^ { k } c _ {n} d _ {k-n} \ = \ b _ {k} ,\ \ k = 0,\ 1 ,\dots. $$ When $ c _ {0} \neq 0 $, $ r > 0 $ and $ r _ {1} > 0 $, the radius of convergence of (5) is also positive.

For the sake of simplicity, let $ a = \sigma (0) = c _ {0} = 0 $ in (1) and (3); the composite function $ s( \sigma (z)) $ will then be regular in a neighbourhood of the coordinate origin, and the process of expanding it into a power series is called substitution of a series in a series: $$ \tag{6 } s( \sigma (z)) \ = \ \sum _ { n=0 } ^ \infty b _ {n} \left ( \sum _ { k=0 } ^ \infty c _ {k} z ^ {k} \right ) ^ {n} \ = \ \sum _ { m=0 } ^ \infty g _ {m} z ^ {m} . $$ The coefficient $ g _ {m} $ in (6) is obtained as the sum of the coefficients with the same index in the expansion of each of the functions $ b _ {n} ( \sigma (z)) ^ {n} $, while the latter expansions are obtained by $ n $-fold multiplication of the series for $ \sigma (z) $ by itself. The series (6) automatically converges when $ | z | < \rho $, where $ \rho $ is such that $ | \sigma (z) | < r $.

Let $ a = \sigma (0) = c _ {0} = 0 $ again, and, moreover, let $ c _ {1} = \sigma ^ \prime (0) \neq 0 $, $ w = \sigma (z) $.
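The multiplication rule in (4) is a Cauchy convolution of coefficient sequences, and the system (5) is triangular, so the $ d _ {k} $ can be computed one at a time. A small sketch (our own illustration, not from the article; integer coefficients and $ c _ {0} = 1 $ are assumed so the arithmetic stays exact) using $ s(z) = 1/(1-z) $ and $ \sigma (z) = 1 - z $:

```python
def cauchy_product(b, c):
    # g_k = sum_{n=0}^{k} b_n c_{k-n}, the multiplication rule in (4)
    return [sum(b[n] * c[k - n] for n in range(k + 1))
            for k in range(min(len(b), len(c)))]

def divide(b, c):
    # Solve sum_{n=0}^{k} c_n d_{k-n} = b_k for d_k, one k at a time
    # (the triangular system behind (5)); c[0] == 1 is assumed here.
    assert c[0] == 1
    d = []
    for k in range(len(b)):
        d.append(b[k] - sum(c[n] * d[k - n]
                            for n in range(1, k + 1) if n < len(c)))
    return d

# s(z) = 1/(1-z) has coefficients (1, 1, 1, ...); multiplying by
# sigma(z) = 1 - z recovers the constant 1, and dividing 1 by 1 - z
# recovers the geometric series.
prod = cauchy_product([1, 1, 1, 1], [1, -1, 0, 0])   # -> [1, 0, 0, 0]
quot = divide([1, 0, 0, 0], [1, -1])                 # -> [1, 1, 1, 1]
```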
The problem of constructing a series for the inverse function $ z = \phi (w) $, which under the given conditions is regular in a neighbourhood of the origin, is called inversion of the series (3). Its solution is the Lagrange series: $$ z \ = \ \phi (w) \ = \ \sum _ { n=1 } ^ \infty \frac{1}{n!} \left [ \left ( \frac \zeta {\sigma ( \zeta ) } \right ) ^ {n} \right ] _ {\zeta =0 } ^ {(n-1)} w ^ {n} $$ (for a more general inversion problem see Bürmann–Lagrange series).

If the power series (1) converges at a point $ z _ {0} \neq a $, then it converges absolutely for all $ z $ for which $ | z-a | < | z _ {0} -a | $; this is the essence of Abel's first theorem. This theorem also makes it possible to establish the form of the domain of convergence of the series. Abel's second theorem provides a more detailed result: if the series (1) converges at the point $ z _ {0} = a + re ^ {i \theta _ {0} } $ on the circle of convergence $ S $, then $$ \lim\limits _ {\rho \rightarrow r } \ s(a+ \rho e ^ {i \theta _ {0} } ) \ = \ s(z _ {0} ), $$ i.e. the sum of the series, $ s(z) $, at the point $ z _ {0} \in S $ has radial boundary value $ s(z _ {0} ) $ and, consequently, is continuous along the radius $ z = a + \rho e ^ {i \theta _ {0} } $, $ 0 \leq \rho \leq r $; moreover, $ s(z) $ also has non-tangential boundary value $ s(z _ {0} ) $ (cf. Angular boundary value). This theorem, dating back to 1827, can be seen as the first major result in the research into the boundary properties of power series.

Inversion of Abel's second theorem without extra restrictions on the coefficients of the power series is impossible. However, if one assumes, for example, that $ b _ {k} = o(1/k) $, and if $ \lim\limits _ {\rho \rightarrow r } \ s(a+ \rho e ^ {i \theta _ {0} } ) = s _ {0} $ exists, then $ \sum _ {k=0} ^ \infty b _ {k} (z _ {0} -a) ^ {k} $ converges to $ s _ {0} $. Such partial inversions of Abel's second theorem are called Tauberian theorems.
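The Lagrange series above can be checked on a concrete example. For $ \sigma (z) = z + z ^ {2} $ the exact inverse is $ \phi (w) = (\sqrt{1+4w} - 1)/2 = w - w ^ {2} + 2w ^ {3} - 5w ^ {4} + \dots $; the following sympy sketch (our own illustration; the helper name is ours) reproduces these coefficients from the formula:

```python
import sympy as sp

zeta = sp.symbols('zeta')
sigma = zeta + zeta**2          # sigma(0) = 0 and sigma'(0) = 1 != 0

def lagrange_coeff(n):
    # (1/n!) * d^{n-1}/dzeta^{n-1} of (zeta/sigma(zeta))^n at zeta = 0;
    # cancel() removes the apparent singularity, leaving (1 + zeta)^(-n)
    expr = (1 / sp.cancel(sigma / zeta)) ** n
    return sp.diff(expr, zeta, n - 1).subs(zeta, 0) / sp.factorial(n)

coeffs = [lagrange_coeff(n) for n in range(1, 5)]   # [1, -1, 2, -5]
```

The values agree term by term with the Taylor expansion of $ (\sqrt{1+4w}-1)/2 $ about $ w = 0 $.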
For other results relating to the boundary properties of power series and particularly to the location of singular points of power series, see Hadamard theorem; Analytic continuation; Boundary properties of analytic functions; Fatou theorem (see also –).

Power series in several complex variables $ z = (z _ {1} \dots z _ {n} ) $, $ n > 1 $, or multiple power series, are series (representing functions) of the form $$ \tag{7 } s(z) \ = \ \sum _ {| k | = 0 } ^ \infty b _ {k} (z-a) ^ {k} \ = $$ $$ = \ \sum _ {k _ {1} = 0 } ^ \infty \dots \sum _ {k _ {n} =0 } ^ \infty b _ {k _ {1} \dots k _ {n} } (z _ {1} - a _ {1} ) ^ {k _ {1} } \dots (z _ {n} - a _ {n} ) ^ {k _ {n} } , $$ where $ b _ {k} = b _ {k _ {1} \dots k _ {n} } $, $ (z-a) ^ {k} = (z _ {1} - a _ {1} ) ^ {k _ {1} } \dots (z _ {n} - a _ {n} ) ^ {k _ {n} } $, $ | k | = k _ {1} + \dots + k _ {n} $, and $ a = (a _ {1} \dots a _ {n} ) $, the centre of the series, is a point of the complex space $ \mathbf C ^ {n} $.

The interior of the set of points of absolute convergence is called the domain of convergence $ D $ of the power series (7), but when $ n > 1 $, it does not have such a simple form as when $ n=1 $. A domain $ D $ of $ \mathbf C ^ {n} $ is the domain of convergence of a certain power series (7) if and only if $ D $ is a logarithmically-convex complete Reinhardt domain of $ \mathbf C ^ {n} $. If a certain point $ z ^ {0} \in D $, then the closure $ \overline{U}\; (a,\ r) $ of the polydisc $ U(a,\ r) = \{ {z \in \mathbf C ^ {n} } : {| z _ {v} - a _ {v} | < r _ {v} ,\ v = 1 \dots n } \} $, where $ r _ {v} = | z _ {v} ^ {0} - a _ {v} | $, $ r = (r _ {1} \dots r _ {n} ) $, also belongs to $ D $, and the series (7) converges absolutely and uniformly in $ \overline{U}\; (a,\ r) $ (the analogue of Abel's first theorem).
The polydisc $ U(a,\ r) $, $ r=(r _ {1} \dots r _ {n} ) $, is called the polydisc of convergence of the series (7) if $ U(a,\ r) \subset D $ and if any larger polydisc $ \{ {z \in \mathbf C ^ {n} } : {| z _ {v} - a _ {v} | < r _ {v} ^ \prime } \} $, where $ r _ {v} ^ \prime \geq r _ {v} $, $ v = 1 \dots n $, and at least one inequality is strict, contains points at which the series (7) diverges. The radii $ r _ {v} $ of the polydisc of convergence are called conjugate radii of convergence, and satisfy a relation analogous to the Cauchy–Hadamard formula: $$ \limsup _ {| k | \rightarrow \infty } \ (| b _ {k} | r ^ {k} ) ^ {1/ | k | } \ = \ 1, $$ where $ | b _ {k} | = | b _ {k _ {1} \dots k _ {n} } | $, $ r ^ {k} = r _ {1} ^ {k _ {1} } \dots r _ {n} ^ {k _ {n} } $. The domain of convergence $ D $ is exhausted by polydiscs of convergence. For example, for the series $ \sum _ {k=0} ^ \infty (z _ {1} z _ {2} ) ^ {k} $ the polydiscs of convergence take the form $$ U \left ( 0,\ r _ {1} ,\ \frac{1}{r _ {1} } \right ) \ = \ \left \{ {z = (z _ {1} ,\ z _ {2} ) \in \mathbf C ^ {2} } : {| z _ {1} | < r _ {1} ,\ | z _ {2} | < \frac{1}{r _ {1} } } \right \} , $$ while the domain of convergence is $ D = \{ {z \in \mathbf C ^ {2} } : {| z _ {1} | \cdot | z _ {2} | < 1 } \} $ (in Fig. b it is represented in the quadrant of absolute values). Figure: p074240b. The uniqueness property of power series is preserved in the sense that if $ s(z) = 0 $ in a neighbourhood of the point $ z ^ {0} $ in $ \mathbf C ^ {n} $ (it suffices even in $ \mathbf R ^ {n} $, i.e. on a set $ \{ {z = x + iy \in \mathbf C ^ {n} } : {| x - \mathop{\rm Re} \ a | < r,\ y = \mathop{\rm Im} \ a } \} $), then $ s(z) \equiv 0 $ and all $ b _ {k} = 0 $. Operations with multiple power series are carried out, broadly speaking, according to the same rules as when $ n=1 $. For other properties of multiple power series see, for example, the references below.
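As a check (a worked computation, not in the original article), the Cauchy–Hadamard analogue can be applied to this example: for $ \sum _ {k=0} ^ \infty (z _ {1} z _ {2} ) ^ {k} $ the non-zero coefficients are $ b _ {kk} = 1 $, with $ | k | = 2k $ and $ r ^ {k} = (r _ {1} r _ {2} ) ^ {k} $, so
$$ \limsup _ {| k | \rightarrow \infty } \ (| b _ {k} | r ^ {k} ) ^ {1/ | k | } \ = \ \lim\limits _ {k \rightarrow \infty } \ (r _ {1} r _ {2} ) ^ {k/2k} \ = \ \sqrt {r _ {1} r _ {2} } , $$
which equals 1 precisely when $ r _ {2} = 1/r _ {1} $, recovering the polydiscs of convergence $ U(0,\ r _ {1} ,\ 1/r _ {1} ) $ stated above.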
Power series in real variables $ x = (x _ {1} \dots x _ {n} ) $, $ n \geq 1 $, are series of functions of the form $$ \tag{8 } s(x) \ = \ \sum _ {| k | =0 } ^ \infty b _ {k} (x-a) ^ {k} , $$ where the abbreviated notation of (7) is used and $ a = (a _ {1} \dots a _ {n} ) \in \mathbf R ^ {n} $ is the centre of the series. If the series (8) converges absolutely in a parallelepiped $ \Pi = \{ {x \in \mathbf R ^ {n} } : {| x _ {k} - a _ {k} | < r _ {k} ,\ k = 1 \dots n } \} $, then it also converges absolutely in the polydisc $ U(a,\ r) = \{ {z \in \mathbf C ^ {n} } : {| z _ {k} - a _ {k} | < r _ {k} ,\ k = 1 \dots n } \} $, $ r = (r _ {1} \dots r _ {n} ) $. The sum of the series, $ s(x) $, being an analytic function of the real variables $ x = (x _ {1} \dots x _ {n} ) $ in $ \Pi $, is continued analytically in the form of the power series $$ \tag{9 } s(z) \ = \ \sum _ {| k | = 0 } ^ \infty b _ {k} (z-a) ^ {k} $$ to the analytic function $ s(z) $ of the complex variables $ z = x+iy = (z _ {1} = x _ {1} + iy _ {1} \dots z _ {n} = x _ {n} + iy _ {n} ) $ in $ U(a,\ r) $. If $ D $ is the domain of convergence of (9) in $ \mathbf C ^ {n} $ (complex variables $ z=x+iy $), then its restriction $ \Delta $ to $ \mathbf R ^ {n} $ in the real variables $ x = (x _ {1} \dots x _ {n} ) $ is the domain of convergence of (8), $ \Delta \subset D $. In particular, when $ n=1 $, $ D $ is a disc of convergence, its restriction $ \Delta $ is an interval of convergence on $ \mathbf R $, and $ \Delta = \{ x \in \mathbf R : a-r < x < a+r \} $, where $ r $ is the radius of convergence.

References:
[1] A.V. Bitsadze, "Fundamentals of the theory of analytic functions of a complex variable", Moscow (1972) (In Russian)
[2] A.I. Markushevich, "Theory of analytic functions of a complex variable", 1, Chelsea (1977) (Translated from Russian)
[3] E.C. Titchmarsh, "The theory of functions", Oxford Univ. Press (1979)
[4] L. Bieberbach, "Analytische Fortsetzung", Springer (1955)
[5] E. Landau, D. Gaier, "Darstellung und Begründung einiger neuerer Ergebnisse der Funktionentheorie", Springer, reprint (1986)
[6] V.S. Vladimirov, "Methods of the theory of functions of several complex variables", M.I.T. (1966) (Translated from Russian)
[7] B.V. Shabat, "Introduction to complex analysis", 1–2, Moscow (1985) (In Russian)
[8] S. Bochner, W.T. Martin, "Several complex variables", Princeton Univ. Press (1948)
[9] A.I. Yanushauskas, "Double series", Novosibirsk (1980) (In Russian)

The approach to analytic functions via power series is the so-called Weierstrass approach. For a somewhat more abstract setting ($ \mathbf C $ replaced by a suitable field) see [a3], Chapts. 2–3 (cf. also Formal power series). For $ \mathbf C ^ {n} $ see [a1]–[a2].
[a1] W. Rudin, "Function theory in polydiscs", Benjamin (1969)
[a2] R. Narasimhan, "Several complex variables", Univ. Chicago Press (1971)
[a3] K. Diederich, R. Remmert, "Funktionentheorie", I, Springer (1972)

Power series. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Power_series&oldid=44404. This article was adapted from an original article by E.D. Solomentsev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
SmartTracing: self-learning-based neuron reconstruction

Hanbo Chen, Hang Xiao, Tianming Liu and Hanchuan Peng

https://doi.org/10.1007/s40708-015-0018-y Accepted: 4 August 2015

In this work, we propose SmartTracing, an automatic neuron tracing framework that does not require substantial human intervention. There are two major novelties in SmartTracing. First, given an input image, SmartTracing invokes a user-provided existing neuron tracing method to produce an initial neuron reconstruction, from which the likelihood of every neuron reconstruction unit is estimated. This likelihood serves as a confidence score to identify reliable regions in a neuron reconstruction. With this score, SmartTracing automatically identifies reliable portions of a neuron reconstruction generated by an existing neuron tracing algorithm, without human intervention. These reliable regions are used as training exemplars. Second, from the training exemplars the most characteristic wavelet features are automatically selected and used in a machine learning framework to predict all image areas that most probably contain neuron signal. Since the training samples and their most characterizing features are selected from each individual image, the whole process automatically adapts to different images. Notably, SmartTracing can improve the performance of an existing automatic tracing method. In our experiments, SmartTracing successfully reconstructed the complete neuron morphology of 120 Drosophila neurons. In the future, the performance of SmartTracing will be tested in the BigNeuron project (bigneuron.org). It may lead to more advanced tracing algorithms and increase the throughput of neuron morphology-related studies.

Keywords: SmartTracing; neuron reconstruction; neuron morphology; reconstruction confidence

1 Introduction

The manual reconstruction of a neuron's morphology has been in practice for a century now, since the time of Ramón y Cajal.
Today, the technique has evolved such that researchers can quantitatively trace neuron morphologies in 3D with the help of computers. As a quantitative description of neuron morphology, the digital representation has been widely applied in modern neuroscience studies [1–3], such as characterizing and classifying neuron phenotypes or modeling and simulating the electrophysiological behavior of neurons. However, many popular neuron reconstruction tools, such as Neurolucida (http://www.mbfbioscience.com/neurolucida), still rely on manual tracing to reconstruct neuron morphology, which limits the throughput of analyzing neuron morphology. In the past decade, much effort has been devoted to eliminating this bottleneck by developing automatic or semi-automatic neuron reconstruction algorithms [1, 3]. These algorithms apply different strategies and models, such as pruning of over-complete neuron trees [4, 5], shortest path graphs [6], distance transforms [7], snake curves [8], and deformable curves [9]. However, the completeness and the attributes of the resulting neuron morphology vary tremendously between algorithms. Recently, to quantitatively assess this variability and advance the state of the art of automatic neuron reconstruction, a project named BigNeuron [10, 11] has been launched to bench-test existing algorithms on large datasets. One reason for this variability is that image quality and attributes vary between datasets, partially due to differences in imaging modality, imaging parameters, animal model, neuron type, tissue processing protocol, and the proficiency of the microscope operator. Some algorithms were developed for specific data, or to solve specific problems in that data, and may not be applicable to other types of data. Another reason is that most tracing algorithms require parameters to be supplied by the user.
As a consequence, the optimal parameters vary between images and thus require manual tuning by a user with sufficient knowledge of the algorithm. We note that most current automatic neuron reconstruction algorithms are not "smart" enough: they often require human intervention to obtain reasonable results. To overcome this limitation, one can adopt learning-based methods, so that the algorithm can be trained for different data. In [12], the authors proposed a machine learning approach to estimate the optimal solution for linking neuron fragments. However, the fragments to link were still generated by model-driven approaches, and manual work was required to generate training samples. In this paper, based on machine learning algorithms, we propose SmartTracing, an automatic tracing framework that does not require human intervention. The procedure of the SmartTracing algorithm is outlined in Fig. 1. First, an initial reconstruction is obtained with an existing automatic tracing algorithm (Fig. 1b). Second, the confidence metric proposed in this paper is computed for each reconstruction segment to identify reliable tracings (Fig. 1c). Third, training samples (Fig. 1d) and the most characteristic features are obtained. Fourth, a classifier is trained and the foreground containing neuron morphology is predicted (Fig. 1e). Finally, after adjusting the image based on the prediction result, the final reconstruction is traced (Fig. 1f). Fig. 1 Overview of the SmartTracing method and the result for a single image. In each sub-figure, the global 3D view of the image and the overlaid reconstructions is shown on the left. The zoomed-in 3D view (a–c, f) or slice view (d–e) is shown on the right. The locations of the zoomed-in views are highlighted in a. The paper is organized as follows. We first discuss the key steps of SmartTracing. Then we describe the implementation and the availability of the algorithm.
Finally, we present experimental results on real neuron image data, followed by a brief discussion of the pros and cons and future extensions of SmartTracing.

2.1 Automatic search for training exemplars

2.1.1 Confidence score of a reconstruction

In SmartTracing, we first identify the reliable neuron reconstructions to use as training exemplars. A neuron reconstruction can be decomposed into multiple segments by breaking the reconstruction at its branch points. Whether or not a segment is trustworthy can be tested by checking whether there is an alternative path connecting the two ends of the segment. Our premise is that a segment with no better alternative pathway (e.g., Fig. 2c) is more reliable than a segment with an alternative pathway (e.g., Fig. 2d). Specifically, for a segment \(L_{ij}\) between points i and j, the image intensity along \(L_{ij}\) is first masked to 0. Then, the intensity-weighted shortest path \(L_{ij}^{*}\) between points i and j is identified. In the original image, the average intensities along \(L_{ij}\) and \(L_{ij}^{*}\) are measured: $$\overline{{I_{ij} }} = \frac{{\int_{{L_{ij} }} {I(x){\text{d}}x} }}{{L_{ij} }},$$ where I(x) is the intensity at x and \(L_{ij}\) in the denominator denotes the length of the segment \(L_{ij}\). Fig. 2 Illustration of the alternative path. For each segment in the reconstruction, after masking the image along the segment, the alternative path is searched for by fast marching from one end of the segment to the other based on intensity. a The neuron to reconstruct, b initial reconstructions, c alternative path of \(L_{ij}\), d alternative path of \(L_{pq}\). The confidence metric is then obtained by dividing \(\overline{{I_{ij} }}^{*}\) by \(\overline{{I_{ij} }}\): $$C_{ij} = \overline{{I_{ij} }}^{*} /\overline{{I_{ij} }}.$$ The reasoning is that if an alternative path exists, \(\overline{{I_{ij} }}^{*}\) will be close to, or even larger than, \(\overline{{I_{ij} }}\), and C ij will be close to 1.
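The masking-and-comparison procedure behind \(C_{ij}\) can be illustrated on a toy 2-D image (a sketch only: the paper works on 3-D stacks with fast marching, whereas here a Dijkstra search stands in for the path finding, and the grid intensities are invented):

```python
# Toy 2-D sketch of the confidence score C_ij: mask a traced segment,
# find the brightest alternative path between its endpoints (Dijkstra
# with cost = max_intensity - intensity), then compare mean intensities.
import heapq

def brightest_path(img, start, goal, blocked):
    """Min-cost 4-connected path between start and goal; low cost = bright."""
    hi = max(max(row) for row in img)
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist[u]:
            continue                       # stale heap entry
        y, x = u
        for v in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= v[0] < len(img) and 0 <= v[1] < len(img[0]) \
                    and v not in blocked:
                nd = d + (hi - img[v[0]][v[1]])
                if nd < dist.get(v, float('inf')):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, node = [], goal                  # walk predecessors back to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

def mean_intensity(img, path):
    return sum(img[y][x] for y, x in path) / len(path)

# A bright horizontal segment (row 1) over a dark background.
img = [[10, 10, 10, 10, 10],
       [200, 200, 200, 200, 200],
       [10, 10, 10, 10, 10]]
segment = [(1, x) for x in range(5)]
# Mask the segment interior and search for an alternative path.
alt = brightest_path(img, (1, 0), (1, 4), blocked=set(segment[1:-1]))
c = mean_intensity(img, alt) / mean_intensity(img, segment)
# No bright detour exists, so C << 1: the segment is a confident trace.
assert c < 0.5
```

A segment lying in a thick bright region would instead admit an equally bright detour, pushing the ratio toward 1.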
Otherwise, \(L_{ij}^{*}\) will be a relatively straight line connecting i and j through low-intensity background, and thus \(C_{ij} \ll 1\). This measurement is based on the assumption that the background intensity is lower than the foreground intensity. When the background intensity is greater than the foreground (e.g., for brightfield images), we can simply invert \(C_{ij}\) in Eq. (2).

2.1.2 Obtaining training exemplars

Based on the confidence scores obtained, the original image can be classified into 4 groups of regions: foreground samples (labeled neurons), background samples (no-neuron area), uncertain regions, and the irrelevant area (Fig. 1d). Foreground samples are defined as the skeleton regions of confident reconstruction segments. Background samples are defined as the non-skeleton regions surrounding the confident reconstruction segments. The intermediate zones between these two regions are taken as uncertain regions, as are the zones surrounding less-confident reconstructions. These 3 types of regions compose 3 layers surrounding the confident reconstructions: a core layer of foreground samples, a middle layer of uncertain regions, and an outer layer of background samples.

2.2 Extracting features for classification

Image intensity-based features are extracted by adopting the method proposed in [13]. The whole procedure is outlined in Fig. 3. For each sample voxel, features are extracted in a 3D cube surrounding the voxel (Fig. 3a). A multi-resolution wavelet representation (MWR) is applied to project the sub-volume of the local 3D cube into a feature space (Fig. 3b, c). Then, a subset of features is selected for classification with the minimum-Redundancy Maximum-Relevance (mRMR) method [14] (Fig. 3d). Fig. 3 Illustration of the feature selection procedure. a Extracting the sub-volume in a 3D cube surrounding the sample voxel. b Wavelet decomposition of volume data. c Multi-resolution wavelet representation.
d Selecting a characterizing subset of features based on mRMR for classification. MWR encodes information in both the frequency domain and the spatial domain. It is effective for identifying local and multi-scale features in signals or images and has been widely used in pattern recognition tasks. The MWR framework was first introduced for 1-dimensional (1D) signals and then extended to 2-dimensional (2D) images by Mallat [15]. In brief, a pair of functions is defined to conduct the wavelet transform: the mother wavelet ψ(x), representing the detailed, high-frequency parts of a signal, and the scaling function φ(x), representing the smooth, low-frequency parts of the signal. To decompose a signal into multiple resolutions, the calculation is performed iteratively on the smoothed signal computed from φ(x). In practice, for discrete signals, instead of computing the wavelet ψ(x) and scaling function φ(x) directly, a high-pass filter H and a low-pass filter L are applied to calculate the MWR. Mallat showed that the MWR can be extended from 1D signals to 2D images by convolving the image with the filters along one dimension first and then convolving the output image with the filters along the other dimension [15]. This operation can be further extended to 3D volumes [16]. As illustrated in Fig. 3b, in one level of decomposition, 8 groups of wavelet coefficients are obtained by convolving the volume with the different permutations of the two filters in the three directions successively. The smoothed volume LLL is further decomposed at the next level to achieve multi-resolution representations. After MWR decomposition, the dimension of the feature space is relatively high: the number of features \(\{ f_{i} \}\) equals the number of voxels in the sub-volume (Fig. 3c). Since some of these features may carry redundant or non-discriminative information, using the full set of MWR coefficients directly may lead to inaccurate results.
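The single-level, 8-subband decomposition described above can be illustrated on one 2 × 2 × 2 block with the Haar filter pair (a minimal pure-Python sketch: the actual method filters whole sub-volumes and recurses on the LLL part for 3 levels):

```python
# One level of a separable 3-D Haar decomposition of a 2x2x2 block,
# producing the 8 subbands (LLL ... HHH). For a 2-sample axis the Haar
# low/high filters reduce to (a+b)/sqrt(2) and (a-b)/sqrt(2), so each
# subband coefficient is a signed sum of the 8 voxels, scaled once per axis.
from itertools import product

def haar3d_block(v):
    """v[z][y][x]: 2x2x2 nested list -> dict mapping subband name -> coeff."""
    sign = lambda band, i: -1 if (band == 'H' and i == 1) else 1
    norm = 2 ** 1.5                         # one factor 1/sqrt(2) per axis
    coeffs = {}
    for bz, by, bx in product('LH', repeat=3):
        c = sum(sign(bz, z) * sign(by, y) * sign(bx, x) * v[z][y][x]
                for z, y, x in product(range(2), repeat=3))
        coeffs[bz + by + bx] = c / norm
    return coeffs

# A constant block: all energy lands in the smooth LLL subband.
flat = [[[5.0, 5.0], [5.0, 5.0]], [[5.0, 5.0], [5.0, 5.0]]]
c = haar3d_block(flat)
assert abs(c['LLL'] - 5.0 * 8 / 2 ** 1.5) < 1e-12
assert all(abs(c[k]) < 1e-12 for k in c if k != 'LLL')
```

A block with intensity varying along one axis instead puts energy into the corresponding high-pass subband (e.g., LLH for variation along x), which is what makes these coefficients informative as local texture features.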
To better discriminate patterns and improve the robustness and accuracy of the training framework, we select the most characterizing subset of features S. We use the mRMR feature selection method to solve this problem. The algorithm has been widely applied to feature selection in high-dimensional data, such as microarray gene expression data, for classification problems [17]. In the algorithm, the statistical dependency between the exemplar type and the joint distribution of the selected features is maximized. To meet this criterion, the mRMR method searches for features that are mutually far from each other (minimum redundancy) but individually most similar to the distribution of sample types (maximum relevance). In practice, these two conditions are optimized simultaneously: $$\mathop {\hbox{max} }\limits_{S \subseteq W} \left\{ {\frac{1}{\left| S \right|}\mathop \sum \limits_{i \in S} I(c,f_{i} ) - \frac{1}{{\left| S \right|^{2} }}\mathop \sum \limits_{i,j \in S} I(f_{i} ,f_{j} )} \right\},$$ where W denotes the full set of MWR coefficients, c denotes the vector of sample types, \(\left| S \right|\) is the number of selected features, and I(x, y) is the mutual information between x and y. The first term in the equation is the maximum relevance condition, and the second term is the minimum redundancy condition. It has been shown in [14] that the solution can be computed efficiently in \(O(\left| S \right| \cdot \left| W \right|)\) time.

2.3 Training the classifier and tracing the neuron reconstruction

Based on the extracted features of the training samples, supervised training is performed to build a classifier for foreground/background prediction. In our proposed framework, we use the Support Vector Machine (SVM) implemented in the LIBSVM toolkit [18], with its default parameter settings. A subset of foreground and background training samples is randomly chosen from the pool so that the numbers of training samples from each class are equal.
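The greedy, incremental optimization of the mRMR criterion above can be sketched as follows (the toy binary features and labels are invented for illustration; real use would first discretize the wavelet coefficients):

```python
# Greedy mRMR feature selection: at each step pick the feature maximizing
# relevance to the labels minus average redundancy with already-picked ones.
from collections import Counter
from math import log

def mutual_info(xs, ys):
    """I(X;Y) in nats, estimated from two equal-length discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mrmr(features, labels, k):
    """Return k feature indices chosen by the incremental mRMR scheme."""
    selected, candidates = [], set(range(len(features)))
    while len(selected) < k and candidates:
        def score(i):
            rel = mutual_info(features[i], labels)
            red = (sum(mutual_info(features[i], features[j]) for j in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

labels = [0, 0, 0, 0, 1, 1, 1, 1]
features = [
    [0, 0, 0, 1, 1, 1, 1, 1],   # f0: strongly relevant
    [0, 0, 0, 1, 1, 1, 1, 1],   # f1: exact duplicate of f0 (redundant)
    [0, 0, 1, 0, 1, 1, 0, 1],   # f2: weakly relevant, non-redundant
]
picked = mrmr(features, labels, 2)
# The redundancy penalty skips the duplicate and takes the weak feature.
assert 2 in picked and (0 in picked) != (1 in picked)
```

The redundancy term is what distinguishes this from plain relevance ranking, which would have picked the duplicate pair f0 and f1.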
With the trained classifier, we then examine the voxels in the image and label them as foreground or background (Fig. 1e). Since in the neuron tracing problem foreground signals are often sparse and relatively continuous in the image, we use a fast marching algorithm to search for the foreground signals. Initially, the voxels of the foreground samples are pre-labeled as foreground and the remaining voxels are marked "unknown." The algorithm then marches from foreground voxels to their adjacent unknown voxels. For each such "unknown" voxel, its features are extracted and it is classified as foreground or background by the trained classifier. If the voxel is classified as foreground, it is taken as a new starting point for the next round of marching. The marching stops when no more foreground voxels can be reached, and all remaining unknown voxels are labeled as background. Based on the labeled image, the original image is adjusted to obtain the final tracing result. The intensity of background voxels is set to 0. For a foreground voxel, if its intensity is lower than the threshold set for the tracing algorithm, its intensity is set to the threshold value; otherwise, its intensity is kept unchanged. Then the tracing algorithm is re-run on the adjusted image to trace the final, corrected neuron reconstruction.

3 Implementation

Intuitively, the proposed sampling, training, and prediction framework can be applied to any existing neuron tracing algorithm to test and improve its performance. In our implementation, we used the APP2 tracing algorithm [4] to generate both the initial tracing from the original image and the final tracing from the image after prediction. To our best knowledge, the APP2 tracing algorithm is the fastest among existing methods and is reliable in generating tree-shaped morphology for neuron reconstructions, which makes it an ideal algorithm with which to implement the proposed framework.
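The classifier-guided marching of Sect. 2.3 amounts to a seeded region growing. A simplified 2-D sketch (an intensity threshold stands in for the trained SVM, and the grid values are invented):

```python
# Sketch of classifier-guided marching: starting from foreground seeds,
# repeatedly visit unknown neighbors and keep those the classifier accepts.
from collections import deque

def march_foreground(img, seeds, is_foreground):
    """BFS from seeds; grow into neighbors accepted by the classifier."""
    fg = set(seeds)
    queue = deque(seeds)
    h, w = len(img), len(img[0])
    while queue:
        y, x = queue.popleft()
        for v in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= v[0] < h and 0 <= v[1] < w and v not in fg \
                    and is_foreground(img, v):
                fg.add(v)              # newly accepted foreground voxel ...
                queue.append(v)        # ... becomes a new marching front
    return fg

img = [[0, 0, 9, 0],
       [0, 0, 9, 0],
       [9, 9, 9, 0],
       [0, 0, 0, 7]]   # the 7 is bright but disconnected from the seeds
fg = march_foreground(img, [(2, 0)], lambda im, v: im[v[0]][v[1]] > 5)
assert fg == {(2, 0), (2, 1), (2, 2), (1, 2), (0, 2)}
```

Because growth only proceeds through accepted voxels, bright but disconnected noise (the 7 in the corner) is never reached and is labeled background, which is the behavior exploited for sparse, continuous neuron signal.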
On the other hand, the APP2 algorithm has its own limitations. It stops tracing when there is a gap between signals, such as the ones highlighted by arrows in Fig. 1b. Also, like many other tracing algorithms, it needs fine-tuning of the background threshold and other parameters to avoid over-tracing. Thus, our proposed framework can further improve the performance of APP2. We implemented the SmartTracing algorithm as a plugin of Vaa3D [19, 20], which is the common platform for implementing algorithms for the BigNeuron project (bigneuron.org) bench-testing. Since the APP2 algorithm has already been implemented in Vaa3D, it was directly invoked via the Vaa3D plugin interface. The default parameters of APP2 were used to generate the initial neuron reconstruction. To generate the final reconstruction, the background threshold was set to 1, since the intensity of all background voxels had been set to 0 as described in the previous section. The neighborhood 3D window size was 16 × 16 × 16 voxels. The cube of each such small 3D window was decomposed into 3 levels of MWR. The mRMR feature selection was implemented based on the code downloaded from http://penglab.janelia.org/proj/mRMR/, and the top 20 characteristic features were selected. Classifier training and prediction were implemented based on the code from the LIBSVM toolkit (http://www.csie.ntu.edu.tw/~cjlin/libsvm/).

4 Experimental results

The whole framework was tested on 120 confocal images of single neurons in the Drosophila brain downloaded from the flycircuit.tw database. The dimensions of each image are 1024 × 1024 × 120 voxels. For some of the images, APP2 works reasonably well in reconstructing neuron morphologies. However, due to the loss of signal during image preprocessing, there can be gaps between neuron segments, which result in incomplete reconstructions by APP2. Ten examples of incomplete reconstructions are shown, with the gaps highlighted by arrows, in Fig. 4.
Those gaps were classified as foreground by the proposed SmartTracing framework and filled in for complete tracing (red skeletons in Fig. 4). The quantitative measurements of the morphology and the computational running times (using a single CPU) of these 10 examples are listed in Table 1. Fig. 4 Visualization of the reconstructed neuron morphology of 10 selected examples. In each sub-figure, the initial reconstruction generated by APP2 (colored skeletons) is overlaid on the original image (shown in gray). The corresponding final reconstruction obtained by SmartTracing is shown as red skeletons on the right. The initial reconstructions are color-coded by confidence score (blue: more confident; red: less confident). The incomplete parts of the reconstructions and the gaps that caused the problems are highlighted by black arrows. The detailed measurements of these reconstructions are listed in Table 1. (Color figure online) Table 1 The running time of each procedure and the quantitative neuron morphology measurements of the 10 selected example datasets. Columns: running time in seconds (\(T_{in}\), \(T_s\), \(T_m\), \(T_t\), \(T_p\), \(T_{st}\)) and reconstruction measurements (\(R_{in}\), \(R_{st}\)). The morphology of the reconstructions and the original images of these examples are shown in Fig. 4. \(T_{in}\): generating the initial reconstruction by APP2; \(T_s\): computing the confidence score; \(T_m\): mRMR feature selection; \(T_t\): SVM classifier training; \(T_p\): searching the foreground; \(T_{st}\): generating the final reconstruction; \(R_{in}\): initial reconstruction; \(R_{st}\): final reconstruction. Length unit: voxel. For the 120 confocal images tested, the proposed SmartTracing algorithm successfully improved the overall completeness of the reconstructions. Compared with the initial reconstructions, the total length, bifurcation number, branch number, and tip number all increased after the optimization by SmartTracing (Fig. 5). Among those, the completeness of 30 reconstructions was significantly improved (the total length of the final reconstruction is 1.2 times larger than that of the initial reconstruction).
By visual inspection, the SmartTracing algorithm failed to trace the complete neuron morphology on only 1 of the 120 images. In this failure case, there is a gap that is too big to be filled (Fig. 6b). Fig. 5 Box plots of the neuron morphology measurements of the 120 neuron reconstructions obtained. Fig. 6 Examples of performing SmartTracing iteratively. The reconstruction shown as a red tube is overlaid on the original image shown in gray. a Reconstructions of the first and second rounds of SmartTracing for case #7 shown in Fig. 4. b Reconstruction of the case that failed in the first round of SmartTracing but succeeded after two rounds, shown from different angles. The gap that caused the failure in the first round is highlighted by arrows. (Color figure online) Notably, SmartTracing can be run iteratively: the reconstruction generated in the previous round is used as the initial reconstruction for the next round. For a reconstruction that is already relatively complete, further iterations do not change the result significantly (Fig. 6a) and are time consuming. On the other hand, for an incorrect reconstruction, better training samples can be obtained from the reconstruction of the previous iteration, which may progressively remedy the reconstruction. We therefore tried performing SmartTracing iteratively on the previously failed case. Intriguingly, it took only two rounds of SmartTracing to successfully fill the gap and obtain a complete reconstruction (Fig. 6b). This is mainly because, with the result from the first round, more training samples from the gap area were obtained to train the classifier, so the gap could be filled in the second round. We then compared the results generated by SmartTracing with other methods. Specifically, the results generated by micro-optical sectioning tomography (MOST) ray-shooting tracing [21] and open-curve snake (Snake) tracing [8, 22] were compared.
By visual inspection, the results generated by our proposed SmartTracing were more complete and more topologically correct, and better reflected the morphology of the neurons in the original images, than those of the other tracing methods (Fig. 7). Fig. 7 Comparison of the reconstructions generated by 3 different tracing algorithms on 3 testing images. Image IDs are the same as in Fig. 4. The original images are shown in the top row, followed by the reconstructions generated by MOST (red), Snake (blue), and SmartTracing (green). (Color figure online)

5 Discussion

In our experiments, the proposed SmartTracing method improved APP2 tracing and successfully reconstructed 120 Drosophila neurons from confocal images. In addition to filling the gaps between neuron segments, SmartTracing can also reduce over-tracing due to image noise, inhomogeneous distribution of image intensity, and inappropriate tracing parameters. Essentially, SmartTracing is an adaptive and self-training image preprocessing procedure that segments the image into a foreground area containing neuron signals and background voxels. The major novelty of SmartTracing lies in two aspects. First, we proposed a likelihood measurement that serves as a confidence score to identify reliable regions in a neuron reconstruction. With this score, reliable portions of a neuron reconstruction generated by an existing neuron tracing algorithm are identified, without human intervention, as training exemplars for a learning-based tracing method. Human proofreaders can also benefit from this metric: by ranking the reconstructions by confidence score, annotators can prioritize the less-reliable reconstructions, which increases overall accuracy and saves time. Second, from the training exemplars the most characteristic wavelet features are automatically selected and used in a machine learning framework to predict all image areas that most probably contain neuron signal.
Since the training samples and their most characterizing features are selected from each individual image, the whole process automatically adapts to different images and does not require prior knowledge of the object to identify. Potentially, the proposed machine learning and prediction framework can be extended to other image segmentation tasks and 3D object recognition systems, such as neuron spine detection, cell segmentation, etc. SmartTracing is applicable to most existing tracing algorithms. However, the performance and the outcome of SmartTracing rely largely on the tracing algorithm applied. For instance, the cause of the only failed case among the 120 tested images is that APP2 did not generate a sufficient initial reconstruction, due to the gap, which resulted in a lack of training exemplars. One solution to this limitation is to run SmartTracing iteratively, so that better training samples can be acquired from the previous iteration. We can also take advantage of the merits of different tracing algorithms and use different algorithms in different steps to further improve the performance of the framework: e.g., use the MOST algorithm to generate the initial tracing for scoring, and thus training, since it is not sensitive to gaps and can capture more signals; then use APP2 to generate the final tracing, since it is robust, efficient, and well suited to generating the tree-shaped topology of neurons. Another limitation of SmartTracing is its relatively high computational complexity. At present, the two most time-consuming procedures are the computation of the confidence metric, which is proportional to the complexity of the initial neuron reconstruction, and the prediction of foreground voxels, which is proportional to the size of the neuron. The computation times reported above were measured on a single CPU. With a parallel computation framework, both steps can be sped up. In recent years, a growing number of model-driven approaches have been proposed for automatic neuron reconstruction.
To our best knowledge, SmartTracing is one of the earliest machine learning-based methods for automatic neuron reconstruction. Unlike traditional learning-based methods, SmartTracing does not require human input of training exemplars and can adapt itself to different types of neuroimage data. Additionally, the method can be applied to improve the performance of other existing tracing methods. As part of future work, the performance of SmartTracing will be further examined and improved within the BigNeuron project. In the near future, we hope that SmartTracing can significantly facilitate manual tracing and contribute to neuron morphology reconstruction at large.

Author affiliations: Allen Institute for Brain Science, Seattle, WA, USA; Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA; CAS-MPG Partner Institute for Computational Biology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, 320 Yueyang Road, Shanghai, China

References
Donohue DE, Ascoli GA (2011) Automated reconstruction of neuronal morphology: an overview. Brain Res Rev 67:94–102
Parekh R, Ascoli GA (2013) Neuronal morphology goes digital: a research hub for cellular and system neuroscience. Neuron 77:1017–1038
Meijering E (2010) Neuron tracing in perspective. Cytometry A 77:693–704
Xiao H, Peng H (2013) APP2: automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree. Bioinformatics 29:1448–1454
Peng H, Long F, Myers G (2011) Automatic 3D neuron tracing using all-path pruning. Bioinformatics 27:i239–i247
Lee P-C, Chuang C-C, Chiang A-S, Ching Y-T (2012) High-throughput computer method for 3D neuronal structure reconstruction from the image stack of the Drosophila brain and its applications.
PLoS Comput Biol 8:e1002658
Yang J, Gonzalez-Bellido PT, Peng H (2013) A distance-field based automatic neuron tracing method. BMC Bioinform 14:93
Wang Y, Narayanaswamy A, Tsai C-L, Roysam B (2011) A broadly applicable 3-D neuron tracing method based on open-curve snake. Neuroinformatics 9:193–217
Peng H, Ruan Z, Atasoy D, Sternson S (2010) Automatic reconstruction of 3D neuron structures using a graph-augmented deformable model. Bioinformatics 26:i38–i46
Peng H, Meijering E, Ascoli GA (2015) From DIADEM to BigNeuron. Neuroinformatics 13:259–260
Peng H, Hawrylycz M, Roskams J, Hill S, Spruston N, Meijering E, Ascoli GA (2015) BigNeuron: large-scale 3D neuron reconstruction from optical microscopy images. Neuron. doi:10.1016/j.neuron.2015.06.036
Gala R, Chapeton J, Jitesh J, Bhavsar C, Stepanyants A (2014) Active learning of neuron morphology for accurate automated tracing of neurites. Front Neuroanat 8:37
Zhou J, Peng H (2007) Automatic recognition and annotation of gene expression patterns of fly embryos. Bioinformatics 23:589–596
Peng H, Long F, Ding C (2005) Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans Pattern Anal Mach Intell 27:1226–1238
Mallat SG (1989) A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans Pattern Anal Mach Intell 11:674–693
Muraki S (1993) Volume data and wavelet transforms. IEEE Comput Graph Appl 13:50–56
Ding C, Peng H (2005) Minimum redundancy feature selection from microarray gene expression data. J Bioinform Comput Biol 3:185–205
Chang C-C, Lin C-J (2011) LIBSVM: a library for support vector machines.
ACM Trans Intell Syst Technol 2:1–27
Peng H, Ruan Z, Long F, Simpson JH, Myers EW (2010) V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets. Nat Biotechnol 28:348–353
Peng H, Bria A, Zhou Z, Iannello G, Long F (2014) Extensible visualization and analysis for multidimensional images using Vaa3D. Nat Protoc 9:193–208
Wu J, He Y, Yang Z, Guo C, Luo Q, Zhou W, Chen S, Li A, Xiong B, Jiang T, Gong H (2014) 3D BrainCV: simultaneous visualization and analysis of cells and capillaries in a whole mouse brain with one-micron voxel resolution. Neuroimage 87:199–208
Narayanaswamy A, Wang Y, Roysam B (2011) 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors. Neuroinformatics 9:219–231
Searching for a listed infrastructure asset class using mean–variance spanning

Frédéric Blanc-Brude, Timothy Whittaker & Simon Wilde

Financial Markets and Portfolio Management, volume 31, pages 137–179 (2017)

An Erratum to this article was published on 01 November 2017.

This study examines the portfolio-diversification benefits of listed infrastructure stocks. We employ three different definitions of listed infrastructure and tests of mean–variance spanning. The evidence shows that viewing infrastructure as an asset class is misguided. We employ different schemes of infrastructure asset selection (against both traditional asset classes and factor exposures) and discover that they do not provide portfolio-diversification benefits to existing asset allocation choices. We also find that defining and selecting infrastructure investments by business model, as opposed to industrial sector, can reveal a very different investment profile, albeit one that improves the mean–variance efficient frontier since the global financial crisis. This study provides new insights into defining and benchmarking infrastructure equity investments in general, as well as into the extent to which public markets can be used to proxy the risk-adjusted performance of privately held infrastructure investments.

In this paper, we ask the question: Does focusing on listed infrastructure stocks create diversification benefits previously unavailable to large investors already active in public markets? This question arises from what we call the "infrastructure investment narrative" (Blanc-Brude 2013), a set of beliefs commonly held by investors about the investment characteristics of infrastructure assets. According to this narrative, the "infrastructure asset class" is less exposed to the business cycle because of the low price elasticity of demand for infrastructure services, implying improved diversification.
Furthermore, the value of these investments is expected to be mostly determined by income streams extending far into the future, and should thus be less impacted by current events, suggesting a degree of downside protection. According to this narrative, infrastructure investments may provide diversification benefits to investors since they are expected to exhibit low return covariance with other financial assets. In other words, infrastructure investments are expected to exhibit unique characteristics. Empirically, there are at least three reasons why this view requires further examination:

1. Most research on infrastructure uses public equity markets to infer findings for the whole infrastructure investment universe, but there is no robust and conclusive evidence to support this approach;
2. Index providers have created dedicated indices focusing on this theme, and a number of active managers propose to invest in listed infrastructure, arguing that it constitutes an asset class in its own right and is worthy of an individual allocation;
3. Listed infrastructure stocks are often used by investors to proxy investments in privately held (unlisted) infrastructure, but the adequacy of such proxies remains untested.

The existence of a distinctive listed infrastructure effect in investors' portfolios would support these views. However, if this effect cannot be found, there is little to expect from listed infrastructure equity from an asset allocation (risk/reward optimization) perspective, and maybe even less to learn from public markets about the expected performance of unlisted infrastructure investments. In this paper, we test the impact of adding 22 different proxies of public infrastructure stocks to the portfolio of a well-diversified investor using mean–variance spanning tests.
We focus on three definitions of "listed infrastructure" as an asset selection scheme:

1. A "naïve" rule-based filtering of stocks based on industrial sector classifications and the percentage of income generated from predefined infrastructure sectors (nine proxies);
2. Existing listed infrastructure indices designed and maintained by index providers (12 proxies);
3. A basket of stocks offering a pure exposure to several hundred projects that correspond to a well-known form of infrastructure investment; these projects are defined, in contrast with the two previous cases, in terms of long-term public–private contracts, not industrial sectors (one proxy).

In what follows, we show that the existence of diversification benefits depends on how infrastructure is defined and understood as an asset selection scheme. Overall, we find no persistent evidence to support the claim that listed infrastructure provides diversification benefits. In other words, any "listed infrastructure" effect was already spanned by a combination of capital-market instruments over the past 15 years in global, US, and UK markets. We show that listed infrastructure, as it is traditionally defined (by SIC code and industrial sector), is not an asset class or a unique combination of market factors. Indeed, it cannot be distinguished from existing exposures in investors' portfolios. We also show that an alternative definition of infrastructure, focusing on the relationship-specific, and therefore contractual, nature of the infrastructure business, can help identify exposures that have at least the potential to persistently improve portfolio diversification. The rest of this paper is structured as follows. Section 2 briefly reviews existing research on the performance of listed infrastructure. Section 3 details our approach, while Sects. 4 and 5 present our choice of methodology and data, respectively. The results of the analysis are reported in Sect. 6.
Section 7 discusses our findings and their implications for better defining and benchmarking infrastructure equity investments. In Fabozzi and Markowitz (2011, p. 16), asset classes are defined as homogeneous investments with comparable characteristics driven by similar factors, including a common legal or regulatory structure, thus correlating highly with each other. Fabozzi and Markowitz (2011) state that as a result of this definition, the combination of two or more asset classes should provide diversification benefits. Two distinct asset classes should exhibit low return covariance with each other. The question of whether listed infrastructure is an asset class and is a good proxy for a broader universe of privately held infrastructure equity has been discussed in previous research. The approach taken in the literature has been to define infrastructure in terms of industrial categories, since roads and airports can seem rather similar businesses compared with automotive factories or financial services. Given the definition above, they can be expected to form a relatively homogeneous group of stocks, indicating a potential asset class. Existing studies can be organized in two groups. The first comprises literature that employs rule-based stock selection schemes focusing on what is traditionally understood as "infrastructure," that is, a collection of industrial sectors. The second group comprises papers that employ listed infrastructure indices created by a number of index providers. The first group of studies examines stocks that are classified under a set list of infrastructure activities and derive a certain proportion of their income from these activities.Footnote 1 The findings from these studies suggest considerable heterogeneity in "listed infrastructure." Newell and Peng (2007) report that listed Australian infrastructure exhibits higher returns, but also higher volatility than equity markets.
They find higher Sharpe ratios than the market and low but growing correlations over time with market returns. Finkenzeller et al. (2010) find similar results. Newell and Peng (2008) find that in the USA, infrastructure (ex-utilities) underperforms stocks and bonds over the period from 2000 to 2006, while utilities outperform the market. Rothballer and Kaserer (2012) find that infrastructure stocks exhibit lower market risk than equities in general but not lower total risk, that is, they find high idiosyncratic volatility. They also report significant heterogeneity in the risk profiles of different infrastructure sectors, with an average beta of 0.68, but with variation between sectors: for utility, transport, and telecom companies, the average betas were 0.50, 0.73, and 1.09, respectively.Footnote 2 Bitsch (2012) finds that infrastructure vehicles are priced with a high risk premium, in part because of complex and opaque financial structuring, information asymmetries with managers, and regulatory and political risks. These findings are in line with the results of several industry studies suggesting that the volatility of infrastructure indices is on par with equities and real estate, but that market correlation is relatively low (Colonial First State Asset Management 2009; RREEF 2008). The conclusions from this strand of the literature are limited. Infrastructure stocks are found to have higher Sharpe ratios in some cases, but the statistical significance of this effect is never tested. Overall, rule-based infrastructure stock selection schemes lead to either anecdotal (small sample) or heterogeneous results, which do not support the notion of an independent asset class.

Ad hoc listed infrastructure indices

A second group of studies uses infrastructure indices created by index providers such as Dow Jones, FTSE, MSCI, and S&P, as well as financial institutions such as Brookfield, Macquarie, and UBS.
These indices are not fundamentally different from the approach described above. They use asset selection schemes based on slightly different industrial definitions of what qualifies as infrastructure and apply a market-capitalization weighting scheme. They are ad hoc as opposed to rule-based because index providers pick and choose which stocks are included in each infrastructure index. Their weighting scheme may also create concentration issues, since they are likely to include very large utilities relative to other firms active in the infrastructure sector, regardless of how it is defined. Using such indices, Bird et al. (2014) and Bianchi et al. (2014) find that infrastructure exhibits returns, correlation, and tail risk similar to those of the stock market, with a marginally higher Sharpe ratio, driven by what could be described as a utility tilt. Other studies on the performance of infrastructure indices by Peng and Newell (2007), Finkenzeller et al. (2010), Dechant and Finkenzeller (2013), and Oyedele et al. (2014) also report potential diversification benefits. However, none examine whether these are statistically or economically significant. For example, Peng and Newell (2007) and Oyedele et al. (2014) compare Sharpe ratios but provide no statistical tests to support their conclusions. Idzorek and Armstrong (2009) provide the only study of the role of listed infrastructure in a portfolio context. The authors create an infrastructure index by combining existing industry indices. Using three versions of their composite index (low, medium, and high utilities), and consistent with previous papers, they report that over the 1990–2007 period, infrastructure returns were similar to those of US equities but with slightly less risk.
Finally, using the capital asset pricing model (CAPM) to create a forward-looking model of expected returns including an infrastructure allocation, Idzorek and Armstrong (2009) find that adding infrastructure does not lead to a meaningful improvement in the efficient frontier.

Limitations of existing research

The existing literature has not examined whether different types of listed infrastructure investments are already spanned by the portfolio of a typical investor. As a result, it remains unclear whether a focus on infrastructure-related stocks can create additional diversification benefits for investors. Nor is it clear whether infrastructure is a new combination of investment factor exposures. In the rest of this paper, we empirically test whether infrastructure stocks, selected according to their industrial classification, provide diversification benefits to investors. Furthermore, following the argument in Blanc-Brude (2013), we examine a different definition of infrastructure, focusing on the "business model" as determined by the role of long-term contracts in infrastructure projects. Next, we describe our approach in more detail.

We propose to test the portfolio characteristics of listed infrastructure equity under the three different definitions of what constitutes infrastructure given in Sect. 1. The first two are "naïve" rule-based filtering of stocks based on industrial sector classifications and listed infrastructure indices maintained by index providers. These first two proxies focus on the "real" characteristics of the relevant capital projects. They bundle together assets that may all be related to large structures of steel and concrete but may also have radically different risk profiles from an investment viewpoint. Hence, we also look at a basket of stocks offering a pure exposure to projects that correspond to a specific long-term contract but not to any specific industrial sector.
This last case thus captures a specific infrastructure "business model." We identify a number of stocks that happen to create a useful natural experiment: These are the publicly traded shares of investment vehicles that are solely involved in buying and holding the equity of infrastructure projects engaged in PFI (private finance initiative) projects in the UK and, to a lesser extent, their equivalents in Canada, France, and the rest of the OECD (Organisation for Economic Co-operation and Development).Footnote 3 The firms we identify are listed on the London Stock Exchange, and they buy and hold the equity and shareholder loans (quasi-equity) of hundreds of these PFI project companies. They distribute dividends on a regular basis. They are, in effect, a listed basket of PFI equity (with no additional leverage) and, as such, represent a unique proxy for one model of private infrastructure investment. We can expect the cash flows of these firms to be highly predictable and uncorrelated with markets and the business cycle, albeit highly correlated with the UK RPI index. In other words, we can expect to see some evidence of the "infrastructure investment narrative" discussed earlier that has so far eluded studies of infrastructure stocks defined by their SIC code.

Testing asset classes or factors?

Using these three alternative approaches to defining infrastructure investment, we employ the mean–variance spanning test designed by Huberman and Kandel (1987) to determine whether adding a listed infrastructure bucket to an existing investment portfolio significantly increases diversification opportunities. If the answer is in the affirmative, this result implies a degree of "asset class-ness" of infrastructure stocks, since their addition to a reference portfolio effectively shifts the mean–variance efficient frontier (to the left) and can create new diversification opportunities for investors.
Furthermore, we define the reference portfolio used to test the mean–variance spanning properties of listed infrastructure either in terms of traditional asset classes or in terms of investment factors. Indeed, the notion of an asset class has been losing its relevance in investment management since the financial crisis of 2008, when existing asset class-based allocations failed to provide diversification (e.g., Ilmanen and Kizer 2012). Factor-based asset allocations aim to identify the persistent dimensions of financial assets that best explain (and predict) their performance, instead of assuming that assets belong to distinctive categories because they have different names.Footnote 4 Thus, we include both a traditional allocation based on asset classes and a factor-based allocation to examine the diversification benefits of listed infrastructure under the three definitions, so as to test whether listed infrastructure is indeed an asset class or, alternatively, a unique combination of investment factors.

Testing persistence

Finally, we test the existence of a persistent effect of listed infrastructure on a reference portfolio by splitting the observation period in two, from 2000 to 2008 and from 2009 to 2014, to test for the impact of the 2008 reversal of the credit cycle, also known as the global financial crisis (GFC). In Sect. 6, we report results for the whole sample period, as well as for the two subsample periods, denoted as the pre- and post-GFC periods. In the case of the PFI portfolio, our data begin in 2006, and so we also divide the sample in 2011, which marks the time of the Eurozone debt crisis and the launch of quantitative easing policies by the Bank of England. In the next section, we discuss the mean–variance spanning methodology used in the remaining sections of this paper.
In a mean–variance framework, the question of whether infrastructure provides diversification benefits is equivalent to asking whether investors are able to improve their optimal mean–variance frontier by including infrastructure stocks in an existing reference portfolio. This question can be answered using the mean–variance spanning test described by Huberman and Kandel (1987), which examines whether the efficient frontier is improved by including new assets. If the mean–variance frontier inclusive of the new assets coincides with that already produced by the reference assets, the new assets can be considered to be already spanned by the existing portfolio, that is, no new diversification benefit is created. Conversely, if the existing mean–variance frontier is shifted to the left in the mean/variance plane by the addition of the new assets, then investors have improved their investment-opportunity set. This approach has been used to examine the diversification benefits of different asset classes. For instance, Petrella (2005) and Eun et al. (2008) employ this methodology to examine the diversification benefits of small-cap stocks. Likewise, Kroencke and Schindler (2012) examine the benefits of international diversification in real estate using mean–variance spanning, while Chen et al. (2011) examine the diversification benefits of volatility. However, to date, the approach has not been used in the literature on listed infrastructure. Mean–variance spanning is a regression-based test that assumes that there are K reference assets as well as N test assets. In Huberman and Kandel (1987), there is a linear relationship between test and reference assets so that: $$\begin{aligned} R_{2t} =\alpha +\beta R_{1t} +\varepsilon _{t} \end{aligned}$$ with \(t = 1, {\ldots } T\) periods and R1 representing a \(T \times K\) matrix of excess realized returns of the K benchmark assets. R2 represents a \(T \times N\) matrix of realized excess returns of the N test assets.
\(\beta \) is a \(K \times N\) matrix of regression factor loadings and \(\varepsilon \) is a vector of the regression error terms. The null hypothesis is that the existing assets already span the new assets. This implies that the \(\alpha \) of the regression in Eq. (1) is equal to 0, while the sum of the \(\beta \hbox {s}\) equals 1. As a result, the null hypothesis assumes that a combination of the existing benchmark assets is capable of replicating the returns of the test assets with a lower variance. Chen et al. (2011) describe the null hypothesis as: $$\begin{aligned} H_{0S}{:}\quad \alpha =0,\quad \delta =1-\beta 1_{K}=0 \end{aligned}$$ where \(\alpha \) is the regression intercept coefficient and \(\beta \) is the matrix of factor loadings. As this analysis examines only the case where \(N=1\), the test statistic is given by:Footnote 5 $$\begin{aligned} { HK}=\left( {\frac{1}{V}-1} \right) \left( {\frac{T-K-1}{2}} \right) \end{aligned}$$ where V is the ratio of the determinant of the maximum likelihood estimator of the error covariance matrix of the model that does not impose spanning (the unrestricted model) to the determinant of the maximum likelihood estimator of the model that imposes spanning (the restricted model). Because the restricted model can never fit better than the unrestricted one, \(V\le 1\) and hence \({ HK}\ge 0\). T is the number of return observations and K is the number of benchmark assets included in the study. The HK statistic is a Wald-type test statistic and, under the null, follows an F-distribution with \((2, T-K-1)\) degrees of freedom. Kan and Zhou (2012) developed a two-stage test to examine whether a rejection of the Huberman and Kandel (1987) null hypothesis is due to a difference in the tangency portfolios or in the global minimum variance resulting from the addition of the new assets. The first step of the Kan and Zhou (2012) test examines whether \(\alpha = 0\).
If the null is rejected at this stage, the two tangency portfolios, comprising the benchmark assets alone and the benchmark plus new assets, respectively, are statistically different. The second stage of the Kan and Zhou (2012) test examines whether \(\delta = 0\) conditional on \(\alpha = 0\). If the null hypothesis is rejected, the global minimum variance of the test portfolio and that of the benchmark portfolio are statistically different (for a discussion, see Chen et al. 2011). In this paper, we incorporate both the Huberman and Kandel (1987) and Kan and Zhou (2012) tests to investigate whether infrastructure can provide portfolio-diversification benefits.Footnote 6 Next, we describe the data employed in this study.

This section describes the datasets used to build the test infrastructure portfolios and the reference portfolios to which the mean–variance spanning methodology described previously is applied. Sections 5.1, 5.2, and 5.3 describe listed infrastructure proxies designed with sector-based asset selection rules, index provider data, and the PFI portfolio, respectively. Section 5.4 describes the reference portfolios, with the summary statistics reported in Appendices A and B. All returns and standard deviations reported in this section are annualized from monthly data.

Test assets—listed infrastructure companies

Asset selection

The first asset selection scheme represents the "naïve" definition of infrastructure equity investment and follows the methodology described by Rothballer and Kaserer (2012), using broad industry definitions to determine infrastructure-related stocks.Footnote 7 There are 5757 possible securities thus identified as infrastructure related. Next, only stocks for which the majority of revenue is obtained from sectors corresponding to infrastructure activities are kept in the sample. A minimum market capitalization of USD 500 million is also required for inclusion in the sample.
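As a concrete illustration of the spanning test described in Sect. 4, the HK statistic can be computed directly from return series. The sketch below is our own minimal Python implementation for a single test asset (N = 1); it is illustrative only and not the exact code used in the study:

```python
import numpy as np

def hk_spanning_test(r_test, r_bench):
    """Huberman-Kandel (1987) mean-variance spanning test for a single
    test asset (N = 1) against K >= 2 benchmark assets.

    r_test : (T,) test-asset returns; r_bench : (T, K) benchmark returns.
    Returns the HK statistic, distributed F(2, T-K-1) under the null.
    """
    T, K = r_bench.shape

    # Unrestricted model: r_test = alpha + beta' r_bench + eps
    X = np.column_stack([np.ones(T), r_bench])
    b_u, *_ = np.linalg.lstsq(X, r_test, rcond=None)
    resid_u = r_test - X @ b_u
    sigma_u = resid_u @ resid_u / T          # MLE error variance

    # Restricted model imposes alpha = 0 and sum(beta) = 1.
    # Substituting beta_K = 1 - sum(beta_1..K-1) yields a no-intercept
    # regression of (r_test - r_K) on the return spreads (r_k - r_K).
    y = r_test - r_bench[:, -1]
    Z = r_bench[:, :-1] - r_bench[:, [-1]]
    g, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid_r = y - Z @ g
    sigma_r = resid_r @ resid_r / T

    V = sigma_u / sigma_r                    # V <= 1 by construction
    return (1.0 / V - 1.0) * (T - K - 1) / 2.0
```

Large values of the statistic reject the null that the benchmark assets already span the test asset; p-values follow from the F(2, T−K−1) distribution.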
This yields 1290 firms with at least 50% of their income from infrastructure activities.Footnote 8 Setting the minimum infrastructure revenue threshold at 75 and 90% yields 650 and 554 stocks, respectively.Footnote 9 USD price and total returns are from Datastream, using the methodology described in Ince and Porter (2006).Footnote 10 The firms thus identified comprise at most 12, 7, and 6.5% of the MSCI World market value as of December 31, 2014, for the 50, 75, and 90% revenue thresholds, respectively.

Table 1 Descriptive statistics of the naïve infrastructure stock selection scheme, 2000–2014

For market-cap-weighted portfolios of infrastructure stocks, defined according to the industry-based scheme described above, we report, in Table 1, annualized returns, standard deviations, Sharpe ratios, and maximum drawdown statistics for the period 2000–2014 for price and total returns.Footnote 11 Note that the reference market index should not be difficult to beat: while market-cap-weighted indices are a useful point of reference, they have been shown to be highly inefficient in previous research (see Amenc et al. 2010).Footnote 12 Nevertheless, the listed infrastructure portfolios obtained above do not necessarily offer better risk-adjusted performance than this relatively unambitious baseline. We observe that irrespective of the revenue cutoffs employed to create the infrastructure portfolios, the telecom sector consistently produces poor returns. It appears that this sector has not recovered from the bursting of the technology bubble in the early 2000s. This suggests that infrastructure sectors experience a high degree of cyclicality, as well as a complete absence of persistence. Transportation fares better, with higher Sharpe ratios than the market index under both the price return and total return measures. Drawdown risk is typically higher than that of the market for price and total returns, as shown in Table 1.
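The statistics reported in Table 1 follow standard conventions for monthly return data. For reference, a minimal Python sketch of these conventions (our own illustration; we assume geometric annualization, which the text does not specify) is:

```python
import numpy as np

def summary_stats(monthly_returns, rf_annual=0.0):
    """Annualized return, volatility, Sharpe ratio, and maximum drawdown
    from simple monthly returns (geometric annualization assumed)."""
    r = np.asarray(monthly_returns, dtype=float)
    ann_ret = (1.0 + r).prod() ** (12.0 / len(r)) - 1.0
    ann_vol = r.std(ddof=1) * np.sqrt(12.0)
    sharpe = (ann_ret - rf_annual) / ann_vol
    # Maximum drawdown: largest peak-to-trough loss of the wealth index.
    wealth = (1.0 + r).cumprod()
    max_dd = (wealth / np.maximum.accumulate(wealth) - 1.0).min()
    return ann_ret, ann_vol, sharpe, max_dd
```

The drawdown is expressed as a negative fraction of the running peak, matching the usual "maximum drawdown" column in performance tables.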
During the sample period, utilities outperform the broad market only from a total return perspective. Next, we describe our second test asset, a combination of rule-based and ad hoc stock selection schemes created by index providers.

Test assets—ad hoc listed infrastructure indices

The basic requirements for inclusion in the listed infrastructure indices created by index providers are not very different from the naïve selection scheme described above. They include: being part of a broader index universe (usually the infrastructure universe of the index provider); and a minimum amount of revenue derived from infrastructure activities. However, minimum revenue requirements and the definition of infrastructure activities vary between index providers, adding what could amount to active views to a rule-based scheme. We test two groups of listed infrastructure indices: a set of global indices and one designed to represent the US market only. Global indices provide a direct comparison with the naïve approach described above, while a US-only perspective allows more control and granularity when designing a reference portfolio of asset classes or factors against which to test the mean–variance spanning of listed infrastructure indices.

Global infrastructure indices

The indices included in this sample are listed below; their descriptive statistics are presented in Sect. 5.2.2. This study includes seven global infrastructure indices and four US infrastructure indices. The global infrastructure indices are:Footnote 13

- Dow Jones Brookfield Global Infrastructure Index;
- FTSE Macquarie Global Infrastructure Index;
- FTSE Global Core Infrastructure;
- MSCI World Infrastructure Index;
- MSCI ACWI Infrastructure Capped;
- UBS Global Infrastructure and Utilities; and
- UBS Global 50/50 Infrastructure and Utilities.
The universe thus recognized by index providers is not very large, with only the MSCI World Infrastructure and MSCI ACWI Global Infrastructure representing more than 10% of the value of the MSCI World Index.

US infrastructure indices

The US infrastructure indices included in this study are:

- FTSE Macquarie USA Infrastructure Index;
- MSCI US Infrastructure Index;
- MSCI USA Infrastructure 20/35 Capped Index; and
- Alerian MLP Infrastructure Index.

Table 2 shows that most infrastructure indices exhibit higher Sharpe ratios than the reference market index (MSCI World). The Dow Jones Brookfield Global Infrastructure Index exhibits the highest average annualized returns and Sharpe ratio for the sample period. This performance is in contrast to that of the MSCI World Infrastructure Index, which exhibits negative performance on a price return basis. Table 2 suggests that drawdown risk is very similar between infrastructure indices and the broad market, with the exception of the Brookfield and MSCI ACWI indices. In the case of US-only indices, the MSCI and FTSE indices reported in Table 3 do not appear too different from the broad market index (here the Russell 3000), but the Alerian MLP Index, which captures an underlying business model focused on dividend distributions, exhibits very different characteristics.

Table 2 Descriptive statistics for the global infrastructure indices for the period 2000–2014

Table 3 Descriptive statistics for annualized price and total returns of US infrastructure stock indices, 2000–2014

Test assets—listed baskets of contracted infrastructure projects

The PFI portfolio consists of:

- HSBC Infrastructure Company Ltd (HICL);
- John Laing Infrastructure Fund Ltd (JLIF);
- GCP Infrastructure Ltd (GCP);
- International Public Partnerships Ltd (INPP); and
- Bilfinger Berger Global Infrastructure Ltd (BBGI).
As discussed, these firms are solely occupied with buying and holding the equity and quasi-equity of PFI (private finance initiative) project companies in the UK and similar countries. The project companies they invest in are mostly involved in delivering so-called availability-payment infrastructure projects, whereby the public sector pays a pre-agreed income to the project firm on a regular basis in exchange for the construction/development, maintenance, and operation of a given infrastructure project, to a pre-agreed output specification, for several decades. These PFI project companies do not engage in any other activity during their lifetime and only deliver the contracted infrastructure and associated services while repaying their creditors and investors. As such, they give access to a pure infrastructure project cash flow, representative of the underlying nature of the PFI business model. The firms in the PFI portfolio can therefore be considered useful proxies for a portfolio of PFI equity investments. While the project companies are typically highly leveraged, the firms in the PFI portfolio do not make significant use of leverage. Hence, as a group, they can be considered representative of a listed basket of PFI equity stakes. Table 4 suggests that the PFI portfolio possesses different characteristics from the other listed infrastructure portfolios examined to this point. Its Sharpe ratio is high and its maximum drawdown is much lower than that of the market reference (here the FTSE All Shares). Indeed, the maximum drawdown of the PFI portfolio is also much lower than that of the FTSE Macquarie Europe Infrastructure Index. The combination of high risk-adjusted performance with low drawdown risk is particularly striking in the total return case.

Reference assets

As discussed above, we use two types of reference allocations to test the impact of adding listed infrastructure to an investor's universe: an asset class-based allocation and a factor-based allocation.
All the summary statistics for the reference assets are given in Tables 13 and 14 in the appendix. Table 4 Descriptive statistics for annualized price and total returns of the PFI portfolio, an infrastructure index, and the market index, 2006–2014 Global asset class-based reference portfolio A "well-diversified investor" in the traditional, albeit imprecise, meaning of the term can be expected to hold a number of different asset classes, including: Global fixed interest proxied by the JP Morgan Global Aggregate Bond Index; Commodities proxied by the S&P Goldman Sachs Commodity Index; Real estate proxied by the MSCI World Real Estate Index; Hedge funds proxied by the Dow Jones Credit Suisse Hedge Fund Index; and OECD and emerging market equities proxied by the MSCI World and MSCI Emerging Market Indices, respectively. One potential issue with employing indices as reference assets is the possibility of double counting infrastructure stocks in both the reference and test assets. This has the potential to bias the mean–variance spanning tests against finding an improvement in the investment-opportunity set. Ideally, removing any infrastructure-like stocks from the reference assets would solve the problem of double counting; however, the circulation of index-constituent lists is too limited to allow this. MSCI (2014) reports that, as of November 2014, the utilities and telecom industries comprise 3.32% and 3.46% of the MSCI World Index, respectively, while the industrial sector comprises 10.89%, of which the share of infrastructure (e.g., railways) is small. Although it would be preferable to exclude the infrastructure stocks from the MSCI World, we do not know the constituents, so this cannot be done. Given the low weighting, however, it is unlikely that any results will be biased against the infrastructure stocks. We conclude that not isolating infrastructure stocks from our reference assets will not materially influence the conclusions of this study.
The price and total returns for the reference asset portfolios are reported in Panel A of Table 13. The hedge fund index reports the highest price returns, at 0.062 for the period 2000 to 2014, whereas the OECD stocks report the lowest returns, at −0.001. The hedge fund proxy also reports the lowest standard deviation of returns; commodities report the highest. As a result, the hedge fund proxy reports the highest Sharpe ratio and the OECD stocks report the lowest. The total returns of the asset class proxies reveal a similar picture. The hedge fund proxy reports the highest returns, lowest standard deviation, and highest Sharpe ratio. Commodities report the lowest returns and highest standard deviation and, as a result, the lowest Sharpe ratio for the sample period. US asset class reference portfolio A typical US-based reference portfolio built using traditional asset classes would include: Government bonds proxied by the Barclays Govt Aggregate Index; Corporate bonds represented by the Barclays US Aggregate Index; High-yield bonds with the Barclays US Corporate High Yield; Real estate, as per the US DataStream Real Estate Index; Hedge funds represented by the Dow Jones Credit Suisse Hedge Fund Index; US equities captured by the Russell 3000; and World equities represented by the MSCI World ex-US. Panel B of Table 13 displays the summary statistics for the US asset class reference portfolios. For the price returns, the hedge fund proxy exhibits the highest returns; high-yield bonds exhibit the lowest. Corporate bonds report the lowest standard deviation; commodities report the highest. Finally, real estate reports the highest Sharpe ratio during the sample period and high-yield bonds report the lowest. For the total returns, again hedge funds exhibit the highest returns, whereas commodities exhibit the lowest. Corporate bonds exhibit the lowest standard deviation, and commodities again exhibit the highest.
When total returns are considered, corporate bonds have the highest Sharpe ratio; commodities have the lowest. UK asset class reference portfolio To test the mean–variance spanning properties of the PFI portfolio, we build a UK asset class reference portfolio consisting of: Fixed interest, represented by the Bank of America/ML UK Gilts Index; Real estate, proxied by the DataStream UK Real Estate Index; Hedge funds, represented by the UK DataStream Hedge Funds Index; Commodities, as proxied by the S&P Goldman Sachs Commodity Index; UK equities represented by the FTSE100; and World equities proxied by the FTSE World ex-UK. The returns, standard deviation, and Sharpe ratios for the UK reference asset classes are presented in Panel C of Table 13. For the price returns, the fixed-interest proxy exhibits the highest returns, and the proxy employed for UK stock returns exhibits the lowest. Fixed interest again exhibits the lowest standard deviation, whereas commodities exhibit the highest. As a result, the fixed-interest proxy has the highest Sharpe ratio, while UK stocks have the lowest. When total returns are considered, real estate now exhibits the highest return and commodities the lowest. For standard deviation, fixed interest again exhibits the lowest and commodities the highest. The highest Sharpe ratio is again fixed interest with commodities exhibiting the lowest. Global factor-based reference portfolio Consistent with prior research, the factors in this study are constructed from stock and bond market indices. We follow Bender et al. (2010), Ilmanen and Kizer (2012), and Bird et al. (2013) to build market, size, value, term, and default factors. The market factor is the excess return of the MSCI US and MSCI Europe indices. The size factor is calculated by taking the difference between the simple average of MSCI Small Value and Growth indices and the simple average of MSCI Large Value and Growth indices. 
The value factor is constructed as the difference between the simple average of the MSCI Small-, Mid-, and Large-Value indices and the simple average of the MSCI Small-, Mid-, and Large-Growth indices. The term factor is estimated by taking the difference between the returns of the US Government 10-Year Index and the S&P US Treasury Bill 0–3 Index. Finally, the default factor is estimated by the change in the Moody's Seasoned Baa Corporate Bond Yield relative to the yield on the 10-Year Treasury constant maturity. The price and total return summary statistics for the factor portfolios are described in Table 14. The average price returns for both the US and Europe market factors are negative, with a higher standard deviation than the size and value factors. For the total returns, the Europe market factor is still negative, along with the default factor. The US market factor is positive but smaller than all other factors apart from the default and Europe market factors. The size factor exhibits the lowest standard deviation; the default factor exhibits the highest. As a result, the size factor has the highest Sharpe ratio and the default factor the lowest. US factor-based reference portfolio US factors are computed using the now canonical formulas reported in Faff (2001): \(\hbox {SMB}=\frac{(\text {Russell 2000 Value + Russell 2000 Growth})}{2}- \frac{(\text {Russell 1000 Value} + \text {Russell 1000 Growth})}{2}\); \(\hbox {HML}=\frac{(\text {Russell 1000 Value + Russell 2000 Value})}{2}- \frac{(\text {Russell 1000 Growth}+ \text {Russell 2000 Growth})}{2}\); Term = Barclays US Treasury 10–20 Years Index − Barclays US Treasury Bills 1–3 Months Index; Default = Barclays US Corporate AAA Long Index − Barclays US Treasury Long Index; and Market = Russell 3000 Index − US 1-month Treasury Bill return. Panel B of Table 14 displays the descriptive statistics for the US factors.
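The Faff (2001) factor formulas above translate directly into code. The sketch below uses randomly generated placeholder arrays as stand-ins for the index return series (in practice each series would come from a data vendor); the dictionary keys are our own shorthand, not vendor identifiers.

```python
import numpy as np

# Placeholder monthly return series for the underlying indices.
# These are hypothetical stand-ins for illustration only.
rng = np.random.default_rng(0)
r = {name: rng.normal(0.005, 0.04, 120) for name in
     ["R2000V", "R2000G",          # Russell 2000 Value / Growth
      "R1000V", "R1000G",          # Russell 1000 Value / Growth
      "TSY_10_20", "TBILL_1_3M",   # Treasury 10-20y / T-bills 1-3m
      "AAA_LONG", "TSY_LONG",      # AAA long corporates / long Treasuries
      "R3000", "TBILL_1M"]}        # Russell 3000 / 1-month T-bill

# SMB: small-cap average minus large-cap average
smb = (r["R2000V"] + r["R2000G"]) / 2 - (r["R1000V"] + r["R1000G"]) / 2
# HML: value average minus growth average
hml = (r["R1000V"] + r["R2000V"]) / 2 - (r["R1000G"] + r["R2000G"]) / 2
# Term, default, and market factors as index return spreads
term = r["TSY_10_20"] - r["TBILL_1_3M"]
default = r["AAA_LONG"] - r["TSY_LONG"]
market = r["R3000"] - r["TBILL_1M"]
```

Each factor is simply an arithmetic spread of index returns, so the whole reference set reduces to element-wise operations on aligned return series.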
Both the term and default factors' average price returns are negative, with the value factor exhibiting the highest average return. The price returns of the US market factor exhibit the highest standard deviation and the default factor the lowest. For the total returns, the term factor has the highest average return, whereas the size factor has the lowest. As with the price returns, the default factor exhibits the lowest standard deviation; the market exhibits the highest. In the next section, we present the results of the mean–variance spanning tests presented in Sect. 4 using the multiple datasets described above. We first present, in Sect. 6.1, the results of the mean–variance spanning tests conducted using the asset classes as the reference assets. Next, we use the factor proxies defined above as the reference portfolio. We report the Huberman and Kandel (1987) regression results for Eq. (1) and the corresponding Kan and Zhou (2012) two-stage, step-down tests. Asset class mean–variance spanning test results Here, we discuss test results for infrastructure defined as listed infrastructure companies (Sect. 6.1.1), global listed infrastructure indices (Sect. 6.1.2), US-based listed infrastructure indices (Sect. 6.1.3), and the PFI portfolio (Sect. 6.1.4). Table 5 Results for the mean–variance spanning tests for naïve infrastructure portfolios with asset class-built reference portfolio Listed infrastructure companies The results of the mean–variance spanning test for the naïve infrastructure portfolios are reported in Table 5. For the price returns of the nine portfolios constructed, Table 5 shows that the reference investment-opportunity set is improved by four of these portfolios. These are the 75% revenue cutoff transport portfolio and the telecommunication portfolios. 
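The Huberman and Kandel (1987) regression behind Eq. (1) can be sketched as follows for a single test asset: regress its returns on the K reference-asset returns plus an intercept, then jointly test that the intercept is zero and the slopes sum to one. This is an illustrative implementation under classical i.i.d.-error OLS assumptions, not the paper's exact estimation code.

```python
import numpy as np

def hk_spanning_test(test_ret, ref_ret):
    """Huberman-Kandel (1987) spanning test for one test asset.

    Regress test-asset returns on K reference-asset returns plus an
    intercept, then jointly test alpha = 0 and sum(beta) = 1 with a
    Wald statistic converted to an F statistic (2 restrictions).
    Returns (coefficients, F statistic, residual degrees of freedom).
    """
    T, K = ref_ret.shape
    X = np.column_stack([np.ones(T), ref_ret])         # [1, R_ref]
    b, *_ = np.linalg.lstsq(X, test_ret, rcond=None)   # OLS coefficients
    resid = test_ret - X @ b
    dof = T - K - 1
    sigma2 = resid @ resid / dof                       # residual variance
    XtX_inv = np.linalg.inv(X.T @ X)
    # Linear restrictions R b = q: alpha = 0 and sum(beta) = 1
    R = np.zeros((2, K + 1))
    R[0, 0] = 1.0       # intercept
    R[1, 1:] = 1.0      # sum of slopes
    q = np.array([0.0, 1.0])
    d = R @ b - q
    wald = d @ np.linalg.solve(sigma2 * (R @ XtX_inv @ R.T), d)
    return b, wald / 2.0, dof
```

Under the spanning null, the statistic is approximately F-distributed with (2, T − K − 1) degrees of freedom, so large values reject spanning, i.e., the test asset shifts the frontier.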
However, when total returns are examined, only the 50% telecoms and the 75% transport infrastructure portfolios are found to reject the Huberman and Kandel (1987) null hypothesis of spanning at conventional significance levels. No other portfolio improves on the mean–variance frontier created by the reference asset classes. Applying the Kan and Zhou (2012) methodology, the results in Table 5 show that none of the infrastructure portfolios improve the mean–variance frontier from that created by the reference investments. Indeed, the listed infrastructure portfolios neither improve the tangency portfolio nor produce a lower minimum variance portfolio. This finding is consistent whether price or total returns are used. When the subperiods are considered, both before and after the GFC, the conclusion that the naïve infrastructure approach fails to identify any diversification benefits is supported. Panels B and D of Table 5 present the results for the mean–variance spanning tests for the period January 2000 to December 2008. The Huberman and Kandel (1987) test's null hypothesis is rejected in two cases: the price and total returns of the 75% transport portfolio. However, the Kan and Zhou (2012) test fails to reject the null hypothesis. From January 2009 to December 2014 (Panels C and F of Table 5), only one portfolio improves the efficient frontier: the price returns of the 50% utilities portfolio. Here, the Huberman and Kandel (1987) test's null hypothesis is rejected at the 5% significance level and both steps of the Kan and Zhou (2012) test reject the null hypothesis that the portfolio's risk and returns are already spanned by the reference assets. However, just one portfolio out of the nine tested was statistically significant, which fails to provide systematic evidence for the existence of a listed infrastructure asset class.
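The two-step logic of the Kan and Zhou (2012) step-down test applied throughout can be illustrated as below: step one tests the zero-intercept restriction (does the test asset improve the tangency portfolio?), and step two tests that the slopes sum to one given a zero intercept (does it lower the global minimum variance portfolio?). This is a stylized single-asset version using restricted-vs-unrestricted F tests under classical OLS assumptions, not the authors' exact small-sample procedure.

```python
import numpy as np

def _ssr(y, X):
    """Sum of squared residuals from an OLS fit of y on X."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    return r @ r

def kz_step_down(test_ret, ref_ret):
    """Stylized two-step (step-down) spanning test in the spirit of
    Kan and Zhou (2012). Returns (F1, F2): F1 tests alpha = 0,
    F2 tests sum(beta) = 1 with alpha = 0 already imposed."""
    T, K = ref_ret.shape
    X_u = np.column_stack([np.ones(T), ref_ret])   # unrestricted model
    ssr_u = _ssr(test_ret, X_u)
    # Step 1: restrict alpha = 0 (drop the intercept); 1 restriction
    ssr_a = _ssr(test_ret, ref_ret)
    F1 = (ssr_a - ssr_u) / (ssr_u / (T - K - 1))
    # Step 2: additionally impose sum(beta) = 1 given alpha = 0.
    # Substitute beta_K = 1 - sum(beta_1..K-1) into the regression:
    y2 = test_ret - ref_ret[:, -1]
    X2 = ref_ret[:, :-1] - ref_ret[:, -1:]
    ssr_ad = _ssr(y2, X2)
    F2 = (ssr_ad - ssr_a) / (ssr_a / (T - K))
    return F1, F2
```

Spanning requires failing to reject in *both* steps; rejecting only step one (tangency) or only step two (minimum variance) is exactly the intermediate outcome reported for most indices in Tables 5 through 11.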
Figure 1 illustrates the findings in Table 5 by showing the mean–variance frontier with and without the addition of the naïve 90% utilities portfolio for the period January 2009 to December 2014. The results in Table 5 confirm that this portfolio does not improve the investment-opportunity set despite the efficient frontier shifting to the left. This is because the global minimum variance is not statistically different as a result of adding infrastructure to the asset class mix. Mean–variance frontier of 90% revenue threshold utilities and asset class reference portfolio Next, we discuss our results using industry-provided infrastructure indices as proxies for the infrastructure asset class, testing whether there are diversification benefits from a global asset class-based reference portfolio. Table 6 presents our results for the global infrastructure price and total return indices described in Sect. 5. Here, using price returns for the full sample period (Panel A, Table 6), the Huberman and Kandel (1987) test demonstrates a statistically significant improvement in the efficient frontier in six of the eight infrastructure indices examined. However, the more restrictive Kan and Zhou (2012) test finds that only two of the eight global infrastructure indices improve the efficient frontier: the Dow Jones Brookfield Global Infrastructure Index and the UBS Infrastructure and Utilities Index. Indices found to improve the efficient frontier by the Huberman and Kandel (1987) test are here found to improve the tangency portfolio or the minimum variance portfolio but not both. As a result, it cannot be assumed that these indices improve the efficient frontier. Using total returns (Panel D, Table 6), again six of the eight global indices reject the null of the Huberman and Kandel (1987) test. 
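The "global minimum variance" comparison invoked here comes from standard mean–variance algebra: the GMV weights are proportional to Σ⁻¹1, and the frontier variance at any target mean follows the classic closed form. A compact sketch (unconstrained frontier, no short-sale limits, purely illustrative):

```python
import numpy as np

def gmv_weights(cov):
    """Global minimum variance portfolio weights: w proportional to
    inv(Sigma) @ 1, normalized to sum to one."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def frontier_variance(mu, cov, target):
    """Variance of the minimum-variance portfolio achieving mean
    `target` (standard closed-form frontier, no constraints)."""
    ones = np.ones(len(mu))
    inv = np.linalg.inv(cov)
    a = mu @ inv @ mu
    b = mu @ inv @ ones
    c = ones @ inv @ ones
    return (c * target**2 - 2 * b * target + a) / (a * c - b**2)
```

Adding a test asset can shift this frontier left visually, but, as the spanning tests emphasize, the shift only matters if the change in the tangency and GMV portfolios is statistically significant.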
The FTSE Core Index fails to span when either the price or total returns are employed, whereas the MSCI World Infrastructure is not spanned by the reference asset classes using price returns, but is spanned when considering total returns. The reverse is true for the UBS 50-50. Using the Kan and Zhou (2012) test, four of the eight global infrastructure indices are found to improve the efficient frontier, but Table 6 shows that most indices found by the Huberman and Kandel (1987) test to improve the efficient frontier only improved the minimum variance portfolio but not the tangency portfolio. As a result, it is not possible to conclude that these listed infrastructure indices are not spanned by existing asset classes. In the subperiod analysis, for price returns pre-GFC (Panels B and E of Table 6), the Huberman and Kandel (1987) test finds that four of the eight global listed infrastructure indices improve the efficient frontier. However, the Kan and Zhou (2012) test finds that these indices improve only the minimum variance portfolio and not the tangency portfolio. Table 6 Results for global listed infrastructure indices with asset class-built reference portfolio When total returns are considered for the same period, the Huberman and Kandel (1987) test finds that all the listed infrastructure indices improve the efficient frontier. The results of the Kan and Zhou (2012) test, however, indicate that although the global indices improved on the tangency portfolio, in the period of analysis, not all reduced the minimum variance portfolio. As a result, in the pre-GFC sample, only FTSE Core, MSCI World Infrastructure, and the MSCI ACWI Capped infrastructure indices can be said to improve the efficient frontier, as they both improve the tangency portfolio and reduce the minimum variance portfolio. Post-GFC (Panels C and F of Table 6), the pre-GFC results are invalidated. 
Using price returns, only one of the eight indices examined is found to improve the efficient frontier under both the Huberman and Kandel (1987) and Kan and Zhou (2012) tests: the Dow Jones Brookfield index. Using total returns, again only one index is found to improve the efficient frontier under both tests: the MSCI ACWI Capped Index. Hence, the pre-GFC results are not persistent post-GFC. These results argue against the existence of a well-defined and persistent listed infrastructure asset class. Table 7 Results for the US listed infrastructure indices with asset class-built reference portfolio The results for US listed infrastructure indices are presented in Table 7. The Huberman and Kandel (1987) results in Table 7 indicate that for the full period (Panels A and D) both the price returns and total returns of the Alerian MLP Infrastructure Index improve the efficient frontier. None of the other infrastructure indices reject the null hypothesis that the existing asset class investments span the risk and returns provided by the listed infrastructure indices. When the Kan and Zhou (2012) test is employed, the conclusion that the Alerian MLP Infrastructure Index improves the investment-opportunity set is reversed for both the price and total return indices. Although the Kan and Zhou (2012) test finds that the tangency portfolio has improved, it fails to reject the null hypothesis for the global minimum variance portfolio. As a result, it is not possible to conclude that the inclusion of the Alerian MLP Infrastructure Index improves the investment universe. As the other infrastructure indices do not reject the null hypothesis, the same conclusions apply to them. When pre- and post-GFC subsamples are considered (Panels B, C, E, and F of Table 7), the conclusion that listed infrastructure assets do not improve the investment universe continues to be supported.
For the first subperiod, only the total returns of the Alerian MLP Infrastructure Index reject the null hypothesis of the Huberman and Kandel (1987) test, as illustrated by Fig. 2. Mean–variance frontier of Alerian MLP Index and asset class proxies When the Kan and Zhou (2012) test is employed, none of the indices can reject both steps of the test. It is therefore not possible to conclude that the inclusion of infrastructure indices improves the mean–variance frontier of traditional asset classes in this sample period. Using the second subsample period, none of the indices, using either total or price returns, are found to reject the null hypothesis, leading to the conclusion that none of the indices improve an investor's diversification opportunities. PFI portfolio Finally, we report, in Table 8, the ability of our PFI portfolio to improve the mean–variance efficiency of a diversified investor in the UK. For the complete sample, the price return series does not provide diversification benefits. However, total return results are found to improve on the reference efficient frontier when investing over the entire period, as Fig. 3 illustrates. The total return PFI portfolio passes both the Huberman and Kandel (1987) and the Kan and Zhou (2012) tests for the full sample period. Table 8 Results for the mean–variance spanning test for the PFI stocks with asset class-built reference portfolio The subperiod analysis shows that the diversification benefits appear only in the period following the GFC. Prior to the GFC, neither the price nor the total returns of the PFI portfolio improve the efficient frontier. Total returns, for example, produce diversification benefits according to the Huberman and Kandel (1987) test, but the Kan and Zhou (2012) test finds that these benefits are simply due to a change in the global minimum variance portfolio. Without a corresponding improvement in the tangency portfolio, it is not possible to conclude that the efficient frontier has been improved.
Still, these results may be considered inconclusive, as PFI portfolio returns have a short history, beginning only in 2006. Mean–variance frontier of total returns PFI portfolio and reference portfolio After the GFC, however, the price returns of the PFI portfolio pass the Huberman and Kandel (1987) test. The Kan and Zhou (2012) test finds that this is simply due to the improvement in the minimum variance portfolio but not the tangency portfolio. However, the total return PFI portfolio is found by both the Huberman and Kandel (1987) and Kan and Zhou (2012) tests to exhibit diversification benefits. Hence, the impact of the PFI portfolio appears to be one of the most persistent of the various infrastructure portfolios that were tested on a total return basis. It improves diversification for the entire investment period and, crucially, post-GFC, when all but one of the other infrastructure indices fail to pass the post-GFC test of persistence. Factor-based mean–variance spanning test results Next, we examine how the various listed infrastructure definitions proposed above fare against a factor-based reference portfolio, that is, whether investing in listed infrastructure creates an exposure to a combination of factors not otherwise available to investors already allocating to the well-known factors described in Sect. 5.4. As above, we first present our results for listed infrastructure companies (Sect. 6.2.1), followed by global listed infrastructure (Sect. 6.2.2), and US infrastructure indices (Sect. 6.2.3). Unfortunately, at this stage, we cannot build a reference factor portfolio for the UK due to lack of sufficient data. Table 9 presents our results for the infrastructure portfolios using the naïve infrastructure definition proposed in Sect. 5. 
Using the full sample (Panels A and D of Table 9), the Huberman and Kandel (1987) test rejects the null hypothesis that the efficient frontier is not improved in five of the nine price return indices and six of the nine total return indices. Applying the Kan and Zhou (2012) test, however, there is no evidence that infrastructure, thus defined, provides diversification benefits. Indices that qualified under the Huberman and Kandel (1987) test all fail to reject the null hypothesis for both steps of the Kan and Zhou (2012) test. Consistent with the findings for the asset class reference portfolio, the addition of listed infrastructure companies to a factor-based allocation does not improve the mean–variance frontier. Pre- and post-GFC results are consistent with the full sample. In the period January 2000 to December 2008 (Panels B and E of Table 9), eight of the nine price return indices are found to improve the efficient frontier according to the Huberman and Kandel (1987) test. When the Kan and Zhou (2012) test is applied, however, this positive result is overturned, with none of the indices examined passing the two-stage test. When total returns are employed, the results are the same. Table 9 Results for the factor asset class and the naïve infrastructure portfolios with factor-based reference portfolios Table 10 Results for the global listed infrastructure indices with factor-based reference portfolios For the period January 2009 to December 2014 (Panels C and F of Table 9), results mirror the pre-GFC sample. For the price return indices, the Huberman and Kandel (1987) test finds that the mean–variance frontier is improved in six of the naïve infrastructure portfolios. However, the Kan and Zhou (2012) test results do not support these findings, and none of the portfolios qualify. The total returns for naïve infrastructure portfolios lead to the same conclusions. 
The results for the spanning tests for global listed infrastructure indices are presented in Table 10. The results will be by now familiar. Using price returns for the full sample (Panel A of Table 10), six of the eight indices examined reject the null of the Huberman and Kandel (1987) test at the 5% level, but the Kan and Zhou (2012) test indicates that only two of the eight indices improve both the tangency portfolio as well as the global minimum variance portfolio: Only the Dow Jones Brookfield and FTSE Core Infrastructure Index can be said to improve the reference efficient frontier. For the period January 2000 to December 2008 (Panel B of Table 10), only four of the eight indices are found to reject the null of the Huberman and Kandel (1987) test at the 5% level, but none of these pass the Kan and Zhou (2012) test. Between January 2009 and December 2014 (Panel C of Table 10), only two of the eight portfolios are found to reject the null of the Huberman and Kandel (1987) test at the 5% level. Of these, only the Dow Jones Brookfield is found to improve the efficient frontier by the Kan and Zhou (2012) test. Using total returns for the full sample period (Panel D of Table 10), all infrastructure indices examined reject the null of the Huberman and Kandel (1987) test at the 5% level; four still pass the Kan and Zhou (2012) test: the S&P Global Infrastructure, FTSE Macquarie Index, MSCI ACWI Capped Index, and the UBS 50-50 Index. The same is true when the period January 2000 to December 2008 (Panel E of Table 10) is considered: all indices pass the Huberman and Kandel (1987) test and three (the FTSE Core, MSCI World Infrastructure, and MSCI ACWI Infrastructure) are found by the Kan and Zhou (2012) test to improve the tangency portfolio and the global minimum variance portfolio, with the remainder found to improve only the tangency portfolio. 
However, from January 2009 to December 2014, only three of the eight portfolios pass the Huberman and Kandel (1987) test, and only one (the MSCI ACWI Capped Index) is found to improve the efficient frontier by the Kan and Zhou (2012) test on a total return basis. Table 11 Results for the factor asset classes and US listed infrastructure indices with factor-based reference portfolios Table 11 reports the results of the same tests using US market indices and factors. For the full sample period and using price returns (Panel A of Table 11), the Alerian MLP Infrastructure Index is, again, the only index found to improve the efficient frontier according to the Huberman and Kandel (1987) test. In the period from January 2000 to December 2008 (Panel B of Table 11), the Alerian MLP Infrastructure Index rejects the null hypothesis of the Huberman and Kandel (1987) test at the 5% level, but the Kan and Zhou (2012) test concludes that only the tangency portfolio has improved. In the post-GFC period (Panel C of Table 11), similar conclusions hold. Total returns for the full sample period (Panel D of Table 11) again show that only the Alerian MLP Infrastructure Index passes the Huberman and Kandel (1987) test, but the results of the Kan and Zhou (2012) test indicate that this is due to the Alerian MLP Infrastructure Index improving only the tangency portfolio. From January 2000 to December 2008 (Panel E in Table 11), the conclusions are the same. However, for the period from January 2009 to December 2014 (Panel F of Table 11), the Alerian MLP Index passes both the Huberman and Kandel (1987) and Kan and Zhou (2012) tests, indicating that the efficient frontier has been improved. Hence, the MLP Index is found to have a spanning profile somewhat similar to that of the PFI portfolio, in the sense that it manages to create diversification benefits both before and after the GFC when considered from a total return perspective.
Summary of results In this paper, we examined the contention that focusing on listed infrastructure has the potential to create diversification benefits previously unavailable to large investors already active in public markets. The reasons for doing so were threefold: Several papers argue that this is the case but do not provide robust statistical tests of the hypothesis; Index providers have created dedicated indices focusing on this idea and a number of active managers propose to invest in listed infrastructure, arguing that it constitutes an asset class in its own right, worthy of an individual allocation; Capital–market instruments are used by investors to proxy investments in privately held (unlisted) infrastructure, but the adequacy of such proxies remains untested. We tested the notion that there is a unique and persistent listed infrastructure effect using 22 listed infrastructure proxies and a series of statistical tests of mean–variance spanning against reference portfolios, built with either traditional asset classes or investment factors. We conducted these tests for global, US, and UK markets covering the past 15 years, on a price return and total return basis. We conclude that listed infrastructure, as traditionally defined by SIC code and the industrial sector, is not an asset class or a unique combination of market factors, and, indeed, cannot be persistently distinguished from existing exposures in investors' portfolios. Expecting the emergence of a new or unique infrastructure asset class by focusing on public equities selected on the basis of industrial sectors is thus misguided. Our test results are summarized in Table 12. The facts include: We tested 22 proxies of listed infrastructure and found little to no robust evidence of a listed infrastructure asset class that was not already spanned by a combination of capital–market instruments and alternatives or a factor-based asset allocation. 
The majority of test portfolios that improved the mean–variance efficient frontier before the GFC fail to repeat this feat post-GFC. There is no evidence of persistent outperformance. Of the 22 test portfolios used in this paper to try to establish the existence of a listed infrastructure asset class, only four manage to improve on a typical asset allocation defined either by traditional asset class or by factor exposure after the GFC, and only one is not spanned both pre- and post-GFC. We return to these in the discussion below. Building baskets of stocks on the basis of their SIC code and proportion of infrastructure income fails to generate a convincing exposure to a new asset class. A more promising avenue is to focus on underlying contractual or governance structures that tend to maximize dividend payout and pay dividends with great regularity, such as the PFI or MLP models. More generally, benchmarking unlisted infrastructure investments with thematic (industry-based) stock indices is unlikely to be very helpful from a pure asset allocation perspective, that is, the latter do not exhibit a risk/return tradeoff or betas that large investors did not have access to already. Table 12 Summary of mean–variance spanning tests While we conclude from testing the impact of 22 proxies that there is no convincing evidence of a listed infrastructure asset class, it is worthwhile to examine the four proxies that manage to improve on the proposed reference asset allocation after the GFC. Indeed, high pre-GFC Sharpe ratios that do not survive the 2008 credit crunch and lose all statistical significance in mean–variance spanning tests post-GFC do not make good candidates for an asset class or bundle of factors. However, proxies that pass the mean–variance tests after 2008 may at least open the possibility of a more persistent effect. 
The four proxies that are not already spanned by our reference portfolios in the post-GFC period are: The Brookfield Dow Jones Infrastructure Index: Close examination reveals that this index made a significant shift toward the oil and gas sector after the GFC and benefited from the significant rise in oil prices in the subsequent period. We note, without further investigation, that since 2014 and the collapse of global oil prices, it has experienced lackluster performance. Hence, rather than an infrastructure effect, this proxy may have been capturing a temporary oil play. The MSCI ACWI Infrastructure Capped: This proxy is the only one that passes the spanning tests both pre- and post-GFC. In fact, it is one of the few listed infrastructure indices that is not simply weighted by market capitalization but is instead constrained to have a maximum of one-third of its assets invested in telecoms, one-third in utilities, and another third in energy and transportation. Hence, it uses a very ad hoc weighting scheme, vaguely resembling equal weighting, which nevertheless improves on the market-cap-weighted point of reference. Again, rather than an effect driven by an elusive infrastructure asset class, it seems reasonable to assume that portfolio weights explain the impact of this proxy. The Alerian MLP Index: This proxy and the next one only improve the reference allocation post-GFC on a total return basis. Here, the role played by dividend payouts, their size, and regularity relative to other stocks are likely explanations for why they succeed in passing the spanning tests. The PFI Portfolio: This proxy corresponds to self-contained investment vehicles which receive a steady income stream from the public sector.
While these firms are not without risk, their operating and financing costs are essentially fixed and predictable; by design, they are therefore likely to have very regular dividend payouts and the more "bond-like" characteristics often associated with infrastructure investment. This last point is important since the observed improvement in the efficient frontier from adding assets such as MLPs or PFIs also corresponds to the beginning of the very low interest rate policies introduced by US and UK central banks after the GFC. In such an environment, high-coupon-paying assets begin to stand out for characteristics that previously went unremarked, which mechanically increases their ability to have an impact on the reference portfolio. Crucially, what determines this ability to deliver regular and high dividend payouts is the contractual and governance structure of the underlying businesses, not their belonging to a given industrial sector. However, it must be noted that the relatively low aggregate market capitalization of listed entities offering a clean exposure to infrastructure business models, as opposed to infrastructure industrial sectors, may limit the ability of investors to enjoy these potential benefits, unless the far larger unlisted infrastructure fund universe has similar characteristics. We conclude that, as an asset selection scheme, the notion of investing in infrastructure should be understood as a heuristic, that is, a mental shortcut meant to create an exposure to certain factors, but neither a thing nor an end in itself. A clear distinction can be made between infrastructure as a matter of public policy, in which case the focus is rightly on industrial functions, and the point of view of financial investors, who may be exposed to completely different risks through investments in firms providing exactly the same industrial functions. Notional grouping of assets by industrial sectors (transport, energy, water, etc.) creates very little information or predictive power.
Focusing on definitions of infrastructure investment that match the tangible or industrial characteristics of certain firms or assets is unhelpful because it does not take into account the mechanisms that create the potentially desirable characteristics of infrastructure investment. Infrastructure investment should be construed solely as a way to buy claims on future cash flows created by specific underlying business models, themselves the product of long-term contractual arrangements between public and private parties (or, alternatively, between two private parties). It follows that infrastructure investment, listed or not, is much more likely to play a role in an asset–liability management (ALM) framework than in a pure asset allocation setting (mean–variance optimization), where there are more relevant building blocks for designing investment policies.

The article "Searching for a listed infrastructure asset class using mean–variance spanning", written by Frédéric Blanc-Brude, Timothy Whittaker, and Simon Wilde, was originally published Online First without open access.

See Newell and Peng (2007, 2008), Finkenzeller et al. (2010), Newell et al. (2009), Rothballer and Kaserer (2012), and Bitsch (2012).

Using the same sample as Rothballer and Kaserer (2012), Rödel and Rothballer (2012) examine the inflation-hedging ability of infrastructure. They find no evidence to suggest infrastructure exhibits a greater ability to hedge inflation risks than do listed equities. Even restricting their sample to firms with assumed strong monopoly characteristics fails to yield a statistically significant result.

PFI projects consist of dedicated project firms entering into long-term contracts with the public sector to build, maintain, and operate public infrastructure facilities according to a pre-agreed service output specification.
As long as these firms deliver the projects and associated services for which they have contracted, on time and according to specifications, the public sector is committed to pay a monthly or quarterly income to the firm according to a pre-agreed schedule for multiple decades. In the UK, the long-term contract between the public and private parties also stipulates that this "availability payment" is adjusted to reflect changes in the Retail Price Index (RPI). Each project company is a special-purpose vehicle created solely to deliver an infrastructure project and financed on a nonrecourse basis with sponsor equity, shareholder loans, and senior debt.

Such factors include the Fama and French (1992) size and value premiums, the term and default premiums (Fama and French 1993), and the momentum anomaly identified by Jegadeesh and Titman (1993). Bender et al. (2010) show that these premiums are uncorrelated with each other and that they increase returns and reduce portfolio volatility over traditional asset class allocations. Likewise, when comparing the diversification benefits of factor-based allocations to alternative assets, Bird et al. (2013) find that factor approaches tend to outperform alternative asset classes. For recent and in-depth analyses of factor investing, see Amenc et al. (2014).

Kan and Zhou (2012) state that if \(N \ge 2\), the appropriate form of the test statistic is given as \({HK}=\left( \frac{1}{V^{1/2}}-1 \right) \left( \frac{T-K-1}{2} \right)\).

As another robustness check, we employ the Gibbons et al. (1989) test of portfolio efficiency. The results are similar to the mean–variance spanning test results presented in this paper. The results are available upon request.

The SIC and GIC codes used to identify infrastructure are available upon request.

The minimum revenue by infrastructure type is reported by SIC or GIC code by Worldscope.
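As a rough illustration of the statistic quoted above, the likelihood-ratio quantity (here called \(V\), the ratio of unrestricted to restricted residual variances) can be computed by running the spanning regression twice: once freely, and once with the spanning restrictions (zero intercept, betas summing to one) imposed by substitution. The data are synthetic, the single-test-asset case is used for brevity, and the exact degrees-of-freedom conventions for the null distribution should be taken from Kan and Zhou (2012), not from this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 180, 3                                      # periods, benchmark assets
bench = rng.normal(0.005, 0.04, size=(T, K))
# A test asset spanned by the benchmarks (plus noise), so HK should be small
test = bench @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.01, T)

# Unrestricted regression: test ~ const + benchmarks
Xu = np.column_stack([np.ones(T), bench])
bu, *_ = np.linalg.lstsq(Xu, test, rcond=None)
ssr_u = np.sum((test - Xu @ bu) ** 2)

# Restricted regression: impose alpha = 0 and sum(beta) = 1 by substituting
# beta_K = 1 - sum(beta_1..beta_{K-1}) and dropping the intercept
y_r = test - bench[:, -1]
Xr = bench[:, :-1] - bench[:, [-1]]
br, *_ = np.linalg.lstsq(Xr, y_r, rcond=None)
ssr_r = np.sum((y_r - Xr @ br) ** 2)

V = ssr_u / ssr_r          # ratio of residual variances (scalar here since N = 1)
HK = (1.0 / V**0.5 - 1.0) * (T - K - 1) / 2.0
print(HK)                  # close to zero when the test asset is spanned
```

Since the unrestricted regression fits at least as well as the restricted one, \(V \le 1\) and the statistic is nonnegative; large values reject spanning.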
This is a crude measure, as it relies on the continuous updating of the revenue codes by Worldscope, as well as on the assumption that GIC or SIC codes represent infrastructure activities. The number of firms identified as well as their geographic and industry distributions are available upon request.

Extreme monthly returns are identified following Ince and Porter (2006) and set to a missing value. Ince and Porter (2006) set an arbitrary cutoff of 300% for extreme monthly returns. If \(R_t\) or \(R_{t-1}\) is greater than 300% and \((1 + R_t)(1 + R_{t-1}) - 1\) is less than 50%, then both \(R_t\) and \(R_{t-1}\) are set to missing. Furthermore, following Rothballer and Kaserer (2012), 18 months of nonzero returns are required for the stock to be included in the portfolios. Any Datastream-padded price is removed by requesting X(P#S) $U, which returns null values when Datastream does not have a record, and any nonequity item is removed by requiring the TYPE description in Datastream to be equal to EQ.

The maximum drawdown statistic is measured as a percentage of maximum cumulative monthly return in the sample period.

The MSCI World Index is a free-float-adjusted market-capitalization-weighted index comprising 1631 mid-size and large capitalization stocks across 23 developed-country equity markets. MSCI states that the index comprises 85% of the free-float-adjusted market capitalization of each country covered. The index is updated quarterly with annual revisions to the investable universe and the removal of stocks with low liquidity. A brief summary of the indices is available upon request.

The Russell 3000 Index was selected for the US equity market index for two reasons. First, it represents the top 3000 stocks by market capitalization (FTSE Russell 2016). This represents a significant proportion of the investable universe of US stocks. Second, for consistency, in the factor-exposure studies we employ the Russell indices to create the factor proxies.
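The Ince and Porter (2006) screen described above can be sketched as a simple filter, reading the rule as a check on the compounded two-month return (a large spike that is immediately reversed is treated as a data error). The function name and the toy return series are illustrative; only the 300% and 50% cutoffs come from the text.

```python
import numpy as np

def clean_returns(r):
    """Flag extreme monthly-return reversals as missing, per the
    Ince and Porter (2006) rule: if r[t] or r[t-1] exceeds 300% and the
    compounded two-month return (1 + r[t]) * (1 + r[t-1]) - 1 is below
    50%, both observations are set to NaN."""
    r = np.asarray(r, dtype=float).copy()
    for t in range(1, len(r)):
        spike = r[t] > 3.0 or r[t - 1] > 3.0
        reversed_out = (1 + r[t]) * (1 + r[t - 1]) - 1 < 0.5
        if spike and reversed_out:
            r[t] = np.nan
            r[t - 1] = np.nan
    return r

# The +400% / -85% pair compounds to -25% and is flagged as a likely error
print(clean_returns([0.02, 4.0, -0.85, 0.01]))
```

A genuine large return that is not reversed (e.g. +400% followed by +2%) passes the second condition and is left untouched, which is the point of the compounding check.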
In future research, a similar test of mean–variance spanning against efficient or "smart" reference indices is necessary to control for such effects.

Amenc, N., Goltz, F., Martellini, L., Retkowsky, P.: Efficient Indexation: An Alternative to Cap-Weighted Indices. EDHEC-Risk Institute Publications, Singapore (2010)

Amenc, N., Goltz, F., Lodh, A., Martellini, L.: Towards smart equity factor indices: harvesting risk premia without taking unrewarded risks. J. Portf. Manag. 40(4), 106–122 (2014)

Bender, J., Briand, R., Nielsen, F., Stefek, D.: Portfolio of risk premia: a new approach to diversification. J. Portf. Manag. 36(2), 17–25 (2010). https://doi.org/10.3905/JPM.2010.36.2.017

Bianchi, R.J., Bornholt, G., Drew, M.E., Howard, M.F.: Long-term U.S. infrastructure returns and portfolio selection. J. Bank. Finance 42, 314–325 (2014). www.sciencedirect.com/science/article/pii/S037842661400048X

Bird, R., Liem, H., Thorp, S.: The tortoise and the hare: risk premium versus alternative asset portfolios. J. Portf. Manag. 39(3), 112–122 (2013). https://doi.org/10.3905/jpm.2013.39.3.112

Bird, R., Liem, H., Thorp, S.: Infrastructure: real assets and real returns. Eur. Financ. Manag. 20(4), 802–824 (2014)

Bitsch, F.: Do investors value cash flow stability of listed infrastructure funds? No. 2012-01. CEFS working paper series (2012)

Blanc-Brude, F.: Towards Efficient Benchmarks for Infrastructure Equity Investments. EDHEC-Risk Institute Publications, Singapore (2013). f.blancbrude.com/wp-content/uploads/publications/blanc-brude2013u.pdf

Chen, H.C., Chung, S.L., Ho, K.Y.: The diversification effects of volatility-related assets. J. Bank. Finance 35(5), 1179–1189 (2011). www.sciencedirect.com/science/article/pii/S0378426610003742

Colonial First State Asset Management: Global List Infrastructure (2009).
portfolioconstruction.com.au/obj/articles_pcc09/pcc09_DDF_PPT_Colonial-First-State.pdf. Accessed 13 Jan 2016

Dechant, T., Finkenzeller, K.: How much into infrastructure? Evidence from dynamic asset allocation. J. Prop. Res. 30(2), 103–127 (2013)

Eun, C.S., Huang, W., Lai, S.: International diversification with large- and small-cap stocks. J. Financ. Quant. Anal. 43(02), 489–523 (2008)

Fabozzi, F.J., Markowitz, H.: The Theory and Practice of Investment Management, 2nd edn. Wiley, Hoboken (2011)

Faff, R.: An examination of the Fama and French three-factor model using commercially available factors. Aust. J. Manag. 26(1), 1–17 (2001). https://doi.org/10.1177/031289620102600101

Fama, E.F., French, K.R.: The cross-section of expected stock returns. J. Finance 47(2), 427–465 (1992). https://doi.org/10.1111/j.1540-6261.1992.tb04398.x

Fama, E.F., French, K.R.: Common risk factors in the returns on stocks and bonds. J. Financ. Econ. 33(1), 3–56 (1993). https://doi.org/10.1016/0304-405X(93)90023-5

Finkenzeller, K., Dechant, T., Schäfers, W.: Infrastructure: a new dimension of real estate? An asset allocation analysis. J. Prop. Invest. Finance 28(4), 263–274 (2010)

FTSE Russell: Russell U.S. Equity Indexes v2.1 (2016). www.ftse.com/products/downloads/Russell-US-indexes.pdf?783

Gibbons, M.R., Ross, S.A., Shanken, J.: A test of the efficiency of a given portfolio. Econometrica 57(5), 1121–1152 (1989)

Huberman, G., Kandel, S.: Mean-variance spanning. J. Finance 42(4), 873–888 (1987)

Idzorek, T., Armstrong, C.: Infrastructure and strategic asset allocation: is infrastructure an asset class? Tech. rep., Ibbotson Associates (2009)

Ilmanen, A., Kizer, J.: The death of diversification has been greatly exaggerated.
J. Portf. Manag. 38(3), 15–27 (2012). https://doi.org/10.3905/jpm.2012.38.3.015

Ince, O.S., Porter, R.B.: Individual equity return data from Thomson Datastream: handle with care. J. Financ. Res. 29(4), 463–479 (2006). https://doi.org/10.1111/j.1475-6803.2006.00189.x

Jegadeesh, N., Titman, S.: Returns to buying winners and selling losers: implications for stock market efficiency. J. Finance 48(1), 65–91 (1993). https://doi.org/10.1111/j.1540-6261.1993.tb04702.x

Kan, R., Zhou, G.: Tests of mean-variance spanning. Ann. Econ. Finance 13, 139–187 (2012). https://doi.org/10.2139/ssrn.231522

Kroencke, T.A., Schindler, F.: International diversification with securitized real estate and the veiling glare from currency risk. J. Int. Money Finance 31(7), 1851–1866 (2012). https://doi.org/10.1016/j.jimonfin.2012.05.018

MSCI: MSCI World Index (2014). www.msci.com/resources/factsheets/indexfactsheet/msci-world-index.pdf

Newell, G., Peng, H.W.: The significance and performance of retail property in Australia. J. Prop. Invest. Finance 25(2), 147–165 (2007)

Newell, G., Peng, H.W.: The role of US infrastructure in investment portfolios. J. Real Estate Portf. Manag. 14(1), 21–34 (2008)

Newell, G., Chau, K.W., Wong, S.K.: The significance and performance of infrastructure in China. J. Prop. Invest. Finance 27(2), 180–202 (2009)

Oyedele, J., Adair, A., McGreal, S.: Performance of global listed infrastructure investment in a mixed asset portfolio. J. Prop. Res. 31(1), 1–25 (2014)

Peng, H., Newell, G.: The significance of infrastructure in Australian investment portfolios. Pac. Rim Prop. Res. J. 13(4), 423–450 (2007)

Petrella, G.: Are Euro area small cap stocks an asset class? Evidence from mean-variance spanning tests. Eur. Financ. Manag. 11, 229–253 (2005). https://doi.org/10.1111/j.1354-7798.2005.00283.x
Rödel, M., Rothballer, C.: Infrastructure as hedge against inflation: fact or fantasy? J. Altern. Invest. 15(1), 110–123 (2012)

Rothballer, C., Kaserer, C.: The risk profile of infrastructure investments: challenging conventional wisdom. J. Struct. Finance 18(2), 95–109 (2012)

RREEF: The Case for Global Listed Infrastructure (2008). realestate.deutscheam.com/content_media/Research_The_Case_for_Global_Listed_Infrastructure_May_2008.pdf. Accessed 13 Jan 2016

The authors thank Timo Välilä, Majid Hasan, Lionel Martellini, Noël Amenc, and anonymous referee(s) for useful comments.

EDHEC Infrastructure Institute-Singapore, # 07-02, One George Street, Singapore, 049145, Singapore: Frédéric Blanc-Brude and Timothy Whittaker. University of Bath, Building 8 West, Quarry Rd, Bath, BA2 7AY, UK: Simon Wilde. Correspondence to Frédéric Blanc-Brude.

1. Reference assets. Table 13 Descriptive statistics of annualized price and total returns of (Panel A) the global reference asset classes, (Panel B) the US reference asset classes, and (Panel C) the UK reference asset classes, 2000–2014

2. Reference investment factors. Table 14 Descriptive statistics of annualized price and total returns of (Panel A) the global reference factors and (Panel B) the US reference factors, 2000–2014

Blanc-Brude, F., Whittaker, T. & Wilde, S.: Searching for a listed infrastructure asset class using mean–variance spanning. Financ. Mark. Portf. Manag. 31, 137–179 (2017). https://doi.org/10.1007/s11408-017-0286-z. Issue date: May 2017
WALLABY pilot survey: Public release of H i data for almost 600 galaxies from phase 1 of ASKAP pilot observations. T. Westmeier, N. Deg, K. Spekkens, T. N. Reynolds, A. X. Shen, S. Gaudet, S. Goliath, M. T. Huynh, P. Venkataraman, X. Lin, T. O'Beirne, B. Catinella, L. Cortese, H. Dénes, A. Elagali, B.-Q. For, G. I. G. Józsa, C. Howlett, J. M. van der Hulst, R. J. Jurek, P. Kamphuis, V. A. Kilborn, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, C. Murugeshan, J. Rhee, P. Serra, L. Shao, L. Staveley-Smith, J. Wang, O. I. Wong, M. A. Zwaan, J. R. Allison, C. S. Anderson, Lewis Ball, D. C.-J. Bock, D. Brodrick, J. D. Bunton, F. R. Cooray, N. Gupta, D. B. Hayman, E. K. Mahony, V. A. Moss, A. Ng, S. E. Pearce, W. Raja, D. N. Roxby, M. A. Voronkov, K. A. Warhurst, H. M. Courtois, K.
Said. Published online by Cambridge University Press: 15 November 2022, e058. We present WALLABY pilot data release 1, the first public release of H i pilot survey data from the Wide-field ASKAP L-band Legacy All-sky Blind Survey (WALLABY) on the Australian Square Kilometre Array Pathfinder. Phase 1 of the WALLABY pilot survey targeted three $60\,\mathrm{deg}^{2}$ regions on the sky in the direction of the Hydra and Norma galaxy clusters and the NGC 4636 galaxy group, covering the redshift range of $z \lesssim 0.08$. The source catalogue, images and spectra of nearly 600 extragalactic H i detections and kinematic models for 109 spatially resolved galaxies are available. As the pilot survey targeted regions containing nearby group and cluster environments, the median redshift of the sample of $z \approx 0.014$ is relatively low compared to the full WALLABY survey. The median galaxy H i mass is $2.3 \times 10^{9}\,{\rm M}_{{\odot}}$. The target noise level of $1.6\,\mathrm{mJy}$ per 30′′ beam and $18.5\,\mathrm{kHz}$ channel translates into a $5 \sigma$ H i mass sensitivity for point sources of about $5.2 \times 10^{8} \, (D_{\rm L} / \mathrm{100\,Mpc})^{2} \, {\rm M}_{{\odot}}$ across 50 spectral channels ( ${\approx} 200\,\mathrm{km \, s}^{-1}$ ) and a $5 \sigma$ H i column density sensitivity of about $8.6 \times 10^{19} \, (1 + z)^{4}\,\mathrm{cm}^{-2}$ across 5 channels ( ${\approx} 20\,\mathrm{km \, s}^{-1}$ ) for emission filling the 30′′ beam. As expected for a pilot survey, several technical issues and artefacts are still affecting the data quality. Most notably, there are systematic flux errors of up to several tens of per cent caused by uncertainties about the exact size and shape of each of the primary beams as well as the presence of sidelobes due to the finite deconvolution threshold. In addition, artefacts such as residual continuum emission and bandpass ripples have affected some of the data.
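The quoted $5\sigma$ H i mass sensitivity can be sanity-checked from the stated noise level using the standard optically thin H i mass relation, $M_{\rm HI} = 2.356 \times 10^{5}\, D_{L}^{2}\, S_{\rm int}$ (with $D_L$ in Mpc and $S_{\rm int}$ in Jy km s$^{-1}$). The channel width in km/s and the quadrature scaling of the noise across independent channels are assumptions of this sketch, not values taken from the survey paper.

```python
import numpy as np

# Survey parameters quoted above
sigma = 1.6e-3        # rms noise per channel [Jy/beam]
dv = 3.9              # assumed: 18.5 kHz channel width in km/s at 1.42 GHz (z ~ 0)
nchan = 50            # channels spanned by the source (~200 km/s)
D_L = 100.0           # fiducial luminosity distance [Mpc]

# 5-sigma limiting integrated flux, assuming the noise adds in quadrature
# across independent channels
S_lim = 5 * sigma * dv * np.sqrt(nchan)            # [Jy km/s]

# Optically thin HI mass relation: M = 2.356e5 * D_L^2 * S_int [solar masses]
M_lim = 2.356e5 * D_L**2 * S_lim
print(f"{M_lim:.2e}")  # ~5.2e8 Msun, consistent with the quoted sensitivity
```

The agreement with the quoted $5.2 \times 10^{8}\,{\rm M}_{\odot}$ at 100 Mpc suggests the paper's figure follows essentially this arithmetic.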
The pilot survey has been highly successful in uncovering such technical problems, most of which are expected to be addressed and rectified before the start of the full WALLABY survey. The Evolutionary Map of the Universe Pilot Survey – ADDENDUM Ray P. Norris, Joshua Marvil, J. D. Collier, Anna D. Kapińska, Andrew N. O'Brien, L. Rudnick, Heinz Andernach, Jacobo Asorey, Michael J. I. Brown, Marcus Brüggen, Evan Crawford, Jayanne English, Syed Faisal ur Rahman, Miroslav D. Filipović, Yjan Gordon, Gülay Gürkan, Catherine Hale, Andrew M. Hopkins, Minh T. Huynh, Kim HyeongHan, M. James Jee, Bärbel S. Koribalski, Emil Lenc, Kieran Luken, David Parkinson, Isabella Prandoni, Wasim Raja, Thomas H. Reiprich, Christopher J. Riseley, Stanislav S. Shabala, Jaimie R. Sheil, Tessa Vernstrom, Matthew T. Whiting, James R. Allison, C. S. Anderson, Lewis Ball, Martin Bell, John Bunton, T. J. Galvin, Neeraj Gupta, Aidan Hotan, Colin Jacka, Peter J. Macgregor, Elizabeth K. Mahony, Umberto Maio, Vanessa Moss, M. Pandey-Pommier, Maxim A. Voronkov Green fodder cultivation improves technical efficiency of dairy farmers in semi-arid tropics of central India: a micro-analysis Bishwa Bhaskar Choudhary, Purushottam Sharma, Priyanka Singh, Sunil Kumar, Gaurendra Gupta, S. R. Kantwa, Deepak Upadhyay, Vinod Kumar Wasnik, Mahendra Prasad, R. K. Sharma Journal: Journal of Dairy Research / Volume 89 / Issue 4 / November 2022 Published online by Cambridge University Press: 01 December 2022, pp. 367-374 This study assessed the impact of improved green fodder production activities on technical efficiency (TE) of dairy farmers in climate vulnerable landscapes of central India. We estimated stochastic production frontiers, considering potential self-selection bias stemming from both observable and unobservable factors in adoption of fodder interventions at farm level. 
The empirical results show that TE for the treated group ranges from 0.55 to 0.59 and that for the control group ranges from 0.41 to 0.48, depending on how biases are controlled. Additionally, the efficiency levels of both adopters and non-adopters would be underestimated if the selectivity bias is not appropriately accounted for. As the average TE is consistently higher for adopter farmers than the control group, promoting improved fodder cultivation would increase input use efficiency, especially among resource-deprived smallholder dairy farmers in the semi-arid tropics. Asynchronous and synchronous quenching of a globally unstable jet via axisymmetry breaking. Abhijit K. Kushwaha, Nicholas A. Worth, James R. Dawson, Vikrant Gupta, Larry K.B. Li. Journal: Journal of Fluid Mechanics / Volume 937 / 25 April 2022. Published online by Cambridge University Press: 03 March 2022, A40. We explore experimentally whether axisymmetry breaking can be exploited for the open-loop control of a prototypical hydrodynamic oscillator, namely a low-density inertial jet exhibiting global self-excited axisymmetric oscillations. We find that when forced transversely or axially at a low amplitude, the jet always transitions first from a period-1 limit cycle to $\mathbb {T}^2$ quasiperiodicity via a Neimark–Sacker bifurcation. However, we find that the subsequent transition, from $\mathbb {T}^2$ quasiperiodicity to $1:1$ lock-in, depends on the spatial symmetry of the applied perturbations: axial forcing induces a saddle-node bifurcation at small detuning but an inverse Neimark–Sacker bifurcation at large detuning, whereas transverse forcing always induces an inverse Neimark–Sacker bifurcation irrespective of the detuning.
Crucially, we find that only transverse forcing can enable both asynchronous and synchronous quenching of the natural mode to occur without resonant or non-resonant amplification of the forced mode, resulting in substantially lower values of the overall response amplitude across all detuning values. From this, we conclude that breaking the jet axisymmetry via transverse forcing is a more effective control strategy than preserving the jet axisymmetry via axial forcing. Finally, we show that the observed synchronization phenomena can be modelled qualitatively with just two forced coupled Van der Pol oscillators. The success of such a simple low-dimensional model in capturing the complex synchronization dynamics of a multi-modal hydrodynamic system opens up new opportunities for axisymmetry breaking to be exploited for the open-loop control of other globally unstable flows. GASKAP-HI pilot survey science I: ASKAP zoom observations of Hi emission in the Small Magellanic Cloud N. M. Pingel, J. Dempsey, N. M. McClure-Griffiths, J. M. Dickey, K. E. Jameson, H. Arce, G. Anglada, J. Bland-Hawthorn, S. L. Breen, F. Buckland-Willis, S. E. Clark, J. R. Dawson, H. Dénes, E. M. Di Teodoro, B.-Q. For, Tyler J. Foster, J. F. Gómez, H. Imai, G. Joncas, C.-G. Kim, M.-Y. Lee, C. Lynn, D. Leahy, Y. K. Ma, A. Marchal, D. McConnell, M.-A. Miville-Deschènes, V. A. Moss, C. E. Murray, D. Nidever, J. Peek, S. Stanimirović, L. Staveley-Smith, T. Tepper-Garcia, C. D. Tremblay, L. Uscanga, J. Th. van Loon, E. Vázquez-Semadeni, J. R. Allison, C. S. Anderson, Lewis Ball, M. Bell, D. C.-J. Bock, J. Bunton, F. R. Cooray, T. Cornwell, B. S. Koribalski, N. Gupta, D. B. Hayman, L. Harvey-Smith, K. Lee-Waddell, A. Ng, C. J. Phillips, M. Voronkov, T. Westmeier, M. T. 
Whiting Published online by Cambridge University Press: 07 February 2022, e005 We present the most sensitive and detailed view of the neutral hydrogen ( ${\rm H\small I}$ ) emission associated with the Small Magellanic Cloud (SMC), through the combination of data from the Australian Square Kilometre Array Pathfinder (ASKAP) and Parkes (Murriyang), as part of the Galactic Australian Square Kilometre Array Pathfinder (GASKAP) pilot survey. These GASKAP-HI pilot observations, for the first time, reveal ${\rm H\small I}$ in the SMC on similar physical scales as other important tracers of the interstellar medium, such as molecular gas and dust. The resultant image cube possesses an rms noise level of 1.1 K ( $1.6\,\mathrm{mJy\ beam}^{-1}$ ) $\mathrm{per}\ 0.98\,\mathrm{km\ s}^{-1}$ spectral channel with an angular resolution of $30^{\prime\prime}$ ( ${\sim}10\,\mathrm{pc}$ ). We discuss the calibration scheme and the custom imaging pipeline that utilises a joint deconvolution approach, efficiently distributed across a computing cluster, to accurately recover the emission extending across the entire ${\sim}25\,\mathrm{deg}^2$ field-of-view. We provide an overview of the data products and characterise several aspects including the noise properties as a function of angular resolution and the represented spatial scales by deriving the global transfer function over the full spectral range. A preliminary spatial power spectrum analysis on individual spectral channels reveals that the power law nature of the density distribution extends down to scales of 10 pc. We highlight the scientific potential of these data by comparing the properties of an outflowing high-velocity cloud with previous ASKAP+Parkes ${\rm H\small I}$ test observations. Integration of Dental Health Professionals in Disaster Management - Commitment to Action - New Delhi Declaration - 2020 Vikrant R. Mohanty, Rajesh G. Rao, Anil K. Gupta, Vamsi K. 
Reddy, Kavita Rijhwani, Fatima Amin Journal: Prehospital and Disaster Medicine / Volume 37 / Issue 2 / April 2022 Recruitment, Readiness, and Retention of Providers at a Field Hospital During the Pandemic Ishaan Gupta, Zishan K. Siddiqui, Mark D. Phillips, Amteshwar Singh, Shaker M. Eid, Laura Wortman, Flora Kisuule, James R. Ficke, CONQUER COVID Consortium, Melinda E. Kantsiper Journal: Disaster Medicine and Public Health Preparedness / Volume 17 / 2023 Published online by Cambridge University Press: 10 January 2022, e102 In response to the coronavirus disease (COVID-19) pandemic, the State of Maryland established a 250-bed emergency response field hospital at the Baltimore Convention Center to support the existing health care infrastructure. To operationalize this hospital with 65 full-time equivalent clinicians in less than 4 weeks, more than 300 applications were reviewed, 186 candidates were interviewed, and 159 clinicians were credentialed and onboarded. The key steps to achieve this undertaking involved employing multidisciplinary teams with experienced personnel, mass outreach, streamlined candidate tracking, pre-interview screening, utilizing all available expertise, expedited credentialing, and focused onboarding. To ensure staff preparedness, the leadership developed innovative team models, applied principles of effective team building, and provided "just in time" training on COVID-19 and non-COVID-19-related topics to the staff. The leadership focused on staff safety and well-being, offered appropriate financial remuneration, and provided leadership opportunities that allowed retention of staff. 
Radiation necrosis of the bone, cartilage or cervical soft-tissues following definitive high-precision radio(chemo)therapy for head-neck cancer: an uncommon and under-reported phenomenon T Gupta, G Maheshwari, S Gudi, A Chatterjee, R Phurailatpam, K Prabhash, A Budrukkar, S Ghosh-Laskar, J P Agarwal Journal: The Journal of Laryngology & Otology / Volume 136 / Issue 5 / May 2022 Print publication: May 2022 The impact of modern high-precision conformal techniques on rare but highly morbid late complications of head and neck radiotherapy, such as necrosis of the bone, cartilage or soft-tissues, is not well described. Medical records of head and neck cancer patients treated in prospective clinical trials of definitive high-precision radiotherapy were reviewed retrospectively to identify patients with necrosis. Twelve of 290 patients (4.1 per cent) developed radiotherapy necrosis at a median interval of 4.5 months. There was no significant difference in baseline demographic (age, gender), disease (primary site, stage) and treatment characteristics (radiotherapy technique, total dose, fractionation) of patients developing radiotherapy necrosis versus those without necrosis. Initial management included antibiotics or anti-inflammatory agents, tissue debridement and tracheostomy as appropriate followed by hyperbaric oxygen therapy and resective surgery for persistent symptoms in selected patients. Multidisciplinary management is essential for the prevention, early diagnosis and successful treatment of radiotherapy necrosis of bone, cartilage or cervical soft tissues. The ASKAP Variables and Slow Transients (VAST) Pilot Survey Australian SKA Pathfinder Tara Murphy, David L. Kaplan, Adam J. Stewart, Andrew O'Brien, Emil Lenc, Sergio Pintaldi, Joshua Pritchard, Dougal Dobie, Archibald Fox, James K. Leung, Tao An, Martin E. Bell, Jess W. Broderick, Shami Chatterjee, Shi Dai, Daniele d'Antonio, Gerry Doyle, B. M. Gaensler, George Heald, Assaf Horesh, Megan L. 
Jones, David McConnell, Vanessa A. Moss, Wasim Raja, Gavin Ramsay, Stuart Ryder, Elaine M. Sadler, Gregory R. Sivakoff, Yuanming Wang, Ziteng Wang, Michael S. Wheatland, Matthew Whiting, James R. Allison, C. S. Anderson, Lewis Ball, K. Bannister, D. C.-J. Bock, R. Bolton, J. D. Bunton, R. Chekkala, A. P Chippendale, F. R. Cooray, N. Gupta, D. B. Hayman, K. Jeganathan, B. Koribalski, K. Lee-Waddell, Elizabeth K. Mahony, J. Marvil, N. M. McClure-Griffiths, P. Mirtschin, A. Ng, S. Pearce, C. Phillips, M. A. Voronkov The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each of which was observed for 12 min integration time, with between 5 and 13 repeats, with cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified. 
The Evolutionary Map of the Universe pilot survey Published online by Cambridge University Press: 07 September 2021, e046 We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ of an area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$ 11–18 arcsec, resulting in a catalogue of $\sim$ 220 000 sources, of which $\sim$ 180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here. Cross cultural and global uses of a digital mental health app: results of focus groups with clinicians, patients and family members in India and the United States Elena Rodriguez-Villa, Abhijit R. Rozatkar, Mohit Kumar, Vikram Patel, Ameya Bondre, Shalini S. Naik, Siddharth Dutt, Urvakhsh M. Mehta, Srilakshmi Nagendra, Deepak Tugnawat, Ritu Shrivastava, Harikeerthan Raghuram, Azaz Khan, John A. Naslund, Snehil Gupta, Anant Bhan, Jagadisha Thirthall, Prabhat K. 
Chand, Tanvi Lakhtakia, Matcheri Keshavan, John Torous Journal: Cambridge Prisms: Global Mental Health / Volume 8 / 2021 Published online by Cambridge University Press: 24 August 2021, e30 Despite significant advancements in healthcare technology, digital health solutions – especially those for serious mental illnesses – continue to fall short of their potential across both clinical practice and efficacy. The utility and impact of medicine, including digital medicine, hinges on relationships, trust, and engagement, particularly in the field of mental health. This paper details results from Phase 1 of a two-part study that seeks to engage people with schizophrenia, their family members, and clinicians in co-designing a digital mental health platform for use across different cultures and contexts in the United States and India. Each site interviewed a mix of clinicians, patients, and their family members in focus groups (n = 20) of two to six participants. Open-ended questions and discussions inquired about their own smartphone use and, after a demonstration of the mindLAMP platform, specific feedback on the app's utility, design, and functionality. Our results based on thematic analysis indicate three common themes: increased use and interest in technology during coronavirus disease 2019 (COVID-19), concerns over how data are used and shared, and a desire for concurrent human interaction to support app engagement. People with schizophrenia, their family members, and clinicians are open to integrating technology into treatment to better understand their condition and help inform treatment. However, app engagement is dependent on technology that is complementary – not substitutive – of therapeutic care from a clinician. The MAGPI survey: Science goals, design, observing strategy, early results and theoretical framework C. Foster, J. T. Mendel, C. D. P. Lagos, E. Wisnioski, T. Yuan, F. D'Eugenio, T. M. Barone, K. E. Harborne, S. P. Vaughan, F. Schulze, R.-S. Remus, A. 
Gupta, F. Collacchioni, D. J. Khim, P. Taylor, R. Bassett, S. M. Croom, R. M. McDermid, A. Poci, A. J. Battisti, J. Bland-Hawthorn, S. Bellstedt, M. Colless, L. J. M. Davies, C. Derkenne, S. Driver, A. Ferré-Mateu, D. B. Fisher, E. Gjergo, E. J. Johnston, A. Khalid, C. Kobayashi, S. Oh, Y. Peng, A. S. G. Robotham, P. Sharda, S. M. Sweet, E. N. Taylor, K.-V. H. Tran, J. W. Trayford, J. van de Sande, S. K. Yi, L. Zanisi Published online by Cambridge University Press: 26 July 2021, e031 We present an overview of the Middle Ages Galaxy Properties with Integral Field Spectroscopy (MAGPI) survey, a Large Program on the European Southern Observatory Very Large Telescope. MAGPI is designed to study the physical drivers of galaxy transformation at a lookback time of 3–4 Gyr, during which the dynamical, morphological, and chemical properties of galaxies are predicted to evolve significantly. The survey uses new medium-deep adaptive optics aided Multi-Unit Spectroscopic Explorer (MUSE) observations of fields selected from the Galaxy and Mass Assembly (GAMA) survey, providing a wealth of publicly available ancillary multi-wavelength data. With these data, MAGPI will map the kinematic and chemical properties of stars and ionised gas for a sample of 60 massive ( ${>}7 \times 10^{10} {\mathrm{M}}_\odot$ ) central galaxies at $0.25 < z <0.35$ in a representative range of environments (isolated, groups and clusters). The spatial resolution delivered by MUSE with Ground Layer Adaptive Optics ( $0.6-0.8$ arcsec FWHM) will facilitate a direct comparison with Integral Field Spectroscopy surveys of the nearby Universe, such as SAMI and MaNGA, and at higher redshifts using adaptive optics, for example, SINS. In addition to the primary (central) galaxy sample, MAGPI will deliver resolved and unresolved spectra for as many as 150 satellite galaxies at $0.25 < z <0.35$ , as well as hundreds of emission-line sources at $z < 6$ . 
This paper outlines the science goals, survey design, and observing strategy of MAGPI. We also present a first look at the MAGPI data, and the theoretical framework to which MAGPI data will be compared using the current generation of cosmological hydrodynamical simulations including EAGLE, Magneticum, HORIZON-AGN, and Illustris-TNG. Our results show that cosmological hydrodynamical simulations make discrepant predictions in the spatially resolved properties of galaxies at $z\approx 0.3$ . MAGPI observations will place new constraints and allow for tangible improvements in galaxy formation theory. Mastoid cavity obliteration using bone pâté versus bioactive glass granules in the management of chronic otitis media (squamous disease): a prospective comparative study A K Mishra, A Mallick, J R Galagali, A Gupta, A Sethi, A Ghotra Journal: The Journal of Laryngology & Otology / Volume 135 / Issue 6 / June 2021 Print publication: June 2021 To compare the efficacy of bone pâté versus bioactive glass in mastoid obliteration. This randomised parallel groups study was conducted at a tertiary care centre between September 2017 and August 2019. Sixty-eight patients, 33 males and 35 females, aged 12–56 years, randomly underwent single-stage canal wall down mastoidectomy with mastoid obliteration using either bone pâté (n = 35) or bioactive glass (n = 33), and were evaluated 12 months after the operation. A dry epithelised cavity (Merchant's grade 0 or 1) was achieved in 65 patients (95.59 per cent). Three patients (4.41 per cent) showed recidivism. The mean air–bone gap decreased to 16.80 ± 4.23 dB from 35.10 ± 5.21 dB pre-operatively. The mean Glasgow Benefit Inventory score was 30.02 ± 8.23. There was no significant difference between the two groups in these outcomes. However, the duration of surgery was shorter in the bioactive glass group (156.87 ± 7.83 vs 162.28 ± 8.74 minutes; p = 0.01). The efficacy of both materials was comparable. 
Clinical Study of 668 Indian Subjects with Juvenile, Young, and Early Onset Parkinson's Disease Prashanth L. Kukkle, Vinay Goyal, Thenral S. Geetha, Kandadai R. Mridula, Hrishikesh Kumar, Rupam Borgohain, Adreesh Mukherjee, Pettarusp M. Wadia, Ravi Yadav, Soaham Desai, Niraj Kumar, Ravi Gupta, Atanu Biswas, Pramod K. Pal, Uday Muthane, Shymal K. Das, Niall Quinn, Vedam L. Ramprasad, the Parkinson Research Alliance of India (PRAI) Journal: Canadian Journal of Neurological Sciences / Volume 49 / Issue 1 / January 2022 Published online by Cambridge University Press: 09 March 2021, pp. 93-101 To determine the demographic pattern of juvenile-onset parkinsonism (JP, <20 years), young-onset (YOPD, 20–40 years), and early onset (EOPD, 40–50 years) Parkinson's disease (PD) in India. We conducted a 2-year, pan-India, multicenter collaborative study to analyze clinical patterns of JP, YOPD, and EOPD. All patients under follow-up of movement disorders specialists and meeting United Kingdom (UK) Brain Bank criteria for PD were included. A total of 668 subjects (M:F 455:213) were recruited with a mean age at onset of 38.7 ± 8.1 years. The mean duration of symptoms at the time of study was 8 ± 6 years. Fifteen percent had a family history of PD and 13% had consanguinity. JP had the highest consanguinity rate (53%). YOPD and JP cases had a higher prevalence of consanguinity, dystonia, and gait and balance issues compared to those with EOPD. In relation to nonmotor symptoms, panic attacks and depression were more common in YOPD and sleep-related issues more common in EOPD subjects. Overall, dyskinesias were documented in 32.8%. YOPD subjects had a higher frequency of dyskinesia than EOPD subjects (39.9% vs. 25.5%), but they were first noted later in the disease course (5.7 vs. 4.4 years). This large cohort shows differing clinical patterns in JP, YOPD, and EOPD cases. We propose that cutoffs of <20, <40, and <50 years should preferably be used to define JP, YOPD, and EOPD. 
Australian square kilometre array pathfinder: I. system description A. W. Hotan, J. D. Bunton, A. P. Chippendale, M. Whiting, J. Tuthill, V. A. Moss, D. McConnell, S. W. Amy, M. T. Huynh, J. R. Allison, C. S. Anderson, K. W. Bannister, E. Bastholm, R. Beresford, D. C.-J. Bock, R. Bolton, J. M. Chapman, K. Chow, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, I. J. Feain, T. M. O. Franzen, D. George, N. Gupta, G. A. Hampson, L. Harvey-Smith, D. B. Hayman, I. Heywood, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, S. Johnston, M. Kesteven, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, E. Lenc, E. S. Lensson, S. Mackay, E. K. Mahony, N. M. McClure-Griffiths, R. McConigley, P. Mirtschin, A. K. Ng, R. P. Norris, S. E. Pearce, C. Phillips, M. A. Pilawa, W. Raja, J. E. Reynolds, P. Roberts, D. N. Roxby, E. M. Sadler, M. Shields, A. E. T. Schinckel, P. Serra, R. D. Shaw, T. Sweetnam, E. R. Troup, A. Tzioumis, M. A. Voronkov, T. Westmeier Published online by Cambridge University Press: 05 March 2021, e009 In this paper, we describe the system design and capabilities of the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope at the conclusion of its construction project and commencement of science operations. ASKAP is one of the first radio telescopes to deploy phased array feed (PAF) technology on a large scale, giving it an instantaneous field of view that covers $31\,\textrm{deg}^{2}$ at $800\,\textrm{MHz}$. As a two-dimensional array of 36 $\times$12 m antennas, with baselines ranging from 22 m to 6 km, ASKAP also has excellent snapshot imaging capability and 10 arcsec resolution. This, combined with 288 MHz of instantaneous bandwidth and a unique third axis of rotation on each antenna, gives ASKAP the capability to create high dynamic range images of large sky areas very quickly. 
It is an excellent telescope for surveys between 700 and $1800\,\textrm{MHz}$ and is expected to facilitate great advances in our understanding of galaxy formation, cosmology, and radio transients while opening new parameter space for discovery of the unknown. The Rapid ASKAP Continuum Survey I: Design and first results D. McConnell, C. L. Hale, E. Lenc, J. K. Banfield, George Heald, A. W. Hotan, James K. Leung, Vanessa A. Moss, Tara Murphy, Andrew O'Brien, Joshua Pritchard, Wasim Raja, Elaine M. Sadler, Adam Stewart, Alec J. M. Thomson, M. Whiting, James R. Allison, S. W. Amy, C. Anderson, Lewis Ball, Keith W. Bannister, Martin Bell, Douglas C.-J. Bock, Russ Bolton, J. D. Bunton, A. P. Chippendale, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, N. Gupta, Douglas B. Hayman, Ian Heywood, C. A. Jackson, Bärbel S. Koribalski, Karen Lee-Waddell, N. M. McClure-Griffiths, Alan Ng, Ray P. Norris, Chris Phillips, John E. Reynolds, Daniel N. Roxby, Antony E. T. Schinckel, Matt Shields, Chenoa Tremblay, A. Tzioumis, M. A. Voronkov, Tobias Westmeier The Rapid ASKAP Continuum Survey (RACS) is the first large-area survey to be conducted with the full 36-antenna Australian Square Kilometre Array Pathfinder (ASKAP) telescope. RACS will provide a shallow model of the ASKAP sky that will aid the calibration of future deep ASKAP surveys. RACS will cover the whole sky visible from the ASKAP site in Western Australia and will cover the full ASKAP band of 700–1800 MHz. The RACS images are generally deeper than the existing NRAO VLA Sky Survey and Sydney University Molonglo Sky Survey radio surveys and have better spatial resolution. All RACS survey products will be public, including radio images (with $\sim$ 15 arcsec resolution) and catalogues of about three million source components with spectral index and polarisation information. 
In this paper, we present a description of the RACS survey and the first data release of 903 images covering the sky south of declination $+41^\circ$ made over a 288-MHz band centred at 887.5 MHz. Site-specific fertilizer nitrogen management in Bt cotton using chlorophyll meter Arun Shankar, R. K. Gupta, Bijay-Singh Journal: Experimental Agriculture / Volume 56 / Issue 3 / June 2020 Field experiments were conducted to standardize protocols for site-specific fertilizer nitrogen (N) management in Bt cotton using a Soil Plant Analysis Development (SPAD) chlorophyll meter. The performance of different SPAD-based site-specific N management scenarios was evaluated vis-à-vis the blanket fertilizer N recommendation. The N treatments comprised a no-N control, four fixed-time and fixed N doses (60, 90, 120, and 150 kg N ha^-1) including the recommended dose (150 kg ha^-1), and eight fixed-time and adjustable N doses based on critical SPAD readings of 45 and 41 at the first flowering and boll formation stages, respectively. The results revealed that applying 45 or 60 kg N ha^-1 at the thinning stage of the crop and a critical SPAD value-guided dose of 45 or 30 kg N ha^-1 at the first flowering stage resulted in yields similar to those recorded by applying the recommended dose of 150 kg N ha^-1. Moreover, significantly higher N use efficiency as well as 30–40% less total fertilizer N use was recorded with site-specific N management. Applying 30 kg N ha^-1 at thinning and a SPAD meter-guided 45 kg N ha^-1 at first flowering was not enough, and an additional SPAD meter-guided 45 kg N ha^-1 at boll formation was required to sustain yield levels equivalent to those observed under the blanket recommendation, but this still resulted in 20% less fertilizer N application. Our data revealed that SPAD meter-based site-specific N management in Bt cotton results in optimum yield with dynamic adjustment of fertilizer N doses at the first flowering and boll formation stages. 
The total amount of N fertilizer applied following the site-specific management strategies was substantially less than the blanket recommendation of 150 kg N ha^-1, but the extent may vary in different fields. Experiential learnings from the Nipah virus outbreaks in Kerala towards containment of infectious public health emergencies in India Rima R. Sahay, Pragya D. Yadav, Nivedita Gupta, Anita M. Shete, Chandni Radhakrishnan, Ganesh Mohan, Nikhilesh Menon, Tarun Bhatnagar, Krishnasastry Suma, Abhijeet V. Kadam, P. T. Ullas, B. Anu Kumar, A. P. Sugunan, V. K. Sreekala, Rajan Khobragade, Raman R. Gangakhedkar, Devendra T. Mourya Journal: Epidemiology & Infection / Volume 148 / 2020 A Nipah virus (NiV) outbreak occurred in Kozhikode district, Kerala, India in 2018, with a case fatality rate of 91% (21/23). In 2019, a single case with full recovery occurred in Ernakulam district. We described the response and control measures by the Indian Council of Medical Research and the Kerala State Government for the 2019 NiV outbreak. The establishment of point-of-care assays and a monoclonal antibody administration facility for early diagnosis, response and treatment, intensified contact tracing activities, and bio-risk management and hospital infection control training of healthcare workers contributed to the effective control and containment of the NiV outbreak in Ernakulam. Effect of dopant on the morphology and electrochemical performance of Ni1-xCaxCo2O4 (0 ≤ x ≤ 0.8) oxide hierarchical structures D. Guragain, C. Zequine, R. Bhattarai, J. Choi, R. K. Gupta, X. Shen, S. R. Mishra Journal: MRS Advances / Volume 5 / Issue 48-49 / 2020 Binary metal oxides are increasingly used as supercapacitor electrode materials in energy-storage devices. In particular, NiCo2O4 has shown promising electrocapacitive performance with high specific capacitance and energy density. 
The electrocapacitive performance of these oxides largely depends on their morphology and electrical properties governed by their energy band-gaps and defects. The morphological structure of NiCo2O4 can be altered via the synthesis route, while the energy band-gap could be altered by doping. Also, doping can enhance crystal stability and bring in grain refinement, which can further improve the much-needed surface area for high specific capacitance. Given the above, this study evaluates the electrochemical performance of Ca-doped Ni1-xCaxCo2O4 (0 ≤ x ≤ 0.8) compounds. This stipulates promising applications for electrodes in future supercapacitors. Tranexamic acid has no advantage in head and neck surgical procedures: a randomised, double-blind, controlled clinical trial A Thakur, S Gupta, J S Thakur, R S Minhas, R K Azad, M S Vasanthalakshmi, D R Sharma, N K Mohindroo Journal: The Journal of Laryngology & Otology / Volume 133 / Issue 12 / December 2019 Print publication: December 2019 To assess the effect of tranexamic acid in head and neck surgical procedures. A prospective, double-blind and randomised, parallel group, placebo-controlled clinical trial was conducted. Ninety-two patients undergoing various head and neck surgical procedures were randomised. Subjects received seven infusions of coded drugs (tranexamic acid or normal saline) starting at the time of skin closure. Haematological, biochemical, blood loss and other parameters were observed by the staff, who were blinded to patients' group allocation (case or control). Patients were analysed on the basis of type of surgery. Fifty patients who had undergone surgical procedures, including total thyroidectomy, total parotidectomy, and various neck dissections with or without primary tumour excision, were included in the first group. The second group comprised 41 patients who had undergone hemithyroidectomy, lobectomy or superficial parotidectomy. 
There was no statistical difference in blood parameters between the two groups. There was a reduction in post-operative drain volume, but this was not significant. Although this prospective, randomised, placebo-controlled clinical trial found a reduction in post-operative drain volume in the tranexamic acid groups, the difference was not statistically significant between the various head and neck surgical procedure groups.
A microfluidic-based analysis of 3D macrophage migration after stimulation by Mycobacterium, Salmonella and Escherichia Sandra Pérez-Rodríguez1,2,3, Carlos Borau1,2, José Manuel García-Aznar1,2 & Jesús Gonzalo-Asensio3,4,5 Macrophages play an essential role in the recognition and containment of microbial infections. These immune cells are recruited to infectious sites, where they reach and phagocytose pathogens. Specifically, in this article, bacteria from the genera Mycobacterium, Salmonella and Escherichia were selected to study directional macrophage movement towards different bacterial fractions. We recreated a three-dimensional environment in a microfluidic device, using a collagen-based hydrogel that simulates the mechanical microarchitecture associated with the extracellular matrix (ECM). First, we showed that macrophage migration is affected by the collagen concentration of the environment, with cells migrating greater distances at higher velocities as the collagen concentration decreases. To recreate the infectious microenvironment, macrophages were exposed to lateral gradients of bacterial fractions obtained from the intracellular pathogens M. tuberculosis and S. typhimurium. Our results showed that macrophages migrated directionally, and in a concentration-dependent manner, towards the sites where bacterial fractions were located, suggesting the presence of attractant molecules in all the samples. We confirmed that purified M. tuberculosis antigens, such as ESAT-6 and CFP-10, stimulated macrophage recruitment in our device. Finally, we also observed that macrophages migrate towards fractions from non-pathogenic bacteria, such as M. smegmatis and Escherichia coli. In conclusion, our microfluidic device is a useful tool which opens new perspectives for the study of the recognition of specific antigens by innate immune cells. 
The latest World Health Organization (WHO) report indicates that lower respiratory infections and diarrheal diseases remain among the top 10 causes of death globally, and people living in low-income countries are far more likely to die from these diseases [1]. Both diseases are frequently caused by microorganisms. Among respiratory infections, tuberculosis, which is mainly caused by Mycobacterium tuberculosis, remains the deadliest bacterial disease, accounting for more than 1.5 million deaths in 2020, with a higher prevalence in low-income countries. Tuberculosis is aggravated by the alarming emergence of drug-resistant strains and by co-infection with Human Immunodeficiency Virus [2]. On the other hand, more than 420,000 people die annually from foodborne pathogens, with bacteria such as Salmonella typhimurium or Escherichia coli being the most typical causative agents. These pathogens contaminate spoiled food which, when ingested, leads to the development of diarrheal diseases. Such infections account for more than half of the global deaths from foodborne pathogens, with 550 million people affected and 230,000 deaths [3]. M. tuberculosis enters the body via the respiratory tract, where it is engulfed by alveolar macrophages. Once phagocytized, M. tuberculosis is able to survive inside phagocytic cells. The most characteristic feature of this bacterium is its specialized secretion system, termed the type seven secretion system (T7SS), or ESX, which releases different proteins [4]. Of the five ESX systems encoded in its genome, the best characterized is ESX-1, which releases two immunogenic proteins of M. tuberculosis, ESAT-6 and CFP-10. Both proteins form a dimer which enables phagosomal rupture and the subsequent release of M. tuberculosis into the macrophage cytosol [5]. This cytosolic access facilitates the spread of M. tuberculosis to neighboring cells, and the infection progresses, resulting in pulmonary dysfunction [6]. E. coli and S. 
typhimurium are gram-negative bacteria belonging to the family Enterobacteriaceae. E. coli usually coexists as a commensal in the gut, but some bacterial strains that have acquired virulent characteristics become pathogenic [7]. These acquired virulence factors are similar to those presented by pathogenic Salmonella species and include the secretion of toxins capable of killing eukaryotic cells, adhesion systems based on fimbriae or pili that allow bacteria to interact with host cells, and/or the incorporation of flagella that enable bacterial motility [8]. Both bacteria also share the type III secretion system (T3SS), encoded in pathogenicity islands (PAIs), although each possesses its own machinery [9]. E. coli uses the T3SS to translocate effector proteins into host cells and control their signaling pathways, altering them to create a microenvironment in which to thrive. However, S. typhimurium has developed more sophisticated T3SS effectors to invade and replicate within intestinal epithelial cells and to survive within macrophage phagolysosomes. As a result of these virulence determinants, pathogenic Salmonella and Escherichia cause cell death and the release of pro-inflammatory molecules that ultimately produce clinical diarrhea, among other diseases [10]. In both the lungs and the intestine, innate immune cells such as macrophages are responsible for containing bacterial infections. Macrophages possess a series of receptors that allow them to sense their environment and recognize the presence of pathogens. These receptors interact with the pathogens themselves or with their secreted products [11], and activate a series of signaling cascades that generate changes in the cytoskeleton, so that the macrophage migrates towards the invading microorganism and is able to phagocytize it [12]. 
Therefore, since the outcome of the infection is strongly determined by the interplay between macrophages and bacteria, in this article we have focused on the study of macrophage migration towards different bacterial stimuli. For this purpose, we have used 3D cultures in microfluidic devices. This technique allows working with microscale fluids [13] and is being increasingly used in the field of microbiology, because it offers numerous advantages such as better control of study conditions, biocompatibility of materials, and cost and time savings. However, the most remarkable characteristic of this technology is the great versatility in design that microfluidic devices offer. In addition, since most cellular infection studies to date have been performed using 2D cell cultures in monolayers, they fail to recreate the 3D positioning of cells in a physiological environment, and they only yield information about cellular movements in the x and y planes. Thus, using a microfluidic device in which cells are embedded within a hydrogel allows cells to freely move and/or grow in the x, y and z planes, providing full spatial dimensionality to the study [14]. Previous studies have examined macrophage–bacteria interactions using microfluidic devices. Some used microfluidic-based sorting devices to analyze cell populations of macrophages infected with Francisella tularensis or stimulated with E. coli LPS [15, 16]. However, they did not focus on the direct interaction between macrophages and bacteria. Other devices are designed to trap and isolate macrophages to study single-cell interactions with E. coli [17, 18], missing the information provided by intercellular communication. Studies in which macrophages and bacteria are co-cultured to take this cross-talk into account are routinely performed in liquid medium [19, 20], a physiologically unrealistic environment. 
Finally, the aforementioned studies have been predominantly performed with non-pathogenic bacteria. To our knowledge, few studies have addressed the interactions between macrophages and pathogens in microfluidic devices. Han et al. used a droplet-based microfluidic device coupled to electrodes to separate and isolate mixtures of macrophages and S. typhimurium, without analyzing the interaction between the two cell types. On the other hand, Gopalakrishnan et al. used a two-reservoir device connected by a network of microchannels to study the individual movements of macrophages from one reservoir to the other containing bacteria from the Mycobacterium avium complex [21]. In this manuscript, we propose an alternative application of a previously described three-channel microfluidic design [22, 23], where macrophages are embedded in a collagen gel in the central channel and stimuli are applied from one side channel. This design presents a number of additional features, not used before in the study of macrophage–bacteria interactions, that overcome the limitations of previous studies. First, the use of a collagen hydrogel offers three-dimensionality and mimics the extracellular matrix through which these innate cells must migrate to interact with pathogens. Second, by seeding a range of 100–500 cells per chip, we ensure that intercellular communication occurs. Third, the side channels allow the generation of gradients by adding different conditions to each channel [24]. Fourth, the use of PDMS as the main material of the chip allows real-time visualization of the events taking place. Overall, our design allowed us to quantify the directional migration of macrophages to assess whether they were attracted to different fractions from representative bacteria under realistic microenvironment conditions. We used THP-1 monocytes from the American Type Culture Collection (ATCC), derived from the peripheral blood of a one-year-old human infant. 
THP-1 cells were cultured in suspension in RPMI-1640 supplemented with L-glutamine, 2% FBS and ampicillin/streptomycin. For monocyte differentiation into macrophages, THP-1 cells were resuspended at 6.5 × 10^5 cells/ml in RPMI medium with 100 ng/ml phorbol 12-myristate-13-acetate (PMA) for 24 h. Macrophages became adherent, facilitating their distinction from undifferentiated monocytes. Bacteria culture and fractions preparation We used the following bacterial strains: M. tuberculosis H37Rv [25], M. smegmatis mc2155 [26], Salmonella enterica serovar typhimurium SV5015 [27] and E. coli DH5α. M. tuberculosis and M. smegmatis were grown in Middlebrook 7H9 broth supplemented with 10% (vol/vol) ADC (0.5% bovine serum albumin, 0.2% dextrose, 0.085% NaCl and 0.0003% beef catalase) and 0.05% (vol/vol) Tween-80. E. coli and S. typhimurium were grown in Luria–Bertani (LB) broth. Four bacterial fractions were prepared for the migration assays: secreted protein, bacteria inactivated by paraformaldehyde (PFA) treatment, bacterial lysate, and bacteria inactivated by heat. For their preparation, bacteria were grown to a concentration of 10^9 cfu/ml and then separated into four tubes used for the different treatments. To obtain the secreted protein, a trichloroacetic acid (TCA) precipitation was performed. Briefly, the sample was centrifuged and the bacterial pellet was discarded. The supernatant was incubated with 10% TCA for 1 h on ice and centrifuged. The pellet containing secreted proteins was washed in acetone and resuspended in distilled water. To inactivate bacteria by PFA treatment, the bacterial pellet was washed twice with PBS to remove extracellular components, resuspended in 4% PFA, and incubated for 1 h at room temperature. To obtain bacterial lysates, bacterial pellets were washed twice with PBS and then resuspended in 1 ml of cold PBS. 
Bacterial suspensions were disrupted by sonication using a BioRuptor (Diagenode) for 15 min (30 s pulses at high power), allowing cooling in an ice-water bath for 30 s between pulses. The samples were centrifuged at 4,000 × g for 10 min at 4 °C, and the supernatant containing whole-cell bacterial extracts was filtered through a 0.22 μm-pore-size low protein-binding filter (Pall). To inactivate bacteria by heat treatment, bacterial pellets were washed twice with PBS and then incubated at 100 °C for 10 min. Manufacturing of microfluidic device Microfluidic devices were fabricated following the protocol published by Shin et al. in 2012 [28]. A polydimethylsiloxane (PDMS) mixture made of the base and curing agent of Sylgard 184 silicone elastomer in a 10:1 ratio was poured onto the printed wafer and incubated for 24 h at 80 °C. The PDMS molds were then cut individually, perforated and autoclaved. Finally, both the PDMS devices and 35 mm diameter glass-bottom plates were treated with plasma to achieve a good seal between them. To increase the adherence of the PDMS to the collagen hydrogel introduced subsequently, the devices were filled with 1 mg/ml poly-D-lysine (PDL) for 4 h at 37 °C and then washed with sterile water. Migration assays A suspension of THP-1 cells, at a final concentration of 3 × 10^5 cells/ml, was embedded in collagen hydrogels. These hydrogels were made with a predefined concentration of type I collagen, to obtain the desired matrix stiffness, and an optimal NaOH/H2O balance to achieve a pH of 7. By working with the most abundant protein of the extracellular matrix at physiological pH [29], it was possible to realistically recreate the extracellular environment in which macrophages migrate. This hydrogel was introduced into the central channel of the microfluidic device (Fig. 1) and polymerized at 37 °C for at least 20 min in a humid box. 
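The hydrogel composition follows simple dilution arithmetic (C1·V1 = C2·V2). As an illustrative sketch only — the stock concentration, volumes and helper function below are hypothetical, not taken from the published protocol — the collagen and diluent volumes for a target gel concentration can be computed as:

```python
def collagen_gel_volumes(stock_mg_ml, target_mg_ml, total_ul, cells_ul):
    """Dilution arithmetic (C1*V1 = C2*V2) for a collagen hydrogel mix.
    All numbers used here are illustrative, not the published recipe.
    Returns (collagen_ul, buffer_ul); buffer_ul lumps together the medium
    and the NaOH/H2O used for neutralisation to pH 7."""
    collagen_ul = target_mg_ml * total_ul / stock_mg_ml
    buffer_ul = total_ul - collagen_ul - cells_ul
    if buffer_ul < 0:
        raise ValueError("stock too dilute for the requested gel concentration")
    return collagen_ul, buffer_ul

# e.g. a 4 mg/ml stock diluted to 2.5 mg/ml in 100 ul with 10 ul of cell suspension
print(collagen_gel_volumes(4.0, 2.5, 100.0, 10.0))  # (62.5, 27.5)
```

A more concentrated stock simply shifts volume from the collagen fraction to the diluent, which is why stocks well above the target concentration are convenient in practice.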
Devices were turned over every 5 min to avoid adhesion of the cells to the PDMS or glass surfaces and to prevent them from settling into a two-dimensional plane. Subsequently, the reservoirs and side channels were filled with RPMI culture medium and the devices were incubated for 24 h at 37 °C to allow the macrophages to adapt to their three-dimensional environment before the migration assay. A Dimensioned top view of the microfluidic device, consisting of a central channel (yellow) and two side channels (gray), each with loading ports at the ends of the channel. Dimensions are given in millimeters. B Schematic drawing of the migration experiment design. Macrophages (pink) are embedded in a collagen gel in the central channel, while a gradient of bacterial fractions (green) is generated in the upper side channel. The next day, the medium in the reservoirs was removed and the reservoirs were refilled differentially. The upper reservoir and channel received a 10⁻⁴ dilution of each bacterial fraction in RPMI medium, while the lower channel was filled with RPMI medium alone. In this way, a gradient was generated in which the stimulus came from the upper part of the device. The devices were imaged by phase contrast microscopy (Nikon D-Eclipse C1 Confocal Microscope, Nikon Instruments, Tokyo, Japan) in an incubation chamber maintaining optimal culture conditions, taking pictures every 20 min for 24 h with a 10× objective. Between 5 and 9 devices were analyzed for each condition, and about 30–60 cells were tracked per device, for a total of 150–350 cells analyzed per condition (Table S1). These numbers reflect the exploratory nature of the present study, in which the priority was to expand the number of bacterial species and conditions in order to characterize the behavior of macrophages under 21 different situations.
Cell tracking and analysis
The focal plane located in the middle of the device along the z-axis was selected, and out-of-focus cells were not quantified, minimizing artifacts from the glass and PDMS surfaces and ensuring that the tracked cells were fully in a 3D environment. Cells were recorded for a total of 24 h, with images every 20 min, yielding 72 images per chip. Individual cell migration was then analyzed with a custom cell-tracking algorithm developed in Matlab (Mathworks, Natick, CA, USA) [23] and used in previous works [24, 30]. This software uses averaged pixel intensity information in search windows around each individual cell, allowing centroid tracking at sub-pixel resolution; it also allows visual correction by the user and post-processing of the migration results. In particular, trajectories were used to extract the mean velocity (Vmean, the average instantaneous speed over all time steps) and the effective velocity (Veff, which takes into account only the initial and final positions). The MSD curve of each trajectory was also obtained and used to determine the global diffusion coefficient (D) [31, 32], a measure of macrophage motility and migration persistence, from a linear-weighted fit of the mean MSD curve (using the first quarter of the data). Additionally, individual MSD curves were fitted to a power law (MSD(t) = γ·t^α) to determine the type of motion (α < 1 for confined motion, α = 1 for Brownian or purely diffusive motion, and α > 1 for directed motion). Analysis of variance (ANOVA) followed by post hoc Tukey–Kramer tests was performed to determine statistical significance of the aforementioned parameters among the different conditions.
Titration assays
Titration assays were performed under the same protocol as the migration assays: macrophages at a concentration of 3 × 10⁵ cells/ml were embedded in 2.5 mg/ml collagen gels.
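The trajectory metrics described above (Vmean, Veff, the MSD curve and the power-law exponent α) can be sketched as follows. This is a minimal illustration assuming trajectories are stored as N×2 arrays of (x, y) positions in µm sampled every 20 min; it is not the authors' Matlab tracker, and all names are illustrative:

```python
import numpy as np

DT = 20.0  # minutes between frames, as in the experiment

def velocities(traj):
    """Mean and effective velocity of one trajectory (N x 2 array, in µm)."""
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    t_total = DT * (len(traj) - 1)
    v_mean = steps.sum() / t_total                         # total path / time
    v_eff = np.linalg.norm(traj[-1] - traj[0]) / t_total   # net displacement / time
    return v_mean, v_eff

def msd(traj):
    """MSD at lag k*DT, averaged over all start points, for k = 1..N-1."""
    return np.array([np.mean(np.sum((traj[k:] - traj[:-k]) ** 2, axis=1))
                     for k in range(1, len(traj))])

def power_law_alpha(msd_curve):
    """Fit MSD(t) = gamma * t**alpha by least squares in log-log space."""
    lags = DT * np.arange(1, len(msd_curve) + 1)
    alpha, _log_gamma = np.polyfit(np.log(lags), np.log(msd_curve), 1)
    return alpha

# Synthetic check: a purely diffusive 2D random walk should give alpha near 1,
# while directed motion gives alpha near 2 and confined motion alpha below 1.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(scale=1.0, size=(72, 2)), axis=0)
print(power_law_alpha(msd(walk)))
```

A straight-line (ballistic) trajectory is a useful sanity check: Vmean equals Veff and the fitted α is exactly 2.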
Macrophage migration was documented for 24 h, taking pictures every 20 min with a phase contrast microscope and a 10× objective. In each assay the stimulus studied was the same but at different concentrations. The secreted protein fractions from M. tuberculosis and S. typhimurium were analyzed at serial dilutions (1:10) of the original fraction in the range of 10⁻² to 10⁻⁶. Purified ESAT-6 and CFP-10 proteins were kindly provided by Lionex GmbH (catalog numbers LRP-0017.3 and LRP-0016.2, respectively). These recombinant proteins were also analyzed in titration assays at concentrations ranging from 10 µg/ml to 1 ng/ml. M. tuberculosis H37Rv cultures were grown in 7H9 (Difco) with 0.05% Tween 80, supplemented with 0.2% dextrose and 0.085% NaCl, in order to avoid albumin contamination of the secreted protein fraction. After 2–3 weeks of incubation at 37 °C, cultures were pelleted by centrifugation. The supernatant containing secreted proteins was incubated with 10% trichloroacetic acid (TCA) for 1 h on ice and then centrifuged at 4 °C for 30 min. Pelleted proteins were rinsed with cold acetone and resuspended in 150 mM Tris-HCl pH 8. Protein integrity and the absence of albumin contamination were checked by SDS-PAGE and Coomassie staining. Proteins were separated on 12–15% SDS-PAGE gels and transferred onto PVDF membranes using a semidry electrophoresis transfer apparatus (Bio-Rad). Membranes were incubated in TBS-T blocking buffer (25 mM Tris pH 7.5, 150 mM NaCl, 0.05% Tween 20) with 5% w/v skimmed milk powder for 30 min prior to overnight incubation with primary antibodies at the dilutions indicated below. Membranes were washed three times in TBS-T and then incubated with secondary antibodies for 1 h before washing.
Antibodies were used at the following dilutions: mouse monoclonal anti-ESAT-6 antibody at 1:1,000 (Abcam ab26246, clone HYB 076-08) and rabbit polyclonal anti-CFP10 antiserum at 1:2,000 (Thermo Scientific ref. PA1-19445). The corresponding horseradish peroxidase (HRP)-conjugated IgG secondary antibodies (Sigma-Aldrich) were used at a 1:20,000 dilution. Signals were detected using chemiluminescent substrates (GE Healthcare).
Macrophage migration and velocity are inversely proportional to collagen concentration in the microfluidic device
Macrophages are motile cells that migrate through the extracellular matrix (ECM) to sites of infection and inflammation. Since previous literature has demonstrated differences between cell migration in two-dimensional and three-dimensional environments [33], in this study we recreated the extracellular matrix by confining a collagen I hydrogel in which macrophages were embedded three-dimensionally. In addition, we analyzed macrophage migration in gels of different collagen concentrations: 2.5, 4 and 6 mg/mL. These concentration changes imply structural changes in the hydrogel, such as smaller pore size and porosity, higher storage shear modulus, and decreasing hydraulic permeability with increasing collagen concentration [34]. As the collagen concentration of the hydrogels increases, macrophages migrate shorter total distances (Fig. 2A). In 2.5 mg/mL gels, some macrophages are able to cover total distances of over 100 μm from their point of origin, whereas at 4 mg/mL macrophages have more difficulty moving forward and their maximum migration distances are around 50 μm. At the highest collagen concentration (6 mg/mL), macrophages do not reach distances greater than 25 μm. In all conditions, macrophages migrate radially without any directional bias, consistent with the absence of an external stimulus (Fig. 2B).
Quantitatively, no differences in vertical migration were detected among the different collagen concentrations (Figure S1). Macrophage migration in collagen gels of different concentrations: 2.5 mg/mL (blue), 4 mg/mL (orange) and 6 mg/mL (yellow). A Relative trajectories of macrophages, each line being the individual trajectory of a cell. B Directionality of migration, where the length of each radius is the number of cells that migrated in that direction, normalized by the total number of cells per condition. C Mean squared displacement (MSD) of the tracked trajectories, where D is the diffusion coefficient and α is the power-law fitting coefficient (MSD(t) = γ·t^α). D Mean and effective velocity, where mean velocity is the total distance migrated divided by the time spent, and effective velocity is the distance between the starting and final points of the cell divided by the time spent. Asterisks indicate significant differences as follows: **p < 0.01, ***p < 0.005. n (devices) = 9 (2.5 mg/mL gels), 3 (4 mg/mL gels) and 3 (6 mg/mL gels), with an average of 25 cells per device. Analysis of the mean squared displacement (MSD) of the macrophage trajectories provides information on the persistence of migration, reflected in the diffusion coefficient (D), and on the type of movement the cells exhibit in this environment, reflected in the power-law fitting coefficient (α) (Fig. 2C). Increasing the collagen concentration of the gels decreases the diffusion coefficient, indicating lower persistence in macrophage migration. In 2.5 mg/mL gels, macrophages show a diffusion coefficient of 0.343 µm²/min, which decreases to 0.197 µm²/min in 4 mg/mL gels and to 0.134 µm²/min in the higher-concentration gels at 6 mg/mL.
In addition, in all conditions an α value lower than 1 was obtained, indicating that macrophages in these assays showed a sub-diffusive (confined) type of movement, as opposed to a completely random-walk behavior [35]. As the collagen concentration increases, the mean and effective velocities decrease significantly (Fig. 2D). However, comparing the mean velocities obtained with those published in previous literature, our values are clearly lower. In our collagen hydrogels, macrophages tend to migrate at 0.05–0.2 μm/min, reaching velocities of 0.5 μm/min in the lowest-concentration gels. In contrast, in vitro assays where macrophages migrated on 2D surfaces report velocities around 1 μm/min [36]. In in vivo fish models, similar results have been obtained, with velocities in the range of 1–2.5 μm/min [37], which can increase above 10 μm/min in the presence of a wound [38]. For the subsequent experiments with bacterial fractions we worked with 2.5 mg/mL collagen gels, because under these conditions macrophages migrate longer distances, allowing us to follow cell trajectories more easily and to discriminate them better.
Macrophages show directional migration towards bacterial fractions of M. tuberculosis
Having characterized the optimal conditions for macrophage migration in our microfluidic device, we set out to simulate macrophage stimulation with M. tuberculosis bacterial fractions (Fig. 3A). Each fraction has a different composition. On the one hand, the secreted protein fraction exclusively contains those proteins released by the bacteria into the extracellular milieu. When bacteria are inactivated with PFA, a cross-linking reaction preserves bacterial surface macrostructures [39]. PFA-inactivated bacteria therefore maintain an intact bacterial wall, so that macrophages can interact with all the components of their surface.
The cell lysate contains both the cytosolic components and the membrane fractions of lysed bacteria, but not the secreted protein fraction. Finally, when bacteria are inactivated by heat, molecular denaturation occurs [40]. This alters the bacterial cell wall, allowing release of cytosolic components; this fraction therefore contains denatured cell wall and cytosolic components. Macrophages facing M. tuberculosis stimuli. A Treatments applied to M. tuberculosis to obtain the different fractions subsequently tested. Intact bacteria consist of a mycobacterial membrane (green) with superficial components such as DAT, PAT, SL, PDIM, LM and LAM (yellow). Intracellular molecules like Hsp70 and Hsp65 are represented in blue, whereas extracellular secreted proteins, like ESAT-6:CFP-10, the Esp family, the PE and PPE families and Ag85, are indicated in pink. Four treatments were applied to M. tuberculosis: secreted protein, where only extracellular molecules were collected; inactivation by PFA, which generates cross-linking (grey mesh) preserving the superficial components; a cell lysate, which breaks the cell and releases superficial and intracellular components; and inactivation by heat, which damages the membrane (green) and denatures proteins (amorphous blue balls). B Migration of macrophages in 2.5 mg/ml collagen gels towards a gradient generated at the top (+y axis, indicated by grey dots) with M. tuberculosis bacterial fractions: non-stimulated control (gray), secreted protein (pink), inactivated by PFA (yellow), cell lysate (blue) and inactivated by heat (green). Directionality of migration, where the radius is the number of cells migrating in each direction, normalized by the total number of cells per condition. C Percentage of macrophages as a function of the verticality of their migration, weighted by the total distance traveled.
Macrophages whose final position is above their initial position contribute to the bar graph exceeding the red line set at 0.5. The dots in each bar graph correspond to the percentage of macrophages whose final position is above or below their starting position, without weighting by the total distance migrated. n (devices) = 9 (control), 8 (secreted protein), 6 (inactivated by PFA), 6 (cell lysate) and 5 (inactivated by heat). D M. tuberculosis secreted protein fraction titration assay. The y-axis shows the percentage of macrophages migrating in the y-plane, weighted by the total distance moved; the x-axis shows the concentrations of serial dilutions (1:10), ranging from 10⁻⁶ to 10⁻² (pink intensity), including the control without bacterial stimulus (gray). The red line is set at 0.5. E Western blot of the secreted protein fraction of M. tuberculosis in which the ESAT-6 and CFP-10 proteins were detected. Unprocessed and uncropped versions of the membranes can be visualized in Supplementary Figure S3. F Titration assay of purified ESAT-6 (solid bars) and CFP-10 (striped bars) proteins. The y-axis shows the percentage of macrophages migrating in the y-plane, weighted by the total distance moved; the x-axis shows the concentrations of serial dilutions (1:10), ranging from 1 ng/mL to 10 µg/mL (pink intensity), including the control without bacterial stimulus (gray). The red line is set at 0.5. A gradient was generated individually for each bacterial fraction in the microfluidic devices, and macrophage migration was monitored for 24 h. Next, we quantified the number of macrophages that migrated in each direction from their point of origin and represented this migration in a rosette plot (Fig. 3B). Since the gradient of bacterial stimulus is generated in the upper part of the devices, this provides qualitative information on whether the macrophages migrated directionally towards the bacterial extracts.
This qualitative information was converted into quantitative information by computing the percentage of macrophages whose final position was above or below their initial position, weighted by the effective distance traveled (Fig. 3C). Qualitatively, when macrophages are exposed to M. tuberculosis fractions, a clear directional migration toward the secreted protein, the cell lysate and the heat-inactivated bacteria is observed, since the areas of the upper portions of the rosette plot are larger than those of the bottom portions. In contrast, in the non-stimulated control, macrophages migrate in equal proportions to the upper and lower areas of the microfluidic device (Fig. 3B). The quantitative analysis demonstrates the randomness of macrophage migration in the absence of stimulus, since the percentages of cells migrating towards the top or the bottom of the hydrogel, with respect to their original position and weighted by the distance traveled, are similar. However, when a gradient is generated with any of the M. tuberculosis fractions, the percentage of macrophages migrating towards the top of the device, where the stimulus is located, always exceeds the 0.5 baseline. The fractions promoting the strongest migration were the cell lysate and the heat-inactivated bacterial extracts, which induced more than 60% of macrophages to migrate effectively towards the stimuli (Fig. 3C). After determining that the different M. tuberculosis fractions exert an attractant effect on macrophages, we analyzed whether the concentration of the stimulus influenced macrophage migration. For this purpose, serial dilutions of the original secreted protein fraction were studied. The results show that macrophages migrate towards the bacterial stimulus regardless of the concentration of the fraction, as long as it exceeds a minimum threshold.
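The verticality metric described above (fraction of cells ending above their starting position, weighted by effective distance) could be computed as in the hypothetical sketch below; the function name and data layout are illustrative and do not reproduce the authors' actual Matlab analysis:

```python
import numpy as np

def verticality_index(trajs):
    """Distance-weighted fraction of upward-migrating cells.

    trajs: list of (N x 2) arrays of (x, y) positions in µm, with +y pointing
    toward the stimulus channel. Returns a value in [0, 1], where 0.5 means
    no directional bias (the 0.5 baseline shown as a red line in the figures).
    """
    net = np.array([t[-1] - t[0] for t in trajs])  # net displacement per cell
    dist = np.linalg.norm(net, axis=1)             # effective distance traveled
    up = net[:, 1] > 0                             # final position above start
    total = dist.sum()
    return dist[up].sum() / total if total > 0 else 0.5
```

For example, one cell moving 3 µm upward and one moving 1 µm downward give an index of 3/(3+1) = 0.75, i.e. a clear upward bias.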
At dilutions of 10⁻⁶ and 10⁻⁵, migration percentages in the y-plane are similar to the unstimulated control. In contrast, a clear jump in the verticality of migration is observed when moving from the 10⁻⁵ dilution to the 10⁻⁴ dilution, indicating that the sensitivity range of the assay for this fraction lies between the 10⁻⁵ and 10⁻⁴ dilutions. Dilutions more concentrated than 10⁻⁴ show high percentages of migration towards the top of the chip (Fig. 3D). The secreted protein fraction contains an uncharacterized mixture of molecules. However, numerous proteins secreted by M. tuberculosis are described in the literature. In particular, the two most immunogenic proteins of this pathogen are ESAT-6 and CFP-10 [41]. First, we confirmed the presence of both proteins in our secreted protein sample by western blot (Fig. 3E). Then, we performed titration assays with purified samples of ESAT-6 and CFP-10 at concentrations from 1 ng/mL to 10 µg/mL. In the case of ESAT-6, the lowest concentration, 1 ng/mL, was already able to stimulate the directional migration of macrophages. A gradual increase in directionality of migration is observed as the stimulus concentration is increased from 1 to 10 ng/mL; beyond this concentration, the percentage of vertical migration remains around 0.7. In the case of CFP-10, we again observed a threshold effect: concentrations of 1 and 10 ng/ml do not stimulate directional migration of macrophages, but increasing the concentration to 100 ng/mL produced a prominent rise in verticality (Fig. 3F). Altogether, these results support the hypothesis of a minimum concentration threshold that must be exceeded to induce macrophage directional migration in our device, a threshold that depends on the specific stimulus.
Macrophages show directional migration towards bacterial fractions of S.
typhimurium
We applied the conditions tested with M. tuberculosis to S. typhimurium, since both bacteria are intracellular pathogens targeted by macrophages (Fig. 4A). Macrophage migration was analyzed in 2.5 mg/mL collagen gels for 24 h. Qualitatively, macrophages preferentially migrate towards all S. typhimurium fractions (Fig. 4B). Weighting the number of macrophages migrating toward the gradients generated with the Salmonella fractions demonstrates that all S. typhimurium fractions attract macrophages directionally. Migration towards the secreted protein is the highest relative to the remaining fractions, exceeding a value of 0.65 (Fig. 4C). Titration assays with the secreted protein from S. typhimurium show the same migration pattern previously observed with M. tuberculosis. At concentrations below the threshold (10⁻⁶ dilution relative to the fraction used in panels B and C), macrophage migration is similar to the non-stimulated control. As soon as the concentration is increased to a 10⁻⁵ dilution, macrophages migrate directionally towards the top of the hydrogel. Once the threshold is reached, this verticality of migration remains more or less constant, regardless of the stimulus concentration applied (Fig. 4D). Macrophages facing S. typhimurium stimuli. A Treatments applied to S. typhimurium to obtain the different fractions subsequently tested. Intact bacteria consist of a gram-negative membrane (green) with superficial components such as LPS or OMP porins (yellow). Intracellular components are represented in blue, whereas extracellular secreted proteins, like flagellin, PrgJ, SrfH, and the Sip and Sop families, are indicated in pink. Four treatments were applied to S.
typhimurium: secreted protein, where only extracellular molecules were collected; inactivation by PFA, which generates cross-linking (grey mesh) preserving the superficial components; a cell lysate, which breaks the cell and releases superficial and intracellular components; and inactivation by heat, which damages the membrane (green) and denatures proteins (amorphous blue balls). B Migration of macrophages in 2.5 mg/ml collagen gels towards a gradient generated at the top (+y axis, indicated by grey dots) with S. typhimurium bacterial fractions: non-stimulated control (gray), secreted protein (pink), inactivated by PFA (yellow), cell lysate (blue) and inactivated by heat (green). Directionality of migration, where the radius is the number of cells migrating in each direction, normalized by the total number of cells per condition. C Percentage of macrophages as a function of the verticality of their migration, weighted by the total distance traveled. Macrophages whose final position is above their initial position contribute to the bar graph exceeding the red line set at 0.5. The dots in each bar graph correspond to the percentage of macrophages whose final position is above or below their starting position, without weighting by the total distance migrated. n (devices) = 9 (control), 6 (secreted protein), 5 (inactivated by PFA), 5 (cell lysate) and 5 (inactivated by heat). D S. typhimurium secreted protein fraction titration assay. The y-axis shows the percentage of macrophages migrating in the y-plane, weighted by the total distance moved; the x-axis shows the concentrations of serial dilutions (1:10), ranging from 10⁻⁶ to 10⁻² (pink intensity), including the control without bacterial stimulus (gray). The red line is set at 0.5.
Macrophages migrate towards fractions of non-pathogenic E. coli and M.
smegmatis
Since macrophages are innate immune cells able to recognize foreign antigens, they target invading microorganisms irrespective of their pathogenic or non-pathogenic nature. Accordingly, having demonstrated the migration of macrophages towards pathogenic bacterial fractions, we sought to demonstrate that these innate immune cells are also recruited in the presence of non-pathogenic microorganisms. Migration assays were therefore performed in response to stimuli from M. smegmatis, a mycobacterium commonly used as a non-pathogenic model of M. tuberculosis. Both qualitatively and quantitatively, a clear directional attraction of macrophages upon exposure to all bacterial fractions of M. smegmatis was observed (Fig. 5A and B). The fraction that recruited the highest percentage of macrophages was the heat-inactivated bacteria, with more than three quarters of the macrophages migrating towards it. Migration of macrophages in 2.5 mg/ml collagen gels towards a gradient generated at the top (+y axis, indicated by grey dots) with M. smegmatis bacterial fractions: non-stimulated control (gray), secreted protein (pink), inactivated by PFA (yellow), cell lysate (blue) and inactivated by heat (green). A Directionality of migration, where the radius is the number of cells migrating in each direction, normalized by the total number of cells per condition. B Percentage of macrophages as a function of the verticality of their migration, weighted by the total distance traveled. Macrophages whose final position is above their initial position contribute to the bar graph exceeding the red line set at 0.5. The dots in each bar graph correspond to the percentage of macrophages whose final position is above or below their starting position, without weighting by the total distance migrated. n (devices) = 9 (control), 5 (secreted protein), 5 (inactivated by PFA), 5 (cell lysate) and 5 (inactivated by heat). The response of macrophages to E.
coli DH5α bacteria was also tested. This E. coli strain is a non-pathogenic bacterium routinely used in laboratories as a genetic model. Qualitatively, macrophages preferentially migrate toward the secreted protein and the PFA-fixed bacteria (Fig. 6A). Quantitatively, however, macrophages are attracted not only to these two fractions but also to the heat-inactivated bacteria (Fig. 6B). Overall, the chemotactic response generated by the secreted protein fraction results in up to 70% macrophage recruitment. In contrast, the cell lysate failed to stimulate a clear directional migration of macrophages. Migration of macrophages in 2.5 mg/ml collagen gels towards a gradient generated at the top (+y axis, indicated by grey dots) with E. coli bacterial fractions: non-stimulated control (gray), secreted protein (pink), inactivated by PFA (yellow), cell lysate (blue) and inactivated by heat (green). A Directionality of migration, where the radius is the number of cells migrating in each direction, normalized by the total number of cells per condition. B Percentage of macrophages as a function of the verticality of their migration, weighted by the total distance traveled. Macrophages whose final position is above their initial position contribute to the bar graph exceeding the red line set at 0.5. The dots in each bar graph correspond to the percentage of macrophages whose final position is above or below their starting position, without weighting by the total distance migrated. n (devices) = 9 (control), 5 (secreted protein), 5 (inactivated by PFA), 5 (cell lysate) and 5 (inactivated by heat). Even though M. smegmatis and E. coli fractions are able to recruit macrophages, it is important to remember that in a physiological environment the fate of these bacteria within the macrophage differs from that of pathogenic S. typhimurium and M. tuberculosis. Specifically, intracellular multiplication of M. smegmatis and E.
coli DH5α is restricted in activated macrophages, since these bacteria do not possess the intracellular survival mechanisms of their M. tuberculosis and S. typhimurium counterparts [42, 43]. Pathogen-host interaction has traditionally been studied by molecular and cellular methods. However, despite having contributed invaluable knowledge, these methods still have difficulty mimicking a physiological infection. For example, in vitro or ex vivo models do not incorporate realistic flow or biomechanical environments, and culture conditions are sometimes not suitable for the growth of all microorganisms. As a result, the findings do not always correlate with in vivo models [44]. The emergence of new technologies, such as microfluidics, allows some of these limitations to be overcome [14]. Specifically, the microfluidic device used in this article allowed us to generate a three-dimensional biomechanical environment that recreates the extracellular matrix through which macrophages physiologically migrate. This is of utmost importance, since two-dimensional and three-dimensional migration of this cell type has been shown to differ, with greater variation in cell dynamics and morphology in 3D environments [45]. Our three-dimensional approach has some limitations that must be analyzed carefully to understand the scope of our work. First, although macrophages are fully embedded in a three-dimensional hydrogel, their movement in the z-plane is limited by the central chamber height of 300 microns. Second, when analyzing their migration, a single z-stack is chosen, so although the movement is three-dimensional, only the cells located in this plane are tracked. Finally, the cells show a small random motion around themselves, which has been termed morphodynamics.
These changes in cell shape contribute to migratory motion [46], but can give rise to abnormally high mean velocities, since this measure encompasses the cell positions of all frames and averages the instantaneous velocities between them; even small vibrations therefore contribute to the final value. To avoid misinterpretation of these results, especially in analyses with small cell sizes and short migrated distances, mean speeds are shown together with effective velocities, which take into account only the initial and final positions of the cells. For directional migration to occur, a stimulus must first be present, as demonstrated by the lack of directional movement of macrophages in the non-stimulated controls. This stimulus is then sensed, and a signaling cascade is triggered that activates the cytoskeletal machinery so that the macrophage migrates towards the received signal [47]. Although stimuli of different natures, such as chemical or mechanical, can drive cell migration [48], the current literature on macrophage migration has described only stimuli of biochemical origin. This implies that physical contact must be established between the signaling molecule and the corresponding macrophage receptor [49]. The design of the microfluidic device used in this study allows the generation of a chemical gradient from the side channel, where the molecules of interest are introduced, to the central chamber, where the three-dimensional cell culture is located [50]. The diffusion coefficient of a particle in a hydrogel is given by the Stokes–Einstein equation modified to take into account the porosity of the matrix (not to be confused with the diffusive term D of the MSD analysis, which represents macrophage motility and movement persistence).
Thus, the effective diffusion coefficient of a particle (Dp) is calculated with the formula: $${D}_{p}=\frac{{k}_{B}T}{6\pi \eta r}\cdot \mathrm{exp}\left(-\left[\sqrt{\varphi }\cdot \left(1+\frac{r}{{r}_{f}}\right)\right]\right)$$ where kB is Boltzmann's constant, T is the absolute temperature, η is the viscosity of the fluid, r is the radius of the diffusing molecule, φ is the void ratio of the matrix and rf is the radius of the fiber [50]. We therefore hypothesize that the components of the bacterial fractions diffuse into the hydrogel until they come into contact with macrophages. The bacterial fractions used in the present study consist of a great variety of molecules of different sizes, which makes it impossible to calculate the diffusion coefficient of each individual component. However, the size of the largest component, the intact bacterium after fixation with PFA, is known: a prokaryotic bacterium typically has a length of 1–5 microns. Therefore, for the calculation of the effective diffusion coefficient we take as a reference a particle 3 microns in diameter. The fractions are dissolved in RPMI medium supplemented with 10% fetal bovine serum, which gives a dynamic viscosity of the medium of approximately 0.95 mPa·s [51]. Migration assays are carried out in 2.5 mg/mL collagen gels, whose three-dimensional structure was studied by Olivares et al., who determined a void ratio of 85.01% and an approximate collagen fiber radius of 0.12 microns [52]. Applying these data to the formula above, we obtain an effective diffusion coefficient of 1.77 × 10⁻¹⁴ m²·s⁻¹. This indicates that the whole bacterium is able to diffuse through the hydrogel and come into contact with the macrophages, stimulating them.
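As a sanity check, the formula can be evaluated numerically. The sketch below transcribes the equation with the parameter values quoted in the text (37 °C, 0.95 mPa·s medium, a 3 µm particle, 0.12 µm collagen fibers). Note that whether φ should enter as the void ratio (0.8501) or as the complementary fiber volume fraction depends on the obstruction model used, so the resulting number is illustrative and not necessarily a reproduction of the reported 1.77 × 10⁻¹⁴ m²·s⁻¹:

```python
import math

def effective_diffusion(T, eta, r, phi, r_f):
    """Obstruction-corrected Stokes-Einstein diffusion coefficient (m^2/s).

    T: absolute temperature (K); eta: fluid viscosity (Pa*s);
    r: particle radius (m); phi: matrix fraction as used in the formula
    (dimensionless); r_f: fiber radius (m).
    """
    k_B = 1.380649e-23                       # Boltzmann constant, J/K
    d0 = k_B * T / (6 * math.pi * eta * r)   # free-solution Stokes-Einstein term
    return d0 * math.exp(-math.sqrt(phi) * (1 + r / r_f))

# Parameter values quoted in the text; phi taken as the stated void ratio.
dp = effective_diffusion(T=310.15, eta=0.95e-3, r=1.5e-6,
                         phi=0.8501, r_f=0.12e-6)
```

The exponential obstruction factor is always between 0 and 1, so the effective coefficient is necessarily smaller than the free-solution Stokes-Einstein value, which is the qualitative point of the argument: hindered, but nonzero, diffusion through the gel.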
Therefore, we hypothesize that if the largest component of the fractions is able to diffuse, the remaining molecules, which are smaller in size, will also be able to diffuse towards the central channel containing cells. This would explain the overall trend of macrophages to migrate upwards when bacterial fractions are present, exceeding in all conditions the migration parameters of the non-stimulated control. However, despite the clear trend in directional migration when bacterial fractions are present, statistical differences were not found among conditions (Supplementary Figure S2). This probably reflects the technical limitation of analyzing 13 experimental conditions across 95 chips, so that only a small number of chips (5 to 9) could be analyzed per condition (see Table S1 for details). Since our study aims to establish a proof of concept for studying bacterial migration using microfluidics, we preferred to design broad, exploratory analyses; but we are confident that focusing on a single experimental condition would have yielded stronger statistical differences. The M. tuberculosis fractions contain well-known molecules that interact with macrophages. The genome of this pathogen encodes five specialized secretion systems, known as ESX-1 to ESX-5 [53]. Among them, ESX-1, which secretes ESAT-6 and CFP-10, is the most finely characterized. These genes are found in the RD1 region, which is absent from the BCG tuberculosis vaccine, indicating that they are essential for the virulence of this bacterium [54]. ESAT-6 and CFP-10 each contain a dense repertoire of T-cell epitopes and are known for their immunogenicity; indeed, they are used as vaccine candidates [41]. Both proteins induce recruitment of macrophages to infected areas in in vivo models [55]. In line with these observations, we have demonstrated concentration-dependent macrophage migration using purified ESAT-6 and CFP-10 antigens. 
This result opens up translational applications, since other relevant antigens, from any cell or microorganism, could be tested using our microfluidic design. This knowledge has a direct and useful application in the biomedical and vaccine fields. On the other hand, S. typhimurium is characterized by a series of virulence-associated genes that are divided into 17 pathogenicity islands (SPI) throughout its genome, with SPI-1 being the most important. The best-known virulence system of Salmonella is the type III secretion system (T3SS), which translocates effector proteins, both to the extracellular matrix and to the cytoplasm of host cells, thanks to its needle-like structure [56]. Since these SPI are absent in E. coli DH5α, it is tempting to speculate that differences in macrophage migration towards Salmonella and Escherichia fractions might be explained by the differential antigen composition of the two bacteria. Furthermore, we determined that the directional migration of macrophages is regulated by the concentration of the bacterial stimulus. A migration model is proposed in which a threshold concentration must be exceeded to induce the attraction of these immune cells. Once it is exceeded, macrophages show the same percentage of verticality in their movement, regardless of the concentration of the fraction. Although the application of microfluidics to microbiology is an emerging field, recent studies show that the technique is gaining momentum. Kim et al. were able to grow intestinal microbiota bacteria and intracellular pathogens in a microfluidic device that recreated the gut. This device was based on culturing a monolayer of intestinal epithelium on a hydrogel that recreates the ECM, with fluid flow at a low rate mimicking peristaltic movements [57, 58]. Thacker et al. 
used a similar, but more complex approach in which they co-cultured lung epithelium, along with endothelial cells and macrophages, to recreate lung physiology. In this case, M. tuberculosis was introduced to mimic a realistic infection situation [59]. It should also be noted that the mechanical environment, such as stiffness, pH or hydrodynamics, also influences cell behavior [60]. Our microfluidic device would allow replacing the collagen-based matrix with other hydrogel options that bring new features to the study of macrophage motility. Scaffolds can be made sensitive to temperature, with polysaccharides such as hyaluronic acid or cellulose; to electric fields, using sulfonated polystyrene; or to pH, using polyacrylamide [61]. These perspectives show the great potential and versatility of the application of microfluidics in microbiology. Raw and processed data about macrophage migration will be made available upon request to [email protected] or [email protected]. WHO. Global Health Estimates. 2019. WHO. Global Tuberculosis Report. 2021. WHO. WHO estimates of the global burden of foodborne diseases. 2015. Rivera-Calzada A, Famelis N, Llorca O, Geibel S. Type VII secretion systems: structure, functions and transport models. Nat Rev Microbiol. 2021;19:567–84. Simeone R, Sayes F, Lawarée E, Brosch R. Breaching the phagosome, the case of the tuberculosis agent. Cell Microbiol. 2021;23:1–11. Pai M, Behr MA, Dowdy D, Dheda K, Divangahi M, Boehme CC, et al. Tuberculosis. Nat Rev Dis Prim. 2016;2:1–23. Kaper JB, Nataro JP, Mobley HLT. Pathogenic Escherichia coli. Nat Rev Microbiol. 2004;2:123–40. Croxen MA, Finlay BB. Molecular mechanisms of Escherichia coli pathogenicity. Nat Rev Microbiol. 2010;8:26–38. Deng W, Marshall NC, Rowland JL, McCoy JM, Worrall LJ, Santos AS, et al. Assembly, structure, function and regulation of type III secretion systems. Nat Rev Microbiol. 2017;15:323–37. Coburn B, Sekirov I, Finlay BB. 
Type III secretion systems and disease. Clin Microbiol Rev. 2007;20:535–49. Ley K, Pramod AB, Croft M, Ravichandran KS, Ting JP. How mouse macrophages sense what is going on. Front Immunol. 2016;7:1–17. Underhill DM, Ozinsky A. Phagocytosis of microbes: Complexity in action. Annu Rev Immunol. 2002;20:825–52. Sackmann E, Fulton A, Beebe D. The present and future role of microfluidics in biomedical research. Nature. 2014;507:181–9. Pérez-Rodríguez S, García-Aznar JM, Gonzalo-Asensio J. Microfluidic devices for studying bacterial taxis, drug testing and biofilm formation. Microb Biotechnol. 2021;15(2):395–414. Perroud TD, Kaiser JN, Sy JC, Lane TW, Branda CS, Singh AK, et al. Microfluidic-based cell sorting of Francisella tularensis infected macrophages using optical forces. Anal Chem. 2008;80:6365–72. Srivastava N, Brennan JS, Renzi RF, Wu M, Branda SS, Singh AK, et al. Fully integrated microfluidic platform enabling automated phosphoprofiling of macrophage response. Anal Chem. 2009;81:3261–9. Hondroulis E, Movila A, Sabhachandani P, Sarkar S, Cohen N, Kawai T, et al. A droplet-merging platform for comparative functional analysis of M1 and M2 macrophages in response to E. coli-induced stimuli. Biotechnol Bioeng. 2017;114:705–9. James CD, Moorman MW, Carson BD, Branda CS, Lantz JW, Manginell RP, et al. Nuclear translocation kinetics of NF-κB in macrophages challenged with pathogens in a microfluidic platform. Biomed Microdevices. 2009;11:693–700. Huang C, Wang H, De Figueiredo P, Han A. Automatic microfluidics-based platform for investigating the emergence of pathogenicity. 20th Int Conf Solid-State Sensors, Actuators and Microsystems (Transducers 2019, Eurosensors XXXIII). IEEE; 2019. p. 2239–42. Kijanka GS, Dimov IK, Burger R, Ducrée J. Real-time monitoring of cell migration, phagocytosis and cell surface receptor dynamics using a novel, live-cell opto-microfluidic technique. Anal Chim Acta. 2015;872:95–9. 
Gopalakrishnan N, Hannam R, Casoni GP, Barriet D, Ribe JM, Haug M, et al. Infection and immunity on a chip: A compartmentalised microfluidic platform to monitor immune cell behaviour in real time. Lab Chip. 2015;15:1481–7. Polacheck WJ, Li R, Uzel SGM, Kamm RD. Microfluidic platforms for mechanobiology. Lab Chip. 2013;13:2252–67. Moreno-Arotzena O, Borau C, Movilla N, Vicente-Manzanares M, García-Aznar J. Fibroblast Migration in 3D is Controlled by Haptotaxis in a Non-muscle Myosin II-Dependent Manner. Ann Biomed Eng. 2015;43:3025–39. Pérez-Rodríguez S, Tomás-González E, García-Aznar JM. 3D cell migration studies for chemotaxis on microfluidic-based chips: a comparison between cardiac and dermal fibroblasts. Bioengineering. 2018;5(2):45. Cole ST, Brosch R, Parkhill J, Garnier T, Churcher C, Harris D, et al. Deciphering the biology of Mycobacterium tuberculosis from the complete genome sequence. Nature. 1998;396:537–44. Snapper SB, Melton RE, Mustafa S, Kieser T, Jacobs WR Jr. Isolation and characterization of efficient plasmid transformation mutants of Mycobacterium smegmatis. Mol Microbiol. 1990;4:1911–9. Hoiseth SK, Stocker BAD. Aromatic-dependent Salmonella typhimurium are non-virulent and effective as live vaccines. Nature. 1981;291:238–9. Shin Y, Han S, Jeon JS, Yamamoto K, Zervantonakis IK, et al. Microfluidic assay for simultaneous culture of multiple cell types on surfaces or within hydrogels. Nat Protoc. 2012;7:1247–59. Frantz C, Stewart K, Weaver V. The extracellular matrix at a glance. J Cell Sci. 2010;123:4195–200. Plou J, Juste-Lanas Y, Olivares V, del Amo C, Borau C, García-Aznar JM. From individual to collective 3D cancer dissemination: roles of collagen concentration and TGF-β. Sci Rep. 2018;8:1–14. Loosley AJ, O'Brien XM, Reichner JS, Tang JX. Describing directional cell migration with a characteristic directionality time. PLoS One. 2015;10:1–18. Luzhansky ID, Schwartz AD, Cohen JD, Macmunn JP, Barney LE, Jansen LE, et al. 
Anomalously diffusing and persistently migrating cells in 2D and 3D culture environments. APL Bioeng. 2018;2:1–15. Duval K, Grover H, Han LH, Mou Y, Pegoraro AF, Fredberg J, et al. Modeling physiological events in 2D vs. 3D cell culture. Physiology. 2017;32:266–77. Pérez-Rodríguez S, Huang SA, Borau C, García-Aznar JM, Polacheck WJ. Microfluidic model of monocyte extravasation reveals the role of hemodynamics and subendothelial matrix mechanics in regulating endothelial integrity. Biomicrofluidics. 2021;15:054102. Woringer M, Izeddin I, Favard C, Berry H. Anomalous Subdiffusion in Living Cells: Bridging the Gap Between Experiments and Realistic Models Through Collaborative Challenges. Front Phys. 2020;8:1–9. van den Bos E, Walbaum S, Horsthemke M, Bachg AC, Hanley PJ. Time-lapse imaging of mouse macrophage chemotaxis. J Vis Exp. 2020;158. Nguyen-Chi M, Laplace-Builhe B, Travnickova J, Luz-Crawford P, Tejedor G, Phan QT, et al. Identification of polarized macrophage subsets in zebrafish. Elife. 2015;4:1–14. Grabher C, Cliffe A, Miura K, Hayflick J, Pepperkok R, Rørth P, et al. Birth and life of tissue macrophages and their migration in embryogenesis and inflammation in medaka. J Leukoc Biol. 2007;81:263–71. Chao Y, Zhang T. Optimization of fixation methods for observation of bacterial cell morphology and surface ultrastructures by atomic force microscopy. Appl Microbiol Biotechnol. 2011;92:381–92. Smelt JPPM, Brul S. Thermal Inactivation of Microorganisms. Crit Rev Food Sci Nutr. 2014;54:1371–85. Copin R, Coscollá M, Efstathiadis E, Gagneux S, Ernst JD. Impact of in vitro evolution on antigenic diversity of Mycobacterium bovis bacillus Calmette-Guerin (BCG). Vaccine. 2014;32:5998–6004. Ibarra JA, Steele-Mortimer O. Salmonella - the ultimate insider. Salmonella virulence factors that modulate intracellular survival. Cell Microbiol. 2009;11:1579–86. Zhai W, Wu F, Zhang Y, Fu Y, Liu Z. The immune escape mechanisms of Mycobacterium tuberculosis. 
Int J Mol Sci. 2019;20(2):340. Bodor A, Bounedjoum N, Vincze GE, Erdeiné Kis Á, Laczi K, Bende G, et al. Challenges of unculturable bacteria: environmental perspectives. Rev Environ Sci Biotechnol. 2020;19:1–22. Gao WJ, Liu JX, Liu MN, Yao YD, Liu ZQ, Liu L, et al. Macrophage 3D migration: A potential therapeutic target for inflammation and deleterious progression in diseases. Pharmacol Res. 2021;167:105563. Eddy CZ, Raposo H, Manchanda A, Wong R, Li F, Sun B. Morphodynamics facilitate cancer cells to navigate 3D extracellular matrix. Sci Rep. 2021;11:1–10. SenGupta S, Parent CA, Bear JE. The principles of directed cell migration. Nat Rev Mol Cell Biol. 2021;22:529–47. Del Amo C, Borau C, Movilla N, Asín J, García-Aznar J. Quantifying 3D chemotaxis in microfluidic-based chips with step gradients of collagen hydrogel concentrations. Integr Biol. 2017;9:339–49. Wu D. Signaling mechanisms for regulation of chemotaxis. Cell Res. 2005;15:52–6. Moreno-Arotzena O, Mendoza G, Cóndor M, Rüberg T, García-Aznar JM. Inducing chemotactic and haptotactic cues in microfluidic devices for three-dimensional in vitro assays. Biomicrofluidics. 2014;8:1–15. Poon C. Measuring the density and viscosity of culture media for optimized computational fluid dynamics analysis of in vitro devices. bioRxiv. 2020;2020.08.25.266221. Olivares V, Cóndor M, Del Amo C, Asín J, Borau C, García-Aznar JM. Image-based characterization of 3D collagen networks and the effect of embedded cells. Microsc Microanal. 2019;25:971–81. Simeone R, Bottai D, Frigui W, Majlessi L, Brosch R. ESX/type VII secretion systems of mycobacteria: Insights into evolution, pathogenicity and protection. Tuberculosis. 2015;95:S150–4. Lewis KN, Liao R, Guinn KM, Hickey MJ, Smith S, Behr MA, et al. Deletion of RD1 from Mycobacterium tuberculosis Mimics Bacille Calmette-Guérin Attenuation. J Infect Dis. 2003;187:117–23. Davis JM, Ramakrishnan L. 
The Role of the Granuloma in Expansion and Dissemination of Early Tuberculous Infection. Cell. 2009;136:37–49. Agbor TA, McCormick BA. Salmonella effectors: Important players modulating host cell function during infection. Cell Microbiol. 2011;13:1858–69. Kim HJ, Huh D, Hamilton G, Ingber DE. Human gut-on-a-chip inhabited by microbial flora that experiences intestinal peristalsis-like motions and flow. Lab Chip. 2012;12:2165–74. Kim HJ, Li H, Collins JJ, Ingber DE. Contributions of microbiome and mechanical deformation to intestinal bacterial overgrowth and inflammation in a human gut-on-a-chip. Proc Natl Acad Sci U S A. 2016;113:E7-15. Thacker VV, Dhar N, Sharma K, Barrile R, Karalis K, McKinney JD. A lung-on-chip model of early M. tuberculosis infection reveals an essential role for alveolar epithelial cells in controlling bacterial growth. Elife. 2020;9:1–22. Persat A, Nadell CD, Kim MK, Ingremeau F, Siryaporn A, Drescher K, et al. The mechanical world of bacteria. Cell. 2015;161:988–97. Mantha S, Pillai S, Khayambashi P, Upadhyay A, Zhang Y, Tao O, et al. Smart Hydrogels in Tissue Engineering and Regenerative Medicine. Materials (Basel). 2019;12:33. This work was supported by the following grants: FPU Grant FPU16/04398 funded by the Spanish Ministry of Education, Culture and Sport to S. P.-R.; Grant Adg-101018587 funded by the European Research Council, Grant RTI2018-094494-B-C21 funded by MCIN/AEI/ 10.13039/501100011033 and by "ERDF A way of making Europe" to J. M. G.-A.; Grant PID2019-104690RB-I00 funded by MCIN/AEI/ 10.13039/501100011033 to J. G.-A. The authors declare that there is no conflict of interest. Department of Mechanical Engineering, Multiscale in Mechanical and Biological Engineering, University of Zaragoza, 50018, Zaragoza, Spain Sandra Pérez-Rodríguez, Carlos Borau & José Manuel García-Aznar Aragon Institute of Engineering Research, University of Zaragoza, 50018, Zaragoza, Spain Grupo de Genética de Micobacterias. 
Departamento de Microbiología. Facultad de Medicina, Universidad de Zaragoza, IIS Aragón, 50009, Zaragoza, Spain Sandra Pérez-Rodríguez & Jesús Gonzalo-Asensio CIBER Enfermedades Respiratorias, Instituto de Salud Carlos III, 28029, Madrid, Spain Jesús Gonzalo-Asensio Instituto de Biocomputación y Física de Sistemas Complejos (BIFI), 50018, Zaragoza, Spain Sandra Pérez-Rodríguez Carlos Borau José Manuel García-Aznar S. P.-R., J. M. G.-A. and J. G.-A. conceived and designed the experiments; S. P.-R. conducted the experiments; J. M. G.-A. and J. G.-A. obtained funding and contributed materials and facilities; S. P.-R. and C. B. analyzed the data; S. P.-R., J. M. G.-A. and J. G.-A. conceived figures; S. P.-R. and C. B. designed figures; S. P.-R. wrote the manuscript draft; J. M. G.-A. and J. G.-A. reviewed the final version of the manuscript. The author(s) read and approved the final manuscript. Correspondence to Jesús Gonzalo-Asensio. Not applicable. The co-authors have no competing interests to disclose. Table S1. Number of cells analyzed in migration assays. Figure S1. Vertical analysis of the migration of macrophages in gels of different collagen concentration. Figure S2. Statistical analysis of macrophage migration in y-plane towards bacterial stimuli. Figure S3. Original versions of the Western-blot membranes shown in Figure 3E. Pérez-Rodríguez, S., Borau, C., García-Aznar, J.M. et al. A microfluidic-based analysis of 3D macrophage migration after stimulation by Mycobacterium, Salmonella and Escherichia. BMC Microbiol 22, 211 (2022). https://doi.org/10.1186/s12866-022-02623-w Keywords: Intracellular pathogens, Phagocytes, Chemotaxis, Escherichia, Respiratory pathogen
Pramana – J. Phys., October 2019, 93:52. Determination of classical behaviour of the Earth for large quantum numbers using quantum guiding equation. Ali Soltanmanesh, Afshin Shafiee. For quantum systems, we expect to see classical behaviour in the limit of large quantum numbers. Hence, we apply the Bohmian approach to describe the evolution of the Earth around the Sun. We obtain possible trajectories of the Earth system with different initial conditions, which converge to a certain stable orbit, known as the Kepler orbit, after a given time. The trajectories result from the guiding equation \(p=\nabla S\) of Bohmian mechanics, which relates the momentum of the system to the phase part of the wave function. Except in some special situations, Bohmian trajectories are not Newtonian in character. We show that the classical behaviour of the Earth can be interpreted as a consequence of the guiding equation in the limit of large quantum numbers. Keywords: Bohmian mechanics, quantum trajectories, correspondence principle. PACS Nos 03.65.Ta, 03.65.−w, 03.65.Ca, 04.25.−g. The authors would like to thank M Koorepaz Mahmoodabadi for his assistance and useful comments on nonlinear equations with closed cycles, which improved their ideas on the subject. The differential equation (18) can be solved in any time domain in which \(\xi \) is considered constant. One can write this equation as $$\begin{aligned} \frac{\mathrm{d}r}{\mathrm{d}t}=\xi \sin \theta \sqrt{\frac{2\mu }{r(t)}-\frac{\mu }{a}}, \end{aligned}$$ (A.1) where both \(\theta \) and r are time-dependent, but have no dependency on each other. For \(\theta \), from eq. (9) we have $$\begin{aligned} \sin \theta =\sqrt{1-\frac{Z_h^2}{r^2(t)}}. \end{aligned}$$ (A.2) During the Earth's rotation around the Sun, the time variations of r are insignificant. In view of (A.2), and noticing that the constant \(Z_h\) has the same order of magnitude as r, one can neglect the time dependency of \(\theta \) too. Therefore, to solve eq. 
(A.1), we can consider the term \(\xi \sin \theta \) as a constant. Define the variable q as $$\begin{aligned} q=r-r_\mathrm{eq}, \end{aligned}$$ (A.3) where \(r_\mathrm{eq}\) represents the equilibrium distance, and eq. (A.1) yields $$\begin{aligned} \frac{\mathrm{d}q}{\mathrm{d}t}=\xi \sin \theta \sqrt{\frac{2\mu }{r_\mathrm{eq}(\frac{q}{r_\mathrm{eq}}+1)}-\frac{\mu }{a}}. \end{aligned}$$ (A.4) Hence, the variable q represents the deviation of r from the equilibrium distance \(r_\mathrm{eq}\simeq a\). The term \(q/r_\mathrm{eq}\) is small, so that one can expand \((1+q/r_\mathrm{eq})^{-1}\) to obtain $$\begin{aligned} \frac{\mathrm{d}q}{\mathrm{d}t}=\xi \sin \theta \sqrt{\frac{\mu }{r_\mathrm{eq}}\left( 2-\frac{q}{r_\mathrm{eq}}\right) -\frac{\mu }{a}}. \end{aligned}$$ (A.5) By expanding the radical term, one finally gets $$\begin{aligned} \frac{\mathrm{d}q}{\mathrm{d}t}=\xi \sin \theta \left[ B-\frac{\mu }{2Br_\mathrm{eq}^2}q-\frac{\mu ^2}{8B^3r_\mathrm{eq}^4}q^2\right] , \end{aligned}$$ (A.6) where B is already defined in eq. (22). Equation (A.6) is a first-order separable differential equation which can be solved in closed form. Consequently, eq. (20) is obtained as an answer from (A.6), followed by relations (21) and (22) as definitions in (20). 
© Indian Academy of Sciences 2019. 1. Research Group on Foundations of Quantum Theory and Information, Department of Chemistry, Sharif University of Technology, Tehran, Iran. 2. School of Physics, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran. Soltanmanesh, A. & Shafiee, A. Pramana – J. Phys. (2019) 93: 52. https://doi.org/10.1007/s12043-019-1820-5. Received 29 August 2018. Revised 07 April 2019. Publisher: Springer India.
July 2019, 39(7): 4091–4126. doi: 10.3934/dcds.2019165. Random dynamics of fractional nonclassical diffusion equations driven by colored noise. Renhai Wang 1, Yangrong Li 1, and Bixiang Wang 2. 1. School of Mathematics and Statistics, Southwest University, Chongqing 400715, China. 2. Department of Mathematics, New Mexico Institute of Mining and Technology, Socorro, NM 87801, USA. * Corresponding author: [email protected] (Yangrong Li). Received October 2018. Revised January 2019. Published April 2019. The random dynamics in $ H^s(\mathbb{R}^n) $ with $ s\in (0,1) $ is investigated for the fractional nonclassical diffusion equations driven by colored noise. Both existence and uniqueness of pullback random attractors are established for the equations with a wide class of nonlinear diffusion terms. In the case of additive noise, the upper semi-continuity of these attractors is proved as the correlation time of the colored noise approaches zero. The methods of uniform tail-estimate and spectral decomposition are employed to obtain the pullback asymptotic compactness of the solutions, in order to overcome the non-compactness of the Sobolev embedding on an unbounded domain. Keywords: Nonclassical diffusion equation, fractional Laplace operator, colored noise, random attractor, upper semi-continuity. Mathematics Subject Classification: Primary: 37L55; Secondary: 35B40, 60H15. Citation: Renhai Wang, Yangrong Li, Bixiang Wang. Random dynamics of fractional nonclassical diffusion equations driven by colored noise. Discrete & Continuous Dynamical Systems - A, 2019, 39 (7): 4091–4126. doi: 10.3934/dcds.2019165.
Qin, Upper semi-continuity of attractors for nonclassical diffusion equations in $H(\mathbb{R}^3)$, Appl. Math. Comput., 240 (2014), 51-61. doi: 10.1016/j.amc.2014.04.092. Google Scholar Z. Wang, S. Zhou and A. Gu, Random attractor for a stochastic damped wave equation with multiplicative noise on unbounded domains, Nonlinear Anal. RWA, 12 (2011), 3468-3482. doi: 10.1016/j.nonrwa.2011.06.008. Google Scholar Y. Wang, Z. Zhu and P. Li, Regularity of pullback attractors for nonautonomous nonclassical diffusion equations, J. Math. Anal. Appl., 459 (2018), 16-31. doi: 10.1016/j.jmaa.2017.10.075. Google Scholar X. Wang, K. Lu and B. Wang, Wong-Zakai approximations and attractors for stochastic reaction-diffusion equations on unbounded domains, J. Differential Equations, 246 (2018), 378-424. doi: 10.1016/j.jde.2017.09.006. Google Scholar Y. Xie, Q. Li and K. Zhu, Attractors for nonclassical diffusion equations with arbitrary polynomial growth nonlinearity, Nonlinear Anal. RWA, 31 (2016), 23-37. doi: 10.1016/j.nonrwa.2016.01.004. Google Scholar M. Yang and P. E. Kloeden, Random attractors for stochastic semi-linear degenerate parabolic equations, Nonlinear Anal. RWA, 12 (2011), 2811-2821. doi: 10.1016/j.nonrwa.2011.04.007. Google Scholar M. Yang, J. Duan and P. E. Kloeden, Asymptotic behavior of solutions for random wave equations with nonlinear damping and white noise, Nonlinear Anal. RWA, 12 (2011), 464-478. doi: 10.1016/j.nonrwa.2010.06.032. Google Scholar W. Zhao and S. Song, Dynamics of stochastic nonclassical diffusion equations on unbounded domains, Electronic J. Differential Equations, 282 (2015), 1-22. Google Scholar F. Zhang and W. Han, Pullback attractors for nonclassical diffusion delay equations on unbounded domains with non-autonomous deterministic and stochastic forcing terms, Electronic J. Differential Equations, 139 (2016), 1-28. Google Scholar F. Zhang and Y. 
Liu, Pullback attractors in $H^1(\mathbb{R}^n)$ for non-autonomous nonclassical diffusion equations, Dynamical Systems, 29 (2014), 106-118. doi: 10.1080/14689367.2013.854317. Google Scholar S. Zhou, Random exponential attractor for cocycle and application to non-autonomous stochastic lattice systems with multiplicative white noise, J. Differential Equations, 263 (2017), 2247-2279. doi: 10.1016/j.jde.2017.03.044. Google Scholar S. Zhou and M. Zhao, Fractal dimension of random attractor for stochastic non-autonomous damped wave equation with linear multiplicative white noise, Discrete Contin. Dyn. Syst., 36 (2017), 2887-2914. Google Scholar Xiaoming Wang. Upper semi-continuity of stationary statistical properties of dissipative systems. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 521-540. doi: 10.3934/dcds.2009.23.521 Delin Wu and Chengkui Zhong. Estimates on the dimension of an attractor for a nonclassical hyperbolic equation. Electronic Research Announcements, 2006, 12: 63-70. Nguyen Huy Tuan, Donal O'Regan, Tran Bao Ngoc. Continuity with respect to fractional order of the time fractional diffusion-wave equation. Evolution Equations & Control Theory, 2019, 0 (0) : 0-0. doi: 10.3934/eect.2020033 Anhui Gu, Bixiang Wang. Asymptotic behavior of random fitzhugh-nagumo systems driven by colored noise. Discrete & Continuous Dynamical Systems - B, 2018, 23 (4) : 1689-1720. doi: 10.3934/dcdsb.2018072 Zhaojuan Wang, Shengfan Zhou. Random attractor and random exponential attractor for stochastic non-autonomous damped cubic wave equation with linear multiplicative white noise. Discrete & Continuous Dynamical Systems - A, 2018, 38 (9) : 4767-4817. doi: 10.3934/dcds.2018210 Wafa Hamrouni, Ali Abdennadher. Random walk's models for fractional diffusion equation. Discrete & Continuous Dynamical Systems - B, 2016, 21 (8) : 2509-2530. doi: 10.3934/dcdsb.2016058 Zhaojuan Wang, Shengfan Zhou. 
Existence and upper semicontinuity of random attractors for non-autonomous stochastic strongly damped wave equation with multiplicative noise. Discrete & Continuous Dynamical Systems - A, 2017, 37 (5) : 2787-2812. doi: 10.3934/dcds.2017120 Tomás Caraballo, José A. Langa, James C. Robinson. Stability and random attractors for a reaction-diffusion equation with multiplicative noise. Discrete & Continuous Dynamical Systems - A, 2000, 6 (4) : 875-892. doi: 10.3934/dcds.2000.6.875 Junyi Tu, Yuncheng You. Random attractor of stochastic Brusselator system with multiplicative noise. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2757-2779. doi: 10.3934/dcds.2016.36.2757 Shengfan Zhou, Min Zhao. Fractal dimension of random attractor for stochastic non-autonomous damped wave equation with linear multiplicative white noise. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2887-2914. doi: 10.3934/dcds.2016.36.2887 Anhui Gu, Boling Guo, Bixiang Wang. Long term behavior of random Navier-Stokes equations driven by colored noise. Discrete & Continuous Dynamical Systems - B, 2017, 22 (11) : 0-0. doi: 10.3934/dcdsb.2020020 Hong Lu, Mingji Zhang. Dynamics of non-autonomous fractional Ginzburg-Landau equations driven by colored noise. Discrete & Continuous Dynamical Systems - B, 2017, 22 (11) : 0-0. doi: 10.3934/dcdsb.2020072 Samia Challal, Abdeslem Lyaghfouri. Hölder continuity of solutions to the $A$-Laplace equation involving measures. Communications on Pure & Applied Analysis, 2009, 8 (5) : 1577-1583. doi: 10.3934/cpaa.2009.8.1577 María Astudillo, Marcelo M. Cavalcanti. On the upper semicontinuity of the global attractor for a porous medium type problem with large diffusion. Evolution Equations & Control Theory, 2017, 6 (1) : 1-13. doi: 10.3934/eect.2017001 Dalibor Pražák. Exponential attractor for the delayed logistic equation with a nonlinear diffusion. Conference Publications, 2003, 2003 (Special) : 717-726. 
doi: 10.3934/proc.2003.2003.717 Jun Shen, Kening Lu, Bixiang Wang. Convergence and center manifolds for differential equations driven by colored noise. Discrete & Continuous Dynamical Systems - A, 2019, 39 (8) : 4797-4840. doi: 10.3934/dcds.2019196 Monica Conti, Filippo Dell'Oro, Vittorino Pata. Nonclassical diffusion with memory lacking instantaneous damping. Communications on Pure & Applied Analysis, 2020, 19 (4) : 2035-2050. doi: 10.3934/cpaa.2020090 Roman Chapko, B. Tomas Johansson. An alternating boundary integral based method for a Cauchy problem for the Laplace equation in semi-infinite regions. Inverse Problems & Imaging, 2008, 2 (3) : 317-333. doi: 10.3934/ipi.2008.2.317 Claude Bardos, François Golse, Ivan Moyano. Linear Boltzmann equation and fractional diffusion. Kinetic & Related Models, 2018, 11 (4) : 1011-1036. doi: 10.3934/krm.2018039 Boris P. Belinskiy, Peter Caithamer. Energy estimate for the wave equation driven by a fractional Gaussian noise. Conference Publications, 2007, 2007 (Special) : 92-101. doi: 10.3934/proc.2007.2007.92 HTML views (180) Renhai Wang Yangrong Li Bixiang Wang
CommonCrawl
The effect of spatial variables on the basic reproduction ratio for a reaction-diffusion epidemic model

Tianhui Yang 1, Ammar Qarariyah 2 and Qigui Yang 1

1 School of Mathematics, South China University of Technology, Guangzhou, Guangdong 510640, China
2 Department of Mathematics and Statistics, Arab American University, 240 Jenin 13, Zababdeh, Palestine

Received: January 2021; Revised: May 2021; Early access: June 2021. doi: 10.3934/dcdsb.2021170

Fund Project: Our research was supported by the National Natural Science Foundation of China (12071151) and the Natural Science Foundation of Guangdong Province (2021A1515010052).

In this paper, we study the influence of spatial-dependent variables on the basic reproduction ratio ($ \mathcal{R}_0 $) for a scalar reaction-diffusion equation model. We first investigate the principal eigenvalue of a weighted eigenvalue problem and show the influence of spatial variables. We then apply these results to study the effect of spatial heterogeneity and dimension on the basic reproduction ratio for a spatial model of rabies. Numerical simulations also reveal the complicated effects of the spatial variables on $ \mathcal{R}_0 $ in two dimensions.
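The abstract's notion of $ \mathcal{R}_0 $ as a principal eigenvalue can be illustrated numerically. The sketch below is an illustrative assumption, not the authors' code: it discretizes a 1-D analogue of the weighted eigenvalue problem, taking $ \mathcal{R}_0 $ as the largest eigenvalue of $(-d\Delta + \alpha)^{-1}\beta$ with Neumann boundary conditions; the constant-coefficient values echo those in Figure 1 ($ \beta = 0.2192 $, $ \alpha = 0.2 $), for which the ratio reduces to $\beta/\alpha$.

```python
import numpy as np

def basic_reproduction_ratio(beta, alpha, d, n=200):
    """Largest eigenvalue of (-d*Laplacian + alpha)^{-1} * beta on [0, 1]
    with Neumann boundary conditions (second-order finite differences).
    beta, alpha: length-n arrays (transmission and removal rates)."""
    h = 1.0 / (n - 1)
    L = np.zeros((n, n))
    for i in range(1, n - 1):
        L[i, i - 1], L[i, i], L[i, i + 1] = 1.0, -2.0, 1.0
    L[0, 0], L[0, 1] = -1.0, 1.0        # one-sided Neumann stencil (annihilates constants)
    L[-1, -1], L[-1, -2] = -1.0, 1.0
    A = np.diag(alpha) - (d / h**2) * L  # discretization of -d u'' + alpha u
    eigvals = np.linalg.eigvals(np.linalg.solve(A, np.diag(beta)))
    return float(np.max(eigvals.real))

# Homogeneous case: R0 reduces to beta/alpha = 0.2192/0.2 = 1.096
n = 200
r0 = basic_reproduction_ratio(np.full(n, 0.2192), np.full(n, 0.2), d=0.1)
print(r0)  # ~1.096
```

With spatially varying `beta` (e.g., the cosine-modulated profiles of Figure 1), the same generalized eigenvalue computation shows how heterogeneity shifts $ \mathcal{R}_0 $ away from the homogeneous value.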
Keywords: basic reproduction ratio, reaction-diffusion equation, spatial heterogeneity, two-dimensions, epidemic model.

Mathematics Subject Classification: Primary: 92D30, 35K57.

Citation: Tianhui Yang, Ammar Qarariyah, Qigui Yang. The effect of spatial variables on the basic reproduction ratio for a reaction-diffusion epidemic model. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021170
Figure 1. The relation of $ \mathcal{R}_0 $ with different disease transmission $ \beta $ in $ 2D $. (a-c) $ \mathcal{R}_0 $ in $ 2D $ versus $ c_1 $ and $ c_2 $, where $ \beta = 0.2192(1+c_{1}\cos(\pi x_1))(1+c_{2}\cos(\pi x_2)) $ in (a-c), $ \alpha = 0.2 $ in (a), $ \alpha(x_1,x_2) = 0.2(1+0.5\cos(\pi x_1))(1+0.5\cos(\pi x_2)) $ in (b), and $ \alpha(x_1,x_2) = 0.2(1+\cos(\pi x_1))(1+\cos(\pi x_2)) $ in (c).
(d) Comparison between $ \mathcal{R}_0 $ versus $ c_1 $ in $ 1D $ and that in $ 2D $, where $ \beta = 0.2192(1+c_{1}\cos(\pi x_1))(1+c_{2}\cos(\pi x_2)) $ for the $ 2D $ case, $ 0\leq c_{i} \leq 1 $, $ i = 1,2 $, and $ \beta = 0.2192(1+c_{1}\cos(\pi x_1)) $ for the $ 1D $ case (that is, $ \beta $ is independent of $ x_2 $), and $ \alpha = 0.2 $ in both dimensions.

Figure 2. The optimal vaccine strategy over a two-dimensional environment. (a) $ \mathcal{R}_0 $ versus $ L_1 $ and $ L_2 $ under $ c_0 = 0.61 $. (b) The lowest $ \mathcal{R}_0 $ versus $ L_1 $ under $ c_0 = 0.61 $. (c-d) The disease transmission $ \beta $ before and after taking the optimal vaccine strategy, respectively. The boundary of the vaccination region is highlighted (bold red lines).
Latitudinal dependence on the frequency of Pi2 pulsations near the plasmapause using THEMIS satellites and Asian-Oceanian SuperDARN radars

Mariko Teramoto 1, Nozomu Nishitani 2, Yukitoshi Nishimura 3 & Tsutomu Nagatsuma 4

Earth, Planets and Space, volume 68, Article number: 22 (2016)

We herein describe a harmonic Pi2 wave that started at 09:12 UT on August 19, 2010, with data that were obtained simultaneously at 19:00–20:00 MLT by three mid-latitude Asian-Oceanian Super Dual Auroral Radar Network (SuperDARN) radars (Unwin, Tiger, and Hokkaido radars), three Time History of Events and Macroscale Interactions during Substorms (THEMIS) satellites (THEMIS A, THEMIS D, and THEMIS E), and ground-based magnetometers at low and high latitudes. All THEMIS satellites, which were located in the plasmasphere, observed Pi2 pulsations dominantly in the magnetic compressional (B //) and electric azimuthal (E A) components, i.e., the fast-mode component. The spectrum of Pi2 pulsations in the B // and E A components contained two spectral peaks at approximately 12 to 14 mHz (f 1, fundamental) and 23 to 25 mHz (f 2, second harmonic). The Poynting flux derived from the electric and magnetic fields indicated that these pulsations were waves propagating earthward and duskward. Doppler variations (V) from the 6-s or 8-s resolution camping beams of the Tiger and Unwin SuperDARN radars, which are associated with Pi2 pulsations in the eastward electric field component in the ionosphere, observed Pi2 pulsations within and near the footprint of the plasmapause, whose location was estimated by the THEMIS satellites. The latitudinal profile of f 2 power normalized by f 1 power for Doppler velocities indicated that the enhancement of the normalized f 2 power was the largest near the plasmapause at an altitude-adjusted corrected geomagnetic (AACGM) latitude of 60° to 65°.
Based on these features, we suggest that compressional waves propagate duskward away from the midnight sector, where the harmonic cavity mode is generated.

Pi2 pulsations with a period of 40 to 150 s and damping waveforms usually occur at the onset of a substorm (Keiling and Takahashi 2011). These pulsations can be observed over a wide latitudinal range on the ground within the nightside sector. Ground magnetometer data along one meridian on the nightside have shown that Pi2 pulsations have a common period at latitudes lower than the plasmapause position. The latitudinal profiles of phase and amplitude show an H-component 180° phase shift across the plasmapause position and an H-component amplitude maximum equatorward of the plasmapause position (Yeoman and Orr 1989). Observations taken with ground magnetometers on the nightside have revealed that the plasmapause, where the phase and amplitude change, is the transition region between high- and low-latitude Pi2 pulsations.

Cavity mode resonance (Saito and Matsushita 1968) in the plasmasphere has been proposed as a possible generation mechanism for the mid- and low-latitude Pi2 pulsations. This mode is excited by impulsive fast-mode waves, which propagate earthward in the magnetic equatorial plane from the magnetotail at the substorm onset. After reaching the plasmasphere, the fast-mode waves propagate back and forth between the inner and outer boundaries (i.e., the ionosphere and the plasmapause) and establish standing waves in the plasmasphere. In this scenario, the plasmapause plays an important role as the outer boundary in establishing standing waves in the plasmasphere. By using the magnetic field data from the equatorially orbiting Active Magnetospheric Particle Tracer Explorer (AMPTE)/Charge Composition Explorer (CCE) satellite and data from the Kakioka (KAK), Japan, ground station located at L = 1.23, Takahashi et al.
(1995) statistically investigated the spatial characteristics of Pi2 pulsations in the inner magnetosphere and found that Pi2 pulsations in the compressional (B //) and radial (B R) components, which have high coherence with those observed in the northward (H) component on the ground on the nightside, are primarily observed on the nightside at L < 4. They assumed cavity mode resonance within the plasmapause, which was located at L ≅ 4 in their observations. To clarify the relationships between Pi2 pulsations and the plasmapause in more detail, Takahashi et al. (2003a) studied the radial structure of the amplitude and cross-phase between Pi2 pulsations in the inner magnetosphere and on the ground as well as their dependence on the plasmapause position by using electron density and electric and magnetic field data obtained from the Combined Release and Radiation Effects Satellite (CRRES). They showed that the E A-H cross-phase is approximately 90° at all distances in the plasmasphere even near the plasmapause, whereas the B //-H cross-phase clusters at either 0° or 180° near the plasmapause. These radial properties of amplitude and cross-phase imply that the fundamental cavity mode resonance is excited in the plasmasphere. The power spectrum of ground Pi2 pulsations at low latitudes often contains up to four harmonics (Lin et al. 1991; Nosé 1999; Cheng et al. 2000). These harmonic Pi2 pulsations have been attributed to the cavity mode resonance with higher harmonics. Satellites orbiting in the inner magnetosphere can detect the signatures of multifrequency Pi2 pulsations in the electric and magnetic fields (Denton et al. 2002). The nodal structure for second-harmonic Pi2 pulsations was investigated in detail by Takahashi et al. (2003b) who employed the electric and magnetic field data obtained by the CRRES and ground magnetic field data at KAK. 
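The plasmapause at L ≅ 4 quoted above maps to the ground near 60° magnetic latitude, which can be checked with the centered-dipole relation r = L cos²λ. This is only a back-of-the-envelope approximation (the studies above use empirical field models such as T96 for actual field-line mapping):

```python
import math

def invariant_latitude(L):
    """Footprint magnetic latitude (degrees) of a dipole field line
    crossing the equator at L Earth radii: cos^2(lat) = 1/L."""
    return math.degrees(math.acos(math.sqrt(1.0 / L)))

print(invariant_latitude(4.0))  # ~60 deg: a plasmapause at L ~ 4 maps near 60 deg
print(invariant_latitude(6.6))  # ~67 deg: geosynchronous orbit, for comparison
```

The ~60° value is consistent with the latitude range (AACGM 60° to 65°) where the abstract reports the strongest second-harmonic enhancement.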
They found that the CRRES spectra of Pi2 pulsations in the E A and B Z components have a single peak at the fundamental (f 1) and second (f 2) frequencies, respectively, whereas those of Pi2 pulsations in the H component at KAK exhibited a peak at both f 1 and f 2. They also showed the phase relationship among the field components: at f 1, the phase of E A relative to H was −90°, and at f 2, the phase of B // relative to H was −180°. The spectral and phase features of the electric and magnetic fields from the satellite provide evidence for harmonic cavity mode resonance in the plasmasphere. The radial profile of the harmonic cavity mode resonance was shown by Luo et al. (2011), who used data from the Time History of Events and Macroscale Interactions during Substorms (THEMIS) multisatellite observations and low-latitude ground observations, which were performed at approximately the same longitude on the nightside. These data also provide evidence for the harmonic cavity mode resonance, which is confined in the plasmasphere.

The spatial structure of ultra-low-frequency (ULF) waves in the ionospheric electric field can be investigated by using the high-frequency (HF) Super Dual Auroral Radar Network (SuperDARN) radars (Ponomarenko et al. 2003). These radars emit oblique HF signals over large areas and receive ionospheric scatter (IS) and ground/sea scatter (GS) echoes, which are backscattered from decameter-scale electron density irregularities in the E or F region ionosphere and from irregularities on the ground or sea surface, respectively. Gjerloev et al. (2007) first reported that sub-auroral Pi2 pulsations detected by the Wallops radar were highly correlated with those in the magnetic field from a nearby ground station. Modeling of the event indicated that the predominantly shear Alfvén mode from an altitude of 1000 km provides amplitudes and phase relations that agree with the observations.
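Cross-phase values such as those quoted from Takahashi et al. (2003b) are typically estimated from cross-spectra. A minimal, self-contained illustration with synthetic signals (not the event data): with H = cos(ωt) and E A lagging H by 90°, the phase of the cross-spectrum E A·H* at the wave frequency returns −90°.

```python
import numpy as np

fs, n, f0 = 1.0, 1000, 0.02                  # 1-s sampling, 20-mHz "Pi2" tone
t = np.arange(n) / fs
H = np.cos(2 * np.pi * f0 * t)               # ground north component (synthetic)
EA = np.cos(2 * np.pi * f0 * t - np.pi / 2)  # azimuthal E, lagging H by 90 deg

k = int(round(f0 * n / fs))                  # FFT bin of f0 (exact bin here)
cross = np.fft.rfft(EA)[k] * np.conj(np.fft.rfft(H)[k])
phase_deg = np.degrees(np.angle(cross))
print(phase_deg)  # ~ -90: phase of E_A relative to H
```

In practice the tone does not sit exactly on an FFT bin, so windowing and averaging over segments are used before reading off the cross-phase; the sign convention (E A relative to H) is the one used in the text.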
By using data obtained by the Blackstone SuperDARN radar and nearby ground magnetometers, which were located in the pre-midnight sector of the sub-auroral region, Frissell et al. (2011) studied the fine spatial and temporal details of radar data at the ionospheric projection of the plasmapause and found evidence of field-line compressions. By comparing these Pi2 pulsations with the earthward-moving bursty bulk flows (BBFs) observed by the THEMIS E and D satellites, Frissell et al. (2011) concluded that these compressions provide support for a BBF-driven Pi2 model, an alternative generation mechanism for mid- and low-latitude Pi2 pulsations that was first proposed by Kepko and Kivelson (1999). In the BBF-driven Pi2 model, quasiperiodic braking of earthward-flowing BBFs from the magnetotail produces compressional waves in the inner magnetosphere, which directly drive Pi2 pulsations from high to low latitudes. These compression features can be detected only by radars, whose range-gate cells provide significantly better spatial resolution along the line of sight.

Although radar observations are capable of investigating the spatial structure of Pi2 pulsations at mid latitudes (36°–60°), only a few studies have used the SuperDARN radars to investigate Pi2 pulsations at these latitudes (Ponomarenko et al. 2003; Gjerloev et al. 2007; Frissell et al. 2011; Teramoto et al. 2014). To investigate generation mechanisms of Pi2 pulsations, multipoint observations are needed because Pi2 pulsations are globally excited in the mid and low latitudes. In particular, a comprehensive survey of Pi2 pulsations near the plasmapause is important because the plasmapause is the transitional region for the cavity mode resonance (Yeoman et al. 1991).
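Before radar Doppler velocities, ground magnetometer data, and satellite fields can be compared, the time series are commonly restricted to the Pi2 band (periods of 40–150 s, i.e., roughly 6.7–25 mHz). A simple FFT-mask band-pass on hypothetical 1-s data illustrates the idea; this is an illustrative sketch, not the authors' processing chain.

```python
import numpy as np

def pi2_bandpass(x, dt, f_lo=1.0 / 150.0, f_hi=1.0 / 40.0):
    """Zero all spectral components outside the Pi2 band (40-150 s periods)."""
    spec = np.fft.rfft(x)
    freq = np.fft.rfftfreq(len(x), d=dt)
    spec[(freq < f_lo) | (freq > f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

# A 100-s wave (in band) mixed with a 10-s wave (out of band), 1-s sampling:
t = np.arange(4000.0)
x = np.sin(2 * np.pi * t / 100.0) + 0.5 * np.sin(2 * np.pi * t / 10.0)
y = pi2_bandpass(x, dt=1.0)
# y recovers the 100-s component; the 10-s component is removed
```

A hard spectral mask rings on real, non-periodic records; tapering the record and the band edges (or an IIR band-pass) is the usual refinement, but the band definition is the same.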
In this study, we focus on a mid-latitude Pi2 pulsation event that occurred around the plasmapause at 09:12 UT on August 19, 2010, which we evaluated with data obtained simultaneously on the duskside by three THEMIS satellites (THEMIS A (THA), THEMIS D (THD), and THEMIS E (THE)) as well as by the mid-latitude Unwin, Tiger, and Hokkaido SuperDARN radars and ground magnetometer stations. Based on these observations, we compared the spectral properties of the Pi2 pulsations and found that they had both fundamental and second-harmonic frequencies. The enhancement of the power spectral density of the second harmonic was radially localized in a small area at mid latitudes near the plasmapause. We will show that the Pi2 pulsations were excited by propagating fast-mode waves, which were generated by the cavity mode resonance on the nightside with a harmonic structure. The remainder of this paper is organized as follows. In the "Data sets" section, we present the data sets used in this study. In the "Observations" section, we present the results for the Pi2 pulsations that occurred at the substorm onset on August 19, 2010. We then discuss the generation mechanisms of this event in the "Discussion" section and present a summary in the "Summary" section.

Data sets

The data used in this study were obtained by three THEMIS satellites (THA, THD, and THE) (Angelopoulos et al. 2008); three mid-latitude SuperDARN radars (Greenwald et al. 1995; Chisham et al. 2007) that are installed in Hokkaido, Japan (Hokkaido (HOK) radar, altitude-adjusted corrected geomagnetic (AACGM) coordinates 36.5°, −145.3°), Unwin, New Zealand (Unwin (UNW) radar, AACGM coordinates −54.4°, −106.3°), and Tasmania, Australia (Tiger (TIG) radar, AACGM coordinates 36.5°, −145.3°); and from ground stations in East Asia and Oceania, which were located approximately along the 210° magnetic meridian.
Figure 1 shows the field of view of the three SuperDARN radars, the footprints of the THEMIS satellites, and the locations of ground stations on an AACGM map (Baker and Wing 1989). The footprints of the satellites were mapped to a height of 100 km in the ionosphere along the Earth's magnetic field by using the Tsyganenko 96 model (see Tsyganenko 1996). The field of view of the radars, the footprints of the THEMIS satellites, and the ground stations were located on the duskside at 19:00–21:00 MLT; the substorm started at 09:12 UT on August 19, 2010.

Field of view of the Hokkaido, Tiger, and Unwin radars, the footprints of the THEMIS A, D, and E satellites at an altitude of 100 km, and geomagnetic stations at 09:12 UT on August 19, 2010. Shaded regions of the field of view indicate locations of echoes with Pi2 frequencies. Satellite footprints were calculated by using the T96 geomagnetic field model

THEMIS satellites

The THEMIS mission comprises five satellites (THEMIS A through THEMIS E) with elliptic orbits, which were launched on February 17, 2007. The mission consists of several stages with different orbital configurations. In the present study, Pi2 pulsations were observed during stage 12 of the "Dusk Phase." During this stage, the apogee (perigee) of the three satellites THA, THD, and THE was approximately 12 R E (1.5 R E) and was located on the dusk (dawn)side of the magnetosphere. For the data analyses, we used the magnetic and electric field data obtained by the fluxgate magnetometer (FGM) and the electric field instrument (EFI) onboard the THEMIS satellites with a 3-s spin period resolution. The spin-axis component of the electric field was derived by assuming that no electric field exists along the background magnetic field (E · B = 0).
By using the International Geomagnetic Reference Field (IGRF) model, the magnetic and electric field data were converted to the local magnetic (LMG) coordinate system, where e // is a unit vector parallel to the model magnetic field, e A, pointing eastward, is the unit azimuthal component perpendicular to the e // vector and the position vector of the satellite, and e R, pointing radially outward, is the unit radial component parallel to e A × e //.

SuperDARN radars

The SuperDARN is operated as an international radar network project for investigating the upper atmosphere and ionosphere; in 2015, the network comprised 22 radar sites in the Northern Hemisphere and 11 radar sites in the Southern Hemisphere. In the standard operating mode, each radar sweeps through 16 beam directions that are azimuthally separated by 3.24°, giving a total azimuthal field of view of 52°. Each beam consists of 75 or 110 range gates, where the first gate is 180 km in length and the others are 45 km in length. The SuperDARN radars transmit HF signals with oblique incidence and receive the HF signals that are backscattered, with Doppler shifts, by small-scale ionospheric irregularities at E or F region heights or by the sea/ground surface. A number of operating modes are possible with the SuperDARN radars. Normally, each radar scans all beams once every minute in the normalscan mode. In this study, we used the Doppler velocity measurements taken during the themisscan mode by the three mid-latitude SuperDARN radars. In the themisscan mode, the radar provides high temporal resolution for a single beam, which is called the camping beam. The camping beam of beam 4 (beams 14 and 4) at the HOK (UNW and TIG) radar is sampled every 8 (6) seconds, which is suitable for analyzing Pi2 pulsations with periods of 40–150 s. The locations of the camping beams are shown as meshed areas in Fig. 1.
The electric field of ULF waves at ionospheric heights can be detected as Doppler velocity variations in both IS and GS echoes. In the IS echoes, the radar measures the line-of-sight component of the E × B drift velocities, which are produced by the ULF electric field E in the background magnetic field B 0. The east–west electric field component primarily affects the Doppler velocity variations in the IS echoes of the camping beams at the HOK, UNW, and TIG radars because all camping beams point approximately toward the geomagnetic poles. According to previous studies (e.g., Bourdillon et al. 1989), ULF variations in the Doppler velocities of sea scatter echoes are primarily the result of vertical motions due to E × B drift.

Ground magnetometers

To compare the variations of the Doppler velocities with ground Pi2 pulsations, we used the geomagnetic field data obtained by the five ground stations listed in Table 1. In this paper, three-letter codes are used to identify the ground stations. Two of the five stations, namely, Macquarie (MCQ) and Dumont d'Urville (DRV), were located in the Southern Hemisphere. All geomagnetic field data had time resolutions of 1 s. The H (D) component is northward (eastward) in geographic coordinates.

Table 1 Coordinates of ground stations

Magnetospheric conditions

The 3-h Kp index was less than 1+ all day before August 19, 2010, indicating that the magnetosphere remained in a quiet and steady state. Figure 2a shows the provisional auroral electrojet (AL) index from 09:00 to 10:00 UT on August 19, 2010. The decrease from −70 to −250 nT in the AL index indicates that the substorm onset occurred at 09:12 UT, as represented by the vertical dashed line. To show that Pi2 pulsations occurred globally at low latitudes, Fig. 2b, c illustrates the time series plots of the H component at Honolulu (HON) and Memanbetsu (MMB) at low latitudes during the interval between 09:00 and 10:00 UT.
The substorm onset can also be identified by the positive bay in the geomagnetic field data at HON, which was located at midnight. Clear Pi2 pulsations appeared simultaneously at the substorm onset at HON and MMB.

a Auroral electrojet AL index, b Honolulu (HON) data, and c Memanbetsu (MMB) data for the H component from 09:00 to 10:00 UT on August 19, 2010. The vertical dashed lines indicate the substorm onset at 09:12 UT on August 19, 2010

The electric field experiments on the THEMIS satellites allowed us to estimate the electron number densities and to determine the location of the plasmapause. Figure 3 shows the electron number density derived from the THEMIS spacecraft potential data plotted as a function of AACGM latitude in the ionosphere at a height of 100 km, which was determined by use of the Tsyganenko 96 model. The electron number density increased steeply around 65.5° AACGM latitude, which was the location of the plasmapause. During the time intervals of Pi2 pulsations observed by the satellites, which are indicated by the horizontal lines in Fig. 3, all three satellites were located in the plasmasphere.

Radial profile of plasma density estimated from the spacecraft potential data of THEMIS on August 19, 2010. Horizontal bars indicate intervals of Pi2 pulsations observed by the THEMIS satellites

Satellite observations of Pi2 pulsations

Figure 4c shows the perturbations of the electric and magnetic fields from the satellites together with the simultaneous observations at the low-latitude MMB station, when THA (THD and THE) was located in the Northern (Southern) Hemisphere pre-midnight sector, as shown in Fig. 4a, b, at 09:12 UT. THD and THE were very close to each other during the Pi2 pulsations. For all three satellites, the E R, E A, and B // components exhibited oscillations in the Pi2 frequency band with slightly different waveforms from the Pi2 pulsations in the H component at MMB.
The Pi2 pulsation did not appear clearly in the B R and B A components. Note that the regular oscillations with periods of 1 min in the B A component were artificial. The E R, E A, and B // components oscillated with periods of 40 and 70 s throughout the event, whereas the Pi2 pulsations at MMB oscillated with a dominant period of 70 s. The amplitude of the 40-s oscillations was much smaller than that of the 70-s oscillations. Oscillations in the B // and E R components were in phase relative to those on the ground, whereas the Pi2 pulsations in the E A component had an out-of-phase relation with those on the ground. These phase characteristics indicate that the Pi2 pulsations at the satellites were not standing waves but rather propagating waves.

Locations of the THEMIS satellites at 09:12 UT on August 19, 2010, in a the X-Y plane and b the X-Z plane in the solar magnetic (SM) coordinate system. c Time series of the electric field and magnetic field data from THEMIS and the geomagnetic field in the H component of MMB

Figure 5 shows the spectral properties of the event. The electric and magnetic field data from the satellites were linearly interpolated and resampled at the same 1-s time rate as the geomagnetic field data. After subtracting a 300-s boxcar average from the original data to remove long-term variations (the residuals are denoted dH, dB, and dE), a fast Fourier transform (FFT) was applied to dH, dB, and dE in the time interval from 09:11 to 09:19 UT (512 data points), and three-point smoothing was conducted in the frequency domain, providing a Nyquist frequency of 500 mHz and a frequency resolution of 1.9 mHz. The upper left, middle, and right panels of Fig. 5 indicate the spectral properties of the Pi2 pulsations in the electric and magnetic fields observed by the THA, THD, and THE satellites, respectively.
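The detrend-FFT-smoothing chain described above can be sketched as follows. This is a minimal illustration, assuming a 1-s cadence and equal-weight three-point smoothing; the exact window shape and spectral normalization used by the authors are not specified in the text.

```python
import numpy as np

def pi2_spectrum(x, dt=1.0, boxcar_s=300, n=512):
    """Detrend with a running boxcar average, then FFT and 3-point smooth.

    Mirrors the processing described in the text: a 300-s boxcar is
    subtracted to remove long-term variations, an FFT is applied to
    n = 512 points sampled at dt = 1 s (Nyquist 500 mHz, resolution
    ~1.9 mHz), and the spectrum is smoothed over three frequency bins.
    """
    w = int(boxcar_s / dt)
    trend = np.convolve(x, np.ones(w) / w, mode="same")
    dx = (x - trend)[:n]
    freq = np.fft.rfftfreq(n, d=dt)                  # 0 .. 0.5 Hz
    psd = 2.0 * dt * np.abs(np.fft.rfft(dx)) ** 2 / n
    psd = np.convolve(psd, np.ones(3) / 3, mode="same")  # 3-point smoothing
    return freq, psd
```

For a 1-s cadence and 512 points, the frequency bins are spaced by 1/512 Hz ≈ 1.95 mHz, matching the stated resolution.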
The power spectral densities for the dE A, dE R, and dB // variations from the satellites and the dH variations at MMB, denoted S (dEA)(f), S (dER)(f), S (dB//)(f), and S (dH)(f), exhibited clear peaks around 12 to 14 mHz (f 1). At 23 to 25 mHz (f 2), S (dEA), S (dER), S (dB//), and S (dH) had their second highest peaks. The total S(f 2)/S(f 1) power ratio for the dE A, dE R, dB //, and dH components was lower than 0.7. At both f 1 and f 2, the dE A–dH, dE R–dH, and dB //–dH coherences were nearly perfect (approximately 1.0), and the cross-phase spectra showed that dH was in phase with dB // and dE R but out of phase with dE A.

Spectral properties of the dB //, dE A, and dE R components for the THEMIS satellites and the dH component for MMB. (top) Power spectral density. (middle) Coherence. (bottom) Cross-phase given in degrees

In order to identify the direction of the wave energy propagation, we calculated the Poynting flux P by using the following formula:

$$ \mathbf{P}=\frac{\mathrm{d}\mathbf{E}\times \mathrm{d}\mathbf{B}}{\mu_0} $$

Figure 6 shows the Poynting flux in the radial, azimuthal, and parallel components (positive radially outward, duskward, and northward, respectively). The transverse components exhibited non-zero mean values during the Pi2 interval. The negative P A and P R components imply that the wave energy propagated westward and earthward. These features were inconsistent with the cavity mode model, in which the Poynting flux in the radial component has a mean value close to zero because the cavity mode consists of standing fast-mode waves. The amplitude of the Poynting flux in the parallel component was smaller than that in the transverse components. The sign of P // from THA was positive, whereas the signs of P // from THE and THD were negative. This means that the electromagnetic energy flux was directed away from the magnetic equator and toward both hemispheres.
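In SI units, the Poynting flux formula above amounts to a cross product scaled by the vacuum permeability. A minimal sketch follows; the field values are illustrative only, not taken from the event.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # vacuum permeability, H/m

def poynting_flux(dE, dB):
    """Poynting flux P = dE x dB / mu0 for perturbation fields.

    dE in V/m and dB in T, as length-3 vectors (or (..., 3) arrays) in
    field-aligned (radial, azimuthal, parallel) coordinates; returns W/m^2.
    """
    return np.cross(dE, dB) / MU0

# Illustrative fast-mode-like perturbation: an azimuthal electric field
# paired with a compressional magnetic field yields a radial flux.
dE = np.array([0.0, 1.0e-3, 0.0])   # 1 mV/m, azimuthal
dB = np.array([0.0, 0.0, 1.0e-9])   # 1 nT, compressional (parallel)
P = poynting_flux(dE, dB)           # radial component ~ 8e-7 W/m^2
```

A time-averaged radial component near zero would indicate a standing fast-mode (cavity) structure, whereas the non-zero mean transverse components reported above indicate propagating waves.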
Poynting flux in the radial (top), azimuthal (middle), and compressional (bottom) components associated with the Pi2 pulsations observed by the THEMIS satellites on August 19, 2010

SuperDARN observations

Figure 7 shows the time series plots of the line-of-sight Doppler velocities obtained with the UNW radar (beam 14, ranges 3–5 and 14–22; Fig. 7a), the TIG radar (beam 4, ranges 13–27; Fig. 7b), and the HOK radar (beam 4, ranges 21–29 and 53–57; Fig. 7c) on August 19, 2010, at 09:10–09:20 UT. The black arrows indicate the plasmapause locations, which were estimated from the electron number density from the THEMIS satellites, as shown in Fig. 3. The plasmapause was located between ranges 21 and 22 for the UNW radar, and it occurred at latitudes higher than the observation points of the HOK and UNW radars. Furthermore, we present a time series plot of the geomagnetic field variations in the dH component at MCQ in the bottom panels of the UNW and TIG observations and those in the dH component at Paratunka (PTK) in the bottom panels of the HOK observations. These geomagnetic observations were made at almost the same latitudes as the observation points in the field of view of each radar. The amplitudes of the Doppler velocity variations from the HOK radar at lower latitudes were smaller than those at higher latitudes. The periods of the Doppler velocity variations from the HOK radar at ranges 53–56 were almost identical to those of the Pi2 pulsations in the dH component at PTK with a 180° phase difference. Data from the UNW radar showed that the periods of the Doppler variations were much shorter than those of the Pi2 pulsation in the dH component at MCQ, and the largest amplitude was recorded around range 16. At the higher ranges (14–22), the waveforms of the Doppler variations from the UNW radar were almost identical.
In the Doppler velocities from the TIG radar, the periods of the variations were identical at all ranges between 09:13 and 09:15 UT, whereas the periods at higher latitudes (ranges 19–27), which were located near or outside the plasmapause, were longer than those at lower latitudes (ranges 13–18) after 09:15 UT. In the Doppler velocities from both the UNW and TIG radars, the Pi2 pulsations followed those at MCQ. The waveforms of the Doppler variations from the UNW and TIG radars were quite different from those at MCQ.

Time series of Doppler velocities from a beam 14 at ranges 3–5 and 14–22 for UNW, b beam 4 at the range 13–27 for TIG, and c beam 4 at ranges 21–29 and 52–57 for HOK in the upper panels, and the magnetic field data in the dH component at a and b MCQ and c PTK in the bottom panels. The arrows indicate the locations of the plasmapause

Figure 8a–c shows the corresponding power spectral densities S(f) for the Doppler velocity variations (dV) shown in Fig. 7a–c. The plasmapause locations are identified by black arrows. The spectra of dV from the HOK radar, which observed echoes at lower latitudes than the UNW and TIG radars, showed a dominant peak at approximately 14 mHz with a maximum power density near range 53. In comparison, two dominant spectral peaks at f 1 and f 2 appeared in the S(f) of dV from the UNW radar, as shown in Fig. 8a. The S(f) of dV from beam 14 of the UNW radar exhibited a dominant peak at 24 mHz, which reached a maximum power at range 16, and a smaller peak at 11 to 14 mHz, which achieved maximum power at lower latitudes around range 5. The amplitudes of the 24-mHz signal at ranges 14–22 for the UNW radar were much larger than those of the 11- to 14-mHz and 24-mHz signals for the HOK and TIG radars. The 11- to 14-mHz signal exhibited maximum power at range 15 for the TIG radar, whereas the 23- to 24-mHz signal had a smaller or comparable peak to those of other frequencies.
The S(f) of dV from the TIG radar at ranges 23–27 also showed a single peak in the frequency range from 13 to 18 mHz, which was centered at approximately 15 mHz. The S(f) shapes of dV from the TIG radar at ranges 23–27 were different from those of TIG at ranges 13–18, which had double peaks at f 1 and f 2. These features indicate that the Pi2 pulsations observed at ranges 23–27 for the TIG radar, which were located outside the plasmasphere, were excited by different mechanisms from those at lower latitudes in the plasmasphere.

Power spectral densities of Doppler velocities from a beam 14 at ranges 3–5 and 14–22 for UNW, b beam 4 at the range 13–27 for TIG, and c beam 4 at ranges 21–29 and 52–57 for HOK, as shown in Fig. 7

Figure 9 shows the latitudinal characteristics of the f 1 and f 2 peaks that appeared in the Doppler velocities of the HOK, UNW, and TIG radars. To identify the location of the latitudinal power peak of the f 2 signal, the latitudinal profile of the f 2 peak normalized by the f 1 peak, S(f 2)/S(f 1), was plotted, and the results are shown in Fig. 9a. The vertical dashed lines indicate the plasmapause locations. The latitudinal S(f 2)/S(f 1) peaks greater than two in the Doppler data were localized at 61° to 65° AACGM latitude and were located near the plasmapause. However, the S(f 2)/S(f 1) of dV from the TIG radar at 61° to 65° AACGM latitude was much smaller than that from the UNW radar. This could be attributed to the longitudinal difference between the UNW and TIG radars. The HOK radar, which was located at lower latitudes than the other radars, observed S(f 2)/S(f 1) to be less than two. Figure 9b, c shows the latitudinal profiles of the cross-phase between dV from the radars and dH at MMB where the coherence between dV and dH was high (>0.7) at f 1 and f 2, respectively. The latitudinal phase profiles at both f 1 and f 2 exhibited changes at 55° AACGM latitude.
At lower latitudes, the cross-phase was approximately 180° at both f 1 and f 2, whereas at higher latitudes, the cross-phases were between 0° and 90° (around 90°) at f 1 (f 2).

a Absolute AACGM latitudinal distribution of normalized power, which was defined as S(f 2)/S(f 1), and the cross-phase between the Doppler velocity and the dH component of MMB at b f 1 and c f 2

Ground observations

Figure 10a, b shows the geomagnetic field data in the H and D components on the ground from 09:08 to 09:23 UT. The bottom two panels show the geomagnetic field data in the Southern Hemisphere. All ground stations except for Dumont d'Urville (DRV) were located in the plasmasphere. The waveforms of the Pi2 pulsations at MCQ and DRV were different from those in both the H and D components at MMB, PTK, and Magadan (STC), which were located in the plasmasphere, whereas the Pi2 pulsations in the H component at MMB, PTK, and STC oscillated with the same periods and with slight phase differences.

Time series of geomagnetic field data in a the H component and b the D component at high and low latitudes along almost the same meridian as the radar observations

Figure 11 shows the spectral densities of the geomagnetic field in the dH and dD components. The power spectra at MMB, PTK, and STC had clear peaks in the frequency range from 9 to 18 mHz, and the peaks were centered at approximately 14 mHz. However, the frequency of the peaks in the dH component at MCQ was slightly different from those at lower latitudes. Moreover, the frequency of the Pi2 pulsations at DRV was much lower than those at lower latitudes. These spatial characteristics of the frequencies indicate that the Pi2 pulsations inside the plasmapause were excited by the same source, whereas those outside or close to the plasmapause were excited by different sources.
Power spectral density of the (left) dH and (right) dD components of the geomagnetic field data on the ground

We investigated the spatial characteristics of Pi2 pulsations from data obtained simultaneously by the mid-latitude SuperDARN radars, THEMIS satellites, and ground stations, which were located along the 210° magnetic meridian on the nightside. Our observational results can be summarized as follows:

1. The THA, THD, and THE satellites, which were located in the nightside plasmasphere, observed Pi2 pulsations in the compressional component of the magnetic field (dB //) and in the radial and azimuthal components of the electric field (dE R and dE A, respectively) with dominant frequencies of approximately 12 to 14 mHz (f 1, fundamental) and 23 to 25 mHz (f 2, second harmonic), with almost identical waveforms to the Pi2 pulsations in the H component (dH) at a low-latitude ground station. The cross-phases of the Pi2 pulsations in the dB // and dE R (dE A) components relative to the dH component at low latitude were 0° (180°) at both f 1 and f 2.

2. The Poynting flux derived from the electric and magnetic field data of the satellites indicated that the energy of the Pi2 pulsations propagated duskward and earthward.

3. All radars observed Doppler velocity variations (dV) with multiple dominant frequencies of f 1 and f 2 at latitudes lower than the ionospheric projection of the plasmapause. Outside the plasmapause, dV from the TIG radar had a dominant frequency of 15 mHz.

4. The power of the Pi2 pulsations at f 2 from the UNW radar was maximized at 61°–65° AACGM latitude near the plasmapause.

5. The dV cross-phase relative to the low-latitude dH component was 180° for both f 1 and f 2 at lower latitudes and 0°–90° (around 90°) for f 1 (f 2) at higher latitudes in the plasmasphere.

6. The ground stations observed Pi2 pulsations with frequencies of f 1 and f 2 over a wide range of L in the plasmasphere, whereas the Pi2 pulsations that appeared outside the plasmapause had frequencies lower than either f 1 or f 2.
Discussion

In this section, we compare the results of the present study with those of previous studies of mid-latitude Pi2 pulsations and discuss the generation mechanisms of Pi2 pulsations based on this case study. Latitudinal observations obtained with the mid-latitude SuperDARN radars near the ionospheric projection of the plasmapause can provide new information on the frequency of Pi2 pulsations around the plasmapause. Spectral properties of the Doppler velocity variations from the radars revealed that the plasmaspheric Pi2 pulsations observed on August 19, 2010, in the Doppler velocity data had common dominant frequencies of approximately 12 to 14 mHz (f 1) and 23 to 25 mHz (f 2). While Pi2 pulsations at both f 1 and f 2 were excited over a wide range in the plasmasphere, Doppler velocity perturbations from the TIG radar at ranges 23–27 with a frequency of 15 mHz were localized outside the plasmasphere. Localized perturbations on the nightside for 4 < L < 7 were reported as transient toroidal waves (TTWs) by Takahashi et al. (1996) and Nosé et al. (1998) and as quasiperiodic oscillations (QPOs) by Saka et al. (1996). The TTWs (QPOs) are generated by standing Alfvén waves on individual magnetic field lines associated with substorm onsets. Based on Polar satellite observations, Keiling et al. (2003) demonstrated that TTWs appear outside the plasmasphere. Their results are consistent with the localized perturbations at 15 mHz in our study. Fujita et al. (2002) described the transient behavior of magnetohydrodynamic (MHD) perturbations in the inner magnetosphere by using linear MHD-wave simulations.
By using a magnetic model composed of a dipole magnetic field, the plasmasphere, the ionosphere with Pedersen conductivity, and a free outer boundary, they employed an impulsive eastward magnetospheric current, which was localized at L = 10 around the magnetic equator with a 2-h longitudinal extent around midnight, as a driver of Pi2 pulsations while assuming the substorm current wedge. The linear MHD-wave simulations demonstrated that localized toroidal-mode waves, which are dominant in the magnetic azimuthal and electric radial components, appear simultaneously on the plasmapause and couple to the poloidal mode when global-mode waves are excited in the poloidal component in the plasmasphere. Because of this wave coupling, localized perturbations appeared in the Doppler velocity data obtained by the TIG radar, even though beam 4 of the radar, pointing poleward, could not observe the east–west plasma motions in the ionosphere that are associated with toroidal waves, whose electric field is in the north–south component in the ionosphere. Our observations indicate that the localized perturbations at 15 mHz were excited at higher latitudes outside the plasmasphere by the coupled TTWs when Pi2 pulsations at f 1 and f 2 were simultaneously excited in the plasmasphere by the global mode. Although the power spectra of Pi2 pulsations in the magnetic and electric field data and the Doppler velocity data showed double-peak structures at f 1 and f 2, only the Doppler variations obtained by the mid-latitude UNW and TIG radars showed that the power of the f 2 signal was greater than that of the f 1 signal. The localized f 2 power enhancement observed by the UNW radar can be explained by the harmonic structure of Pi2 pulsations, as suggested by Takahashi et al. (2003b). They showed the radial profile of the fundamental and second harmonics of the cavity mode resonance by using magnetic and electric field data from the CRRES satellite. In Fig. 12, which was adapted from Figure 1 of Takahashi et al.
(2003b), a two-dimensional box-shaped magnetosphere for the cavity mode resonance was considered wherein the magnetic field was (i) straight and uniform, (ii) directed along the z-axis, and (iii) fixed to the northern and southern boundaries (ionosphere) of the box. Moreover, the field lines at the inner and outer boundaries were fixed. Takahashi et al. (2003b) assumed that the mode is generated between rigid boundaries, namely, the inner boundary of the ionosphere (L b1) and the outer boundary of the plasmapause (L b2). For both harmonic modes, a node of E A and an antinode of B // were located at these boundaries, where the amplitudes of E A and B // had a minimum and a maximum, respectively. The radial structure of the harmonic cavity mode had three nodes or antinodes (labeled L n1, L n2, and L n3) between the two boundaries. For the fundamental mode, E A and B // had an antinode and a node at L n2, at which the second-harmonic E A and B // had a node and an antinode, respectively. For the second-harmonic mode, there were two further nodes (antinodes) of B // (E A) at L n1 (located between L b1 and L n2) and at L n3 (located between L n2 and L b2). Considering realistic plasma and magnetic field models, the nodes (antinodes) of both the fundamental and second harmonics at L n1, L n2, and L n3 were located closer to L b2. In reality, the antinodes of the second-harmonic Pi2 pulsations in the E A component at L n1 and L n3 are thus confined to a small region near the plasmapause. From the THEMIS observations shown in Fig. 3, the ionospheric projection of the outer boundary of the plasmapause was located at 65.5° AACGM latitude. The ratio of the second-harmonic power to the fundamental power observed by the UNW radar was largest between 60° and 65° AACGM latitude, close to the plasmapause, whereas the spectrum of the Pi2 pulsations observed at lower latitudes by the HOK radar was enhanced at the fundamental frequency.
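The node/antinode pattern described above can be reproduced with a uniform box cavity in which E A varies as a sine and B // as a cosine of the radial phase. The following is a minimal sketch; the boundary values are illustrative and not fitted to the event.

```python
import numpy as np

def cavity_mode(L, Lb1, Lb2, n):
    """Radial profiles of E_A and B_// for harmonic n in a uniform box cavity.

    With rigid boundaries at Lb1 and Lb2, E_A has nodes and B_// has
    antinodes at both boundaries, matching the description in the text.
    """
    k = n * np.pi / (Lb2 - Lb1)
    EA = np.sin(k * (L - Lb1))     # azimuthal electric field
    Bpar = np.cos(k * (L - Lb1))   # compressional magnetic field
    return EA, Bpar

# Fundamental (n = 1) and second harmonic (n = 2) between illustrative
# boundaries Lb1 = 1.5 and Lb2 = 5.0: at the midpoint (L_n2) the
# fundamental E_A has an antinode while the second-harmonic E_A has a node.
L = np.linspace(1.5, 5.0, 201)
EA1, B1 = cavity_mode(L, 1.5, 5.0, 1)
EA2, B2 = cavity_mode(L, 1.5, 5.0, 2)
```

In this uniform sketch the nodes and antinodes are evenly spaced; with realistic plasma and magnetic field models they crowd toward the outer boundary, as noted above.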
These latitudinal profiles of power were consistent with the radial profile of the harmonic cavity mode structure confined by the plasmapause.

Radial profiles of phase and amplitude in the idealized harmonic cavity mode resonance, adapted from Figure 1 of Takahashi et al. (2003b)

As shown in Fig. 9a, the f 2 power observed by the UNW radar was largest at 60°–65° AACGM latitude, which was close to the plasmapause, although the power of the f 2 signals was much smaller than that of the f 1 signals at the nearby MCQ ground station in Fig. 11. Moreover, a mismatch of spectral shapes between ground magnetometers and radars was reported by Ponomarenko and Waters (2013). The appearance of the f 2 signal enhancement in the radar data but not at MCQ might be explained by the spatial coverage of the instruments. The radar can detect ULF waves in the ionosphere with much better spatial resolution, namely the 45-km range gate cell along the line of sight (LOS). Since the ground magnetometer integrates the signal from a much larger area of the ionosphere, it could have superimposed signals over a larger area, wherein the f 1 peak enhancement would be larger than that of the f 2 peak. Therefore, the f 2 signal enhancement can be detected only by radars because of their higher spatial resolution. Another key feature of harmonic cavity mode resonance is the radial (latitudinal) profile of the cross-phase. By using data from the CRRES satellite in the plasmasphere, Takahashi et al. (2003b) presented the radial profiles of the E A-H and B //-H cross-phases, while assuming that B // at L b1 was in phase with the H component measured on the same magnetic meridian. The fundamental E A phase relative to H was −90° anywhere between L b1 and L b2, whereas the second-harmonic E A-H phase was −90° between L b1 and L n2 and 90° between L n2 and L b2. As shown in Fig.
9b, the latitudinal profile of the V-H cross-phase, which corresponds to the E A-H cross-phase, was 0°–90° at higher latitudes and was out of phase at lower latitudes. The latitudinal profile of the V-H phase was inconsistent with the cavity mode resonance. In addition, the E A-H cross-phase of 180° at f 1 and f 2 from our satellite observations in the plasmasphere did not provide support for the cavity mode model, in which Pi2 pulsations in the E A component were ±90° out of phase with those in the H component at low latitudes. In contrast to the E A component, the B //-H cross-phase of 0° at f 1 and f 2 for THA, THD, and THE in this study was consistent with the observations of Takahashi et al. (1995, 2003b). The B //-H, E A-H, and E R-H phase properties imply that the compressional waves propagate duskward and earthward in the plasmasphere, which was also confirmed by the Poynting flux shown in Fig. 6. These propagating waves have never been reported for magnetic and electric data obtained in the plasmasphere by satellites. To investigate whether Pi2 pulsations propagate, we estimated the phase velocity from the cross-phase between the Pi2 pulsations in the compressional component from THA and THD at f 1 and f 2, which is equivalent to the time delay between Pi2 pulsations. After dB // from THD was linearly interpolated and resampled at the same 3-s time rate as dB // from THA, we applied FFT to dB // in the time interval from 09:11 to 09:19 UT, and three-point smoothing was conducted in the frequency domain. We derived the coherence and cross-phase between THA and THD. The cross-phases of THD relative to THA were −15.2° and 15.7° at f 1 and f 2, respectively, at which the coherence was perfect (approximately 1). The velocities of −1329 km/s and 2424 km/s were derived from the cross-phases and the radial distance between THA and THD (approximately 4121 km). 
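The conversion from cross-phase to apparent phase speed used above can be sketched as follows. The exact f 1 and f 2 values used in the paper lie in the 12–14 and 23–25 mHz bands, so the representative frequencies assumed below reproduce the quoted speeds only approximately.

```python
def phase_velocity(cross_phase_deg, f_hz, separation_km):
    """Apparent phase speed from a cross-phase measurement.

    A cross-phase of phi degrees at frequency f corresponds to a time
    delay dt = phi / (360 f); the apparent speed along the separation
    baseline is then d / dt (a negative sign means propagation toward
    the reference point).
    """
    dt = cross_phase_deg / 360.0 / f_hz   # time delay in seconds
    return separation_km / dt             # km/s

# Cross-phases of THD relative to THA from the text (-15.2 deg at f1,
# 15.7 deg at f2) over the ~4121 km radial separation; f1 = 13 mHz and
# f2 = 24 mHz are assumed representative values.
v1 = phase_velocity(-15.2, 0.013, 4121.0)   # on the order of -1300 km/s
v2 = phase_velocity(15.7, 0.024, 4121.0)    # on the order of +2300 km/s
```

Speeds of this magnitude exceed the plasmaspheric Alfvén speed, which is the basis for concluding that the wave fronts do not propagate radially between the two satellites.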
This result indicates that the wave fronts do not propagate radially because the derived velocities are much larger than the Alfvén speed in the plasmasphere in the magnetic equatorial plane (Moore et al. 1987). We also estimated the azimuthal velocities of Pi2 pulsations from the cross-phase between the Doppler velocity variations observed at almost the same latitudes on beam 14 at range gate 15 from UNW (AACGM Mlat −61.9° and Mlon 252.1°) and on beam 4 at range gate 13 from TIG (AACGM Mlat −62.0° and Mlon 226.1°). We applied FFT to dV in the interval from 09:11 to 09:23 UT (128 data points). The power spectra of dV from TIG and UNW, as well as the coherence and cross-phase of TIG relative to UNW, were derived. The cross-phases of TIG relative to UNW were 113.6° and 191.7° at f 1 and f 2, respectively, at which the power showed peaks and the coherence was high (approximately 1). The wave fronts of Pi2 pulsations propagated westward at 56.2 and 63.3 km/s in the ionosphere, which correspond to duskward propagation at 590 and 650 km/s in the plasmasphere. By using electric and magnetic field data from THEMIS in the inner magnetosphere and low-latitude ground magnetic field data from two stations, Kwon et al. (2012) reported longitudinal variations of Pi2 pulsations associated with fast-mode waves at midnight. Their observations suggest that the Pi2 wave energy is lost as it propagates azimuthally from a longitudinally localized source region. They also suggested that the azimuthal-mode structure takes a longer time to develop than the radial-mode structure because the azimuthal scale size is longer than the radial and north–south length scales in a dipole-like system. The Poynting flux properties from the THEMIS satellites and the westward wave front propagation in this study might be explained by duskward propagating waves from the plasmaspheric cavity mode resonance, which is localized at midnight and has a harmonic structure, with energy leaking earthward and duskward.
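Converting a cross-phase into a propagation velocity only requires the spectral frequency and the spatial separation of the two observation points. The sketch below illustrates the arithmetic; the spectral-peak frequencies used here (13.6 and 25.7 mHz, within the reported f 1 and f 2 bands) and the ~1360 km azimuthal baseline between the two radar cells are our assumptions, chosen for illustration.

```python
def phase_velocity(cross_phase_deg, freq_hz, separation_km):
    """Apparent phase velocity along a baseline: a cross-phase at
    frequency f corresponds to a time delay dt = (phase/360)/f, and
    the velocity is separation/dt (the sign gives the direction)."""
    dt_s = (cross_phase_deg / 360.0) / freq_hz
    return separation_km / dt_s

# Radial THA-THD baseline (~4121 km); cross-phases from the text.
v_f1 = phase_velocity(-15.2, 13.6e-3, 4121.0)  # roughly -1330 km/s
v_f2 = phase_velocity(15.7, 25.7e-3, 4121.0)   # roughly 2430 km/s

# Azimuthal UNW-TIG baseline (assumed ~1360 km at -62 deg latitude).
v_az = phase_velocity(113.6, 13.0e-3, 1360.0)  # roughly 56 km/s westward
```

With these assumed inputs the routine reproduces the magnitudes quoted in the text, confirming that the reported radial "velocities" are far above plasmaspheric Alfvén speeds while the azimuthal speeds are not.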
An alternative generation mechanism for mid-latitude Pi2 pulsations is the bursty bulk flow (BBF)-driven model, wherein periodic BBFs propagating earthward from the magnetotail cause periodic pressure pulses in the inner magnetosphere and generate Pi2 pulsations on the ground. If BBFs generate Pi2 pulsations, both the waveform and the frequency content should be similar between Pi2 pulsations in the plasma flow data in the magnetotail on the nightside and ground magnetometer data on the flankside (Kepko and Kivelson 1999; Kepko et al. 2001). However, in this study, we could not confirm that the Pi2 pulsations were generated by BBFs because there were no observations in the magnetotail. We were, however, able to evaluate the generation mechanisms from the latitudinal variations of the waveforms and the dominant frequency of Pi2 pulsations on the ground and from radars. As shown in Fig. 8, Pi2 pulsations in dV with the dominant frequencies of f 1 and f 2 were confined in the plasmasphere, whereas the dominant frequency observed outside the plasmasphere by the TIG radar was different from f 1 and f 2. In addition, the dominant frequencies of Pi2 pulsations at DRV, the ground station located at the highest latitude, were also different from those observed by the mid- and low-latitude ground stations in the plasmasphere (Fig. 10). These results indicate that Pi2 pulsations inside and outside the plasmasphere were not generated by a common source, such as periodic BBFs in the magnetotail. Therefore, we did not adopt the BBF-driven Pi2 model as a generation mechanism for our observations. Recently, Pi2 pulsations have also been associated with high-speed plasma flows in the plasma sheet other than BBFs.
By using data from the THEMIS satellite and a ground magnetometer, Keiling (2012) presented ballooning-mode plasma perturbations propagating westward in the near-Earth plasma sheet around 9 to 12 R E when westward-traveling Pi2 pulsations appeared simultaneously at high latitudes (>60° magnetic latitude) at the conjugate ground station. They proposed that Pi2 pulsations at high latitudes are generated by the ballooning mode. Auroral observations from the ground and Pi2 magnetic disturbances from THEMIS satellites in the near-Earth plasma sheet showed that most of the wavelike bright spot structures in the aurora moved westward with large azimuthal wave numbers and that their motion was associated with the phase velocities of Pi2 disturbances, supporting the ballooning instability as a source of Pi2 pulsations at high latitudes (Chang and Cheng 2015). Fujita and Tanaka (2013) considered the scenario of Pi2 pulsations at high and low latitudes by investigating the plasma disturbances in the global MHD model of Tanaka et al. (2010) instead of Pi2 signals in the magnetic field. In the model, they proposed that the substorm current wedge (SCW) could not be regarded as the source of Pi2 pulsations in the inner magnetosphere because it was not reproduced. Instead of the SCW, the high-speed earthward flow in the plasma sheet at substorm onset suddenly stopped in the ring current region, in which the ambient magnetic field intensity increased. The inner magnetosphere was then suddenly compressed. This sudden compression launched a compressional MHD wave propagating in the inner magnetosphere, which may have generated the cavity mode resonance. They also suggested that the ballooning instability might trigger Pi2 pulsations. To investigate whether Pi2 pulsations at mid and low latitudes are associated with ballooning-mode Pi2 pulsations at high latitudes and the SCW, further investigations over a wider range of the magnetosphere will be required.
In this study, we compared Pi2 pulsations that were observed simultaneously along almost the 210° magnetic meridian on the duskside during a substorm onset at 09:10 UT on August 19, 2010; the data were obtained from the three Asian-Oceanian SuperDARN radars (Unwin, Tiger, and Hokkaido radars), three THEMIS satellites (THA, THD, and THE), and low- and high-latitude ground stations. All three THEMIS satellites, which were located in the plasmasphere, observed Pi2 pulsations in the compressional magnetic field and in the radial and azimuthal electric fields. Based on the Poynting flux estimated from the magnetic and electric field measurements, the Pi2 energy propagated earthward and duskward. We compared the power spectral densities, S(f), of the Pi2 pulsations and found spectral peaks at 12 to 14 mHz (f 1) and 23 to 25 mHz (f 2) in the plasmasphere. We also investigated the latitudinal profile of S(f 2)/S(f 1) in the Doppler velocity (dV) from all radars and the dV cross-phase relative to the dH component at the low-latitude ground station MMB. The S(f 2)/S(f 1) ratio was largest at 21:00 MLT near the plasmapause, which was estimated from the electron number density derived from the THEMIS satellites. While the cross-phases at lower latitudes (<55°) clustered around 180° at both f 1 and f 2, the cross-phases at higher latitudes (>55°) were distributed between 0° and 90° at f 1 and around 90° at f 2. We conclude that the Pi2 pulsations were generated by compressional waves propagating duskward from a source region in the midnight plasmasphere.

References

Angelopoulos V, Sibeck D, Carlson CW, McFadden JP, Larson D, Lin RP, Bonnell JW, Mozer FS, Ergun R, Cully C, Glassmeier KH, Auster U, Roux A, LeContel O, Frey S, Phan T, Mende S, Frey H, Donovan E, Russell CT, Strangeway R, Liu J, Mann I, Rae J, Raeder J, Li X, Liu W, Singer HJ, Sergeev VA, Apatenkov S, Parks G, Fillingim M, Sigwarth J (2008) First results from the THEMIS mission. Space Sci Rev 141:453–476.
doi:10.1007/s11214-008-9378-4 Baker KB, Wing S (1989) A new coordinate system for conjugate studies at high latitudes. J Geophys Res 94(A7):9139–9143. doi:10.1029/JA094iA07p09139 Bourdillon A, Delloue J, Parent J (1989) Effects of geomagnetic pulsations on the Doppler shift of HF backscatter radar echoes. Radio Sci 24:183–195 Chang T-F, Cheng C-Z (2015) Relationship between wave-like auroral arcs and Pi2 disturbances in plasma sheet prior to substorm onset. Earth Planet Space 67:168 Cheng C-C, Chao J-K, Yumoto K (2000) Spectral power of low-latitude Pi2 pulsations at the 210° magnetic meridian stations and plasmaspheric cavity resonances. Earth Planet Space 52:615 Chisham G et al (2007) A decade of the Super Dual Auroral Radar Network (SuperDARN): scientific achievements, new techniques and future directions. Surv Geophys 28:33–109. doi:10.1007/s10712-007-9017-8 Denton RE, Lee DH, Takahashi K, Goldstein J, Anderson R (2002) Quantitative test of the cavity resonance explanation of plasmaspheric Pi2 frequencies. J Geophys Res 107(A7):SMP-4. doi:10.1029/2001JA000272 Frissell NA, Baker JBH, Ruohoniemi JM, Clausen LBN, Kale ZC, Rae IJ, Kepko L, Oksavik K, Greenwald RA, West ML (2011) First radar observations in the vicinity of the plasmapause of pulsed ionospheric flows generated by bursty bulk flows. Geophys Res Lett 38:L01103. doi:10.1029/2010GL045857 Fujita S, Nakata H, Itonaga M, Yoshikawa A, Mizuta T (2002) A numerical simulation of the Pi2 pulsations associated with the substorm current wedge. J Geophys Res 107(A3):SMP-2. doi:10.1029/2001JA900137 Fujita S, Tanaka T (2013) Possible generation mechanisms of the Pi2 pulsations estimated from a global MHD simulation. Earth Planet Space 65:453–461 Gjerloev JW, Greenwald RA, Waters CL, Takahashi K, Sibeck D, Oksavik K, Barnes R, Baker J, Ruohoniemi JM (2007) Observations of Pi2 pulsations by the Wallops HF radar in association with substorm expansion. Geophys Res Lett 34:L20103.
doi:10.1029/2007GL030492 Greenwald RA, Baker KB, Dudeney JR, Pinnock M, Jones TB, Thomas EC, Villain J-P, Cerisier J-C, Senior C, Hanuise C, Hunsucker RD, Sofko G, Koehler J, Nielsen E, Pellinen R, Walker ADM, Sato N, Yamagishi H (1995) DARN/SuperDARN: a global view of the dynamics of high-latitude convection. Space Sci Rev 71:761–795 Keiling A, Kim K-H, Wygant JR, Cattell C, Russell CT, Kletzing CA (2003) Electrodynamics of a substorm-related field line resonance observed by the Polar satellite in comparison with ground Pi2 pulsations. J Geophys Res 108(A7):1275. doi:10.1029/2002JA009340 Keiling A, Takahashi K (2011) Review of Pi2 models. Space Sci Rev 161:63 Keiling A (2012) Pi2 pulsations driven by ballooning instability. J Geophys Res 117:A03228 Kepko L, Kivelson M (1999) Generation of Pi2 pulsations by bursty bulk flows. J Geophys Res 104(A11):25021–25034. doi:10.1029/1999JA900361 Kepko L, Kivelson MG, Yumoto K (2001) Flow bursts, braking, and Pi 2 pulsations. J Geophys Res 106(A2):1903–1915 Kwon H-J, Kim K-H, Lee D-H, Takahashi K, Angelopoulos V, Lee E, Jin H, Park Y-D, Lee J, Sutcliffe PR, Auster HU (2012) Local time-dependent Pi2 frequencies confirmed by simultaneous observations from THEMIS probes in the inner magnetosphere and at low-latitude ground stations. J Geophys Res 117:A01206. doi:10.1029/2011JA016815 Lin CA, Lee LC, Sun YJ (1991) Observations of Pi 2 pulsations at a very low latitude (L = 1.06) station and magnetospheric cavity resonances. J Geophys Res 96(A12):21105–21113. doi:10.1029/91JA02029 Luo H, Chen GX, Du AM, Angelopoulos V, Xu WY, Zhao XD, Wang Y (2011) THEMIS multipoint observations of Pi2 pulsations inside and outside the plasmasphere. J Geophys Res 116:A12206. doi:10.1029/2011JA016746 Moore T, Gallagher DL, Horwitz JL, Comfort RH (1987) MHD wave braking in the outer plasmasphere. 
Geophys Res Lett 14:1007 Nosé M, Iyemori T, Nakabe S, Nagai T, Matsumoto H, Goka T (1998) ULF pulsations observed by the ETS-VI satellite: Substorm associated azimuthal Pc4 pulsations on the nightside. Earth Planet Space 50:63–80 Nosé M (1999) Automated detection of Pi2 pulsations using wavelet analysis: 2. An application for dayside Pi2 pulsations study. Earth Planet Space 51:23 Ponomarenko PV, Menk FW, Waters CL (2003) Visualization of ULF waves in SuperDARN data. Geophys Res Lett 30:1926. doi:10.1029/2003GL017757 Ponomarenko PV, Waters CL (2013) Transition of Pi2 ULF wave polarization structure from the ionosphere to the ground. Geophys Res Lett 40:1474–1478. doi:10.1002/grl.50271 Saito T, Matsushita S (1968) Solar cycle effects on geomagnetic Pi 2 pulsations. J Geophys Res 73(1):267–286. doi:10.1029/JA073i001p00267 Saka O, Akaki H, Watanabe O, Baker DN (1996) Ground-satellite correlation of low-latitude Pi2 pulsations: a quasi-periodic field line oscillation in the magnetosphere. J Geophys Res 101:15433–15440 Takahashi K, Ohtani S, Anderson BJ (1995) Statistical analysis of Pi 2 pulsations observed by the AMPTE CCE spacecraft in the inner magnetosphere. J Geophys Res 100(A11):21929–21941. doi:10.1029/95JA01849 Takahashi K, Anderson BJ, Ohtani S-I (1996) Multisatellite study of nighttime transient toroidal waves. J Geophys Res 101:24815–24825 Takahashi K, Lee D-H, Nosé M, Anderson RR, Hughes WJ (2003a) CRRES electric field study of the radial mode structure of Pi2 pulsations. J Geophys Res 108(A5):1210. doi:10.1029/2002JA009761 Takahashi K, Anderson RR, Hughes WJ (2003b) Pi2 pulsations with second harmonic: CRRES observations in the plasmasphere. J Geophys Res 108(A6):1242. doi:10.1029/2003JA009847 Tanaka T, Nakamizo A, Yoshikawa A, Fujita S, Shinagawa H, Shimazu H, Kikuchi T, Hashimoto KK (2010) Substorm convection and current system deduced from the global simulation. J Geophys Res 115:A05220.
doi:10.1029/2009JA014676 Teramoto M, Nishitani N, Pilipenko V, Ogawa T, Shiokawa K, Nagatsuma T, Yoshikawa A, Baishev D, Murata KT (2014) Pi2 pulsation simultaneously observed in the E and F region ionosphere with the SuperDARN Hokkaido radar. J Geophys Res 119:3444–3462. doi:10.1002/2012JA018585 Tsyganenko NA (1996) Effects of the solar wind conditions on the global magnetospheric configuration as deduced from data-based field models. In: Rolfe E, Kaldeich B (eds) Third International Conference on Substorms (ICS-3) Eur Space Agency Spec Publ, ESA SP-389., pp 181–185 Yeoman TK, Orr D (1989) Phase and spectral power of mid-latitude Pi 2 pulsations: evidence for a plasmaspheric cavity resonance. Planet Space Sci 37:1367–1383. doi:10.1016/0032-0633(89)90107-4 Yeoman TK, Lester M, Milling DK, Orr D (1991) Polarization, propagation and MHD wave modes of Pi2 pulsations: SABRE/SAMNET results. Planet Space Sci 39(7):983 The provisional AL index and Kp index were provided by the World Data Center (WDC) for Geomagnetism, Kyoto. We are grateful to the staff of the Kakioka Geomagnetic Observatory for kindly supplying the high-quality MMB data. The present study was supported by the Global COE Program of Nagoya University through the "Quest for Fundamental Principles in the Universe (QFPU)" project and through a joint research program with the Solar-Terrestrial Environment Laboratory, Nagoya University. We also acknowledge a National Aeronautics and Space Administration (NASA) contract (NAS5-02099) and would like to thank V. Angelopoulos for the use of the data from the THEMIS mission. The authors would also like to thank J. W. Bonnell and F. S. Mozer for the use of the EFI data and K. H. Glassmeier, U. Auster, and W. Baumjohann for the use of the FGM data that was provided by the Technical University of Braunschweig with financial support from the German Ministry for Economy and Technology and the German Center for Aviation and Space (DLR) under contract 50 OC 0302. 
Lastly, we would also like to thank P. Dyson, J. Devlin, and M. Parkinson for operating and providing the data from the TIGER radar. Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa, 252-5210, Japan Mariko Teramoto Institute for Space-Earth Environmental Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan Nozomu Nishitani Department of Atmospheric and Oceanic Sciences, University of California, Los Angeles, CA, 90095-1565, USA Yukitoshi Nishimura National Institute of Information and Communications Technology, 4-2-1, Nukui-Kitamachi, Koganei, Tokyo, 184-8795, Japan Tsutomu Nagatsuma Correspondence to Mariko Teramoto. MT conducted these studies, participated in the sequence alignment, and drafted the manuscript. NN, YN, and TN participated in designing the study and coordinating the work; they also helped to draft the manuscript. All authors read and approved the final manuscript. Teramoto, M., Nishitani, N., Nishimura, Y. et al. Latitudinal dependence on the frequency of Pi2 pulsations near the plasmapause using THEMIS satellites and Asian-Oceanian SuperDARN radars. Earth Planet Sp 68, 22 (2016). https://doi.org/10.1186/s40623-016-0397-1 Keywords: Pi2 pulsations near the plasmapause; Multipoint observations; Mid-latitude SuperDARN radars
Pathophysiology and clinical implications of the veno-arterial PCO2 gap

Zied Ltaief1, Antoine Guillaume Schneider1 & Lucas Liaudet1,2

This article is one of ten reviews selected from the Annual Update in Intensive Care and Emergency Medicine 2021. Other selected articles can be found online at https://www.biomedcentral.com/collections/annualupdate2021. Further information about the Annual Update in Intensive Care and Emergency Medicine is available from https://link.springer.com/bookseries/8901.

The persisting high mortality of circulatory shock highlights the need to search for sensitive early biomarkers to assess tissue perfusion and cellular oxygenation, which could provide important prognostic information and help guide resuscitation efforts. Although blood lactate and venous oxygen saturation (SvO2) are commonly used in this perspective, their usefulness remains hampered by several limitations. The veno-arterial difference in the partial pressure of carbon dioxide (Pv-aCO2 gap) has been increasingly recognized as a reliable tool to evaluate tissue perfusion and as a marker of poor outcome during circulatory shock, and it should therefore be part of an integrated clinical evaluation. In this chapter, we present the physiological and pathophysiological determinants of the Pv-aCO2 gap and review its implications in the clinical assessment of circulatory shock.

Physiological aspects of CO2 production and transport

Under aerobic conditions, CO2 is produced at the mitochondrial level as a by-product of substrate oxidation (pyruvate and citric acid cycle intermediates) (Fig. 1). The relationship between the amount of oxygen consumed (VO2) and CO2 produced (VCO2) during aerobic metabolism is termed the respiratory quotient (RQ = VCO2/VO2), and differs according to the main type of oxidized substrate (glucose, RQ = 1; proteins, RQ = 0.8; lipids, RQ = 0.7).
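As a tiny worked example of the respiratory quotient definition above (the resting VO2 and VCO2 values are illustrative assumptions, not data from this chapter):

```python
def respiratory_quotient(vco2_ml_min, vo2_ml_min):
    """RQ = VCO2 / VO2, both measured in ml/min under aerobic conditions."""
    return vco2_ml_min / vo2_ml_min

# A typical resting adult oxidizing a mixed fuel supply:
# VO2 ~250 ml/min, VCO2 ~200 ml/min -> RQ = 0.8,
# between pure lipid (0.7) and pure glucose (1.0) oxidation.
rq = respiratory_quotient(200.0, 250.0)
```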
Under anaerobic conditions, protons (H+) resulting from lactic acid production and ATP hydrolysis may generate CO2 following buffering by bicarbonate (HCO3−), leading to the formation of so-called "anaerobic CO2" [1]. Once formed, CO2 diffuses within the surrounding environment and capillary blood, to be transported to the lungs for elimination. In blood, CO2 transport is partitioned into three distinct fractions [2]: (1) Dissolved CO2, which is in equilibrium with the partial pressure of CO2 (PCO2), according to Henry's law of gas solubility: Vgas = Sgas × (Pgas/Patm), where Vgas is the volume of dissolved gas (in ml/ml), Sgas is the Henry's constant of gas solubility (0.52 ml/ml for CO2 at 37 °C), and Patm the atmospheric pressure. Thus, in arterial blood with a PaCO2 of 40 mmHg (at sea level, 37 °C), dissolved CO2 = 0.52 × (40/760) ≈ 0.027 ml/ml, i.e., 27 ml/l, which is about 5% of the total CO2 (note that, in mmol/l, Henry's constant for CO2 = 0.03 mmol/l/mmHg; also note that the conversion factor from mmol to ml CO2 is ~ 22.3). (2) Bicarbonate (HCO3−). CO2 in blood readily diffuses within red blood cells (RBCs), where it combines with H2O to form carbonic acid (H2CO3), a reaction catalyzed by the enzyme carbonic anhydrase. In turn, H2CO3 dissociates to form HCO3− and H+. While H+ is buffered by hemoglobin (formation of HbH), HCO3− exits the RBC in exchange for a chloride anion (Cl−) via a HCO3−-Cl− transporter (erythrocyte chloride shift or Hamburger effect). Thus, the HCO3− concentration increases in venous blood whereas the Cl− concentration diminishes. CO2 transport as HCO3− (RBC and plasma fraction) represents about 90% of the total CO2 content in arterial blood (this proportion is lower in venous blood due to the Haldane effect). Taking into account a normal hematocrit of 0.45, the CO2 content under the form of HCO3− (in whole blood) is ~ 435 ml/l.
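As a quick numerical check of the Henry's law arithmetic, the dissolved fraction can be computed directly and combined with the bicarbonate and carbamino fractions quoted in this section (a minimal sketch; the function name is ours):

```python
def dissolved_co2_ml_per_l(pco2_mmhg, s_co2=0.52, p_atm_mmhg=760.0):
    """Dissolved CO2 (ml per litre of blood) from Henry's law:
    Vgas = Sgas * (Pgas / Patm), with Sgas in ml gas per ml blood."""
    return s_co2 * (pco2_mmhg / p_atm_mmhg) * 1000.0  # ml/ml -> ml/l

# Arterial blood at sea level, 37 degC: PaCO2 = 40 mmHg -> ~27 ml/l,
# consistent with 0.03 mmol/l/mmHg x 40 mmHg x ~22.3 ml/mmol.
dissolved = dissolved_co2_ml_per_l(40.0)

# Adding the other two fractions (ml/l) gives the arterial total:
bicarbonate = 435  # ~90%, carried as HCO3- in RBCs and plasma
carbamino = 25     # ~5%, as carbamino-hemoglobin (~1.1 mmol/l)
arterial_cco2 = dissolved + bicarbonate + carbamino  # ~490 ml/l
```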
(3) Formation of carbamino compounds within hemoglobin: part of the CO2 within the RBC combines with free amino (R-NH2) groups within hemoglobin to form carbamino-hemoglobin (R-NH2-CO2). This reaction is enhanced when hemoglobin carries less oxygen, implying that more CO2 is transported as (R-NH2-CO2) when the PO2 decreases, which is the basis of the Haldane effect described below. CO2 transport under the form of (R-NH2-CO2) represents about 5% of the total CO2 content in arterial blood (~ 1.1 mmol/l ≈ 25 ml/l). Physiology of CO2 production and transport. In cells, CO2 is produced (in mitochondria) as a byproduct of substrate oxidation. Under anaerobic conditions, CO2 is generated in small amounts, as the result of HCO3− buffering of protons released by lactic acid and the hydrolysis of ATP. CO2 diffuses into the interstitial tissues and then into capillaries, where it is transported as dissolved CO2 in plasma (in equilibrium with the PCO2), bound to hemoglobin as carbamino-hemoglobin (HbCO2) in red blood cells (RBC), and as HCO3−, following the reaction of CO2 with H2O within the RBC, a reaction catalyzed by carbonic anhydrase to form HCO3− and H+. HCO3− exits the RBC in exchange with chloride anions (Cl−), whereas protons are buffered by hemoglobin, forming HbH. In summary, the total CO2 content of blood under physiological conditions equals: $$[{\text{Dissolved}}\,{\text{CO}}_{2} ] + \left[ {{\text{HCO}}_{3}^{ - } } \right] + \left[ {{\text{R}} - {\text{NH}}_{2} - {\text{CO}}_{2} } \right]$$ which is ≈ 490 ml/l in arterial blood and ≈ 535 ml/l in mixed venous blood, hence a veno-arterial difference of approximately 45 ml/l. A more precise calculation of the CO2 content of blood can be obtained by the Douglas equation, but this is too complex to be calculated at the bedside [3].

The CO2 dissociation curve (PCO2-CCO2 relationship)

As is the case for oxygen, a relationship exists between the PCO2 and the CO2 content (CCO2) of blood (Fig. 2).
However, in contrast to the sigmoid shape of the O2 dissociation curve, the CO2 dissociation curve is slightly curvilinear, indicating a near-proportional increase in CCO2 over a wide range of PCO2. In the physiological range, the relationship between CCO2 and PCO2 can therefore be approximated by the equation: $${\text{PCO}}_{2} = k \times {\text{CCO}}_{2}$$ The CO2 dissociation curve. A curvilinear relationship exists between CO2 partial pressure (PCO2) and CO2 content (CCO2), so that PCO2 = k × CCO2. At low values of PCO2, the slope of the relationship is steeper, implying a smaller increase of PCO2 at any CCO2 than at high values of PCO2, where the slope of the relationship flattens. The position of the relationship is modified by various factors. A rightward and downward shift of the curve, corresponding to an increase of the k coefficient, is produced by high PaO2 (Haldane effect), elevated temperatures, high hemoglobin concentrations and metabolic acidosis. A rightward shift of the curve implies that, for the same CCO2, the PCO2 increases, as indicated by the points A, B and C. Important information provided by the PCO2-CCO2 relationship is the shift produced at different values of oxygen saturation of hemoglobin (HbO2). Indeed, as hemoglobin becomes saturated with O2, it can carry less CO2 as carbamino-Hb, and inversely. This behavior is known as the Haldane effect, which implies that for the same PCO2, CCO2 is higher at lower HbO2 saturation. In other words, as the k constant in the relationship above decreases, the PCO2-CCO2 curve is shifted to the left. The consequence of this effect is that, in tissues, more CO2 is loaded by Hb as it releases O2, allowing PCO2 to increase only moderately (from 40 to 46 mmHg), in spite of a marked increase in CCO2 due to the tissue production of CO2. Without the Haldane effect, the venous PCO2 would increase significantly more for a similar increase in CO2 content.
The curvilinearity of the CO2 dissociation curve indicates that CCO2 increases more steeply at low values of PCO2 and flattens at high PCO2 values. It is also noticeable that the curve can be displaced by a number of factors. In conditions of metabolic acidosis, the reduction in HCO3− due to H+ buffering reduces the formation of carbamino (R-NH2-CO2) compounds inside hemoglobin [4]. As a result, for a given CCO2, the PCO2 must increase, which means an increase in the k constant and a rightward shift of the relationship. The opposite occurs under conditions of metabolic alkalosis. Other factors influencing the curve are the hematocrit and temperature. At increasing hematocrit, there is a decrease in plasma space with a reduction of HCO3− and a decrease in CO2 content at any value of PCO2, with a shift to the right of the curve. At increasing temperatures, the reduced CO2 solubility also shifts the relationship to the right [4]. These considerations imply, therefore, that PvCO2 may vary at constant total venous CCO2 according to the prevailing conditions (HbO2 saturation [i.e., the Haldane effect], arterial pH, temperature and hematocrit).

The Pv-aCO2 gap: pathophysiology and clinical implications

As discussed earlier, the CCO2 in the venous side of the circulation is determined by the aerobic production of CO2 in tissues, influenced by the metabolic rate and the respiratory quotient, and may also increase via non-aerobic production of CO2. The generation of CO2 de facto increases the CCO2 on the venous side of the circulation, implying an obligatory difference between arterial and venous CCO2, termed the veno-arterial difference in CCO2, or veno-arterial CCO2 gap: va-CCO2 gap = (venous − arterial) CCO2 [1]. The CO2 produced in tissues does not accumulate under normal conditions, being washed out by the blood flowing across the tissue and eliminated by the lungs.
Accordingly, any reduction in tissue blood flow (stagnant condition) will result in an accumulation of tissue CO2, implying an increase in the va-CCO2 gap, in accordance with Fick's principle: $${\text{VCO}}_{{2{\text{tissue}}}} = \left[ {\left( {{\text{Blood}}\,{\text{flow}}_{{{\text{tissue}}}} \times \left( {{\text{va}} - {\text{CCO}}_{2} \,{\text{gap}}_{{{\text{tissue}}}} } \right)} \right)} \right]$$ At the systemic level, the relationship is: $${\text{VCO}}_{2} = \left[ {\left( {{\text{Cardiac}}\,{\text{output}} \times \left( {{\text{va}} - {\text{CCO}}_{2} \,{\text{gap}}} \right)} \right)} \right]$$ According to the equation (PCO2 = k × CCO2), the Fick equation for CO2 can be rewritten as: $$k \times {\text{VCO}}_{2} = \left[ {{\text{Cardiac}}\,{\text{output}} \times \left( {{\text{Pv}} - {\text{PaCO}}_{2} } \right)} \right]$$ and thus: $$\left( {{\text{Pv}} - {\text{PaCO}}_{2} } \right) = \left[ {\left( {k \times {\text{VCO}}_{{2}} } \right)/{\text{Cardiac}}\,{\text{output}}} \right]$$ Therefore, the Pv-aCO2 gap represents a very good surrogate indicator of the adequacy of cardiac output and tissue perfusion under a given condition of CO2 production. The normal Pv-aCO2 gap lies between 2 and 6 mmHg [5], and many studies assessing the Pv-aCO2 gap in clinical conditions have used a cut-off value of 6 mmHg above which the gap is considered abnormally elevated. Although the venous PCO2 should ideally be obtained from a mixed venous blood sample, good agreement between central and mixed venous PCO2 values has been reported [6]. Therefore, both central and mixed venous PCO2 can be used for the calculation of the va-CO2 gap, as long as the variables are not interchanged during treatment in a given patient.

The inverse relationship between cardiac output and the Pv-aCO2 gap

The inverse relationship between cardiac output and the Pv-aCO2 gap (Fig. 3) has been repeatedly demonstrated in both experimental [7] and clinical [8] settings.
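The rearranged Fick equation can be illustrated numerically. In this sketch, the pseudo-linear slope k (taken as 0.133 mmHg per ml/l, so that the ~45 ml/l physiological content difference maps to a ~6 mmHg pressure gap) and the resting VCO2 are our assumptions; in reality k varies with pH, oxygen saturation, temperature and hematocrit.

```python
def pv_a_co2_gap(vco2_ml_min, cardiac_output_l_min, k=0.133):
    """(Pv - PaCO2) = k * VCO2 / cardiac output.

    VCO2 in ml/min divided by cardiac output in l/min gives the
    veno-arterial CO2 content difference in ml/l; k (mmHg per ml/l,
    an assumed pseudo-linear slope of the CO2 dissociation curve)
    converts it into a partial-pressure gap in mmHg."""
    va_content_gap_ml_l = vco2_ml_min / cardiac_output_l_min
    return k * va_content_gap_ml_l

# A resting VCO2 of ~200 ml/min with a cardiac output of 5 l/min gives
# a gap of ~5 mmHg (within the normal 2-6 mmHg range); halving cardiac
# output at constant VCO2 doubles the gap (stagnant CO2 accumulation).
gap_normal = pv_a_co2_gap(200.0, 5.0)
gap_low_flow = pv_a_co2_gap(200.0, 2.5)
```

The doubling under halved flow is the linearized version of the inverse relationship discussed in this section; the real curve steepens further at very low flow as k itself rises with hypercarbia and acidosis.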
It is noteworthy that this relationship is not linear, but curvilinear (Fig. 3): at very low cardiac output, the Pv-aCO2 gap increases much more rapidly. This large increase in the Pv-aCO2 gap is primarily due to the flattened relation between CCO2 and PCO2 at high values of CCO2 in conditions of tissue hypercarbia [5], and it is further magnified if tissue metabolic acidosis develops, owing to the rightward shift of the PCO2-CCO2 relationship in acidic conditions (increased k coefficient, see above). Also, venous accumulation of CO2 will increase as a consequence of low pulmonary perfusion and CO2 elimination, further widening the gap [9]. In contrast, the increase in the Pv-aCO2 gap in very low flow states with conditions of VO2-oxygen delivery (DO2) dependence will be attenuated by the mandatory reduction in aerobic VCO2. Such a decrease in VCO2 results in a leftward shift of the cardiac output/Pv-aCO2 gap relationship, as shown in Fig. 3 [5]. The inverse relationship between cardiac output and the Pv-aCO2 gap. A reduction in cardiac output is associated with a progressive increase in the Pv-aCO2 gap, which becomes exponential at very low cardiac output values, because of the flat slope of the CO2 dissociation curve in conditions of tissue hypercarbia. The relationship is displaced to the right at higher CO2 production (VCO2).

Pv-aCO2 gap and tissue dysoxia

In addition to tracking changes in cardiac output and tissue perfusion, the Pv-aCO2 gap can increase through an augmentation of VCO2 [8]. Under aerobic conditions, that is, in the absence of any clinical sign of shock or increased blood lactate, such an increase reflects an increased metabolic demand or an increase in RQ (carbohydrate-rich diet), or both. Physiologically, an increased metabolic rate is generally coupled with an increase in cardiac output, but such adaptation may not occur in critically ill patients with inadequate cardiovascular reserves, which may result in an increased Pv-aCO2 gap.
In this situation, interventions should first be targeted at reducing the metabolic demand. Persistence of an increased Pv-aCO2 gap should not necessarily prompt therapies to increase cardiac output, given the risk associated with deliberately increasing cardiac output in the absence of tissue dysoxia [10]. However, it is noteworthy that an increased Pv-aCO2 gap immediately after surgery in high-risk patients, independent of their hemodynamic condition, SvO2 and lactate, has been associated with significantly more complications [11]. This suggests that a high Pv-aCO2 gap could track insufficient resuscitation and might represent a goal for hemodynamic optimization in such patients, but this issue is controversial and remains to be proven [9]. Under anaerobic conditions, the question as to whether the Pv-aCO2 gap can be used as a marker of tissue dysoxia, by detecting increased anaerobic VCO2 from H+ buffering, has attracted much attention. An advantage of the Pv-aCO2 gap in this sense would be its ability to rapidly track changes in CO2 formation, hence providing sensitive, rapid and continuous detection of ongoing anaerobiosis. This contrasts with the usual markers of tissue dysoxia, such as SvO2 or lactate. Indeed, SvO2 can be unreliable in conditions of reduced oxygen extraction and hyperdynamic circulation (sepsis) [12]. The disadvantage of lactate is its lack of specificity as a marker of dysoxia (type A vs type B hyperlactatemia) and its relatively slow clearance kinetics, dependent on liver perfusion and function [13], which limit its utility to rapidly track changes in tissue oxygenation [9].

The Pv-aCO2 gap in stagnant dysoxia

In essence, tissue dysoxia is classically attributed to stagnant, hypoxic, anemic and cytopathic mechanisms. As a sensitive marker of reduced cardiac output, an increased Pv-aCO2 gap is a reliable indicator of stagnant dysoxia.
Importantly, the major gap increase noted under very low flow conditions (see earlier) has been associated with a global reduction in VCO2 (VO2-DO2 dependence), implying that any increase in anaerobic VCO2 could not offset the depressed aerobic VCO2 [7]. Therefore, in low flow conditions the increased Pv-aCO2 gap depends entirely on the stagnant accumulation of tissue CO2, not on increased anaerobic VCO2 [1, 14].

The Pv-aCO2 gap in hypoxic or anemic dysoxia

To address the ability of the Pv-aCO2 gap to detect hypoxic dysoxia, Vallet et al. reduced DO2 below the critical threshold in an isolated dog hindlimb model, either by reducing blood flow or by decreasing PO2 [15]. Both conditions similarly reduced VO2 and O2 extraction, but the Pv-aCO2 gap increased exclusively in the ischemic, not the hypoxic, condition, implying that stagnant, but not hypoxic, dysoxia was the responsible mechanism [15]. Comparable results were obtained by Nevière et al. in the intestinal mucosa of pigs, following systemic reduction of DO2 to similar levels either by reduction of cardiac output or of arterial PO2 [16]. With respect to anemic dysoxia, similar conclusions were obtained in sheep hemorrhage models, in which no increase in the Pv-aCO2 gap was detected under conditions of VO2/DO2 dependency due to reduced hemoglobin concentration [17], unless there was a concomitant reduction in cardiac output [18]. Hence, significant hypoxic or anemic dysoxia can occur in the absence of any Pv-aCO2 gap increase.

The Pv-aCO2 gap in cytopathic dysoxia

An acquired intrinsic abnormality of tissue O2 extraction and cellular O2 utilization, primarily related to mitochondrial impairment, defines the concept of cytopathic hypoxia, and the resulting cellular bioenergetic failure could represent an important mechanism of organ dysfunction in sepsis [19].
Mitochondrial defects have been demonstrated in several tissues obtained from animals in various models of sepsis, and limited data also exist on altered mitochondrial metabolism in human biopsy samples or circulating blood cells [20]. The detection of cytopathic hypoxia, however, is still not feasible at the bedside, although new techniques, such as the measurement of mitochondrial O2 tension using the protoporphyrin IX-Triplet State Lifetime Technique (PpIX-TSLT), are currently being developed [21]. Furthermore, impaired O2 extraction in sepsis does not necessarily imply cytopathic hypoxia, as it may be related to an impaired microcirculation. Theoretically, the increased anaerobic CO2 generation in conditions of cytopathic hypoxia could result in an increased anaerobic VCO2, leading to an increased Pv-aCO2 gap. This assumption has been evaluated in a porcine model of high-dose metformin intoxication, which induces mitochondrial defects comparable to cyanide poisoning [22]. As expected, treated pigs exhibited reduced VO2 and marked lactic acidosis, in spite of preserved systemic DO2. However, although VCO2 decreased less than VO2, suggesting some anaerobic VCO2, no significant increase in the Pv-aCO2 gap was noted. In a human case report of massive metformin intoxication, Waldauf et al. also reported no elevation in the Pv-aCO2 gap despite major lactic acidosis and reduced aerobic VO2, as indicated by an increased SvO2 [23]. Therefore, although data are very limited, cytopathic dysoxia related to impaired mitochondrial respiration appears not to widen the Pv-aCO2 gap.

The Pv-aCO2 gap in sepsis

Ongoing tissue dysoxia with persistent lactic acidosis is a hallmark of sepsis and is associated with a poor prognosis. Although a hyperdynamic circulation is characteristic of sepsis, many septic patients may have a cardiac output that is insufficient to meet metabolic demands, because of persistent hypovolemia or concomitant myocardial dysfunction.
An increased Pv-aCO2 gap has been reported in septic patients with lower cardiac output, consistent with the ability of the Pv-aCO2 gap to detect stagnant dysoxia also in the context of sepsis [24]. In such conditions, an increase in cardiac output correlates with a parallel decrease in the Pv-aCO2 gap [25]. Importantly, as reported by Vallee et al. [26], the Pv-aCO2 gap is able to detect a persistently low cardiac output even in patients with a normal SvO2. Such a high Pv-aCO2 gap during the early resuscitation of septic shock has been correlated with more organ dysfunction and worse outcomes [27]. Many septic patients display persistent lactic acidosis in spite of an elevated cardiac output and a normal or even increased SvO2. This implies that mechanisms unrelated to macrohemodynamics sustain tissue dysoxia in this setting, i.e., a loss of so-called hemodynamic coherence, with a significant negative impact on outcome [28]. Impaired microcirculatory perfusion is indeed a prototypical perturbation in experimental [29] and human sepsis [30], which may impair tissue oxygenation. Such microcirculatory derangements result in tissue CO2 accumulation, which can be tracked, for example, by sublingual capnometry, as shown by Creteur et al. [31]. Accordingly, in a prospective observational study including 75 patients with septic shock, Ospina-Tascon et al. found a significant correlation between the Pv-aCO2 gap and microcirculatory alterations. These were independent of systemic hemodynamic status and persisted even after correction for the Haldane effect [32], indicating that the Pv-aCO2 gap may be a useful tool to assess an impaired microcirculation in sepsis [33]. Furthermore, Creteur et al. reported that increasing cardiac output with dobutamine in patients with an impaired microcirculation resulted in a decreased regional PCO2 gap (sublingual and gastric mucosal) that was associated with a significant increase in well-perfused capillaries [31].
In summary, an elevated (> 6 mmHg) Pv-aCO2 gap in sepsis detects stagnant dysoxia, whether related to a low cardiac output or to a derangement in microcirculatory blood flow, and this holds true even in the presence of a normal or elevated SvO2. As such, a high Pv-aCO2 gap might prompt a trial to improve tissue blood flow by increasing cardiac output [34]. Finally, many septic patients with an elevated cardiac output exhibit a normal Pv-aCO2 gap, resulting from elevated CO2 washout by increased tissue blood flow. Many of these patients still display signs of ongoing dysoxia, with lactic acidosis and organ dysfunction. Whether this pattern reflects cytopathic dysoxia or regional microcirculatory alterations not tracked by Pv-aCO2 gap elevation remains to be established.

Use of the Pv-aCO2 gap as a prognostic tool

In sepsis, evidence exists that a Pv-aCO2 gap > 6 mmHg, even after normalization of blood lactate, is predictive of poor outcomes [35,36,37], as highlighted in a recent systematic review of 12 observational studies [38]. Whether this holds true for a broader population of critically ill patients with circulatory shock has been questioned in a recent meta-analysis of 21 studies with a total of 2155 patients from medical, surgical and cardiovascular ICUs [37]. Overall, a high Pv-aCO2 gap was associated with higher lactate levels, lower cardiac output and central venous oxygen saturation (ScvO2), and was significantly correlated with mortality. The latter finding was, however, restricted to medical and surgical patients, with no association found for cardiac surgery patients. Since the meta-analysis included only two studies in cardiac surgery, this negative result should be interpreted with caution. Three recent retrospective studies not included in the meta-analysis [39,40,41] indeed reported a negative impact of a high postoperative Pv-aCO2 gap on major complications and mortality after cardiac surgery, although with limited diagnostic performance [41].
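As a toy illustration of the bedside logic around the > 6 mmHg threshold discussed in this section — an elevated gap flags stagnant dysoxia even when S(c)vO2 looks reassuring — the reading of a gap/S(c)vO2 pair could be sketched as follows. The function name, the 70% ScvO2 "normality" cutoff and the returned phrases are illustrative assumptions, not a validated algorithm.

```python
# Hypothetical interpretation sketch (illustrative, not a validated
# clinical algorithm). Cutoffs: gap > 6 mmHg = elevated (from the text);
# S(c)vO2 >= 70% treated as "normal" here (assumed for illustration).
GAP_CUTOFF_MMHG = 6.0
SCVO2_NORMAL_PERCENT = 70.0

def interpret_gap(gap_mmhg, scvo2_percent):
    """Rough reading of a Pv-aCO2 gap / S(c)vO2 pair in sepsis."""
    if gap_mmhg <= GAP_CUTOFF_MMHG:
        return "normal gap: adequate tissue blood flow / CO2 washout"
    if scvo2_percent >= SCVO2_NORMAL_PERCENT:
        # Key point of the section: a high gap can coexist with a normal
        # or high S(c)vO2 (microcirculatory derangement, occult low flow).
        return ("elevated gap despite normal S(c)vO2: suspect stagnant "
                "dysoxia (microcirculation or occult low flow)")
    return "elevated gap with low S(c)vO2: suspect low cardiac output"

print(interpret_gap(8.5, 74.0))
```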
Future studies are needed to refine the value of the Pv-aCO2 gap as a prognostic biomarker in cardiac surgery patients, taking into account the low mortality (3.4%) in this population [42].

Pitfalls in the interpretation of the Pv-aCO2 gap

As already mentioned, several factors may influence the position of the PCO2-CCO2 relationship by influencing the k factor of proportionality between both variables (see Fig. 2), and these must be taken into account for a proper interpretation of the Pv-aCO2 gap. They include the oxygen saturation of hemoglobin (Haldane effect), metabolic shifts of pH, temperature and hemoglobin concentration. In addition, it is essential to consider possible sources of error in the measurement of PCO2, including contamination of the samples with fluid or air bubbles and insufficient precision of the gas analyzer. When comparing successive determinations of the Pv-aCO2 gap, it is therefore recommended to consider only variations of at least ± 2 mmHg as real changes [43]. Two additional confounders in the interpretation of the Pv-aCO2 gap require some discussion. The first is hyperoxia. It has been observed that, in patients with circulatory shock, ventilation at 100% inspired oxygen fraction (FiO2) for 5 min increased venous PCO2, and hence the Pv-aCO2 gap, independent of changes in hemodynamic status [44]. While this observation may be explained by a lower CO2 affinity of hemoglobin at an elevated venous PO2 (Haldane effect) [44], it may also reflect some impairment of microcirculatory blood flow, owing to the vasoconstrictive effects of hyperoxia [45]. The second confounder is acute hyperventilation with respiratory alkalosis. For example, as shown by Mallat et al. in 18 stable septic shock patients [46], an acute decrease in arterial PCO2 from 44 to 34 mmHg, produced by transient (30 min) hyperventilation, induced a significant increase in the PCO2 gap (absolute 2.2 mmHg, relative + 48.5%).
Possible mechanisms include, first, an increased aerobic production of CO2 due to stimulated aerobic glycolysis under conditions of cellular alkalosis, and second, a reduction in microcirculatory blood flow due to the acute drop in CO2. Thus, both acute hyperoxia and hypocapnia may be important confounders in the interpretation of an increased Pv-aCO2 gap, which must be taken into account by the clinician.

The Pv-aCO2 gap is a reliable indicator of impaired tissue perfusion, whether the result of a global reduction in cardiac output or of microcirculatory abnormalities, but it does not track tissue dysoxia unless this is related to a stagnant mechanism. Being easily accessible and readily available, the Pv-aCO2 gap should be included in the integrated evaluation of the patient in circulatory shock. Several diagnostic algorithms incorporating Pv-aCO2 gradients have been proposed, such as those presented in Figs. 4 and 5. It remains to be established whether the Pv-aCO2 gap should be part of a resuscitation bundle protocol, and whether therapies aimed at normalizing an increased Pv-aCO2 gap could improve the dismal prognosis of circulatory shock.

Fig. 4: Usefulness of the Pv-aCO2 gradient under conditions of circulatory shock. Proposed diagnostic algorithm integrating lactate, mixed (central) venous oxygen saturation (S(c)vO2) and the Pv-aCO2 gap in patients with circulatory shock.

Fig. 5: The Pv-aCO2 gradient in the absence of circulatory shock. Proposed diagnostic algorithm to interpret an elevation in the Pv-aCO2 gap in the absence of circulatory shock and with normal blood lactate. S(c)vO2: mixed (central) venous oxygen saturation.

References

1. Gavelli F, Teboul JL, Monnet X. How can CO2-derived indices guide resuscitation in critically ill patients? J Thorac Dis. 2019;11(Suppl 11):S1528–37.
2. Geers C, Gros G. Carbon dioxide transport and carbonic anhydrase in blood and muscle. Physiol Rev. 2000;80:681–715.
3. Douglas AR, Jones NL, Reed JW. Calculation of whole blood CO2 content. J Appl Physiol. 1988;65:473–7.
4. Dash RK, Bassingthwaighte JB. Blood HbO2 and HbCO2 dissociation curves at varied O2, CO2, pH, 2,3-DPG and temperature levels. Ann Biomed Eng. 2004;32:1676–93.
5. Dres M, Monnet X, Teboul JL. Hemodynamic management of cardiovascular failure by using PCO(2) venous-arterial difference. J Clin Monitor Comput. 2012;26:367–74.
6. van Beest PA, Lont MC, Holman ND, Loef B, Kuiper MA, Boerma EC. Central venous-arterial pCO(2) difference as a tool in resuscitation of septic patients. Intensive Care Med. 2013;39:1034–9.
7. Zhang H, Vincent JL. Arteriovenous differences in PCO2 and pH are good indicators of critical hypoperfusion. Am Rev Respir Dis. 1993;148:867–71.
8. Teboul JL, Mercat A, Lenique F, Berton C, Richard C. Value of the venous-arterial PCO2 gradient to reflect the oxygen supply to demand in humans: effects of dobutamine. Crit Care Med. 1998;26:1007–10.
9. Denault A, Guimond JG. Does measuring veno-arterial carbon dioxide difference compare to predicting a hockey game's final score? Can J Anesthesia. 2021;68:445–53.
10. Hayes MA, Timmins AC, Yau EH, Palazzo M, Hinds CJ, Watson D. Elevation of systemic oxygen delivery in the treatment of critically ill patients. N Engl J Med. 1994;330:1717–22.
11. Robin E, Futier E, Pires O, Fleyfel M, Tavernier B, Lebuffe G, et al. Central venous-to-arterial carbon dioxide difference as a prognostic tool in high-risk surgical patients. Crit Care. 2015;19:227.
12. Monnet X, Julien F, Ait-Hamou N, Lequoy M, Gosset C, Jozwiak M, et al. Lactate and venoarterial carbon dioxide difference/arterial-venous oxygen difference ratio, but not central venous oxygen saturation, predict increase in oxygen consumption in fluid responders. Crit Care Med. 2013;41:1412–20.
13. Ducrocq N, Kimmoun A, Levy B. Lactate or ScvO2 as an endpoint in resuscitation of shock states? Minerva Anestesiol. 2013;79:1049–58.
14. Groeneveld AB. Interpreting the venous-arterial PCO2 difference. Crit Care Med. 1998;26:979–80.
15. Vallet B, Teboul JL, Cain S, Curtis S. Venoarterial CO(2) difference during regional ischemic or hypoxic hypoxia. J Appl Physiol. 2000;89:1317–21.
16. Neviere R, Chagnon JL, Teboul JL, Vallet B, Wattel F. Small intestine intramucosal PCO(2) and microvascular blood flow during hypoxic and ischemic hypoxia. Crit Care Med. 2002;30:379–84.
17. Dubin A, Estenssoro E, Murias G, Pozo MO, Sottile JP, Baran M, et al. Intramucosal-arterial PCO2 gradient does not reflect intestinal dysoxia in anemic hypoxia. J Trauma. 2004;57:1211–7.
18. Ferrara G, Kanoore Edul VS, Martins E, Canales HS, Canullan C, Murias G, et al. Intestinal and sublingual microcirculation are more severely compromised in hemodilution than in hemorrhage. J Appl Physiol. 2016;120:1132–40.
19. Liaudet L, Oddo M. Role of poly(adenosine diphosphate-ribose) polymerase 1 in septic peritonitis. Curr Opin Crit Care. 2003;9:152–8.
20. Fink MP. Cytopathic hypoxia and sepsis: is mitochondrial dysfunction pathophysiologically important or just an epiphenomenon. Pediatr Crit Care Med. 2015;16:89–91.
21. Mik EG, Balestra GM, Harms FA. Monitoring mitochondrial PO2: the next step. Curr Opin Crit Care. 2020;26:289–95.
22. Andreis DT, Mallat J, Tettamanti M, Chiarla C, Giovannini I, Gatti S, et al. Increased ratio of P[v-a]CO2 to C[a-v]O2 without global hypoxia: the case of metformin-induced lactic acidosis. Respir Physiol Neurobiol. 2021;285:103586.
23. Waldauf P, Jiroutkova K, Duska F. Using PCO2 gap in the differential diagnosis of hyperlactatemia outside the context of sepsis: a physiological review and case series. Crit Care Res Pract. 2019;2019:5364503.
24. Bakker J, Vincent JL, Gris P, Leon M, Coffernils M, Kahn RJ. Veno-arterial carbon dioxide gradient in human septic shock. Chest. 1992;101:509–15.
25. Mecher CE, Rackow EC, Astiz ME, Weil MH. Venous hypercarbia associated with severe sepsis and systemic hypoperfusion. Crit Care Med. 1990;18:585–9.
26. Vallee F, Vallet B, Mathe O, Parraguette J, Mari A, Silva S, et al. Central venous-to-arterial carbon dioxide difference: an additional target for goal-directed therapy in septic shock? Intensive Care Med. 2008;34:2218–25.
27. Ospina-Tascon GA, Bautista-Rincon DF, Umana M, Tafur JD, Gutierrez A, Garcia AF, et al. Persistently high venous-to-arterial carbon dioxide differences during early resuscitation are associated with poor outcomes in septic shock. Crit Care. 2013;17:R294.
28. De Backer D, Donadello K, Sakr Y, Ospina-Tascon G, Salgado D, Scolletta S, et al. Microcirculatory alterations in patients with severe sepsis: impact of time of assessment and relationship with outcome. Crit Care Med. 2013;41:791–9.
29. Revelly JP, Liaudet L, Frascarolo P, Joseph JM, Martinet O, Markert M. Effects of norepinephrine on the distribution of intestinal blood flow and tissue adenosine triphosphate content in endotoxic shock. Crit Care Med. 2000;28:2500–6.
30. De Backer D, Creteur J, Preiser JC, Dubois MJ, Vincent JL. Microvascular blood flow is altered in patients with sepsis. Am J Respir Crit Care Med. 2002;166:98–104.
31. Creteur J, De Backer D, Sakr Y, Koch M, Vincent JL. Sublingual capnometry tracks microcirculatory changes in septic patients. Intensive Care Med. 2006;32:516–23.
32. Ospina-Tascon GA, Umana M, Bermudez WF, Bautista-Rincon DF, Valencia JD, Madrinan HJ, et al. Can venous-to-arterial carbon dioxide differences reflect microcirculatory alterations in patients with septic shock? Intensive Care Med. 2016;42:211–21.
33. De Backer D. Is microcirculatory assessment ready for regular use in clinical practice? Curr Opin Crit Care. 2019;25:280–4.
34. Teboul JL, Saugel B, Cecconi M, De Backer D, Hofer CK, Monnet X, et al. Less invasive hemodynamic monitoring in critically ill patients. Intensive Care Med. 2016;42:1350–9.
35. Mallat J, Pepy F, Lemyze M, Gasan G, Vangrunderbeeck N, Tronchon L, et al. Central venous-to-arterial carbon dioxide partial pressure difference in early resuscitation from septic shock: a prospective observational study. Eur J Anaesthesiol. 2014;31:371–80.
36. Vallet B, Pinsky MR, Cecconi M. Resuscitation of patients with septic shock: please "mind the gap"! Intensive Care Med. 2013;39:1653–5.
37. Al Duhailib Z, Hegazy AF, Lalli R, Fiorini K, Priestap F, Iansavichene A, et al. The use of central venous to arterial carbon dioxide tension gap for outcome prediction in critically ill patients: a systematic review and meta-analysis. Crit Care Med. 2020;48:1855–61.
38. Diaztagle Fernandez JJ, Rodriguez Murcia JC, Sprockel Diaz JJ. Venous-to-arterial carbon dioxide difference in the resuscitation of patients with severe sepsis and septic shock: a systematic review. Med Intensiva. 2017;41:401–10.
39. Mukai A, Suehiro K, Kimura A, Funai Y, Matsuura T, Tanaka K, et al. Comparison of the venous-arterial CO2 to arterial-venous O2 content difference ratio with the venous-arterial CO2 gradient for the predictability of adverse outcomes after cardiac surgery. J Clin Monitor Comput. 2020;34:41–53.
40. Zante B, Reichenspurner H, Kubik M, Schefold JC, Kluge S. Increased admission central venous-arterial CO2 difference predicts ICU-mortality in adult cardiac surgery patients. Heart Lung. 2019;48:421–7.
41. Huette P, Beyls C, Mallat J, Martineau L, Besserve P, Haye G, et al. Central venous-to-arterial CO2 difference is a poor tool to predict adverse outcomes after cardiac surgery: a retrospective study. Can J Anaesth. 2021;68:467–76.
42. Mazzeffi M, Zivot J, Buchman T, Halkos M. In-hospital mortality after cardiac surgery: patient characteristics, timing, and association with postoperative length of intensive care unit and hospital stay. Ann Thorac Surg. 2014;97:1220–5.
43. Mallat J, Lemyze M, Tronchon L, Vallet B, Thevenin D. Use of venous-to-arterial carbon dioxide tension difference to guide resuscitation therapy in septic shock. World J Crit Care Med. 2016;5:47–56.
44. Saludes P, Proenca L, Gruartmoner G, Ensenat L, Perez-Madrigal A, Espinal C, et al. Central venous-to-arterial carbon dioxide difference and the effect of venous hyperoxia: a limiting factor, or an additional marker of severity in shock? J Clin Monitor Comput. 2017;31:1203–11.
45. Orbegozo Cortes D, Puflea F, Donadello K, Taccone FS, Gottin L, Creteur J, et al. Normobaric hyperoxia alters the microcirculation in healthy volunteers. Microvasc Res. 2015;98:23–8.
46. Mallat J, Mohammad U, Lemyze M, Meddour M, Jonard M, Pepy F, et al. Acute hyperventilation increases the central venous-to-arterial PCO2 difference in stable septic shock patients. Ann Intensive Care. 2017;7:31.

The study and publication costs were funded by the intensive care unit research fund.

Service of Adult Intensive Care Medicine, Lausanne University Hospital, 1011 Lausanne, Switzerland: Zied Ltaief, Antoine Guillaume Schneider & Lucas Liaudet
Unit of Pathophysiology, Faculty of Biology and Medicine, University of Lausanne, 1011 Lausanne, Switzerland: Lucas Liaudet

ZL performed the literature review, drew the figures and drafted the manuscript. AS critically reviewed the manuscript. LL critically reviewed the manuscript. All authors read and approved the final manuscript. Correspondence to Antoine Guillaume Schneider. All authors stated they have no conflicts of interest to declare.

Ltaief, Z., Schneider, A.G. & Liaudet, L. Pathophysiology and clinical implications of the veno-arterial PCO2 gap. Crit Care 25, 318 (2021). DOI: https://doi.org/10.1186/s13054-021-03671-w

Selected Articles from the Annual Update in Intensive Care and Emergency Medicine 2021
Fox sightings in a city are related to certain land use classes and sociodemographics: results from a citizen science project

Theresa Walter1,4, Richard Zink2, Gregor Laaha3, Johann G. Zaller4 & Florian Heigl4 (ORCID: orcid.org/0000-0002-0083-4908)

Red foxes (Vulpes vulpes L.) have become successful inhabitants of urban areas in recent years. However, our knowledge about the occurrence and distribution of these urban foxes, and their association with land uses, is poor, partly because many favoured habitats are on private properties and therefore hardly accessible to scientists. We assumed that citizen science, i.e. the involvement of the public, could enable researchers to bridge this information gap. We analysed 1179 fox sightings in the city of Vienna, Austria, reported via citizen science projects, to examine relationships between foxes and the surrounding land use classes as well as sociodemographic parameters. Conditional probabilities of encountering foxes were substantially higher in gardens, areas with a low building density, parks or squares than in agricultural areas, industrial areas or forests. Generalized linear model analyses showed that sociodemographic parameters such as education levels, district area, population density and average household income additionally improved the predictability of fox sightings. Reports of fox sightings by citizen scientists might help to support the establishment of wildlife management in cities. Additionally, these data could be used to address public health issues related to red foxes, as they can carry zoonoses that are also dangerous to humans.

Urban areas are increasing worldwide [1]; hence, they will become more important for wildlife, especially for small and medium-sized carnivores [2]. For wildlife, various potential habitats exist in urban areas, including buildings, streets, squares, gardens, parks and other areas, all associated with a wide variety of disturbances by humans.
These land use classes are used by wildlife in different ways, e.g. gardens as a food resource, parks as hiding places, or streets for migration. The red fox (Vulpes vulpes L., 1758) is one of the most adaptive and widely distributed carnivore species globally [2]. Until the 1980s, foxes in urban areas were mainly reported from the United Kingdom; since 1985, however, they have frequently been noticed in many other cities in Canada, Australia, Switzerland, Germany, Japan and Austria [2,3,4, 25, 62]. There are several reasons why urban areas are attractive to foxes [2]. First, there is a high and constant food availability in urban areas. Second, cities provide safety from interspecific competition. Third, urban areas can provide more shelter or den sites compared to rural areas. Fourth, in cities foxes are safe from being hunted, as legal regulations usually restrict hunting near houses [5]. Studying fox ecology in urban areas, with a focus on occurrence, distribution and the use of different land use classes, is especially important for public authorities when identifying possible risks of human–wildlife conflicts or disease outbreaks. However, despite the increasing trend in urban fox populations, little is still known about associations between foxes and surrounding land use classes. Studying urban foxes with common non-invasive monitoring methods like camera trapping, transect sampling or hair sampling is challenging due to (i) the omnipresence of people and dogs, (ii) the risk of theft of exposed research equipment in public space [6,7,8] and (iii) private property, and therefore no access for scientists to many urban habitats frequently favoured by animals (e.g., private gardens, industrial areas) [6, 7, 9].
Findings from studies on urban foxes show that their home ranges tend to be smaller than those of rural foxes [10], and urban foxes have been shown to be less territorial and to live in family groups more often than rural foxes, due to more stable food abundances (e.g., Bristol 37 individuals/km2 [11], Melbourne 16 individuals/km2 [12]; in comparison, rural Britain 0.16 to 2.62 individuals/km2 [13], rural Germany 0.7 to 2.7 individuals/km2 [14]). For the city of Zurich, Switzerland, analysis of fox stomach contents showed that more than 50% of an average stomach content came from anthropogenic food resources [15]. The objective of the current study was to test whether a citizen science approach might be suitable to address the above-mentioned challenges regarding research on urban foxes [6, 9, 16]. We define citizen science as scientific research carried out with the aid of interested volunteers [17]. We are aware of the concerns regarding potential biases of citizen science data in terms of geographical coverage and data quality. However, in wildlife research, citizen science has a long-standing tradition and is meanwhile also scientifically acknowledged [6, 8, 17]. Despite potential bias in geographic coverage due to unbalanced numbers of participants in some regions of a project, and uncertainties in data quality when appropriate quality controls are missing, citizen science can be cost-effective for long-term monitoring [18]. Only a few studies address human–carnivore encounters in urban areas [9, 19, 20]. These studies found that wildlife sightings depend on the habitat use and activity patterns of wildlife, but also on the use and accessibility of different land use classes by humans, and on the visibility of wildlife in different habitats [19, 20].
By comparing radio-telemetry data and public sightings of urban coyotes, it was shown that public sightings overestimated the use of open vegetation as habitat compared to forests with short sight distances. Public sightings were biased towards habitats where people concentrate and towards daylight hours, when people are more active, although coyotes moved greater distances at night [19]. The positive association of coyote encounters with building densities in another study was due to more people being present in these areas, not to coyotes using these areas more frequently [20]. Additionally, both humans and wildlife show certain activity patterns in their daily life, which may influence wildlife sightings [21]. The aim of the current study was to assess to what extent sightings of urban foxes by citizens are influenced by the surrounding land use and/or sociodemographic parameters. In this study, we define sightings as human–fox encounters reported via our citizen science project website. Additionally, we investigated temporal changes in urban fox sightings over years, months and times of day. To the best of our knowledge, our study is the first to use citizen science as a non-invasive method to study urban fox occurrence and distribution on a large scale and to include sociodemographic data as an explanatory variable [20]. Results should (i) help researchers to establish a large-scale monitoring system for urban areas using a citizen science method, (ii) inform wildlife managers in urban areas about human–fox encounters and therefore (iii) lay the foundation for future systems to prevent human–wildlife conflicts as well as the spread of fox-related diseases.

Urban fox sightings

A total of 1179 fox sightings were reported between 2010 and 2015. Foxes were observed in every year of the study duration. The exact date of sighting is known for 966 fox sightings. Fox sightings were not equally distributed across months (Χ2 = 171.913, df = 11, P < 0.01; Fig.
1a). Across years, most fox sightings were reported in July (n = 130) and the fewest in November (n = 29). Foxes in Vienna could be observed at every hour of the day; 41.66% of the sightings were reported between 9 p.m. and 3 a.m. (Fig. 1b). Fox sightings were not equally distributed across the day (Χ2 = 154.2564, df = 23, P < 0.01). The number of fox sightings per grid cell varied from 0 to 18 (Fig. 2a). The overall probability of sighting a fox in Vienna, calculated per grid cell, was 0.27. When calculating the conditional probabilities for each land use class, this value was used as the threshold for deciding which land use classes influenced fox sightings positively or negatively.

Fig. 1: Fox sightings in the city of Vienna per month (n = 966; a) and per hour of the day (n = 468; b) as percentage of total fox sightings between 2010 and 2015.

Fig. 2: Number of fox sightings in Vienna between 2010 and 2015 per 400 × 400 m grid cell (n = 1179; a). Map of conditional probabilities (P) of fox sightings for land use classes in Vienna (b). The darker green the area, the higher the value of P.

Conditional probabilities P(E|B) were calculated for all 58 land use classes (Fig. 2b). All land use classes with P > 0.27 are positively associated with fox sightings (Fig. 2b, Table 1), whereas land use classes with P < 0.27 are negatively associated with them (Fig. 2b, Table 2). Land use classes with a positive association with fox sightings accounted for 48.54%, and land use classes with a negative association for 51.46%, of the total research area. Gardens and areas with a low building density, as well as parks and squares, are positively associated with fox sightings, whereas agricultural areas, a diverse range of small other green areas, factory premises and industrial areas, and also forests are negatively associated with fox sightings.
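The conditional-probability calculation reduces to P(E|Bj) = P(E ∩ Bj)/P(Bj), which over grid cells is simply the share of cells containing land use class Bj in which a fox was sighted. A minimal sketch with made-up cells (the study's 58 classes and real counts are not reproduced here):

```python
# Sketch of the conditional probability P(E|Bj) of a fox sighting (E)
# given that a grid cell contains land use class Bj. Toy data only.
# Each cell: (set of land use classes present, fox sighted in the cell?)
cells = [
    ({"garden", "park"}, True),
    ({"garden"}, True),
    ({"industrial"}, False),
    ({"industrial", "garden"}, False),
    ({"park"}, True),
    ({"forest"}, False),
    ({"forest", "park"}, False),
    ({"garden"}, False),
]

def overall_sighting_prob(cells):
    """P(E): share of all grid cells with at least one fox sighting."""
    return sum(seen for _, seen in cells) / len(cells)

def conditional_sighting_prob(cells, land_use):
    """P(E|Bj) = P(E and Bj) / P(Bj); the total cell count cancels."""
    with_b = [seen for land_uses, seen in cells if land_use in land_uses]
    return sum(with_b) / len(with_b)

threshold = overall_sighting_prob(cells)  # 0.27 in the study
for b in sorted(set().union(*(lu for lu, _ in cells))):
    p = conditional_sighting_prob(cells, b)
    sign = "positive" if p > threshold else "negative"
    print(f"{b:10s} P(E|B) = {p:.2f} ({sign} association)")
```

With these toy cells, gardens and parks come out above the overall per-cell probability and industrial and forest cells below it, mirroring the pattern reported for Vienna.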
For some of the land use classes with a negative association with fox sightings, for instance industrial areas, the relative frequency of fox sightings is rather high compared with others that are positively associated with fox sightings (Tables 1, 2 respectively). However, the availability of these land use classes within the study area is also high; the probability of fox sightings per land use class is therefore put into perspective by dividing it by P(Bj), thus calculating the conditional probability of fox sightings per land use class.

Table 1: Conditional probabilities of fox sightings in Vienna from 2010 to 2015 for all land use classes with a positive association with fox sightings, with P(E|Bj) > 0.27.

Table 2: Conditional probabilities of fox sightings for all land use classes with a negative association with fox sightings, with P(E|Bj) < 0.27.

Influencing factors for fox sightings

Influencing factors for fox sightings were analysed with three different generalised linear models. The model GLM1, containing only the percentages of land use classes as predictor variables, showed a highly significant positive influence of different kinds of gardens (detached house gardens, court gardens, allotments), parks and squares on fox sightings, as well as of the zoo (Additional file 1: Table S1). Fields, streams, industrial areas and sport fields showed a significant negative influence on human–fox encounters. For this model, AIC was 4302.3, Cox and Snell R2 = 0.3084 and VIF < 1.5 for all coefficients, thus meeting the criterion of not exceeding a VIF of 10 [22]. The model GLM2, containing only sociodemographic influence factors, showed a significant influence of the number of people with different education levels per district on reported fox sightings per grid cell (Additional file 2: Table S2).
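For readers wanting to reproduce this kind of analysis, the following self-contained sketch fits a count-data GLM (assuming a Poisson model with log link for illustration; the paper's exact model family is not restated here) to simulated per-cell sighting counts, and computes the AIC and variance inflation factors used for model comparison above. All data and coefficient values are made up.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

# Simulated design: intercept plus three standardized covariates, e.g.
# % garden, % industrial area, residents with a university degree (made up).
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n),
                     rng.normal(size=n), rng.normal(size=n)])
true_beta = np.array([0.2, 0.5, -0.4, 0.3])
y = rng.poisson(np.exp(X @ true_beta))  # fox sightings per grid cell

def fit_poisson_glm(X, y, n_iter=50):
    """Poisson GLM with log link, fitted by Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # Poisson mean; variance = mean
        beta = beta + np.linalg.solve((X * mu[:, None]).T @ X, X.T @ (y - mu))
    return beta

def poisson_aic(X, y, beta):
    """AIC = 2k - 2 log-likelihood for the Poisson model."""
    mu = np.exp(X @ beta)
    loglik = float(np.sum(y * np.log(mu) - mu)) - sum(lgamma(v + 1) for v in y)
    return 2 * X.shape[1] - 2 * loglik

def vif(X, j):
    """Variance inflation factor of predictor j (regress it on the rest)."""
    others = np.delete(X, j, axis=1)
    coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ coef
    r2 = 1.0 - resid.var() / X[:, j].var()
    return 1.0 / (1.0 - r2)

beta_hat = fit_poisson_glm(X, y)
print("coefficients:", np.round(beta_hat, 2))
print("AIC:", round(poisson_aic(X, y, beta_hat), 1))
print("max VIF:", round(max(vif(X, j) for j in range(1, X.shape[1])), 2))
```

Comparing the AIC of nested candidate models (land use only, sociodemographics only, both) reproduces the model-selection logic of GLM1-GLM3, and the VIF check corresponds to the multicollinearity criterion (VIF < 10) cited above.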
The number of reported fox sightings increased with increasing numbers of people with a university degree per district and decreased with increasing numbers of people with compulsory education as their highest level of education, increasing district area and increasing average household income. Population density had no significant influence on reported fox sightings (AIC = 4661.4, Cox and Snell R2 = 0.18, VIF < 2.8 for all coefficients). In model GLM3, percentages of land use classes per grid cell were combined with sociodemographic influence factors (Table 3). The positive and negative influences of the different factors on reported fox sightings remained nearly the same, but model fit improved compared with considering only land use information or sociodemographics (AIC = 4089.9, Cox and Snell R2 = 0.3701, VIF < 3.7 for all coefficients). Table 3 Model-averaged coefficients of the GLM3 for the factors influencing fox sightings in the city of Vienna containing land use classes and sociodemographic values as explanatory variables Research in urban wildlife ecology and human–wildlife interactions in cities is becoming more important as more and more people live in urban areas [1, 2]. This is the first study analysing the occurrence and distribution of urban fox sightings in relation to land use and sociodemographics in a European city using a citizen science approach. Fox sightings were not equally distributed across the year; 51% of fox sightings were made between May and August. This could be explained by a fox population peak in these months, with many young foxes present and gradually starting to explore greater areas [23]. It should also be considered that the internet platform "StadtWildTiere" was launched and promoted to the public at the end of May, and that citizens are in general more active outdoors in the summer months. Fox sightings were also reported for every hour of the day (Fig. 1). Between 6 p.m.
and midnight about 45% of all reported human–fox encounters took place, whereas between midnight and 6 a.m. only about 22% of the sightings were reported. This distribution of sightings is most likely a consequence of human behaviour and activity patterns rather than of fox activity patterns, since foxes are considered to be mainly active at night [24]. Gloor found that urban foxes in Zurich preferred public parks and other areas closed to humans during the first half of the night, and used residential areas more in the second half of the night, when human activity was low in these areas [25]. For foxes in Bristol (UK) it was even shown that they crossed fewer roads before midnight than after, supposedly adapting their activity patterns to reduce road mortality risks and to avoid human activity [26]. Fox sightings were reported throughout all districts of the city. Our finding that fox sightings are affected by land use classes suggests that foxes prefer certain land use classes [20]. High conditional probabilities were calculated for different types of gardens and areas with a low building density, as well as for parks and squares. Our citizen science data are thus in line with telemetry studies on urban foxes [10, 25]. People also had good access to these land use classes, and foxes are easily visible there, although during the day they tend to rest in vegetative structures [25]. Low conditional probabilities for fox sightings were calculated for agricultural areas, a diverse range of small other green areas, factory premises and industrial areas, and also forests. Based on several studies, one would assume that the number of fox sightings in these land use classes in Vienna would be as high as that in gardens and parks; however, several aspects should be considered [27,28,29]. First, a sighting of a fox on an agricultural field or in the forest within the city borders may not be as special for people as a sighting in their own garden.
Therefore, foxes seen in those land use classes might not be reported as often as foxes seen in gardens or parks within the city. These results are consistent with other research which found that sampling effort can bias the results of citizen science projects [30,31,32]. Second, the visibility of foxes in a forest is likely to be worse than in gardens or parks. Third, access to industrial areas and factory premises is limited to operating hours and to people with permission to enter. It is possible that our project did not reach enough citizen scientists with access to these land use classes, resulting in low conditional probability values. A special land use class was the zoo, situated in the Schönbrunn castle grounds: while accounting for only 0.04% of the study area, foxes were reported in two out of the three grid cells containing the zoo as a land use class, resulting in the highest conditional probability value of all land use classes. The reported fox sightings from the zoo are sightings of a fox family, quite famous among Viennese people, roaming the premises of the zoo, not of animals held in captivity. The last aspect to consider is of course the possibility that no foxes were present in areas with no reported sightings. Similarly to the conditional probability results, analyses with GLMs indicated that fox sightings increased with increasing area of private gardens, public parks and squares. This again mirrors, on a large scale, the habitat use by urban foxes found in various studies at smaller scales using other methods [10, 25, 33, 34]. These land use classes provide foxes with easy access to food resources as well as shelter. Additionally, these land use classes are also preferred by humans, which makes a human–fox encounter more likely. As mentioned above, these results certainly do not indicate that land use classes with no fox sightings harbour no foxes.
It might also be that non-reports from these land use classes originated in a human perception of these land use classes as 'not urban', and therefore not worth reporting a fox sighting to a project on urban wildlife. This is similar to other studies which refer to 'reporting bias' in citizen science projects [35, 36]. The GLM containing only sociodemographic predictor variables showed that education level is highly significant, indicating that people with a university degree reported fox sightings more often than people with only compulsory education. Since citizen science in general is not restricted to more highly educated people [37], this result can be interpreted to mean that our project promotion was focused on the target group of people interested in wildlife. However, the result is also in line with previous findings showing that some citizen science projects seem to be more attractive to people with higher education (e.g. [38]). This challenge of reaching a broad target audience to avoid bias in data collection in citizen science projects could be addressed by training observers with different educational backgrounds or by reaching a broader audience through different public relations activities. However, the problem of reaching a broad audience exists in science communication as well; it is multi-faceted and can only be solved by many parallel activities of scientists, communicators, politicians and NGOs. As expected, district area showed no significant influence in explaining fox sightings, and human population density did not remain in the model after stepwise AIC selection was performed. Data quality is a core issue of every scientific research project, especially when citizen scientists are involved [39,40,41,42,43,44]. More than 60% of the sightings were submitted by citizen scientists without a photo as proof.
Nevertheless, we considered submissions without a photo of the reported fox sighting to be sufficient, as the red fox is a well-known species and not easily confused with any other wildlife species living in Vienna. In citizen science projects, variation in observer quality and variation in sampling effort over time and space often pose challenges for data analysis [41, 45,46,47]. Including observer characteristics in the statistical analysis can account for variation in data sets gathered by citizen scientists [21, 48]. In our study, the combination of land use classes and sociodemographic data led to a better model than land use classes alone. When information on the knowledge of every single observer is lacking, sociodemographic census data have been shown to be an important source of variation in citizen science data [20]. Additionally, 20% of the fox sightings were made in private gardens and other forms of private property, which would be hard for researchers to access [9]. Therefore, citizen science proved to be a feasible method for researching urban foxes. Citizen science adds different research possibilities to mammal monitoring in urban areas compared with more traditional monitoring methods like camera trapping and transect monitoring. A citizen science approach to wildlife monitoring is appropriate when interactions with wildlife are central to the research question [9]. This can be of high interest when working in urban areas, as human–wildlife contact is increased in certain areas of cities [49]. When researching urban wildlife, the success of a citizen science project can be affected by the species studied. The red fox is a charismatic, well-known species and therefore a well-suited study model. However, even for urban rats, a species not liked by many people, citizen science is nowadays considered a research method [50].
Additionally, a new possibility of comparing data from different cities arises when data on wildlife sightings are gathered through the same project design, as is currently done within the project "StadtWildTiere" in Zurich (Switzerland), Berlin (Germany) and Vienna (Austria). Our findings could also have implications for wildlife management in cities and for public health issues. For red foxes in Central Europe, infection with and zoonotic transmission of the fox tapeworm (Echinococcus multilocularis) is already of interest for urban areas [51,52,53,54]. Human–wildlife interactions affect red fox populations as well as the predation rate on the infected intermediate hosts of E. multilocularis and should therefore be considered in management strategies for this disease [55]. Despite common reservations against citizen science as a method, our study demonstrated that these can be partly overcome by including sociodemographic factors in the analyses. Taking the results of this study as a basis for future citizen science projects in urban areas, we recommend developing citizen science projects with a broad focus on various target groups to foster the reporting of fox sightings on a large scale. The mostly positive feeling associated with a personal observation of wildlife in the city during a citizen science project could lead to a more relaxed coexistence of humans and animals in cities in general. Additionally, such projects would have the potential to predict the likelihood of human–fox encounters in different places of the city, informing public authorities about possible wildlife conflict areas and public health issues. The current study uses data from the citizen science project "Wildtiere in Wien" (translated: "wildlife in Vienna"), running from 2010 until May 2015, and its follow-up project "StadtWildTiere" (translated: "urban wildlife"; http://www.stadtwildtiere.at).
The citizen science projects were conducted in Vienna, the capital city of Austria, with a total area of 414.87 km2 and about 1.8 million inhabitants in 2015 (Fig. 3). Vienna is surrounded by the Viennese forest in the west, the agricultural plains of the Marchfeld to the northeast, the floodplains of the Lobau to the east, and the Viennese Basin to the south. Green areas (e.g. forests, agricultural areas, parks) make up 45.1% of the city area; 35.8% are building areas and 14.4% traffic areas (e.g. roads, railway tracks); the remaining 4.7% of the area are water bodies. The percentage of green area varies between 2 and 15% in the inner districts and can reach up to 70% in the districts at the fringe of the city. The green spaces are well connected at the city edges and rather patchy in the centre; however, a diverse range of rivers and train tracks connects green areas throughout the city [56]. Since the outskirts of Vienna are dominated by rural land use, the actual research area was not defined by the political border of the city of Vienna, but by all buildings in Vienna surrounded by a 400 m buffer within a connected area, resulting in a 36,594 ha research area. Location of the study area (a). Fox sightings reported between 2010 and 2015 (n = 1179; b) Data collection and verification In the first citizen science project, "Wildtiere in Wien", fox sightings were reported by citizens via phone calls, emails, an online questionnaire on the Research Institute of Wildlife Ecology (FIWI) homepage, or at public information days. Through this project 641 sightings were gathered. All of those sightings included information on the species sighted, at least the year of the sighting, and the location of the sighting. The follow-up project "StadtWildTiere" pursued additional goals and also enabled us to gather more information about wildlife sightings. However, species, date and location are properties of a data set that were collected in both projects.
On the project website citizen scientists are informed about different wildlife species, where animals can be seen, how they can be supported and how conflicts should be managed. The main part of this online platform is designed for easy entry of wildlife sightings by citizen scientists. Registration is not mandatory to enter data. Different information on sightings is gathered, including location (either by address or by pinning a point on an embedded Google Map [57]), animal species, number of individuals, date and time of the sighting, and different kinds of evidence for the presence of an animal (feeding traces, trace marks, scats, den/nest, call, trace marks in snow). Up to three photos can be uploaded for each reported sighting. Location, species and date are mandatory; all other information, including the photo upload, is not. From 27 May until 15 September 2015, 350 fox sightings were reported via the online form. In addition to citizen science data, we included 188 fox carcasses reported by a service provided by the city of Vienna, which collects animal carcasses (ebs Wien Tierservice, Vienna, Austria). After checking all reports for plausibility and deleting entries with obvious mistakes in data entry, we used 1179 reports of fox sightings for our analyses. Of these reports, 29.5% of the sightings were documented either by a photo or by a fox carcass that was found; scientists reported 5.7% of all sightings; 64.8% of the sightings were reported without a photo. We considered reports without a photo sufficient, as the red fox is a well-known species not easily confused with any other wildlife species living in Vienna. Remote sensing data The fox sightings were processed using the geographic information system ArcGIS (ESRI ArcGIS 10.2.2) [58]. Vienna's political borders and district borders were obtained from the platform Open Data Austria [59].
Habitat descriptions were based on 58 land use classes from the official green space monitoring of Vienna (Grünraummonitoring Wien). These classes are distinguished by their vegetation and their potential to serve as habitat for animals, plants and humans [60, 61]. The land use polygons were intersected with a 400 m × 400 m grid (16 ha), corresponding to the approximate home range of urban foxes calculated for Zurich, Switzerland [25], to obtain land use fractions of possible fox habitats. Zurich was chosen as a reference due to the availability of home range data for urban foxes in Central Europe, as well as a similar amount of green space in the city and a similar lifestyle of its people. Overall, the known home ranges of foxes in urban areas vary considerably between seasons, sexes and across the world; individual home ranges range from 5.5 to 70 ha [10, 12, 62]. Sociodemographic influences Sociodemographic census data on the population of Vienna, provided and collected by the city of Vienna and the Austrian governmental statistics agency "Statistik Austria", were used to analyse sociodemographic influence factors on fox sightings [63]. Sociodemographic characteristics were available on a district basis. Vienna's 23 districts have areas ranging from 109 ha to 10,231 ha and populations varying between 16,339 and 189,713 inhabitants [56]. Therefore, each grid cell was assigned to a political district of Vienna according to its position. When grid cells bordered two or more districts, they were assigned to the district in which most of the grid cell area was located. The sociodemographic characteristics used were: district area (District_area), population density (Population_density), number of people with no more than compulsory education (Edu_compulsory), number of people with a university degree (Edu_university) and average household income (Ave_income).
District area was considered to check for a size effect, i.e. whether more foxes would simply be seen in districts with a larger area. Population density accounted for different degrees of urbanisation (districts with low population density have a more rural character, in contrast to highly populated districts with plenty of sealed surfaces) and for an effect of quantity (whether the number of people living in an area would explain the number of fox sightings). The two education levels were included to test whether level of education had an influence on reporting. Finally, a relationship between average household income and vegetation cover in urban areas has been suggested by many studies (see [20]), so we tested whether average household income has an influence on fox sightings in Vienna. To analyse when and where foxes were observed in Vienna, the distribution of fox sightings was analysed according to years, months and time of day using Chi-squared tests. We subsequently calculated empirical conditional probabilities to analyse the degree to which each land use class is associated with fox sightings [64]. Raw probabilities of fox sightings on land use classes depend on the area of each land use class; it is therefore important to calculate probabilities independently of area sizes. This is done by calculating conditional probabilities, i.e. by dividing the raw probabilities by the relative share of each land use class in the whole study area. For each land use class Bj, the i subareas Aij of grid cells with sightings (event E) were determined.
The conditional probability P(E|Bj) of observing a fox in a specific land use class Bj is then defined as the sum of areal fractions of class Bj in cells with sightings, normalised by the share of Bj in the study area: $$P\left( {{\text{E|B}}_{\text{j}} } \right) = \frac{{{\text{P}}\left( {{\text{E}} \cap {\text{B}}_{\text{j}} } \right)}}{{{\text{P}}\left( {{\text{B}}_{\text{j}} } \right)}} = \frac{{\mathop \sum \nolimits_{\text{i}} ({\text{A}}_{\text{ij}} /{\text{A}}_{\text{tot}} )}}{{{\text{P}}\left( {{\text{B}}_{\text{j}} } \right)}}$$ where \(P\left( {E \cap B_{j} } \right)\) is the relative frequency of fox sightings in land use class \(B_{j}\), \(P\left( {B_{j} } \right)\) is its share in the study area, \(\mathop \sum \nolimits_{i} (A_{ij} /A_{tot} )\) is the sum of areal fractions of class \(B_{j}\) in cells with sightings, and \(A_{tot}\) is the total area under investigation. The conditional probabilities \(P\left( {E |B_{j} } \right)\) were finally compared to the overall probability \(P\left( E \right)\) of grid cells having a fox sighting in order to conclude which land use classes favour or hamper fox sightings. To analyse which factors influence fox observations, three different generalised linear models (GLMs) with Poisson distribution were employed [65, 66]. GLM1 had only the percentage of each land use class per grid cell as predictor variables, GLM2 had only sociodemographic values as predictor variables, and GLM3 had both percentages of land use classes per grid cell and sociodemographic values as predictors. Stepwise variable selection based on the Akaike information criterion (AIC) was used to find the best-fitting models. Cox and Snell R2 was calculated as a generalised coefficient of determination, and variance inflation factors (VIF) were calculated to assess multicollinearity of the predictor variables of each model. As a rule, VIF for all predictor variables in a model should be less than 10 [22]. All analyses were performed using the statistical software R 3.2.1 [67].
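The formula above can be sketched directly in code. This is a minimal illustration of the computation, not the study's pipeline (the paper used R and GIS tools); the land use class, its share of the study area and the subareas in sighting cells are all hypothetical:

```python
# Conditional probability of a fox sighting given land use class B_j,
# following the formula: P(E|B_j) = sum_i(A_ij / A_tot) / P(B_j).
# All numbers below are made up for illustration.

def conditional_probability(areas_in_sighting_cells, total_area, share_of_class):
    """P(E|B_j): areal fractions of class B_j lying inside grid cells
    with at least one sighting, divided by the class's overall share
    of the study area."""
    p_e_and_b = sum(a / total_area for a in areas_in_sighting_cells)
    return p_e_and_b / share_of_class

# Hypothetical class covering 10% of a 36,594-ha study area, with
# 2,000 ha of it lying in grid cells that had at least one sighting:
A_tot = 36594.0
p_b = 0.10
p_e_given_b = conditional_probability([1200.0, 800.0], A_tot, p_b)
# Compare against the overall per-cell sighting probability (0.27):
print(round(p_e_given_b, 3), p_e_given_b > 0.27)  # -> 0.547 True
```

Classes whose conditional probability exceeds the overall probability P(E) = 0.27 are the ones classified as positively associated with fox sightings in Tables 1 and 2.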
AIC: Akaike information criterion FIWI: Research Institute of Wildlife Ecology GLM: generalised linear model VIF: variance inflation factor United Nations, Department of Economic and Social Affairs, Population Division. World urbanization prospects: the 2014 revision, highlights. New York: United Nations; 2014. Bateman PW, Fleming PA. Big city life: carnivores in urban environments. J Zool. 2012;287:1–23. Ikeda T, Yoshimura M, Onoyama K, Oku Y, Nonaka N, Katakura K. Where to deliver baits for deworming urban red foxes for Echinococcus multilocularis control: new protocol for micro-habitat modeling of fox denning requirements. Parasit Vectors. 2014;7:357. Börner K. Untersuchungen zur Raumnutzung des Rotfuchses, Vulpes vulpes (L., 1758), in verschieden anthropogen beeinflussten Lebensräumen Berlins und Brandenburgs. Humboldt-Universität zu Berlin; 2014. Presse- und Informationsdienst (Magistratsabteilung 53). Gesetz über die Regelung des Jagdwesens (Wiener Jagdgesetz). 2013. https://www.wien.gv.at/recht/landesrecht-wien/rechtsvorschriften/html/l9200000.htm. Accessed 27 Oct 2017. Lepczyk CA, Mertig AG, Liu J. Assessing landowner activities related to birds across rural-to-urban landscapes. Environ Manage. 2004;33:110–25. Colding J, Lundberg J, Folke C. Incorporating green-area user groups in urban ecosystem management. AMBIO J Hum Environ. 2006;35:237–44. Cohn JP. Citizen science: can volunteers do real research? Bioscience. 2008;58:192. Weckel ME, Mack D, Nagy C, Christie R, Wincorn A. Using citizen science to map human–coyote interaction in suburban New York, USA. J Wildl Manag. 2010;74:1163–71. Adkins CA, Stott P. Home ranges, movements and habitat associations of red foxes Vulpes vulpes in suburban Toronto, Ontario, Canada. J Zool. 1998;244:335–46. Baker PJ, Funk SM, Harris S, White PCL. Flexible spatial organization of urban foxes, Vulpes vulpes, before and during an outbreak of sarcoptic mange. Anim Behav. 2000;59:127–46. White PCL, Saunders G, Harris S.
Spatio-temporal patterns of home range use by foxes (Vulpes vulpes) in urban environments. J Anim Ecol. 1996;65:121–5. Heydon MJ, Reynolds JC, Short MJ. Variation in abundance of foxes (Vulpes vulpes) between three regions of rural Britain, in relation to landscape and other variables. J Zool. 2000;251:253–64. Keuling O, Greiser G, Grauer A, Strauß E, Bartel-Steinbach M, Klein R, et al. The German wildlife information system (WILD): population densities and den use of red foxes (Vulpes vulpes) and badgers (Meles meles) during 2003–2007 in Germany. Eur J Wildl Res. 2011;57:95–105. Contesse P, Hegglin D, Gloor S, Bontadina F, Deplazes P. The diet of urban foxes (Vulpes vulpes) and the availability of anthropogenic food in the city of Zurich, Switzerland. Mamm Biol. 2004;69:81–95. Dickinson JL, Zuckerberg B, Bonter DN. Citizen science as an ecological research tool: challenges and benefits. Annu Rev Ecol Evol Syst. 2010;41:149–72. Silvertown J. A new dawn for citizen science. Trends Ecol Evol. 2009;24:467–71. Bonney R, Cooper CB, Dickinson J, Kelling S, Phillips T, Rosenberg KV, et al. Citizen science: a developing tool for expanding science knowledge and scientific literacy. Bioscience. 2009;59:977–84. Quinn T. Using public sighting information to investigate coyote use of urban habitat. J Wildl Manag. 1995;59:238–45. Wine S, Gagné SA, Meentemeyer RK. Understanding human–coyote encounters in urban ecosystems using citizen science data: what do socioeconomics tell us? Environ Manage. 2015;55:159–70. Heigl F, Stretz RC, Steiner W, Suppan F, Bauer T, Laaha G, et al. Comparing road-kill datasets from hunters and citizen scientists in a landscape context. Remote Sens. 2016;8:832. Kutner MH, Nachtsheim CJ, Neter J. Applied linear regression models. 4th ed. Boston: McGraw-Hill Education; 2004. Robertson C, Baker P, Harris S. Ranging behaviour of juvenile red foxes and its implications for management. Acta Theriol Warsz. 2000;45:525–35.
Doncaster CP, Macdonald DW. Activity patterns and interactions of red foxes (Vulpes vulpes) in Oxford city. J Zool. 1997;241:73–87. Gloor S. The rise of urban foxes (Vulpes vulpes) in Switzerland and ecological and parasitological aspects of a fox population in the recently colonised city of Zürich. Dissertation. University of Zurich; 2002. Baker PJ, Dowding CV, Molony SE, White PCL, Harris S. Activity patterns of urban red foxes (Vulpes vulpes) reduce the risk of traffic-induced mortality. Behav Ecol. 2007;18:716–24. Etten WK, Wilson KR, Crabtree RL. Habitat use of red foxes in Yellowstone National Park based on snow tracking and telemetry. J Mammal. 2007;88:1498–507. Goldyn B, Hromada M, Surmacki A, Tryjanowski P. Habitat use and diet of the red fox Vulpes vulpes in an agricultural landscape in Poland. Z Für Jagdwiss. 2003;49:191–200. Janko C, Schröder W, Linke S, König A. Space use and resting site selection of red foxes (Vulpes vulpes) living near villages and small towns in Southern Germany. Acta Theriol (Warsz). 2012;57:245–50. Zuckerberg B, McGarigal K. Widening the circle of investigation—the interface between citizen science and landscape ecology. In: Citizen science: public participation in environmental research. Ithaca: Cornell University Press; 2012. Lakeman-Fraser P, Gosling L, Moffat AJ, West SE, Fradera R, Davies L, et al. To have your citizen science cake and eat it? Delivering research and outreach through Open Air Laboratories (OPAL). BMC Ecol. 2016;16:57–70. Fink D, Hochachka WM. Using data mining to discover biological patterns in citizen science observations. In: Citizen science: public participation in environmental research. Ithaca: Cornell University Press; 2012. p. 125–38. Harris S, Rayner JMV. Urban fox (Vulpes vulpes) population estimates and habitat requirements in several British cities. J Anim Ecol. 1986;55:575–91. Scott DM, Berg MJ, Tolhurst BA, Chauvenet ALM, Smith GC, Neaves K, et al.
Changes in the distribution of red foxes (Vulpes vulpes) in urban areas in Great Britain: findings and limitations of a media-driven nationwide survey. PLoS ONE. 2014;9:e99059. Isaac NJB, van Strien AJ, August TA, de Zeeuw MP, Roy DB. Statistics for citizen science: extracting signals of change from noisy ecological data. Methods Ecol Evol. 2014;5:1052–60. van Strien AJ, van Swaay CAM, Termaat T. Opportunistic citizen science data of animal species produce reliable estimates of distribution trends if analysed with occupancy models. J Appl Ecol. 2013;50:1450–8. Miller-Rushing A, Primack R, Bonney R. The history of public participation in ecological research. Front Ecol Environ. 2012;10:285–90. Evans C, Abrams E, Reitsma R, Roux K, Salmonsen L, Marra PP. The neighborhood nestwatch program: participant outcomes of a citizen-science ecological research project. Conserv Biol. 2005;19:589–94. van der Velde T, Milton DA, Lawson TJ, Wilcox C, Lansdell M, Davis G, et al. Comparison of marine debris data collected by researchers and citizen scientists: is citizen science data worth the effort? Biol Conserv. 2017;208:127–38. MacKenzie CM, Murray G, Primack R, Weihrauch D. Lessons from citizen science: assessing volunteer-collected plant phenology data with Mountain Watch. Biol Conserv. 2017;208:121–6. Burgess HK, DeBey LB, Froehlich HE, Schmidt N, Theobald EJ, Ettinger AK, et al. The science of citizen science: exploring barriers to use as a primary research tool. Biol Conserv. 2017;208:113–20. Kosmala M, Wiggins A, Swanson A, Simmons B. Assessing data quality in citizen science. Front Ecol Environ. 2016;14:551–60. Lewandowski E, Specht H. Influence of volunteer and project characteristics on data quality of biological surveys. Conserv Biol. 2015;29:713–23. Hunter J, Alabri A, van Ingen C. Assessing the quality and trustworthiness of citizen science data. Concurr Comput-Pract Exp. 2013;25:454–66. Sullivan BL, Aycrigg JL, Barry JH, Bonney RE, Bruns N, Cooper CB, et al.
The eBird enterprise: an integrated approach to development and application of citizen science. Biol Conserv. 2014;169:31–40. Le Rest K, Pinaud D, Bretagnolle V. Volunteer-based surveys offer enhanced opportunities for biodiversity monitoring across broad spatial extent. Ecol Inform. 2015;30:313–7. Gonsamo A, D'Odorico P. Citizen science: best practices to remove observer bias in trend analysis. Int J Biometeorol. 2014;58:2159–63. Newman C, Buesching CD, Macdonald DW. Validating mammal monitoring methods and assessing the performance of volunteers in wildlife conservation—"Sed quis custodiet ipsos custodies?". Biol Conserv. 2003;113:189–97. Kretser HE, Sullivan PJ, Knuth BA. Housing density as an indicator of spatial patterns of reported human–wildlife interactions in Northern New York. Landsc Urban Plan. 2008;84:282–92. Desvars-Larrive A, Baldi M, Walter T, Zink R, Walzer C. Brown rats (Rattus norvegicus) in urban ecosystems: are the constraints related to fieldwork a limit to their study? Urban Ecosyst. 2018;21:951–64. Deplazes P, Hegglin D, Gloor S, Romig T. Wilderness in the city: the urbanization of Echinococcus multilocularis. Trends Parasitol. 2004;20:77–84. König A, Romig T. Fox tapeworm Echinococcus multilocularis, an underestimated threat: a model for estimating risk of contact. Wildl Biol. 2010;16:258–66. Brochier B, De Blander H, Hanosset R, Berkvens D, Losson B, Saegerman C. Echinococcus multilocularis and Toxocara canis in urban red foxes (Vulpes vulpes) in Brussels, Belgium. Prev Vet Med. 2007;80:65–73. Robardet E, Giraudoux P, Caillot C, Augot D, Boue F, Barrat J. Fox defecation behaviour in relation to spatial distribution of voles in an urbanised area: an increasing risk of transmission of Echinococcus multilocularis? Int J Parasitol. 2011;41:145–54. Hegglin D, Bontadina F, Deplazes P. Human–wildlife interactions and zoonotic transmission of Echinococcus multilocularis. Trends Parasitol. 2015;31:167–73. 
Magistratsabteilung der Stadt Wien MA 23-Wirtschaft, Arbeit und Statistik. Statistisches Jahrbuch der Stadt Wien 2015. Vienna, Austria; 2015. Google LLC. Google Maps. 2018. https://www.google.at/maps/. Accessed 26 Feb 2018. ESRI. ArcGIS Desktop. Redlands, California: Environmental Systems Research Institute, Inc.; 2013. http://www.esri.com/arcgis/about-arcgis. Magistratsabteilung 41-Stadtvermessung. Basic map of Vienna. 2011. https://www.data.gv.at/katalog/dataset/332be1f6-5791-43ae-9e37-90cc48e75d24. Hoffert H, Fitzka G, Stangl E, Lumasegger M. Projekt Grünraummonitoring Wien: Gesamtbericht. Vienna, Austria: Magistrat der Stadt Wien, Magistratsabteilung 22-Umweltschutz; 2008. Magistratsabteilung der Stadt Wien MA 22-Umweltschutz. Begriffserklärung zum Grünraummonitoring. 2017. https://www.wien.gv.at/umweltschutz/naturschutz/gruenraummonitoring/gruenraummonitoring-glossar.html. Accessed 29 Sept 2017. Marks CA, Bloomfield TE. Home-range size and selection of natal den and diurnal shelter sites by urban red foxes (Vulpes vulpes) in Melbourne. Wildl Res. 2006;33:339–47. Heigl F, Horvath K, Laaha G, Zaller JG. Amphibian and reptile road-kills on tertiary roads in relation to landscape structure: using a citizen science approach with open-access land cover data. BMC Ecol. 2017;17:24. Crawley MJ. The R book. 1st ed. Hoboken: Wiley; 2007. Zuur AF, Hilbe JM, Ieno EN. A beginner's guide to GLM and GLMM with R: a frequentist and Bayesian perspective for ecologists. Newburgh: Highland Statistics Ltd; 2013. R Development Core Team. R: a language and environment for statistical computing. Vienna, Austria; 2008. http://www.R-project.org. TW, RZ, FH and JGZ conceived and designed the study; TW and RZ performed the study; TW and GL analysed the data; TW, FH and JGZ wrote the paper. All authors read and approved the final manuscript. We are very grateful to all citizen scientists who reported their fox sightings for this study. We would like to thank F.
Suppan for his assistance in the GIS analysis. We are grateful to the city of Vienna (MA22) for providing us with the data of the greenspace monitoring and the ebs Tierservice Wien for providing us with the data on collected fox carcasses. The data that support the findings of this study are available from Magistratsabteilung 22-Umweltschutz, Stadt Wien, but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. In order to protect animals, data on fox sightings generated and analysed are not publicly available either. However, the authors share information upon reasonable request. The development and running costs of the online platform http://www.stadtwildtiere.at were funded by the association "Entdecke & Bewahre Natur (EBN)". Beyond providing funding, EBN did not influence the study. Research Institute of Wildlife Ecology, University of Veterinary Medicine, Vienna, Savoyenstrasse 1, 1160, Vienna, Austria Theresa Walter Austrian Ornithological Centre, Konrad Lorenz Institute of Ethology, University of Veterinary Medicine, Vienna, Savoyenstrasse 1a, 1160, Vienna, Austria Richard Zink Institute for Applied Statistics and Computing, University of Natural Resources and Life Sciences, Vienna, Peter Jordan-Strasse 82, 1190, Vienna, Austria Gregor Laaha Institute of Zoology, University of Natural Resources and Life Sciences, Vienna, Gregor Mendel Strasse 33, 1180, Vienna, Austria Theresa Walter, Johann G. Zaller & Florian Heigl Johann G. Zaller Florian Heigl Correspondence to Florian Heigl. Model-averaged coefficients of the generalised linear model M1 containing only land use classes as explanatory variables that influence fox sightings in the city of Vienna, Austria. Model-averaged coefficients of the generalised linear model M2 containing only sociodemographic values as explanatory variables on fox sightings in the city of Vienna, Austria. Walter, T., Zink, R., Laaha, G.
et al. Fox sightings in a city are related to certain land use classes and sociodemographics: results from a citizen science project. BMC Ecol 18, 50 (2018). https://doi.org/10.1186/s12898-018-0207-7 Human–wildlife interaction Vulpes vulpes Urban ecosystems
Base-band involved integrative modeling for studying the transmission characteristics of wireless link in railway environment Dawei Li1,2, Junhong Wang1,2, Meie Chen1,2, Zhan Zhang1,2 & Zheng Li1,2 EURASIP Journal on Wireless Communications and Networking volume 2015, Article number: 81 (2015) A base-band involved integrative modeling method (BIMM) is proposed for studying the transmission characteristics and bit error rate of wireless communication, in which the transmitting and receiving antennas, the wave propagation environment, and the modulation and demodulation modules are modeled and simulated integratively. Compared with the conventional wave propagation methods used for predicting field coverage, BIMM is capable of taking the interaction between the antennas and the environment into consideration, and can give the time-domain waveforms of signals at different places in the wireless link, including those in the base-band modules. Therefore, the cause of signal distortion and where it occurs can be identified, and the bit error rate of the wireless communication can be analyzed. The BIMM in this paper is implemented with the finite-difference time-domain (FDTD) method and is applied to the wireless link of railway communication. The effect of the electric spark generated by the power supply network of an express train on the transmission properties and the bit error rate is analyzed. With the rapid development and wide application of wireless communication, predicting the wave propagation and field coverage of a wireless link has become important in characterizing the radio channel, especially in complicated environments such as the railway environment. Usually, measurements are taken to obtain the field coverage of wireless communication systems, but measurement is expensive and time consuming and cannot give realistic results in some cases.
For example, the results measured by an antenna in free space (normally used in measuring field strength) cannot reflect the realistic case of an express train system, in which the antenna is installed on top of the locomotive and is influenced by the locomotive body. Besides measurement, numerical methods are also used to predict field coverage. Basically two kinds of numerical methods are used: high-frequency methods and full-wave methods. The typical high-frequency method is the ray-tracing method [1-4], which can give results with acceptable accuracy for simple and regular environments [3,4]. However, the ray-tracing method is not suitable for analyzing the field coverage of a wireless link in complicated environments, so full-wave methods are used in such cases. The typical full-wave method is the finite-difference time-domain (FDTD) method [5,6], which has been used in studying the wave propagation in the micro-cell of mobile communication [7], between different floors of buildings [8], and in indoor environments [9]. It has also been combined with the ray-tracing method to study the effect of walls on indoor field coverage [10-12]. But these works did not consider the interaction between the transmitting/receiving antennas and the environment and can only give the field distribution (field parameters). In our previous work, an integrative modeling method (IMM) was proposed for characterizing the wireless link in complicated environments [13,14]. In IMM, the wireless link is decomposed into three parts, namely the transmitting antenna with its neighboring environment, the wave propagation environment between the transmitting and receiving antennas, and the receiving antenna with its neighboring environment. These three parts can be integratively modeled and simulated by a full-wave method alone (small problems) or by hybrid methods combining full-wave and high-frequency methods (large problems).
The difference between IMM and other available methods is that IMM can not only give the field distribution in the environment but also the voltages/currents of the input and output signals of the antennas (circuit parameters). With IMM, the effect of the environment on the output signal and the cause of signal distortion can be analyzed. In [13,14], FDTD is used to model and simulate the transmitting/receiving antennas together with their neighboring environments, and the ray-tracing method is used to calculate the wave propagation in the rest of the environment. However, IMM can only deal with the RF link of the wireless system and can only explain distortions of the RF signals. But how do these distortions affect the base-band signal? And how does the environment influence the bit error rate of the wireless system? These questions are the motivations of this paper. In this paper, the modeling and analysis of the RF link (IMM) and of the base-band modules are combined, and we call the result the base-band involved integrative modeling method (BIMM). With this method, the time-domain waveform of the signal at different places in the wireless link can be obtained, including the intermediate signal waveforms in the base-band modules. So the cause of signal distortion and where it occurs can be found, and the bit error rate of the complete wireless link can be studied. To our knowledge, no similar work is available that can analyze the relationship between the distortion of the base-band signal and the environment. In this paper, the wireless link of railway communication is studied. The Global System for Mobile Communications-Railway (GSM-R) modulating signal, which is extensively used in express trains, is adopted as the excitation signal. A detailed description of the problem and of the exciting signal involving the base-band information is given in Section 2. In Section 3, several examples are given to show the efficiency of the BIMM. Conclusions are drawn in Section 4.
Problem description and method The complete wireless communication link defined in this paper consists of the modulation and demodulation modules, a transmitting antenna, a receiving antenna, and the radio propagation environment, as shown in Figure 1. Schematic diagram of the wireless communication link. The digital base-band signal modulates the carrier, and the modulated RF signal is radiated into space through the transmitting antenna. When the wireless link environment is small enough, the transmitting and receiving antennas and the environment can be directly included in the FDTD meshes by setting the corresponding parameters at different mesh grids. The FDTD iteration then produces the full-wave effects, including reflection, diffraction, and refraction between the antennas and the environment. These effects influence the time-domain waveform of the output signal of the receiving antenna. When the wireless link environment is too large to handle with the available computer resources, the FDTD method can be combined with the ray-tracing method to solve the long-range propagation problem, as mentioned in [13]. In this paper, we consider only a small-scale wireless link environment, so only the FDTD method is used. The RF signal output from the receiving antenna is recovered by the demodulation module. By comparing the input and output base-band signals, the transmission properties of the whole wireless link and the bit error rate can be studied. RF signal generation and base-band signal recovery The modulation mode of the base-band signal can differ between systems. Gaussian-filtered minimum shift keying (GMSK) [15] is the main modulation mode in the GSM-R system for wireless communication in railways. GMSK modulation can be realized by a pre-modulation low-pass filter (a general Gaussian low-pass filter (LPF)) followed by a frequency modulation (FM) modulator.
The Gaussian low-pass filter can be represented by: $$ h(t)= \exp \left(\frac{-{t}^2}{2{\delta}^2{T}^2}\right)/\left(\sqrt{2\pi}\cdot \delta T\right) $$ $$ \delta =\sqrt{ \ln (2)}/\left(2\pi BT\right) $$ where B is the bandwidth of the LPF and T is the symbol period. BT is a typical parameter of GMSK, and different BT values give different correlations between adjacent symbols. Due to the limitation of computer resources, the symbol rate in this paper is set to 5.4166 × 10^7 bps. Further, the pulse response of the base-band signal is expressed by the following equation: $$ g(t)={a}_i\,\mathrm{rect}\left(\frac{t}{T}\right)\ast h(t)\kern1em \left({a}_i=-1,\ 1\right) $$ The output signal of the Gaussian LPF is then modulated by the FM modulator: $$ x(t)=\sqrt{\frac{2{E}_c}{T}}\cdot \cos \left(2\pi {f}_0t+{\int}_{-\infty}^{t}g\left(\tau \right)d\tau +{\phi}_0\right) $$ where E_c is the energy of a single bit, f_0 is the center frequency of the carrier, and ϕ_0 is the random phase of the carrier. In this paper, orthogonal frequency modulation is used and the center frequency is set to 900 MHz. The received RF signal is demodulated by the orthogonal coherent demodulation method.
It is first multiplied by the two orthogonal carriers: $$ {x}_{Rx}(t)\cdot \cos \left(2\pi {f}_0t\right)=\frac{1}{2}\left( \cos \left(2\pi {f}_0t\right)+1\right) \cos \left({\int}_{0}^{t}g\left(\tau \right)d\tau \right)-\frac{1}{2} \sin \left(2\pi {f}_0t\right) \sin \left({\int}_{0}^{t}g\left(\tau \right)d\tau \right) $$ $$ {x}_{Rx}(t)\cdot \sin \left(2\pi {f}_0t\right)=\frac{1}{2} \sin \left(2\pi {f}_0t\right) \cos \left({\int}_{0}^{t}g\left(\tau \right)d\tau \right)-\frac{1}{2}\left(1- \cos \left(2\pi {f}_0t\right)\right) \sin \left({\int}_{0}^{t}g\left(\tau \right)d\tau \right) $$ and the products are then passed through LPFs to extract the phase information of the modulated signal. This yields two signals, the in-phase and quadrature components: $$ I(t)=-\frac{1}{2} \sin \left({\int}_{0}^{t}g\left(\tau \right)d\tau \right) $$ $$ Q(t)=\frac{1}{2} \cos \left({\int}_{0}^{t}g\left(\tau \right)d\tau \right) $$ Finally, the base-band signal can be recovered from these two signals and their derivatives: $$ g(t)=-4\left(I'(t)Q(t)-I(t)Q'(t)\right) $$ The digital signal can be recovered by sampling this signal. Figure 2 shows the modulation and demodulation process from base-band to base-band. RF signal generation structure. Simulation scheme In this paper, a small-scale wireless link environment is modeled using the FDTD algorithm, which includes the transmitting and receiving antennas and the propagation environment. To further enhance the simulation speed, we adopted a parallel algorithm.
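The demodulation chain of Equations 5-9 can be sketched the same way. The moving-average low-pass filter below is our own choice (the paper does not specify its LPF), and the derivatives in Equation 9 are taken numerically:

```python
import numpy as np

def lowpass_ma(sig, n):
    """Crude moving-average LPF (an assumption; its frequency-response
    nulls sit at integer multiples of fs/n, so n can be chosen to null
    the 2*f0 mixing products)."""
    return np.convolve(sig, np.ones(n) / n, mode="same")

def gmsk_demodulate(x, fs, f0, lpf_len):
    """Coherent demodulator following Eqs. (5)-(9)."""
    t = np.arange(x.size) / fs
    # Eqs. (5), (8): mix with cos, then LPF -> Q(t) = (1/2) cos(phase)
    q = lowpass_ma(x * np.cos(2.0 * np.pi * f0 * t), lpf_len)
    # Eqs. (6), (7): mix with sin, then LPF -> I(t) = -(1/2) sin(phase)
    i = lowpass_ma(x * np.sin(2.0 * np.pi * f0 * t), lpf_len)
    # Eq. (9): g(t) = -4 (I'(t) Q(t) - I(t) Q'(t)) = d(phase)/dt
    di = np.gradient(i, 1.0 / fs)
    dq = np.gradient(q, 1.0 / fs)
    return -4.0 * (di * q - i * dq)
```

A quick sanity check: a carrier offset from f0 by fm has phase 2π·fm·t, so the demodulator should return the constant 2π·fm away from the edges.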
Combining the message passing interface (MPI) with FDTD to realize parallel computation has been proven an efficient way of improving computing speed. The parallel FDTD algorithm utilizes a one-cell overlap region to exchange information between adjacent sub-domains, and only the tangential magnetic fields are exchanged at each time step [16]. After all the parameters are set, the base-band signal is modulated into the RF signal, which excites the transmitting antenna and starts the FDTD computation. The receiving antenna outputs the voltage amplitude at each time step of the FDTD iteration for post-processing. This output RF signal involves the environment effects, such as the multi-path effect and the interference between the antennas and nearby materials, so the demodulated base-band signal also involves the environment effects. Using this base-band involved integrative modeling method, the entire wireless communication link can be simulated accurately and the cause of bit errors can be found. Implementation and results Modeling of antennas and environment In order to show the capability of BIMM in analyzing base-band signal transmission and the bit error rate of a wireless system, an example is given, which involves a locomotive with an antenna mounted on top, two steel rails, an earth plane, and a conducting wire over the locomotive for power supply. Figure 3 shows a detailed view of these models drawn in CAD. The electromagnetic parameters of these models are listed in Table 1. In this example, a dipole antenna is used as the transmitting antenna, and a monopole antenna mounted on the top of the locomotive is used as the receiving antenna. These two antennas are located not far from each other, so they are included in the FDTD computational domain together with the environment. Modeling of locomotive and its environment. Table 1 Parameters of materials used in the model The center frequency of the antennas is set to 900 MHz, and the frequency band covers the signal bandwidth.
The length and cross section of the transmitting half-wavelength dipole are 150 mm and 25 × 25 mm, respectively, and the feeding gap between the two arms of the dipole is 10 mm. The length and cross section of the monopole mounted on the locomotive top are 70 mm and 25 × 25 mm, respectively, and the feeding gap between the locomotive top plane and the monopole is 10 mm. The dipole antenna is located 1 m above the locomotive at a distance of 14.3 m (to the rear of the locomotive) from the monopole antenna. A non-uniform grid technique and the parallel FDTD algorithm are utilized. The general FDTD grid size is set to 30, 30, and 30 mm in the x, y, and z directions, respectively, while the non-uniform grid size in the antenna region is set to 12.5, 12.5, and 10 mm, respectively. The locomotive is oriented along the x-direction. The surrounding environment is divided into 1,109 × 218 × 312 ΔS grids. The feeding gap of the locomotive antenna is located at the coordinates (755, 105, 190 ΔS), and the feeding gap of the transmitting dipole is located at (105, 172, 254 ΔS). Δt, the time step of FDTD, equals 16.667 ps, and the total number of computation time steps is set to 11,070. Impact of environment on transmission signal In this paper, a sequence of data is modulated by the GMSK scheme. The data rate of GMSK used in this paper is 5.4166 × 10^7 bps, and the parameter BT in Equation 2 is set to 0.5. The sequence of digital data and the waveform of the base-band signal after filtering are shown in Figure 4. Generation of the base-band signal and RF signal. (a) shows the base-band digital data sequence. (b) shows the base-band signal after filtering. (c) shows the RF signal after modulation. In order to see the impact of the antenna itself on the transmission of the RF signal, the receiving monopole antenna is first placed in free space alone (without locomotive and environment), and the received signal is shown in Figure 5.
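The chosen time step can be checked against the three-dimensional Courant (CFL) stability limit of the Yee scheme; this is a consistency check we added, not a computation from the paper. For the fine 12.5 × 12.5 × 10 mm cells the limit is about 22.1 ps, so the paper's Δt = 16.667 ps sits safely below it:

```python
import numpy as np

C0 = 299792458.0   # speed of light in vacuum, m/s

def courant_limit(dx, dy, dz):
    """Maximum stable time step for the 3-D Yee FDTD scheme (CFL bound)."""
    return 1.0 / (C0 * np.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

dt_used = 16.667e-12                              # paper's time step
dt_coarse = courant_limit(0.030, 0.030, 0.030)    # general 30-mm cells, ~57.8 ps
dt_fine = courant_limit(0.0125, 0.0125, 0.010)    # refined antenna region, ~22.1 ps
```

The time step also gives roughly 67 samples per 900-MHz carrier period, which is comfortable for resolving the modulated signal.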
From this figure, we can see that the envelope of the received RF signal is slightly undulating, as shown in Figure 5a, but this does not have much influence on the demodulated base-band signal shown in Figure 5b. Therefore, the effect of the antenna itself can be ignored. Received signal in free space. (a) and (b) show the RF signal received by the monopole antenna in free space (without locomotive) and the demodulated base-band signal, respectively. The monopole is then put on the locomotive, and the received signal with the effect of the locomotive and environment is shown in Figure 6. This time, a larger fluctuation of the amplitude of the RF signal can be observed, but the demodulated base-band signal is still recovered well, as shown in Figure 6b. Received signal by locomotive antenna. (a) and (b) show the RF signal received by the locomotive antenna and the demodulated base-band signal. In realistic applications, co-channel interference is usually inevitable. In order to simulate the multi-path effect on the received signal, another dipole antenna is introduced in front of the locomotive at a distance of 9.3 m from the locomotive antenna. The two dipole antennas are excited by modulated signals with the same amplitude. The signal received by the locomotive antenna is shown in Figure 7a. Compared with that in Figure 6a, the waveform of the received RF signal is distorted significantly. The demodulated base-band signal is also distorted, as shown in Figure 7b. This will influence the judgment of the digital signal. Received signal. (a) and (b) show the received RF signal and demodulated base-band signal for the case of two dipoles radiating simultaneously at different places. Impact of EMI on base-band digital signal In a running train system, the power supply network commonly generates electric sparks. Such a spark is actually an electromagnetic pulse, which interferes with the transmitted signal.
In order to find the impact of this interference on the transmitted digital signal, the generation of the electric spark and its propagation are introduced into the FDTD simulation together with the RF signal. In this example, we let the locomotive antenna transmit and the dipole antenna receive. Assume that the electric spark has the form of a modulated Gaussian pulse: $$ {U}_p^{n'}={E}_z^n\varDelta z=-{U}_0^p \cos \left(2\pi {f}_0^pn'\varDelta t\right) \exp \left(-4\pi {\left(n'\varDelta t-{t}_0^p\right)}^2/{\left({\tau}_0^p\right)}^2\right) $$ where n is the iteration time step of the FDTD and n′ represents the time step at which the spark begins to be generated. \( {U}_0^p \) and \( {f}_0^p \) are the amplitude and fundamental frequency of the spark, respectively. \( {t}_0^p \) represents the time at which the pulse peak occurs. \( {\tau}_0^p \) is related to the bandwidth, which is equal to 2/\( {\tau}_0^p \). Table 2 gives the values of these parameters. Table 2 Parameters of the electromagnetic spark used in this paper The digital signal input to the simulation system is shown in Figure 8. The amplitude of the modulated signal is set to 1 V. The FDTD mesh size is 30 mm, which balances computing time and accuracy. The influence of the electromagnetic spark on the transmitted signal can be analyzed by comparing the received signals without and with the electric spark. Figure 9 shows the received RF signal and the corresponding frequency response of the system without the spark, and the demodulated base-band signal and recovered digital signal are shown in Figure 10. Comparing Figure 10b with Figure 8, we can see that although the RF waveform in Figure 9 is not perfect, the recovered digital sequence is correct. Figure 11 shows the received RF signal and the corresponding frequency response of the system with the effect of the electromagnetic spark, and the demodulated base-band signal and recovered digital signal are shown in Figure 12.
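Equation 10 is easy to reproduce numerically. Table 2 is not included in this excerpt, so the amplitude, fundamental frequency, and timing used below are hypothetical placeholders, chosen only to be consistent with the 9-ns spark duration mentioned later; the FDTD time step matches the paper.

```python
import numpy as np

DT = 16.667e-12        # FDTD time step from the paper, s
# Hypothetical spark parameters (Table 2 is not reproduced in the text):
U0_P = 10.0            # pulse amplitude U_0^p, V
F0_P = 9.0e8           # fundamental frequency f_0^p, Hz
T0_P = 4.5e-9          # time of the pulse peak t_0^p, s
TAU0_P = 4.5e-9        # width parameter tau_0^p; bandwidth = 2 / tau_0^p

def spark_voltage(n):
    """Modulated Gaussian spark pulse of Eq. (10) at local time step n."""
    t = n * DT
    return (-U0_P * np.cos(2.0 * np.pi * F0_P * t)
            * np.exp(-4.0 * np.pi * (t - T0_P) ** 2 / TAU0_P ** 2))

n = np.arange(int(9.0e-9 / DT))          # the 9-ns spark window
u = spark_voltage(n)
freqs = np.fft.rfftfreq(u.size, DT)
spec = np.abs(np.fft.rfft(u))            # spectrum spreads around f_0^p
```

Plotting `spec` against `freqs` shows the spectral spreading around the carrier that the paper reports for the spark-corrupted received signal.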
From Figure 11, we can see that the transmitted RF signal is submerged in the pulse within the interference period of the spark, and the frequency spectrum of the received RF signal is spread. From Figure 12, we can see that the base-band signal waveform is seriously distorted during the period in which the spark appears, which further influences the sampling and judgment and results in bit errors, as shown by the dashed line in Figure 12c. Input digital signal of the simulation. Received signal without spark effect. (a) and (b) show the RF signal received by the dipole without the effect of the electric spark. Base-band signal. (a) and (b) show the demodulated base-band and digital signals of the system without the effect of the electric spark. Received signal affected by spark. (a), (b), and (c) show the RF signal and its frequency spectrum received by the dipole with the effect of the electric spark. (b) is an enlargement of (a). Base-band signal affected by spark. (a), (b), and (c) show the demodulated base-band and digital signals of the system with the effect of the electric spark. (b) is an enlargement of (a). It is also found from Figure 12b that the influence of the spark on sampling and judgment lasts more than 200 ns, although the spark itself lasts only 9 ns. This is due to the ringing of the spark pulse on the antenna. In addition, comparing Figure 12c with Figure 8, we can see that the impact of the high-intensity electromagnetic pulse on the bit errors seems not so great (only two errors occur). However, comparing the base-band waveforms in Figure 12b and Figure 10a carefully, it is found that the judgment of the symbols is at high risk in the period from 200 to 400 ns, because the waveform is significantly distorted. In other words, the accuracy of the symbol judgment cannot be guaranteed between the ninth and the 20th symbol.
Estimating the transmission characteristics of a wireless link is important, especially for wireless communication in complicated environments such as the high-speed railway environment. Conventional methods can only give the field coverage and cannot give the effect of the environment on the transmission properties of the wireless link. This paper proposed a base-band involved integrative modeling method, which not only takes the interaction between the antennas and the environment into consideration but also gives the effect of the environment on the base-band signal and on the bit error rate of the wireless communication. Railway environments are taken as examples to show the implementation and efficiency of the method, and the results show that the proposed method is effective in finding the cause of waveform distortion of the base-band signal and can explain the occurrence of bit errors. MF Iskander, Z Yun, Propagation prediction models for wireless communication systems. IEEE Trans. Microw. Theory Tech. 50(3), 662–673 (2006). doi:10.1109/22.989951 CF Yang, BC Wu, CJ Ko, A ray-tracing method for modeling indoor wave propagation and penetration. IEEE Trans. Antennas Propag. 46(6), 907–919 (1998). doi:10.1109/8.686780 Z Ji, BH Li, HX Wang, HY Chen, Efficient ray-tracing methods for propagation prediction for indoor wireless communications. IEEE Antennas Propag. Mag. 43(2), 41–49 (2001). doi:10.1109/74.924603 MS Sarker, AW Reza, K Dimyati, A novel ray-tracing technique for indoor radio signal prediction. J. Electromagnet. Waves Appl. 25(8–9), 1179–1190 (2011). doi:10.1163/156939311795762222 K Yee, Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media. IEEE Trans. Antennas Propag. 14(3), 302–307 (1966). doi:10.1109/TAP.1966.1138693 SD Gedney, An anisotropic perfectly matched layer absorbing medium for the truncation of FDTD lattices. IEEE Trans. Antennas Propag. 44(12), 1630–1639 (1996).
doi:10.1109/8.546249 G Rodriguez, Y Miyazaki, N Goto, Matrix-based FDTD parallel algorithm for big areas and its applications to high-speed wireless communications. IEEE Trans. Antennas Propag. 54(3), 785–796 (2006). doi:10.1109/TAP.2006.869895 AC Austin, MMJ Neve, GB Rowe, RJ Pirkl, Modeling the effects of nearby buildings on inter-floor radio-wave Propagation. IEEE Trans. Antennas Propag. 57(7), 2155–2161 (2009). doi:10.1109/TAP.2009.2021965 A Alighanbari, CD Sarris, Rigorous and efficient time-domain modeling of electromagnetic wave propagation and fading statistics in indoor wireless channels. IEEE Trans. Antennas Propag. 55(8), 2373–2381 (2007). doi:10.1109/TAP.2007.901992 M Thiel, K Sarabandi, 3D-wave propagation analysis of indoor wireless channels utilizing hybrid methods. IEEE Trans. Antennas Propag. 57(5), 1539–1546 (2009). doi:10.1109/TAP.2009.2016710 M Thiel, K Sarabandi, A hybrid method for indoor wave propagation modeling. IEEE Trans. Antennas Propag. 56(8), 2703–2709 (2008). doi:10.1109/TAP.2008.927548 Y Wang, SN Safieddin, SK Chaudhuri, A hybrid technique based on combining ray tracing and FDTD methods for site-specific modeling of indoor radio wave propagation. IEEE Trans. Antennas Propag. 48(5), 743–754 (2000). doi:10.1109/8.855493 S Pu, JH Wang, Z Zhang, Estimation for small-scale fading characteristics of RF wireless link under railway communication environment using integrative modeling technique. Prog. Electromagn. Res. 106, 395–417 (2010). doi:10.2528/PIER10042806 S Pu, J-H Wang, Research on the Receiving and Radiating Characteristics of Antennas on High-Speed Train using Integrative Modeling Technique (Proceedings of 11th Asia Pacific Microwave Conference,(APMC), Singapore, 2009), pp. 1072–1075 K Murota, K Hirade, GMSK modulation for digital mobile radio telephony. IEEE Trans. Com. 29(7), 1044–1050 (1981). 
doi:10.1109/9780470546543.ch20 WH Yu, YJ Liu, T Su, N-T Huang, R Mittra, A robust parallel conformal finite-difference time-domain processing package using the MPI library. IEEE Antennas Propag. Mag. 47(3), 39–59 (2005). doi:10.1109/MAP.2005.1532540 This work was supported in part by NSFC under grant no. 61331002 and in part by the National Key Basic Research Project under grant no. 2013CB328903. The Key Laboratory of All Optical Network and Advanced Telecommunication Network of Ministry of Education, No.3 Shangyuancun, Haidian District, Beijing, China, 100044 Dawei Li, Junhong Wang, Meie Chen, Zhan Zhang & Zheng Li The Institute of Lightwave Technology, Beijing Jiaotong University, No.3 Shangyuancun, Haidian District, Beijing, China, 100044 Correspondence to Dawei Li. Li, D., Wang, J., Chen, M. et al. Base-band involved integrative modeling for studying the transmission characteristics of wireless link in railway environment. J Wireless Com Network 2015, 81 (2015) doi:10.1186/s13638-015-0316-3 Integrative modeling method Wireless communication link Base-band signal Railway environment New Technologies and Research Trends for Vehicular Wireless Communications
communications chemistry Photogenerated hole traps in metal-organic-framework photocatalysts for visible-light-driven hydrogen evolution Zichao Lian, Zhao Li, Fan Wu, Yueqi Zhong, Yunni Liu, Wenchao Wang, Jiangzhi Zi & Weiwei Yang Communications Chemistry volume 5, Article number: 93 (2022) Metal–organic frameworks Porous materials Efficient electron-hole separation and carrier utilization are key factors in photocatalytic systems.
Here, we use a metal-organic framework (NH2-UiO-66) modified with inner platinum nanoparticles and outer cadmium sulfide (CdS) nanoparticles to construct the ternary composite Pt@NH2-UiO-66/CdS, which has a spatially separated, hierarchical structure for enhanced visible-light-driven hydrogen evolution. Relative to pure NH2-UiO-66, Pt@NH2-UiO-66, and NH2-UiO-66/CdS samples, the Pt@NH2-UiO-66/CdS composite exhibits much higher hydrogen yields, with an apparent quantum efficiency of 40.3% under 400 nm irradiation and stability exceeding that of most MOF-based photocatalysts. Transient absorption measurements reveal spatial charge-separation dynamics in the composites. The catalyst's high activity and durability are attributed to charge separation following an efficient photogenerated hole-transfer band-trap pathway. This work holds promise for enhanced MOF-based photocatalysis using efficient hole-transfer routes. Photocatalytic hydrogen evolution via water splitting is a promising way to mitigate current energy and environmental issues1,2,3,4,5,6,7. Since Fujishima and Honda first reported solar energy conversion for hydrogen evolution using semiconductor-based photocatalysts, we have witnessed the development of artificial photocatalytic systems for enhanced hydrogen evolution reactions (HER)8,9,10. The primary drawback of using a pure semiconductor is the fast recombination of photoinduced carriers, which seriously limits photocatalytic efficiency11,12. Meanwhile, co-catalysts such as Pt nanoparticles (NPs) and RuO2 have been alternatives that provide active centers for redox reactions, reduce overpotentials for the HER or oxidation reactions, and promote fast separation of photoinduced electrons and holes13,14,15,16. Thus, complex heterostructures with spatial charge separation, achieved via fine control of photoinduced carrier dynamics, have been fabricated to improve photocatalytic activity.
Metal-organic frameworks (MOFs) are porous crystalline materials with high specific-surface areas and tunable structures and functionalities. As alternatives to semiconductor photocatalysts, they are receiving much interest in a wide range of applications. Unfortunately, MOFs such as UiO-66 (Zr)17,18 exhibit poor visible-light absorption. The introduction of amino groups into the terephthalic acid ligands of UiO-66 (Zr) broadens light absorption to the visible region, and the presence of amino groups does not affect the structural stability of the UiO-66 host19. Photoinduced electron dynamics in MOFs have been well investigated, but the kinetics of photogenerated holes and their effect on photocatalytic activity remain poorly understood. Cadmium sulfide (CdS) has been a preferred visible-light photocatalyst for the HER. However, photoinduced corrosion and fast recombination of photoinduced electron−hole pairs have severely restricted improvements in its catalytic activity. Thus, fine-tuning photoinduced carrier dynamics is necessary to suppress corrosion by accumulated holes3,20,21,22,23,24,25,26,27. Here, we synthesized a spatial charge structure for the MOF-based photocatalyst Pt@NH2-UiO-66/CdS. Relative to other MOF-based systems (see Supplementary Table 1), it exhibited the highest visible-light photocatalytic activity for the HER, with an apparent quantum yield of 40.3% at 400 nm. Direct observation of the carrier dynamics in Pt@NH2-UiO-66/CdS via transient absorption spectroscopy (TAS) revealed that a unique hole-transfer pathway was responsible for the enhanced and stable HER. The present results could help in the design of spatial, multi-phase, heterostructured photocatalysts with high photocatalytic activity and facile control of photoinduced carrier dynamics.
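For context, the apparent quantum efficiency quoted above follows the standard two-electron HER definition, AQE = 2 × (H2 molecules evolved) / (incident photons) × 100%. The formula is the conventional one rather than one spelled out in this excerpt, and the numbers in the example are illustrative, not the paper's raw data:

```python
H = 6.62607015e-34    # Planck constant, J s
C = 299792458.0       # speed of light, m/s
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def aqe_percent(h2_mol_per_hour, wavelength_m, power_w):
    """Apparent quantum efficiency (%) for two-electron H2 evolution:
    AQE = 2 * (H2 molecules per second) / (incident photons per second)."""
    electrons_per_s = 2.0 * h2_mol_per_hour * N_A / 3600.0
    photons_per_s = power_w * wavelength_m / (H * C)
    return 100.0 * electrons_per_s / photons_per_s

# Illustrative: ~0.242 mmol H2 per hour under 100 mW of 400-nm light
# corresponds to an AQE of roughly 40%.
```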
Characterization of materials
We synthesized Pt@NH2-UiO-66/CdS heterostructured nanocrystals (HNCs) by in-situ encapsulation of Pt nanoparticles (NPs) into MOFs with regular 100-nm octahedral shapes, followed by growth of CdS on the outer MOF surface, as shown in Supplementary Fig. 1. Figure 1a shows representative transmission electron microscopy (TEM) images of NH2-UiO-66 NCs with regular octahedral structures, consistent with the scanning electron microscopy images in Supplementary Fig. 2. The Pt NPs (Supplementary Fig. 3) were encapsulated in situ in the MOF, as shown in Fig. 1b. By growing multiple CdS satellites of 10.5 ± 2.1 nm in size on the outer layer of Pt@NH2-UiO-66, Pt@NH2-UiO-66/CdS HNCs were formed with a spatial configuration of inner Pt NPs and outer CdS, as shown in Fig. 1c. Figure 1d is a scanning electron microscopy image of Pt@NH2-UiO-66/CdS HNCs showing the CdS NPs attached to the surface of the MOFs. X-ray diffraction patterns in Supplementary Fig. 4 show that the Pt@NH2-UiO-66/CdS HNCs were composed of NH2-UiO-66 and zinc blende CdS (Joint Committee on Powder Diffraction Standards, JCPDS, no. 10-0454) phases. The Zr/Cd weight ratio was 0.28:14.3, as determined by inductively coupled plasma optical emission spectroscopy (ICP-OES). However, the Pt NPs did not display a diffraction peak, owing to the low Pt content (about 0.023 wt%) determined by ICP-OES (Supplementary Table 2). The high-resolution TEM image in Fig. 1e and Supplementary Fig. 5 revealed the composition of the Pt NPs and CdS NPs. The 0.34-nm and 0.20-nm lattice fringes were assigned to zinc blende CdS (111) and Pt (200), respectively, consistent with the fast Fourier transform patterns [Fig. 1f, g]. High-angle annular dark-field scanning TEM and energy-dispersive spectrometry elemental mapping (Fig. 1h) also verified the formation of ternary heterostructures composed of Pt, NH2-UiO-66, and CdS phases. Fig. 1: Structural characterization of the nanocrystals.
a–c Representative transmission electron microscopy (TEM) images of (a) NH2-UiO-66, (b) Pt@NH2-UiO-66, and (c) Pt@NH2-UiO-66/CdS; inset: schematic representation (lower left). d Scanning electron microscopy image of Pt@NH2-UiO-66/CdS. e High-resolution TEM image of a single Pt@NH2-UiO-66/CdS. f, g Fast Fourier transform patterns of the CdS phase along the <111> direction in the left region of (e) and the Pt phase along the <001> direction in the right region of (e), respectively. h High-angle annular dark-field (HAADF) scanning TEM and energy-dispersive spectrometry elemental mapping images of Pt@NH2-UiO-66/CdS.
Furthermore, Brunauer–Emmett–Teller surface-area and pore-structure analyses were performed, as shown in Supplementary Fig. 6 and Supplementary Table 3. Comparing the specific surface areas of NH2-UiO-66 (820 m2 g−1), Pt@NH2-UiO-66 (722 m2 g−1), and Pt@NH2-UiO-66/CdS (612 m2 g−1), the smaller value for Pt@NH2-UiO-66/CdS indicated that CdS occupies surface sites of the MOF. In addition, compared with NH2-UiO-66 and Pt@NH2-UiO-66, the pore volume of Pt@NH2-UiO-66/CdS was increased owing to the CdS NPs. We further verified the composition and interface characteristics of the Pt@NH2-UiO-66/CdS composites by X-ray photoelectron spectroscopy (XPS) measurements (Supplementary Fig. 7). First, the survey spectrum in Supplementary Fig. 7a shows that the Pt@NH2-UiO-66/CdS composites contain C, N, O, Zr, Pt, Cd, and S, with no impurity elements. Second, the binding energies of Cd 3d3/2 and 3d5/2 in Pt@NH2-UiO-66/CdS are around 411.9 and 405.2 eV, respectively, indicating that Cd in the composite exists in the +2 oxidation state28. In addition, the binding energies of S 2p1/2 and 2p3/2 are around 162.7 and 161.5 eV, respectively, indicating that S is in the −2 oxidation state29 (Supplementary Fig. 7d, e).
Therefore, these XPS results demonstrate the existence of CdS in the Pt@NH2-UiO-66/CdS composites. Compared with NH2-UiO-66 and Pt@NH2-UiO-66, the Zr 3d binding energy of the Pt@NH2-UiO-66/CdS composite was shifted by 0.1 eV (Supplementary Fig. 7b), suggesting a strong interaction between CdS and NH2-UiO-66 rather than simple physical contact30. As shown in Supplementary Fig. 7c, the binding energies of Pt 4f5/2 and 4f7/2 in Pt@NH2-UiO-66 are around 74.4 and 71.1 eV, respectively, which are assigned to metallic Pt31. However, the binding energies of Pt 4f5/2 and 4f7/2 in Pt@NH2-UiO-66/CdS were shifted to around 74.7 and 71.3 eV, respectively, indicating that loading CdS into the composite shifted the Pt binding energies by 0.3 eV while leaving the valence state of Pt unchanged.
Optical properties and photocatalytic activity
The visible-light-driven photocatalytic activity and stability of Pt@NH2-UiO-66/CdS HNCs for HER were investigated. Figure 2a shows the ultraviolet-visible absorption spectra of NH2-UiO-66, Pt@NH2-UiO-66, Pt@NH2-UiO-66/CdS, and CdS NPs (see also Supplementary Fig. 8). Pt@NH2-UiO-66/CdS featured the absorption characteristics of the MOF at 370 nm and the band-edge absorption of the CdS phase at 500 nm. Figure 2b shows the energy levels of NH2-UiO-66 and CdS determined in Supplementary Fig. 9; they were consistent with previous reports3,32. As shown in Fig. 2c, the visible-light-driven photocatalytic HER activity (using lactic acid as a sacrificial agent) of Pt@NH2-UiO-66/CdS was 37.76 mmol h−1 g−1, higher than that of NH2-UiO-66 (0.011 mmol h−1 g−1), Pt@NH2-UiO-66 (0.12 mmol h−1 g−1), NH2-UiO-66/CdS (1.65 mmol h−1 g−1, Supplementary Fig. 10), and CdS NPs (0.41 mmol h−1 g−1). The photocatalytic H2 evolution increased with the amount of catalyst, then decreased at a catalyst dosage of 10 mg (Supplementary Fig. 11).
Thus, an optimized photocatalyst dosage of 5 mg displayed the best catalytic performance. When the photocatalytic H2 evolution was normalized by catalyst mass, however, the hydrogen evolution rate gradually decreased with increasing dosage, possibly due to blocking or scattering of light by excess suspended photocatalyst in the reaction medium21,22,23. To validate the benefits of spatial separation of the co-catalysts in the MOFs, we examined the HER photocatalytic activity of physical mixtures of the materials with normalized Pt contents. Supplementary Fig. 12 shows that Pt@NH2-UiO-66/CdS exhibited higher activity than the others, consistent with the better photocatalytic activity of Pt implanted in the MOF33. Furthermore, cascade photoinduced carrier transfer, with spatial charge separation between the strongly interacting ternary phases, contributed much to the high activity. Pt@NH2-UiO-66/CdS maintained good stability after seven cycles over twenty-one hours, and its morphology showed no significant changes, as shown in Supplementary Fig. 13. Various sacrificial agents, from small to large molecules, were used to reveal their roles in HER photocatalytic activity through consumption of photogenerated holes. Supplementary Fig. 14 shows that lactic acid was the best sacrificial agent, with the carboxyl groups playing a more important role than the hydroxyl groups. The large molecules could be oxidized by photogenerated holes in Pt@NH2-UiO-66/CdS. In the photocatalytic degradation of macromolecules, the generation of •OH radicals could be detected by fluorescence emission and electron-paramagnetic-resonance spectra. Larger signals were trapped by 5,5-dimethyl-1-pyrroline N-oxide (DMPO)32 (see Supplementary Fig. 15), which restricted the HER photocatalytic activity. As shown in Supplementary Fig.
16, with increasing lactic acid content (decreasing pH) in the system, the H2 evolution rate of the catalyst gradually increased, further indicating that timely removal of holes is beneficial for improving the H2 evolution rate. The wavelength-dependent apparent quantum yields (AQYs) of Pt@NH2-UiO-66/CdS shown in Fig. 2d reproduced its absorption spectrum, indicating that HER proceeded via photoexcitation. The AQY at 400 nm was 40.3%, the highest efficiency relative to previous reports (Supplementary Table 1). These results demonstrated that Pt@NH2-UiO-66/CdS had the highest photocatalytic activity and stability, owing to the spatial separation of the photoinduced electrons and holes. To reveal the separation and recombination of these charge carriers, we measured photoluminescence (PL) spectra, transient photocurrents, and electrochemical impedance spectra. Pt@NH2-UiO-66/CdS exhibited very low fluorescence intensity relative to NH2-UiO-66, Pt@NH2-UiO-66, and NH2-UiO-66/CdS (Supplementary Fig. 17), suggesting greatly inhibited recombination of electron–hole pairs in the HNCs34. Moreover, the enhanced photocurrent density of Pt@NH2-UiO-66/CdS indicated promoted transfer kinetics of the photoexcited charge carriers (Supplementary Fig. 17). Meanwhile, the interfacial properties between the electrode and the electrolyte were investigated using electrochemical impedance spectroscopy. The semicircle at high frequency in the Nyquist plot reflects the charge-transfer process, and its diameter depends on the charge-transfer resistance; as shown in Supplementary Fig. 17, Pt@NH2-UiO-66/CdS exhibits the smallest arc, owing to its lowest charge-transfer resistance. Meanwhile, the characteristic frequency in the Bode phase plot of the Pt@NH2-UiO-66/CdS film was shifted slightly to lower frequency relative to that of the Pt@NH2-UiO-66 film (Supplementary Fig.
17), further indicating that the charge-recombination rate is reduced in Pt@NH2-UiO-66/CdS35. These photo-electrochemical properties indicated that Pt@NH2-UiO-66/CdS enhanced the separation and migration of photoinduced charge carriers, which was attributed to its unique spatial-phase separation and hierarchical structure. Fig. 2: Optical properties, band diagram, and photocatalytic hydrogen evolution. a Ultraviolet–visible absorption spectra of NH2-UiO-66, Pt@NH2-UiO-66, Pt@NH2-UiO-66/CdS, and CdS nanoparticles (NPs) in dimethyl formamide. b Energy levels for Pt, NH2-UiO-66, and CdS NPs. The black solid lines represent the conduction bands (CBs) and valence bands (VBs) of CdS and NH2-UiO-66, and the work function of the Pt NPs. c Comparison of the hydrogen evolution rates of NH2-UiO-66, Pt@NH2-UiO-66, Pt@NH2-UiO-66/CdS, NH2-UiO-66/CdS, and CdS NPs under visible light (>420 nm), using a lactic acid acetonitrile solution as a sacrificial agent. d Apparent quantum yields (AQY) of Pt@NH2-UiO-66/CdS at several wavelengths under identical conditions. The blue line shows the absorption spectrum.
Photoinduced carrier dynamics
Time-resolved transient absorption (TA) measurements were performed to track photoinduced carrier dynamics and to investigate electron and hole transfer in the mechanism of photocatalytic HER. Figure 3a shows time-resolved TA spectra of NH2-UiO-66 after selective excitation with a 400-nm laser. Band-edge bleaching below 450 nm and broad absorption at 600 nm were observed. Fast and slow recovery components, monitored at 650 nm, are shown in Fig. 3d. Supplementary Table 4 lists the two recovery time constants, 9.8 ps (τ1) and 225 ps (τ2), which may be attributed to electron-trap states in the intermediate band16,33,36,37. For Pt@NH2-UiO-66, similar features were observed in Fig. 3b, with two decay components of 6.2 ps and 69 ps.
The fast component could be assigned to fast electron transfer from the conduction-band minimum of the MOF to the trap state and then to the Pt sites. The TA characteristics of Pt@NH2-UiO-66/CdS shown in Fig. 3c were bleaching at the 480-nm peak and a broad absorption at 600 nm that was weaker than that of CdS NPs and NH2-UiO-66/CdS (Supplementary Fig. 18). The first component (29 ps) was the fast-trap process, arising from possible coupled interactions between hole transfer from NH2-UiO-66 to CdS and an electron-trap state in NH2-UiO-66; the CdS NPs have no hole-trap states. The latter is indicated by the featureless TAS absorption and 650-nm kinetic profiles of CdS NPs reported previously1,3. The bleaching at 480 nm for Pt@NH2-UiO-66/CdS, with its fast-quenching component, is shown in Fig. 3e, indicating electron transfer from CdS to NH2-UiO-6638. This feature was also observed in NH2-UiO-66/CdS, with about a 64% decrease in ultrafast quenching (<100 fs), suggesting electron transfer from CdS to NH2-UiO-66. The fast recovery with the 9.3-ps time constant indicated that the electron was transferred from NH2-UiO-66 to the Pt phase. The fs TAS thus suggested that the electron transferred from CdS to NH2-UiO-66, was trapped at NH2-UiO-66, and finally transferred to the Pt phase, while the hole from NH2-UiO-66 was transferred to CdS. Time-resolved PL spectroscopy was used to investigate the lifetimes of photoinduced carriers in Pt@NH2-UiO-66/CdS33. Figure 3f shows the 450-nm PL kinetics of each sample following excitation at the band-edge absorption. The mean PL lifetimes were 1.97 ns, 0.99 ns, 1.05 ns, and 0.95 ns for NH2-UiO-66, Pt@NH2-UiO-66, NH2-UiO-66/CdS, and Pt@NH2-UiO-66/CdS, respectively (Supplementary Table 5). The shorter PL lifetime of Pt@NH2-UiO-66/CdS indicated that the spatial charge separation in the ternary composite suppressed radiative recombination of the photoinduced electrons and holes.
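The mean PL lifetimes quoted above are typically derived from the biexponential fits mentioned in the figure captions. A minimal sketch of this calculation follows; the amplitudes and time constants used here are hypothetical illustrations, not the fitted values from this work:

```python
import math

def biexp(t, a1, tau1, a2, tau2):
    # Biexponential decay model used for the fits: I(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

def mean_lifetime(a1, tau1, a2, tau2):
    # Intensity-weighted mean lifetime commonly reported for biexponential decays:
    # <tau> = (a1*tau1^2 + a2*tau2^2) / (a1*tau1 + a2*tau2)
    return (a1 * tau1 ** 2 + a2 * tau2 ** 2) / (a1 * tau1 + a2 * tau2)

# Hypothetical fit parameters (times in ns):
tau_mean = mean_lifetime(a1=0.7, tau1=0.4, a2=0.3, tau2=2.0)
print(round(tau_mean, 2))  # → 1.49, between the fast and slow components
```

A quenched channel (e.g. charge transfer to Pt or CdS) adds a fast component with large amplitude, which pulls the mean lifetime down, as observed for the ternary composite.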
This strongly indicated that the photogenerated carriers in Pt@NH2-UiO-66/CdS were transferred through additional nonradiative pathways, accounting for its high photocatalytic activity. Fig. 3: Transient absorption spectra (TAS), kinetic profiles, and time-resolved photoluminescence (PL) spectra. a–c TAS of (a) NH2-UiO-66, (b) Pt@NH2-UiO-66, and (c) Pt@NH2-UiO-66/CdS, with the TA signal given in optical density (OD) units, upon 400-nm laser excitation. d, e Kinetic profiles of NH2-UiO-66, Pt@NH2-UiO-66, Pt@NH2-UiO-66/CdS, CdS NPs, and NH2-UiO-66/CdS at (d) 650 nm and (e) 480 nm, respectively. f Time-resolved PL decay profiles for NH2-UiO-66, Pt@NH2-UiO-66, Pt@NH2-UiO-66/CdS, and NH2-UiO-66/CdS (400 nm excitation, 450 nm emission). All olive lines are best fits with a biexponential function. g–i TAS of (g) NH2-UiO-66, (h) NH2-UiO-66/CdS, and (i) Pt@NH2-UiO-66/CdS with triphenylamine (TPA) in dimethyl formamide (DMF) solution, in the nanosecond-to-microsecond region. A new species with an absorption at 430 nm was produced upon 355-nm laser excitation. j, k Kinetic profiles of NH2-UiO-66, NH2-UiO-66/CdS, and Pt@NH2-UiO-66/CdS in TPA DMF solution, probed at (j) 600 nm to follow the dynamics of the cation radicals (TPA•+) and at (k) 430 nm, the absorption of the photogenerated hole-transfer band-trap state, to monitor hole-transfer dynamics. Best fits with a biexponential function are shown as magenta lines.
To understand the hole dynamics underlying the HER photocatalytic activity, we used triphenylamine (TPA) as a hole indicator; it is oxidized by photogenerated holes to form a cation radical (TPA•+) with an absorption feature at 600 nm39. Figure 3g shows the μs TAS of NH2-UiO-66 upon 355-nm laser excitation in the presence of TPA; the TPA itself was not excited by the laser (Supplementary Fig. 19). Thus, the decay signal observed at 600 nm arose from TPA•+ radical formation, with a long lifetime of 67.3 μs [Fig. 3g, j, Supplementary Table 6].
A new absorption peak at 430 nm had a short lifetime, accompanied by a redshift of the TPA•+ absorption peak. The 430-nm peak could be attributed to the photogenerated hole-trap state, which exhibited high oxidation ability. This result was consistent with the degradation of pollutants via •OH radicals. The same behavior occurred for NH2-UiO-66/CdS and Pt@NH2-UiO-66/CdS, in Fig. 3h, i, respectively. The growing, red-shifted absorption at 600 nm arose from TPA•+ cation-radical formation, overlapping the absorption of the hole-transfer-transition band-trap state with its short, nanosecond-scale lifetime. In particular, the lifetime of the TPA•+ cation radicals in Pt@NH2-UiO-66/CdS was 74.6 μs, longer than that in pure NH2-UiO-66 and NH2-UiO-66/CdS. This revealed hole transfer from NH2-UiO-66 to CdS via a trap state. In addition, the kinetic profiles of the new 430-nm absorption of the trap state are shown in Fig. 3k. The 1.16-μs decay component (Supplementary Table 5) could be assigned to the lifetime of the fast hole transfer via the mediating hole trap. These results demonstrated that efficient hole transfer from the trap state in NH2-UiO-66 strongly supports the high HER photocatalytic activity.
The HER mechanism
The photoinduced carrier dynamics of photocatalytic HER by the Pt@NH2-UiO-66/CdS HNCs are illustrated in Fig. 4. When visible light irradiated Pt@NH2-UiO-66/CdS, both phases were excited; the electron from the CdS conduction band was transferred to the conduction band of NH2-UiO-66, where it was trapped, and then transferred to the Pt surface for HER under visible-light irradiation. The photoinduced hole in NH2-UiO-66 was extracted from the trapped state and transferred to the CdS valence band, then migrated to the surface, where it oxidized the sacrificial reagent. Thus, the TAS and quenching experiments using hole tracers strongly supported a mechanism of photogenerated hole-transfer band trapping and spatial charge separation for photocatalytic HER.
Fig. 4: Mechanism of photocatalytic HER. a Mechanism of photoinduced carrier dynamics in Pt@NH2-UiO-66/CdS, with a hole-trap transfer pathway for enhanced HER photocatalytic activity. b Band diagram of photocatalytic processes in Pt@NH2-UiO-66/CdS.
We reported a hole-trap transfer pathway in MOF-based ternary HER photocatalysts with spatially separated structures. The Pt@NH2-UiO-66/CdS photocatalyst separated photogenerated electrons and holes by combining Pt NPs and CdS NPs, which greatly prolonged the lifetime of the hole-trap-mediated pathway and improved the HER photocatalytic efficiency. This work provides a deeper understanding of electron and hole transfer in co-catalyst–NH2-UiO-66–semiconductor ternary composites with spatially separated structures. Considering their synergistic enhancement of photocatalytic activity, the results highlight the benefits of fabricating such structures and advance the development of highly efficient photocatalytic composites.
Synthesis of NH2-UiO-66
The procedures for synthesizing NH2-UiO-66 were reported previously21. Briefly, 5 mL of a N,N-dimethyl formamide (DMF) solution of ZrCl4 (18.64 mg) and 5 mL of a DMF solution of 2-amino-1,4-benzenedicarboxylic acid (NH2-BDC) (14.5 mg) were mixed in a beaker. Then, 1.2 mL of acetic acid was added, and the solution was transferred to a 50-mL Teflon-lined stainless-steel autoclave and heated at 120 °C for 24 h. The product was purified via centrifugation and washed with ethanol and hexane. The NH2-UiO-66 was dried overnight at 60 °C under vacuum.
Synthesis of Pt@NH2-UiO-66
A mixture containing ZrCl4 (20 mM, 10 mL) and NH2-BDC (20 mM, 10 mL) was shaken to form a homogeneous solution. Then, a Pt NP solution (70 μL, 30 mg mL−1, ~2.1 mg Pt) and acetic acid (2.74 mL) were added. After sonication for 10 min, the mixed solution was transferred to a Teflon-lined stainless-steel autoclave and kept at 120 °C for 24 h.
Finally, the products were collected by centrifugation and dried overnight at 60 °C under vacuum.
Synthesis of Pt@NH2-UiO-66/CdS
Typically, 24.3 mg of Cd(CH3COO)2·2H2O was dissolved in 10 mL of ethanol to form a homogeneous solution. Then, 40 mg of Pt@NH2-UiO-66 was added, and the suspension was sonicated for 10 min and heated to 80 °C at 6 °C min−1. At this point, 10 mL of an aqueous solution of thioacetamide (6.9 mg) was slowly injected into the flask at a rate of 0.3 mL min−1, and the mixture was kept at 80 °C for another 30 min. The precipitates were filtered and washed with water and ethanol several times. Finally, the product was dried at 60 °C under vacuum for 8 h.
TEM characterization was performed with a HT7820 (Hitachi, Japan) electron microscope at an acceleration voltage of 120 kV. High-resolution TEM, high-angle annular dark-field scanning TEM, and energy-dispersive spectrometry mapping were performed with a Talos F200x (FEI, USA, equipped with Super-X EDS) electron microscope at an acceleration voltage of 200 kV. An X-ray diffractometer (Rigaku D/MAX-2000) with Cu Kα radiation (1.5406 Å) at 40 kV and 30 mA was used to record powder X-ray diffraction patterns. The patterns were collected over a 2θ range of 3°–80° at a scanning speed of 5° min−1. Optical absorbances of all samples were acquired with a Shimadzu UV-1900i spectrophotometer. PL spectra and time-resolved PL spectra were recorded on a Hitachi F-77800 fluorescence spectrophotometer. The room-temperature PL spectra were recorded with an excitation wavelength of 380 nm. The Pt content was determined via ICP-OES (Vista-MPX, Varian). Electron-paramagnetic-resonance spectra were acquired with a JEOL FA300 spectrometer with 9.05-GHz magnetic-field modulation at a microwave power of 0.998 mW. 5,5-Dimethyl-1-pyrroline N-oxide (DMPO) was used as the spin-trapping reagent. Catalyst (5 mg) was suspended in a mixture of water (500 μL) and DMPO (50 μL) for the detection of •OH radicals.
After ultrasonication, the detection of •OH in a N2 atmosphere was performed under irradiation with a 300-W Xe lamp with a 420-nm filter for 5 min. Time-resolved fluorescence decay spectra were acquired with an EI FLS-1000 fluorescence spectrometer.
Photocatalytic activity for HER
For photocatalytic HER, the photocatalyst (5 mg) was dispersed in 25 mL of acetonitrile and 3 mL of deionized water with 3 mL of lactic acid. The suspension was stirred in a photocatalytic reactor and purged with N2 for 30 min to remove dissolved oxygen, followed by irradiation with a 300-W Xe lamp fitted with a UV cutoff filter (>420 nm). Gas chromatography (Shimadzu GC-2014, N2 as carrier gas) with a thermal conductivity detector was used to quantify the evolved H2. The apparent quantum yield (AQY) was determined under the above photocatalytic reaction conditions. The excitation light was selected with a bandpass filter, and its intensity was measured with a power meter. The AQY was calculated according to the following equation:
$$\mathrm{AQY} = \frac{\text{number of reacted electrons}}{\text{number of incident photons}} \times 100\,\% = \frac{\text{number of evolved H}_2\ \text{molecules} \times 2}{\text{number of incident photons}} \times 100\,\%$$
The datasets within the article and Supplementary Information are available from the authors upon request.
Lian, Z. et al. Plasmonic p–n junction for infrared light to chemical energy conversion. J. Am. Chem. Soc. 141, 2446–2450 (2019). Lian, Z. et al. Near infrared light induced plasmonic hot hole transfer at a nano-heterointerface. Nat. Commun. 9, 2314 (2018). Lian, Z. et al. Anomalous photoinduced hole transport in Type I core/mesoporous-shell nanocrystals for efficient photocatalytic H2 evolution.
ACS Nano 13, 8356–8363 (2019). Huang, Y. et al. All-region-applicable, continuous power supply of graphene oxide composite. Energy Environ. Sci. 12, 1848–1856 (2019). Brown, P. T. & Caldeira, K. Greater future global warming inferred from Earth's recent energy budget. Nature 552, 45–50 (2017). Turner, J. A. Sustainable hydrogen production. Science 305, 972 (2004). Zou, Z., Ye, J., Sayama, K. & Arakawa, H. Direct splitting of water under visible light irradiation with an oxide semiconductor photocatalyst. Nature 414, 625–627 (2001). Kang, D. et al. Printed assemblies of GaAs photoelectrodes with decoupled optical and reactive interfaces for unassisted solar water splitting. Nat. Energy 2, 17043 (2017). Dresselhaus, M. S. & Thomas, I. L. Alternative energy technologies. Nature 414, 332–337 (2001). Fujishima, A. & Honda, K. Electrochemical photolysis of water at a semiconductor electrode. Nature 238, 37–38 (1972). Yan, H. et al. Visible-light-driven hydrogen production with extremely high quantum efficiency on Pt–PdS/CdS photocatalyst. J. Catal. 266, 165–168 (2009). Li, J., Yang, J., Wen, F. & Li, C. A visible-light-driven transfer hydrogenation on CdS nanoparticles combined with iridium complexes. Chem. Commun. 47, 7080–7082 (2011). Ding, M., Flaig, R. W., Jiang, H.-L. & Yaghi, O. M. Carbon capture and conversion using metal–organic frameworks and MOF-based materials. Chem. Soc. Rev. 48, 2783–2828 (2019). Yang, J., Wang, D., Han, H. & Li, C. Roles of cocatalysts in photocatalysis and photoelectrocatalysis. Acc. Chem. Res. 46, 1900–1909 (2013). Zhang, L. et al. Multivariate relationships between microbial communities and environmental variables during co-composting of sewage sludge and agricultural waste in the presence of PVP-Ag NPs. Bioresour. Technol. 261, 10–18 (2018). Sun, D. & Li, Z. Double-solvent method to Pd nanoclusters encapsulated inside the cavity of NH2–UiO-66(Zr) for efficient visible-light-promoted Suzuki coupling reaction. J. Phys. Chem.
C 120, 19744–19750 (2016). Su, Y., Zhang, Z., Liu, H. & Wang, Y. Cd0.2Zn0.8S@UiO-66-NH2 nanocomposites as efficient and stable visible-light-driven photocatalyst for H2 evolution and CO2 reduction. Appl. Catal. B: Environ. 200, 448–457 (2017). Yang, Q., Xu, Q. & Jiang, H.-L. Metal–organic frameworks meet metal nanoparticles: synergistic effect for enhanced catalysis. Chem. Soc. Rev. 46, 4774–4808 (2017). Gomes Silva, C., Luz, I., Llabrés i Xamena, F. X., Corma, A. & García, H. Water stable Zr–benzenedicarboxylate metal–organic frameworks as photocatalysts for hydrogen generation. Chem. Eur. J. 16, 11133–11138 (2010). Lian, Z. et al. Durian-shaped CdS@ZnSe core@mesoporous-shell nanoparticles for enhanced and sustainable photocatalytic hydrogen evolution. J. Phys. Chem. Lett. 9, 2212–2217 (2018). Reddy, D. A. et al. Few layered black phosphorus/MoS2 nanohybrid: a promising co-catalyst for solar driven hydrogen evolution. Appl. Catal. B 241, 491–498 (2019). Reddy, D. A. et al. Enhanced photocatalytic hydrogen evolution by integrating dual co-catalysts on heterophase CdS nano-junctions. ACS Sustain. Chem. Eng. 6, 12835–12844 (2018). Reddy, D. A. et al. Designing CdS mesoporous networks on Co-C@Co9S8 double-shelled nanocages as redox-mediator-free Z-scheme photocatalyst. ChemSusChem 11, 245–253 (2018). Reddy, D. A., Park, H., Hong, S., Kumar, D. P. & Kim, T. K. Hydrazine-assisted formation of ultrathin MoS2 nanosheets for enhancing their co-catalytic activity in photocatalytic hydrogen evolution. J. Mater. Chem. A 5, 6981–6991 (2017). Reddy, D. A. et al. Heterostructured WS2-MoS2 ultrathin nanosheets integrated on CdS nanorods to promote charge separation and migration and improve solar-driven photocatalytic hydrogen evolution. ChemSusChem 10, 1563–1570 (2017). Reddy, D. A. et al. Multicomponent transition metal phosphides derived from layered double hydroxide double-shelled nanocages as an efficient non-precious co-catalyst for hydrogen production. J. Mater. Chem.
A 4, 13890–13898 (2016). Reddy, D. A. et al. Hierarchical dandelion-flower-like cobalt-phosphide modified CdS/reduced graphene oxide-MoS2 nanocomposites as a noble-metal-free catalyst for efficient hydrogen evolution from water. Catal. Sci. Technol. 6, 6197–6206 (2016). Ge, L. et al. Synthesis and efficient visible light photocatalytic hydrogen evolution of polymeric g-C3N4 coupled with CdS quantum dots. J. Phys. Chem. C 116, 13708–13714 (2012). Huang, Y. et al. 'Bridge' effect of CdS nanoparticles in the interface of graphene-polyaniline composites. J. Mater. Chem. 22, 10999–11002 (2012). Xu, H.-Q. et al. Unveiling charge-separation dynamics in CdS/metal−organic framework composites for enhanced photocatalysis. ACS Catal. 8, 11615–11621 (2018). Zhang, W. et al. Exploring the fundamental roles of functionalized ligands in platinum@metal–organic framework catalysts. ACS Appl. Mater. Interfaces 12, 52660–52667 (2020). Zhang, J. et al. Metal–organic-framework-based photocatalysts optimized by spatially separated cocatalysts for overall water splitting. Adv. Mater. 32, 2004747 (2020). Xiao, J.-D. et al. Boosting photocatalytic hydrogen production of a metal–organic framework decorated with platinum nanoparticles: the platinum location matters. Angew. Chem. Int. Ed. 55, 9389–9393 (2016). Milliron, D. J. et al. Colloidal nanocrystal heterostructures with linear and branched topology. Nature 430, 190–195 (2004). Li, Z. et al. Versatile nanobead-scaffolded N-SnO2 mesoporous microspheres: one-step synthesis and superb performance in dye-sensitized solar cell, gas sensor, and photocatalytic degradation of dye. J. Mater. Chem. A 1, 524–531 (2013). Sun, K. et al. Incorporating transition-metal phosphides into metal-organic frameworks for enhanced photocatalysis. Angew. Chem. Int. Ed. 59, 22749–22755 (2020). Zhu, H., Song, N. & Lian, T. Controlling charge separation and recombination rates in CdSe/ZnS type I core−shell quantum dots by shell thicknesses. J. Am. Chem. Soc. 
132, 15038–15045 (2010). Wu, K., Zhu, H., Liu, Z., Rodríguez-Córdoba, W. & Lian, T. Ultrafast charge separation and long-lived charge separated state in photocatalytic CdS–Pt nanorod heterostructures. J. Am. Chem. Soc. 134, 10337–10340 (2012). Hu, K. et al. Kinetic pathway for interfacial electron transfer from a semiconductor to a molecule. Nat. Chem. 8, 853–859 (2016).
This work was supported by the National Nature Science Foundation of China (22109097), the Natural Science Foundation of Shanghai (No. 20ZR1472000), and the Shanghai Pujiang Program (No. 20PJ1411800).
School of Materials and Chemistry, University of Shanghai for Science and Technology, 200093, Shanghai, P. R. China: Zichao Lian, Zhao Li, Fan Wu, Yueqi Zhong, Yunni Liu, Wenchao Wang, Jiangzhi Zi & Weiwei Yang. Z.L. (Lian) conceived and designed all the experiments. Z.L. performed material fabrication and catalytic reaction experiments. F.W., Y.Z., Y.L., J.Z., and W.Y. provided technical guidance. W.W. carried out the μs-TA experiments. Z.L. (Lian) wrote the manuscript. All authors participated in the discussion of the research. Correspondence to Zichao Lian. Communications Chemistry thanks the anonymous reviewers for their contribution to the peer review of this work. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Lian, Z., Li, Z., Wu, F. et al. Photogenerated hole traps in metal-organic-framework photocatalysts for visible-light-driven hydrogen evolution. Commun Chem 5, 93 (2022). https://doi.org/10.1038/s42004-022-00713-4 Communications Chemistry (Commun Chem) ISSN 2399-3669 (online)
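The AQY formula given in the Methods section can be evaluated numerically from the amount of evolved H2 and the incident photon count. A minimal sketch follows; the power, wavelength, time, and H2 values are hypothetical illustrations, not the measured data from this work:

```python
PLANCK = 6.62607015e-34      # Planck constant, J s
LIGHT_SPEED = 2.99792458e8   # speed of light, m s^-1
AVOGADRO = 6.02214076e23     # Avogadro constant, mol^-1

def aqy_percent(h2_mol, power_w, wavelength_m, time_s):
    # AQY = 2 * (number of evolved H2 molecules) / (number of incident photons) * 100%
    reacted_electrons = 2 * h2_mol * AVOGADRO
    photon_energy = PLANCK * LIGHT_SPEED / wavelength_m   # J per photon
    incident_photons = power_w * time_s / photon_energy
    return reacted_electrons / incident_photons * 100.0

# Hypothetical run: 10 umol H2 evolved in 1 h under 10 mW of 400-nm light.
print(round(aqy_percent(10e-6, 0.010, 400e-9, 3600), 1))  # → 16.6
```

The factor of 2 reflects that each H2 molecule requires two photogenerated electrons, as in the equation above.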
Frequency modulation synthesis
FM synthesis using 2 operators: a 220 Hz carrier tone fc modulated by a 440 Hz modulating tone fm, with various choices of frequency modulation index, β. The time-domain signals are illustrated above, and the corresponding spectra are shown below (spectrum amplitudes in dB).
In audio and music, frequency modulation synthesis (or FM synthesis) is a form of audio synthesis where the timbre of a simple waveform (such as a square, triangle, or sawtooth) is changed by modulating its frequency with a modulator frequency that is also in the audio range, resulting in a more complex waveform and a different-sounding tone that can also be described as "gritty" if it is a thick and dark timbre. The frequency of an oscillator is altered or distorted "in accordance with the amplitude of a modulating signal." FM synthesis can create both harmonic and inharmonic sounds. For synthesizing harmonic sounds, the modulating signal must have a harmonic relationship to the original carrier signal. As the amount of frequency modulation increases, the sound grows progressively more complex. Through the use of modulators with frequencies that are non-integer multiples of the carrier signal (i.e. non-harmonic), atonal and tonal bell-like and percussive sounds can easily be created. FM synthesis using analog oscillators may result in pitch instability; however, FM synthesis can also be implemented digitally, which has proved more reliable and is now standard practice. As a result, digital FM synthesis (using the more frequency-stable phase modulation variant) was the basis of Yamaha's groundbreaking DX7, which brought FM to the forefront of synthesis in the mid-1980s.
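The carrier–modulator relationship described above fits in a few lines of code. In the phase-modulation form used by digital implementations such as the DX7, the output is y(t) = sin(2π·fc·t + β·sin(2π·fm·t)). A minimal sketch (the 220/440 Hz values echo the figure caption; the sample rate and buffer length are arbitrary choices):

```python
import math

def fm_samples(fc, fm, beta, sample_rate=44100, n=1024):
    # Two-operator FM in phase-modulation form:
    # y(t) = sin(2*pi*fc*t + beta * sin(2*pi*fm*t))
    out = []
    for i in range(n):
        t = i / sample_rate
        out.append(math.sin(2 * math.pi * fc * t
                            + beta * math.sin(2 * math.pi * fm * t)))
    return out

# With beta = 0 the modulator has no effect and the output is the pure carrier.
modulated = fm_samples(220.0, 440.0, beta=5.0)
carrier = fm_samples(220.0, 440.0, beta=0.0)
assert all(-1.0 <= s <= 1.0 for s in modulated)
```

Raising β spreads energy into sidebands at fc ± k·fm, which is why the spectra in the figure caption grow more complex as β increases; a 2:1 fm/fc ratio as here keeps the sidebands harmonically related to the carrier.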
The technique of the digital implementation of frequency modulation was developed by John Chowning (Template:Harvnb, cited in Template:Harvnb) at Stanford University in 1967–68, patented in 1975, and later licensed to Yamaha. The implementation commercialized by Yamaha (U.S. Patent 4,018,121, April 1977) is actually based on phase modulation, but the results end up being mathematically equivalent, with phase modulation simply making the implementation resilient against undesirable drift in the frequency of carrier waves due to self-modulation or to DC bias in the modulating wave.[1]

As noted earlier, FM synthesis was the basis of some of the early generations of digital synthesizers from Yamaha, with Yamaha's flagship DX7 synthesizer being ubiquitous throughout the 1980s and several other Yamaha models providing variations and evolutions of FM synthesis. Yamaha had patented its hardware implementation of FM in the 1980s, allowing it to nearly monopolize the market for that technology until the mid-1990s. Casio developed a related form of synthesis called phase distortion synthesis, used in its CZ range of synthesizers; it had a similar (but slightly differently derived) sound quality to the DX series.

Don Buchla implemented FM on his instruments in the mid-1960s, prior to Yamaha's patent. His 158, 258 and 259 dual oscillator modules had a specific FM control voltage input,[2] and the model 208 (Music Easel) had a modulation oscillator hard-wired to allow FM as well as AM of the primary oscillator.[3] These early applications used analog oscillators, and the capability was later adopted by other modular and portable synthesizers, including the Minimoog and ARP Odyssey. With the expiration of the Stanford University FM patent in 1995, digital FM synthesis can now be implemented freely by other manufacturers.
The FM synthesis patent brought Stanford $20 million before it expired, making it (in 1994) "the second most lucrative licensing agreement in Stanford's history".[4] FM today is mostly found in software-based synths such as FM8 by Native Instruments or Sytrus by Image-Line, but it has also been incorporated into the synthesis repertoire of some modern digital synthesizers, usually coexisting as an option alongside other methods of synthesis such as subtractive, sample-based synthesis, additive synthesis, and other techniques. The degree of complexity of the FM in such hardware synths may vary from simple 2-operator FM, to the highly flexible 6-operator engines of the Korg Kronos and Alesis Fusion, to creation of FM in extensively modular engines such as those in the latest synthesisers by Kurzweil Music Systems. New hardware synths specifically marketed for their FM capabilities have not been seen since the Yamaha SY99 and FS1R, and even those marketed their highly powerful FM abilities as counterparts to sample-based synthesis and formant synthesis respectively. However, well-developed FM synthesis options are a feature of Nord Lead synths manufactured by Clavia, the Alesis Fusion range, and the Korg Oasys and Kronos. Various other synthesizers offer limited FM abilities to supplement their main engines. 
Spectral analysis

The spectrum generated by FM synthesis with one modulator is expressed as follows:[5][6]

For the modulating signal \(m(t)=B\sin(\omega _{m}t)\), the carrier signal is
\[{\begin{aligned}FM(t)&=A\sin \left(\omega _{c}\,t+\int _{0}^{t}B\sin(\omega _{m}\,\tau )\,d\tau \right)\\&=A\sin \left(\omega _{c}\,t-{\frac {B}{\omega _{m}}}\left(\cos(\omega _{m}\,t)-1\right)\right)\\&=A\sin \left(\omega _{c}\,t+{\frac {B}{\omega _{m}}}\left(\sin(\omega _{m}\,t-\pi /2)+1\right)\right)\end{aligned}}\]

If we ignore the constant phase terms on the carrier, \(\phi _{c}=B/\omega _{m}\), and on the modulator, \(\phi _{m}=-\pi /2\), we obtain the following expression seen in Template:Harvnb:
\[{\begin{aligned}FM(t)&\approx A\sin \left(\omega _{c}\,t+\beta \sin(\omega _{m}\,t)\right)\\&=A\left(J_{0}(\beta )\sin(\omega _{c}\,t)+\sum _{n=1}^{\infty }J_{n}(\beta )\left[\sin((\omega _{c}+n\,\omega _{m})\,t)+(-1)^{n}\sin((\omega _{c}-n\,\omega _{m})\,t)\right]\right)\end{aligned}}\]

where \(\omega _{c}\) and \(\omega _{m}\) are the angular frequencies (\(\omega =2\pi f\)) of the carrier and the modulator, \(\beta =B/\omega _{m}\) is the frequency modulation index, and the amplitude \(J_{n}(\beta )\) is the \(n\)-th Bessel function of the first kind.[note 1]

↑ The above expression is transformed using the trigonometric addition formula
\[\sin(x\pm y)=\sin x\cos y\pm \cos x\sin y\]
and a lemma on Bessel functions Template:Harv,
\[{\begin{aligned}\cos(\beta \sin \theta )&=J_{0}(\beta )+2\sum _{n=1}^{\infty }J_{2n}(\beta )\cos(2n\theta )\\\sin(\beta \sin \theta )&=2\sum _{n=0}^{\infty }J_{2n+1}(\beta )\sin((2n+1)\theta )\end{aligned}}\]
as follows:
\[{\begin{aligned}\sin(\theta _{c}+\beta \sin \theta _{m})&=\sin \theta _{c}\cos(\beta \sin \theta _{m})+\cos \theta _{c}\sin(\beta \sin \theta _{m})\\&=\sin \theta _{c}\left[J_{0}(\beta )+2\sum _{n=1}^{\infty }J_{2n}(\beta )\cos(2n\theta _{m})\right]+\cos \theta _{c}\left[2\sum _{n=0}^{\infty }J_{2n+1}(\beta )\sin((2n+1)\theta _{m})\right]\\&=J_{0}(\beta )\sin \theta _{c}+J_{1}(\beta )\,2\cos \theta _{c}\sin \theta _{m}+J_{2}(\beta )\,2\sin \theta _{c}\cos(2\theta _{m})+\cdots \end{aligned}}\]

See also: Additive synthesis, Digital synthesizer, Sound chip

References:
↑ Template:Cite journal (also available in PDF as digital version 2/13/2007)
↑ Template:Cite web
↑ Template:Cite book
↑ Stanford University News Service (June 7, 1994), Music synthesis approaches sound quality of real instruments
↑ Template:Harvnb

External links: An Introduction To FM, by Bill Schottstaedt; FM tutorial; Synth Secrets, Part 12: An Introduction To Frequency Modulation, by Gordon Reid; Synth Secrets, Part 13: More On Frequency Modulation, by Gordon Reid; Paul Wiffen's Synth School: Part 3; F.M. Synthesis including complex operator analysis; Part 1 of a 2-part YouTube tutorial on FM synthesis with numerous audio examples
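The sideband expansion above can be checked numerically: projecting an FM signal onto \(\sin((\omega_c+n\,\omega_m)t)\) over a whole number of modulator periods should recover \(J_n(\beta)\). The sketch below is self-contained (the Bessel values come from the standard integral representation \(J_n(x)=\frac{1}{\pi}\int_0^\pi \cos(nt-x\sin t)\,dt\)); the 1000 Hz/100 Hz frequency pair and the sample rate are arbitrary choices:

```python
import math

def bessel_jn(n, beta, steps=20000):
    """J_n(beta) via the integral representation
    J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt (midpoint rule)."""
    h = math.pi / steps
    return sum(math.cos(n * (k + 0.5) * h - beta * math.sin((k + 0.5) * h))
               for k in range(steps)) * h / math.pi

def sideband_amplitude(n, fc, fm, beta, sr=48000, periods=50):
    """Amplitude of the sin(2*pi*(fc + n*fm)*t) component of
    FM(t) = sin(2*pi*fc*t + beta*sin(2*pi*fm*t)), measured by projecting
    onto that sideband over an integer number of modulator periods."""
    dur = periods / fm
    num = int(sr * dur)
    f = fc + n * fm
    acc = 0.0
    for k in range(num):
        t = k / sr
        acc += math.sin(2 * math.pi * fc * t
                        + beta * math.sin(2 * math.pi * fm * t)) \
               * math.sin(2 * math.pi * f * t)
    return 2.0 * acc / num

# With beta = 1, the measured sideband amplitudes match J_n(1) closely.
measured = [sideband_amplitude(n, 1000.0, 100.0, 1.0) for n in range(3)]
predicted = [bessel_jn(n, 1.0) for n in range(3)]
```

Here the carrier and modulator frequencies are commensurate, so every sideband completes an integer number of cycles over the analysis window and the discrete projections are (nearly) exactly orthogonal.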
8.5 Evaluation of Definite Integrals by Substitution

In the previous chapter, we learned the substitution rule for indefinite integrals. In summary, if we have an indefinite integral of the form \[ \int f(g(x))g'(x)dx \] we make the substitution \[ u=g(x),\qquad du=g'(x)dx \] and transform the integral to the form \[ \int f(\underbrace{g(x)}_{u})\underbrace{g'(x)dx}_{du}=\int f(u)du. \] In this section, we will learn how to evaluate definite integrals of the form \[ \int _{a}^{b}f(g(x))g'(x)dx, \] by substitution. For example, suppose that we want to evaluate \[ \int _{1}^{e}\frac{\ln x}{x}dx. \] It follows from the second part of the Fundamental Theorem of Calculus that \[ \int _{1}^{e}\frac{\ln x}{x}dx=\left [\int \frac{\ln x}{x}dx\right ]_{1}^{e}. \] To evaluate the indefinite integral \(\int \frac{\ln x}{x}dx\), we make the substitution \[ u=\ln x,\qquad du=\frac{1}{x}dx \] which converts the indefinite integral to \begin{align*} \int \overbrace{\ln x}^{u}\overbrace{\frac{1}{x}dx}^{du} & =\int udu\\ & =\frac{1}{2}u^{2}+C\\ & =\frac{1}{2}(\ln x)^{2}+C. \end{align*} Therefore, \begin{align*} \int _{1}^{e}\frac{\ln x}{x}dx & =\left [\int \frac{\ln x}{x}dx\right ]_{1}^{e}\\ & =\bigg [\frac{1}{2}(\ln x)^{2}\bigg ]_{1}^{e}\\ & =\frac{1}{2}\left [(\ln e)^{2}-(\ln 1)^{2}\right ]=\frac{1}{2}\left [1^{2}-0^{2}\right ]\\ & =\frac{1}{2}. \end{align*} When integrating by the substitution of a new variable (\(u\) in the above example), it is sometimes rather troublesome to express the result in terms of the original variable (\(x\) in the above example). We may avoid the process of restoring the original variable if we change the limits of integration to correspond with the new variable. In the above example, the lower limit of integration is \(x=1\), so in terms of \(u\), the lower limit becomes \(u=\ln 1=0\).
The upper limit of integration is \(x=e\), so the upper limit of integration in terms of \(u\) becomes \(u=\ln e=1\). Therefore, \[ \int _{1}^{e}\frac{\ln x}{x}dx=\int _{0}^{1}udu=\left [\frac{1}{2}u^{2}\right ]_{0}^{1}=\frac{1}{2}, \] and we get the same result as before. Geometrically, the equation \[ \int _{1}^{e}\frac{\ln x}{x}dx=\int _{0}^{1}udu \] means that the two different regions shown in Figure 1 have the same area. Figure 1: The area of each shaded region is \(1/2\). There are two methods to evaluate a definite integral by substitution: First, find the corresponding indefinite integral by substitution, and then apply the second part of the Fundamental Theorem of Calculus. Directly use a substitution in the definite integral by changing both the variable and the limits of integration in one step, as stated in the following theorem: Theorem 1. (Substitution in Definite Integrals): If the function \(u=g(x)\) has a continuous derivative on the interval \([a,b]\) and \(f\) is continuous on the range of \(g\), then \[ \int _{a}^{b}f(g(x))g'(x)dx=\int _{u=g(a)}^{u=g(b)}f(u)du.\qquad \left (u=g(x)\right ) \] Proof. Let \(F\) be an integral (or an antiderivative) of \(f\). Then it follows from the chain rule that \begin{align*} \frac{d}{dx}F(g(x)) & =F'(g(x))g'(x)\\ & =f(g(x))g'(x); \end{align*} that is, \(F(g(x))\) is an antiderivative of \(f(g(x))g'(x)\). Therefore, it follows from the second part of the Fundamental Theorem of Calculus that \begin{align*} \int _{a}^{b}f(g(x))g'(x)dx & =F(g(x))\bigg ]_{x=a}^{x=b}\\ & =F(g(b))-F(g(a))\\ & =F(u)\bigg ]_{u=g(a)}^{u=g(b)}\\ & =\int _{g(a)}^{g(b)}f(u)du. \end{align*} Evaluate \[ \int _{0}^{3}x\sqrt{1+x}\ dx. \] Method (a): An appropriate substitution is \(u=1+x\) or \(x=u-1\). Then \[ du=dx \] and \begin{align*} \int \overbrace{x}^{u-1}\overbrace{\sqrt{1+x}}^{\sqrt{u}}\ \overbrace{dx}^{du} & =\int (u-1)\sqrt{u}\ du\\ & =\frac{2}{5}u^{5/2}-\frac{2}{3}u^{3/2}+C\\ & =\frac{2}{5}(1+x)^{5/2}-\frac{2}{3}(1+x)^{3/2}+C.
\end{align*} Therefore, \begin{align*} \int _{0}^{3}x\sqrt{1+x}\ dx & =\left [\frac{2}{5}(1+x)^{5/2}-\frac{2}{3}(1+x)^{3/2}\right ]_{x=0}^{x=3}\\ & =\frac{2}{5}(4^{5/2})-\frac{2}{3}(4^{3/2})-\frac{2}{5}+\frac{2}{3}\\ & =\frac{116}{15}. \end{align*} Method (b): Change the variable of integration to \(u=1+x\) and the limits of integration to correspond with \(u\) simultaneously: The lower limit: When \(x=0\), \(u=1+0=1\). The upper limit: When \(x=3\), \(u=1+3=4\). Therefore, \begin{align*} \int _{0}^{3}x\sqrt{1+x}\ dx & =\int _{1}^{4}(u-1)\sqrt{u}\ du\\ & =\left [\frac{2}{5}u^{5/2}-\frac{2}{3}u^{3/2}\right ]_{u=1}^{u=4}\\ & =\left (\frac{2}{5}(\sqrt{4})^{5}-\frac{2}{3}(\sqrt{4})^{3}\right )-\left (\frac{2}{5}(1)-\frac{2}{3}(1)\right )\\ & =\frac{116}{15}. \end{align*} The previous example has been solved by two different methods. In method (a), the indefinite integral is first evaluated using the Substitution Rule or \(u\)-substitution (Section 7.4), and then the original limits are used. In method (b), the transformed definite integral with transformed limits is evaluated using Substitution in Definite Integrals (Theorem 1). There is no general rule to say which method is easier. Evaluate \(\displaystyle \int _{0}^{\pi /2}\frac{\cos x}{1+\sin ^{2}x}dx.\) Method (a): Because both \(\sin x\) and its derivative \(\cos x\) appear in the integrand, we may try the substitution \(u=\sin x\). So \[ u=\sin x\Rightarrow du=\cos x\,dx \] and \begin{align*} \int \frac{\overbrace{\cos x\:dx}^{du}}{1+\underbrace{\sin ^{2}x}_{u^{2}}} & =\int \frac{du}{1+u^{2}}\\ & =\arctan u+C\\ & =\arctan (\sin x)+C.\tag{$u=\sin x$} \end{align*} Therefore, \[ \int _{0}^{\pi /2}\frac{\cos x}{1+\sin ^{2}x}dx=\arctan (\sin x)\bigg ]_{x=0}^{x=\pi /2}=\arctan \big (\underbrace{\sin \frac{\pi }{2}}_{1}\big )-\arctan (\sin 0)=\frac{\pi }{4}. \] Method (b): Change both the variable and limits of integration: \[ u=\sin x,\qquad du=\cos x\,dx \] The lower limit: When \(x=0\), \(u=\sin 0=0\).
The upper limit: When \(x=\frac{\pi }{2}\), \(u=\sin \frac{\pi }{2}=1\). Therefore \[{\displaystyle \int _{0}^{\pi /2}\frac{\cos x}{1+\sin ^{2}x}dx=\int _{0}^{1}\frac{du}{1+u^{2}}=\arctan u\bigg ]_{u=0}^{u=1}=\arctan 1-\arctan 0=\frac{\pi }{4}-0=\frac{\pi }{4}.} \] Evaluate \({\displaystyle \int _{0}^{5}\frac{x}{\sqrt [3]{x^{2}+2}}}dx\). Method (a): Let \[ u=x^{2}+2,\qquad du=2xdx. \] Then \begin{align*} \int \frac{x}{\sqrt [3]{x^{2}+2}}dx & =\int \frac{1}{\sqrt [3]{u}}\overbrace{\frac{du}{2}}^{xdx}=\frac{1}{2}\int u^{-1/3}du\\ & =\frac{1}{2}\times \frac{3}{2}u^{2/3}\\ & =\frac{3}{4}(x^{2}+2)^{2/3}+C\tag{$u=x^{2}+2$} \end{align*} Therefore, \begin{align*}{\displaystyle \int _{0}^{5}\frac{x}{\sqrt [3]{x^{2}+2}}}dx & =\frac{3}{4}(x^{2}+2)^{2/3}\bigg ]_{0}^{5}\\ & =\frac{3}{4}\left (27^{2/3}-2^{2/3}\right )=\frac{3}{4}\left (9-\sqrt [3]{4}\right ). \end{align*} Method (b): Let \[ u=x^{2}+2,\qquad du=2xdx. \] The lower and upper limits: When \(x=0\), \(u=2\), and when \(x=5\), \(u=27\). Therefore, \begin{align*} \int _{0}^{5}\frac{x}{\sqrt [3]{x^{2}+2}}dx & =\frac{1}{2}\int _{2}^{27}u^{-1/3}du\\ & =\frac{1}{2}\times \frac{3}{2}u^{2/3}\bigg ]_{2}^{27}\\ & =\frac{3}{4}\left (27^{2/3}-2^{2/3}\right )\\ & =\frac{3}{4}\left (9-\sqrt [3]{4}\right ). \end{align*} Evaluate \[ \int _{0}^{3}\frac{x-2}{\sqrt{5x+1}}dx. \] Let \(u=5x+1\) or \(x=(u-1)/5\). Then \[ dx=\frac{1}{5}du. \] The lower limit: When \(x=0\), we have \[ u=5(0)+1=1, \] The upper limit: When \(x=3\), we have \[ u=5(3)+1=16.
\] Therefore, \begin{align*} \int _{0}^{3}\frac{x-2}{\sqrt{5x+1}}dx & =\int _{1}^{16}\frac{\frac{u-1}{5}-2}{\sqrt{u}}\underbrace{\frac{du}{5}}_{dx}\\ \\ & =\frac{1}{25}\int _{1}^{16}\frac{u-11}{\sqrt{u}}du\\ \\ & =\frac{1}{25}\int _{1}^{16}(u^{1/2}-11u^{-1/2})du\\ & =\frac{1}{25}\left [\frac{2}{3}u^{3/2}-22u^{1/2}\right ]_{1}^{16}\\ & =\frac{1}{25}\left [\left (\frac{2}{3}(4^{3})-22(4)\right )-\left (\frac{2}{3}-22\right )\right ]\\ & =-\frac{24}{25} \end{align*} A different substitution: Another substitution that also works is \(u=\sqrt{5x+1}\) or \[ u^{2}=5x+1. \] Taking the differential of each side we get \[ 2u\ du=5dx \] or \[ dx=\frac{2}{5}u\ du. \] The numerator can be written in terms of \(u\) as \[ x-2=\frac{1}{5}(u^{2}-1)-2=\frac{u^{2}-11}{5} \] The upper and lower limits in terms of \(u\) are \[ x=3\Rightarrow u=\sqrt{5(3)+1}=4 \] \[ x=0\Rightarrow u=\sqrt{5(0)+1}=1 \] Therefore \begin{align*} \int _{0}^{3}\frac{x-2}{\sqrt{5x+1}}dx & =\int _{1}^{4}\frac{u^{2}-11}{5u}\left (\frac{2}{5}u\ du\right )\\ & =\frac{2}{25}\int _{1}^{4}(u^{2}-11)du\\ & =\frac{2}{25}\left [\frac{1}{3}u^{3}-11u\right ]_{u=1}^{u=4}\\ & =\frac{2}{25}\left [\left (\frac{1}{3}(4^{3})-11(4)\right )-\left (\frac{1}{3}(1^{3})-11(1)\right )\right ]\\ & =-\frac{24}{25}. \end{align*} When evaluating a definite integral by substitution, it is possible that the upper limit in terms of the new variable becomes smaller than the lower limit. See the following example: Evaluate \({\displaystyle \int _{0}^{1}\frac{x}{\sqrt{1-x^{2}}}dx}.\) Let \(u=1-x^{2}\). Then \[ du=-2xdx\qquad \text{or}\qquad xdx=-\frac{1}{2}du \] The lower limit: When \(x=0\), \(u=1\). The upper limit: When \(x=1\), \(u=0\). Therefore, \begin{align*} \int _{0}^{1}\frac{x}{\sqrt{1-x^{2}}}dx & =\int _{1}^{0}\frac{\overbrace{-\frac{1}{2}du}^{xdx}}{\sqrt{u}}\\ & =-\frac{1}{2}\int _{1}^{0}u^{-1/2}du\\ & =-\frac{1}{2}\times 2u^{1/2}\bigg ]_{1}^{0}\\ & =-(0-1)=1. 
\end{align*} The original variable of integration is not always \(x\), and we do not have to call the new variable \(u\). See the following example: Evaluate \(\displaystyle \int _{0}^{16}\frac{\sqrt [4]{z}}{1+\sqrt{z}}dz.\) To get rid of the roots, we let \[ t=\sqrt [4]{z},\qquad \text{or}\qquad z=t^{4}. \] Taking the differential of each side, we obtain \[ dz=4t^{3}dt. \] When \(z=0\), \(t=\sqrt [4]{0}=0\), and when \(z=16\), \(t=\sqrt [4]{16}=2\). Therefore, \begin{align*} \int _{0}^{16}\frac{\sqrt [4]{z}}{1+\sqrt{z}}dz & =\int _{0}^{2}\frac{t}{1+\sqrt{t^{4}}}\overbrace{4t^{3}dt}^{dz}\\ & =4\int _{0}^{2}\frac{t^{4}}{1+t^{2}}dt\\ & =4\int _{0}^{2}\frac{t^{4}-1+1}{1+t^{2}}dt\\ & =4\int _{0}^{2}\left [\frac{(t^{2}-1)(t^{2}+1)}{1+t^{2}}+\frac{1}{1+t^{2}}\right ]dt\\ & =4\int _{0}^{2}\left [t^{2}-1+\frac{1}{1+t^{2}}\right ]dt\\ & =4\left [\frac{t^{3}}{3}-t+\arctan t\right ]_{0}^{2}\\ & =\frac{4}{3}\times 8-4\times 2+4\arctan 2\\ & =\frac{8}{3}+4\arctan 2. \end{align*} Evaluate \(\displaystyle \int _{0}^{2}\frac{r^{3}}{\sqrt [3]{r^{2}+8}}dr.\) To get rid of the cube root, let \(u^{3}=r^{2}+8\). Taking the differential of each side of this substitution, we obtain \[ 3u^{2}du=2r\,dr. \] The lower limit: When \(r=0\), \(u^{3}=8\) or \(u=2\). The upper limit: When \(r=2\), \(u^{3}=12\) or \(u=\sqrt [3]{12}\).
\begin{align*} \int _{0}^{2}\frac{r^{2}}{\sqrt [3]{r^{2}+8}}rdr & =\int _{2}^{\sqrt [3]{12}}\frac{\overbrace{u^{3}-8}^{r^{2}}}{{\underbrace{u}_{\sqrt [3]{r^{2}+8}}}}\overbrace{\frac{3}{2}u^{2}du}^{rdr}\\ & =\frac{3}{2}\int _{2}^{\sqrt [3]{12}}(u^{4}-8u)\,du\tag{simplify}\\ & =\frac{3}{2}\left [\frac{1}{5}u^{5}-4u^{2}\right ]_{2}^{\sqrt [3]{12}}\\ & =\frac{3}{10}\left [u^{2}(u^{3}-20)\right ]_{2}^{\sqrt [3]{12}}\\ & =\frac{3}{10}\left (12^{2/3}(12-20)-4(8-20)\right )\\ & =\frac{3}{10}\left (48-8\times 12^{2/3}\right )\\ & =\frac{3}{10}\left (48-16\times 2^{1/3}\times 3^{2/3}\right ) \end{align*} A different substitution: Another substitution that also works is \(u=r^{2}+8\): \[ u=r^{2}+8\qquad \Longrightarrow \qquad du=2r\,dr \] The lower limit: When \(r=0\), \(u=8\). The upper limit: When \(r=2\), \(u=12.\) \begin{align*} \int _{0}^{2}\frac{r^{2}}{\sqrt [3]{r^{2}+8}}rdr & =\int _{8}^{12}\frac{u-8}{u^{1/3}}\cdot \frac{1}{2}du\\ & =\frac{1}{2}\int _{8}^{12}\left (u^{2/3}-8u^{-1/3}\right )du\\ & =\frac{1}{2}\left [\frac{3}{5}u^{5/3}-12u^{2/3}\right ]_{8}^{12}\\ & =\frac{3}{10}\left [u^{2/3}(u-20)\right ]_{8}^{12}\\ & =\frac{3}{10}\left (12^{2/3}\times (-8)-\overset{4}{\cancel{8^{2/3}}}\times (-12)\right )\\ & =\frac{3}{10}\left (48-16\times 2^{1/3}\times 3^{2/3}\right ). \end{align*} Geometric Interpretation So far, we have learned that the substitution rule for definite integrals is \[ \int _{a}^{b}f(g(x))g'(x)dx=\int _{g(a)}^{g(b)}f(u)du.\tag{i} \] Instead of thinking of this rule as an algebraic consequence of the chain rule, let's see what it means geometrically. For convenience, assume that: \(a<b\), \(f(x)>0\), and \(g\) is an increasing function; that is, \(g'(x)>0\). Let's represent \(g\) with a separate domain and range, as in Figure 2. The function \(g\) maps the interval \([a,b]\) on the \(x\)-axis to the interval \([g(a),g(b)]\) on the \(u\)-axis.
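As a quick numerical aside, identity (i) can be verified directly with midpoint Riemann sums. A minimal sketch, using the first example of this section (\(f(u)=u\), \(u=g(x)=\ln x\) on \([1,e]\), exact value \(1/2\)):

```python
import math

def midpoint_sum(h, a, b, n=20000):
    """Midpoint Riemann sum approximating the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (k + 0.5) * dx) for k in range(n)) * dx

f = lambda u: u              # f(u) = u
g = math.log                 # u = g(x) = ln x
dg = lambda x: 1.0 / x       # g'(x) = 1/x

a, b = 1.0, math.e
lhs = midpoint_sum(lambda x: f(g(x)) * dg(x), a, b)  # left side of (i)
rhs = midpoint_sum(f, g(a), g(b))                    # right side of (i)
# Both approximate 1/2, the exact value computed above.
```

Swapping in any other \(f\) and \(g\) satisfying the hypotheses of Theorem 1 gives the same agreement, up to the discretization error of the sums.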
Now let's subdivide the interval \([a,b]\) by \(a=x_{0},x_{1},\dots ,x_{n}=b\). The images of the points \(x_{0},\dots ,x_{n}\) under \(g\) are \[ u_{0}=g(x_{0}),u_{1}=g(x_{1}),\dots ,u_{n}=g(x_{n}) \] on the \(u\)-axis. Specifically, the image of the subinterval \([x_{i-1},x_{i}]\) under \(g\) is the subinterval \([g(x_{i-1}),g(x_{i})]\). Even if the division points \(x_{0},\dots ,x_{n}\) are equally spaced, those image subintervals on the \(u\)-axis will not all have the same width. In fact, the width of \([u,u+\Delta u]\) is approximately \(g'(x)\) times the width of \([x,x+\Delta x]\) because by using linear approximation (Figure 3) we have \[ \Delta u\approx g'(x)\Delta x. \] The approximation becomes equality and \(\Delta u\to du=g'(x)dx\) as \(\Delta x=dx\to 0\). If \(g'(x)>1\), then the image of each subinterval on the \(x\)-axis is stretched on the \(u\)-axis and if \(0<g'(x)<1\), the image of each subinterval is compressed on the \(u\)-axis. Now let's compare the graphs of \(f\circ g\) and \(f\) (Figure 4). Notice that the value of \(f\circ g\) at \(x_{i}\) is the same as the value of \(f\) at \(u_{i}\) \begin{align*} f\circ g(x_{i}) & =f(g(x_{i}))\\ & =f(u_{i}).\tag{$u_{i}=g(x_{i})$} \end{align*} Therefore, the rectangle with the corners \((x_{i},0),(x_{i}+\Delta x,0),(x_{i},f\circ g(x_{i})),\) and \((x_{i}+\Delta x,f\circ g(x_{i}))\) under the graph of \(f\circ g\) and the rectangle with the corners \((u_{i},0),(u_{i}+\Delta u,0),(u_{i},f(u_{i}))\) and \((u_{i}+\Delta u,f(u_{i}))\) under the graph of \(f\) have the same height but the width of the second one is approximately \(g'(x_{i})\Delta x\) (Figure 5). So if we multiply the height of the first rectangle by \(g'(x_{i})\) then both rectangles will have the same area (Figure 5). 
In other words, the role of \(g'(x)\) in the formula \[ \int _{a}^{b}f(g(x))g'(x)dx=\int _{g(a)}^{g(b)}f(u)du \] is to cancel the change in the width by the change in the height such that the net signed areas remain the same (Figure 6). For example, let \(f(u)=1.5\) for \(0\leq u\leq 4\) and \(u=g(x)=x^{2}\). Then \(g\) maps the interval \([0,2]\) to \([0,4]\). Figure 7 shows the graphs of \(f(u),\) \(f\circ g(x)\), and \((f\circ g)(x)g'(x)\). Because \(f\) is a constant function, \[ \int _{0}^{4}f(u)du=1.5\times 4=6 \] and \[ \int _{0}^{2}f(g(x))dx=1.5\times 2=3. \] We notice that \(g'(x)=2x\) and \[ 0\leq g'(x)\leq 1,\qquad \text{when }0\leq x\leq \frac{1}{2}. \] The function \(g\) maps the interval \([0,\frac{1}{2}]\) on the \(x\)-axis to a smaller interval \([0,\frac{1}{4}]\) on the \(u\)-axis. In fact, \(g\) maps any subinterval of \([0,\frac{1}{2}]\) to a smaller interval because by the Mean-Value Theorem we have \[ |u_{2}-u_{1}|=|g(x_{2})-g(x_{1})|=\underbrace{|g'(c)|}_{\leq 1}\ |x_{2}-x_{1}|\leq |x_{2}-x_{1}|, \] for every \(x_{1},x_{2}\in [0,\frac{1}{2}]\) and some \(c\) between \(x_{1}\) and \(x_{2}\). The area under the graph of \(f\circ g\) on the interval \([0,\frac{1}{2}]\) is twice the area under the graph of \(f\) on the interval \([0,\frac{1}{4}]\) (compare the shaded areas in Figure 7(a) and (b)). If we multiply \(f\circ g\) by \(g'(x)=2x\), then the graph of \((f\circ g)g'(x)\) is a straight line, and the area under it on the interval \([0,\frac{1}{2}]\) is exactly the same as the area under the graph of \(f\) on the interval \([0,\frac{1}{4}]\) (compare the shaded areas in Figure 7(a) and (c)). On the other hand, because \[ 1<g'(x)\qquad \text{when }x>\frac{1}{2}, \] for every \(x_{1},x_{2}\) in \([\frac{1}{2},2]\), the image of \([x_{1},x_{2}]\) under \(g\) is a larger interval \([g(x_{1}),g(x_{2})]\) \[ |u_{2}-u_{1}|=|g(x_{2})-g(x_{1})|=\underbrace{\left |g'(c^{*})\right |}_{>1}\,\left |x_{2}-x_{1}\right |\geq |x_{2}-x_{1}|.
\] Therefore, the area under the graph of \(f\circ g\) on the interval \([x_{1},x_{2}]\) is less than the area under the graph of \(f\) on the interval \([g(x_{1}),g(x_{2})]\). Again, multiplying \(f\circ g\) by \(g'(x)\) makes the area under \((f\circ g)g'\) on \([x_{1},x_{2}]\) equal to the area under \(f\) on \([g(x_{1}),g(x_{2})]\). If \(g\) is a decreasing function, then \(g'(x)<0\), and \(g(b)<g(a)\). Therefore, if \(f\) is a positive function, then both the integral on the left-hand side and the integral on the right-hand side of Equation (i) are negative. The integral on the left-hand side is negative because \(\underbrace{f(g(x))}_{>0}\underbrace{g'(x)}_{<0}<0\) and the integral on the right-hand side is negative because \(g(a)>g(b),\) and \[ \int _{{\color{blue}g(a)}}^{{\color{red}g(b)}}f(u)du=-\underbrace{\int _{{\color{red}g(b)}}^{{\color{blue}g(a)}}f(u)du}_{>0}. \] Note that in Theorem 1, \(g\) does not have to be a one-to-one function. Let's see, through an example, what happens if \(g\) is not one-to-one. Suppose \(f(u)=1\) for \(1\leq u\leq 4\). Then \[ \int _{1}^{4}f(u)du=\int _{1}^{4}du=3. \] Now consider \(g:[-1,2]\to \mathbb{R}\) and \(h:[1,2]\to \mathbb{R}\) with \[ u=g(x)=h(x)=x^{2}. \] Figure 8: Both \(g\) and \(h\) map the endpoints of their domains to \(u=1\) and \(u=4\). Notice that \(g(-1)=h(1)=1\) and \(g(2)=h(2)=4\) (see Figure 8). Unlike \(h\), the function \(g\) is not one-to-one. If we integrate \(f(g(x))g'(x)\) on the domain of \(g\) and \(f(h(x))h'(x)\) on the domain of \(h\), both integrals will be equal to \(\int _{1}^{4}f(u)du\): \[ \int _{-1}^{2}f(g(x))g'(x)dx=\int_{-1}^{2}(1)(2x)dx=x^{2}\big |_{-1}^{2}=2^{2}-(-1)^{2}=3, \] \[ \int _{1}^{2}f(h(x))h'(x)dx=\int _{1}^{2}(1)(2x)dx=x^{2}\big |_{1}^{2}=2^{2}-1^{2}=3.
\] We notice that \[ Dom(g)=Dom(h)\cup [-1,1], \] the function \(g\) is decreasing on \([-1,0]\) and increasing on \([0,1]\), and \[ \int _{-1}^{0}f(g(x))g'(x)dx=-\int _{0}^{1}f(g(x))g'(x)dx \] \[ \Rightarrow \int _{-1}^{1}f(g(x))g'(x)dx=\int _{-1}^{0}f(g(x))g'(x)dx+\int _{0}^{1}f(g(x))g'(x)dx=0. \] Therefore, \[ \int _{-1}^{2}f(g(x))g'(x)dx=\overset{0}{\cancel{\int _{-1}^{1}f(g(x))g'(x)dx}}+\underbrace{\int _{1}^{2}f(g(x))g'(x)dx}_{g(x)=h(x)\text{ on }[1,2]}. \] In general, in Theorem 1, if \(g\) is not a one-to-one function, then the values of the integral over the intervals on which \(g\) is increasing and over the intervals on which \(g\) is decreasing balance each other, so that the total value of the integral of \(f(g(x))g'(x)\) over the entire interval \([a,b]\) will be equal to \(\displaystyle \int _{g(a)}^{g(b)}f(u)du\). See Figure 9. Figure 9: All of the shaded regions have the same net area; \(\int _{0}^{1}f(g(x))g'(x)dx\) cancels out \(\int _{-1}^{0}f(g(x))g'(x)dx\).
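As a closing numerical check, the answers obtained above for \(\int _{0}^{16}\frac{\sqrt [4]{z}}{1+\sqrt{z}}\,dz\) and \(\int _{0}^{2}\frac{r^{3}}{\sqrt [3]{r^{2}+8}}\,dr\) can be confirmed with midpoint Riemann sums; a minimal sketch in plain Python:

```python
import math

def midpoint_sum(h, a, b, n=200000):
    """Midpoint Riemann sum approximating the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (k + 0.5) * dx) for k in range(n)) * dx

# ∫_0^16 z^{1/4} / (1 + √z) dz  should equal 8/3 + 4·arctan 2
ex_z_numeric = midpoint_sum(lambda z: z ** 0.25 / (1 + math.sqrt(z)), 0.0, 16.0)
ex_z_exact = 8.0 / 3.0 + 4.0 * math.atan(2.0)

# ∫_0^2 r^3 / (r^2 + 8)^{1/3} dr  should equal (3/10)(48 − 16·2^{1/3}·3^{2/3})
ex_r_numeric = midpoint_sum(lambda r: r ** 3 / (r * r + 8.0) ** (1.0 / 3.0), 0.0, 2.0)
ex_r_exact = 0.3 * (48.0 - 16.0 * 2.0 ** (1.0 / 3.0) * 3.0 ** (2.0 / 3.0))
```

The numeric and closed-form values agree to within the discretization error of the midpoint rule.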
Metrics for describing dyadic movement: a review

Rocio Joo1, 2, Marie-Pierre Etienne3, Nicolas Bez4 and Stéphanie Mahévas2

Received: 17 August 2018 Accepted: 2 December 2018

In movement ecology, the few works that have taken collective behaviour into account are data-driven and rely on simplistic theoretical assumptions, using metrics that may or may not measure what is intended. In the present paper, we focus on pairwise joint-movement behaviour, where individuals move together during at least a segment of their paths. We investigate the adequacy of twelve metrics introduced in previous works for assessing joint movement by analysing their theoretical properties and confronting them with contrasting case scenarios. Two criteria are taken into account in the review of these metrics: 1) practical use, and 2) dependence on parameters and underlying assumptions. When analysing the similarities between the metrics as defined, we show how some of them can be expressed using general mathematical forms. In addition, we evaluate the ability of each metric to assess specific aspects of joint-movement behaviour: proximity (closeness in space-time) and coordination (synchrony) in direction and speed. We found that some metrics are better suited to assess proximity, while others are more sensitive to coordination. To help readers choose metrics, we provide a graphical representation of the metrics in the coordination-proximity space based on our results, and give a few examples of proximity and coordination focus in different movement studies.
Keywords: Collective behaviour, Dyadic movement, Spatio-temporal dynamics

Collective behaviour has been the object of study of many disciplines, such as behavioural ecology, psychology, sports, medicine, physics and computer sciences [7, 13, 19, 56, 57]. In multiple contexts, individuals – in a very wide sense of the word – adapt their behaviour as a function of their interaction with others. In movement ecology, where movement is regarded as an expression of behaviour [43], collective behaviour should be considered a key element, given that collective dynamics and individual movement are intricately intertwined [7]. Accordingly, mechanistic movement models should account for these dynamics. The vast majority of movement models neglect this aspect, with a few exceptions (e.g., [29, 44, 47, 53]). As a consequence, in the few existing works these dynamics take forms that rely on very simple theoretical assumptions. Collective behaviour can be produced at large group scales (flocks, colonies, schools) but also at small group scales (triads, dyads). Regardless of the actual group scale, global patterns of collective behaviour originate from local interactions among neighbouring members [11], so analysing dyadic interaction as a first step is a pertinent choice. Concerning dyadic interaction, here we focus on what we call 'joint movement', where two individuals move together during the total duration or a partial segment of their paths. Dyadic movement behaviour has mostly been studied with a data-driven approach, using several metrics to quantify it. In movement ecology, a few works have applied and compared some of these metrics [38, 41]. However, their theoretical properties, and thus the similarities and differences in their construction and in what they actually assess, have not been thoroughly analysed yet. This manuscript reviews a series of metrics used to assess pairwise joint movement and proposes some modifications when appropriate (Table 1).
Two criteria are taken into account for the review of these metrics: practical use and dependence on parameters; they are evaluated through both a theoretical (conceptual) and a practical approach. The metrics found in the literature essentially measure two aspects of joint movement: proximity and coordination. Proximity refers to closeness in space-time, as in how spatially close simultaneous fixes (individual locations recorded) are in a dyad; a point-pattern perspective. The notion of proximity is thus subjective, since a judgement on proximity involves a threshold in distance, whether local or global, or the definition of a reference zone (where encounters may be observed). Coordination, on the other hand, refers to synchrony in movement, which can be assessed through measures of similarity or correlation in movement patterns such as speed or direction. There may be a thin line between proximity and coordination, and some metrics may be associated with both to some degree, as we show through the description of their theoretical properties and the practical analysis of case scenarios.
Table 1 Metrics for measuring dyad joint movement, with their attainable range (where stated) and the parameters fixed ad hoc and assumptions behind each one:

Prox: \(Prox = K_{\delta }^{+}/T\). Parameters: i) δ: distance threshold; ii) K: kernel.

Cs: \(Cs = \frac { D_{chance} - \left (\sum _{t=1}^{T} d_{t}^{A,B} \right)/T}{D_{chance} + \left (\sum _{t=1}^{T} d_{t}^{A,B} \right)/T} \). Range: ]−1,1]. Assumption: definition of Dchance.

HAI: \(HAI = \frac {K_{\delta }^{+}}{K_{\delta }^{+} + (n_{A0} + n_{0B})/2}\). Range: [0,1]. Parameters: i) reference area; ii) δ: distance threshold.

LixnT: \(L_{ixn}T = \text {logistic}\left (\ln {\left (\frac {n_{AB}/p_{AB} + n_{00}/p_{00}}{n_{A0}/p_{A0} + n_{0B}/p_{0B} }\right)}\right)\). Parameter: reference area.

jPPA: \(jPPA = \frac {S\left \{ \bigcup \limits _{t=1}^{T-1} \left (E_{\phi ^{A}}\left (X_{t}^{A},X_{t+1}^{A}\right) \cap E_{\phi ^{B}}\left (X_{t}^{B},X_{t+1}^{B}\right) \right) \right \}}{S\left \{ \bigcup \limits _{t=1}^{T-1} \left (E_{\phi ^{A}}\left (X_{t}^{A},X_{t+1}^{A}\right) \cup E_{\phi ^{B}}\left (X_{t}^{B},X_{t+1}^{B}\right) \right) \right \}}\). Assumptions: i) every zone within an ellipse has the same probability of being transited; ii) ϕ: maximum velocity.

CSEM: \(CSEM = \frac {\max \left \{m; N_{m} >0\right \} }{T-1}\). Parameter: distance threshold.

rV: \(r_{V} = \frac {\sum _{t=1}^{T}\left (V^{A}_{t} - \bar {V}^{A}\right)\left (V^{B}_{t} - \bar {V}^{B}\right)}{\sqrt {\sum _{t=1}^{T}\left (V^{A}_{t} - \bar {V}^{A}\right)^{2}}\sqrt {\sum _{t=1}^{T}\left (V^{B}_{t} - \bar {V}^{B}\right)^{2}}}\). Range: [−1,1].

DId: \(DI_{d} = \left (\sum _{t=1}^{T-1} \left [1 - \left (\frac {\mid d_{t,t+1}^{A}-d_{t,t+1}^{B}\mid }{d_{t,t+1}^{A}+d_{t,t+1}^{B}}\right)^{\beta }\right ]\right)/(T-1)\). Parameter: β: scaling parameter.

DIθ: \(DI_{\theta } = \left (\sum _{t=1}^{T-1} \cos \left (\theta _{t,t+1}^{A} - \theta _{t,t+1}^{B}\right)\right)/(T-1)\).

DI: \(DI = \frac {\sum _{t=1}^{T-1} \cos (\theta _{t,t+1}^{A} - \theta _{t,t+1}^{B})\left [1 - \left (\frac {\mid d_{t,t+1}^{A}-d_{t,t+1}^{B}\mid }{d_{t,t+1}^{A}+d_{t,t+1}^{B}}\right)^{\beta }\right ]}{T-1}\).

Note: The formulas assume simultaneous fixes.
Notation for Table 1: \(K_{\delta }^{+} = \sum _{t=1}^{T} K_{\delta }\left (X^{A}_{t},X^{B}_{t}\right)\); T is the number of (paired) fixes in the dyad; δ is a distance-related parameter; K is a kernel function. A, B: the two individuals in the dyad; Dchance is the chance-expected distance between A and B; nAB: number of observed fixes where A and B are simultaneously in the reference area (when a subscript is 0, it represents the absence of the corresponding individual from the reference area); pAB: probability of finding A and B simultaneously in the reference area (same interpretation as for n when a subscript is 0); \(E_{\phi ^{A}}\left (X_{t}^{A},X_{t+1}^{A}\right)\) is the ellipse formed with positions Xt and Xt+1, and maximum velocity ϕ, from individual A (analogously for B); S represents the surface of the spatial object between braces; VA (resp. VB) represents the analysed motion variable of A (resp. B); \(\bar {V}^{A}\) (and \(\bar {V}^{B}\)) represent their averages; β is a scaling parameter; θ, the absolute angle; Nm is the number of m-similar consecutive segments within the series of analysed steps. The manuscript is thus organized as follows. We first describe the criteria used to evaluate the metrics as indices of dyadic joint movement. We then present the different metrics and their theoretical properties, with special attention to their dependence on parameters. Next, we define case scenarios to evaluate the practical properties of the metrics. In the last section, we discuss the overall suitability of the metrics for assessing joint movement in ecology and give some practical guidelines for their use. We categorized the desirable properties of metrics for assessing dyadic joint movement into two criteria: practical use, considered the most important one, and dependence on parameters: Practical use [50, 52, 58]: 1) A metric is useful if it is interpretable and reflects a marked property of collective behaviour.
2) It should also be sensitive to changes in patterns of joint movement (e.g. higher values for high joint movement and lower values for independence in movement). 3) Being able to attain the theoretical range of values is also important, as not doing so makes empirical values harder to interpret. C1 is therefore a three-dimensional criterion comprising interpretation, sensitivity and attainable range. Attainable range is covered in the theoretical properties section; where relevant, we highlight the difficulty or implausibility of attaining the minimum and maximum values of a metric. How to interpret each metric is also explained in that section; evidently, a metric whose range is not attainable is difficult to interpret. Sensitivity is addressed in the case-scenario section. Dependence on parameters: A metric that depends on few parameters and hypotheses is more robust and generic than one that strongly relies on many parameters and hypotheses, since the former can produce more easily comparable results and interpretations. In addition, an ideal metric can be defined in such a way that the user can easily see how a change in the values of the parameters, or in the components related to movement assumptions, conditions the metric derivations and interpretations. In the next section, we describe the assumptions underlying each metric and the parameters that need to be fixed by the user. This description will allow distinguishing metrics with user-tractable parameters from those whose dependence is not user tractable. In the following subsections the metrics are defined and their theoretical properties are described. A summary is provided in Table 1. Considering two individuals named A and B, the position of A (resp. B) at time t is denoted by \(X_{t}^{A}\) (resp. \(X_{t}^{B}\)). The distance between A at time t1 and B at time t2 will be referred to as \(d_{t_{1},t_{2}}^{A,B}\).
When the distance between two individuals is regarded at simultaneous times, this will be shortened to \(d_{t}^{A,B}\). Whenever possible, metrics that were introduced by different authors but are very similar in their definition are grouped under a unified name and a general definition. Proximity index (Prox) The proximity index (Prox in [5]) is defined as the proportion of simultaneous pairs of fixes within a distance below an ad hoc threshold (Fig. 1). Other metrics in the literature are actually analogous to Prox: the coefficient of association (Ca) [12] and the IAB index [4]. Denoting by T the number of pairs of fixes in the dyad, we propose a unified version of those metrics using a kernel K (formula 1): $$ Prox_{K,\delta} = \frac{1}{T} \sum\limits_{t=1}^{T} K_{\delta}\left(X_{t}^{A}, X_{t}^{B}\right), $$ Example of Prox for δ=3 (left panel) and Cs (right panel). Circles and squares represent locations of two different individuals. Left panel: the numbers inside as well as the arrows represent the time sequence of both tracks. Grey lines correspond to the distances between simultaneous fixes; their values are shown. At the bottom: a dummy variable indicating whether distances are below δ for each pair of simultaneous fixes, then the derived Prox and DO (average of observed distances). Right panel: grey lines represent the distances of all permuted fixes; DE is their average. δ is a distance threshold parameter. Choosing \(K_{\delta }(x,y) =\mathbbm {1}_{\{ \| x-y \| < \delta \}}\) (\(\mathbbm {1}_{\{\}}\) represents the indicator function) as a kernel leads to the Prox metric in [5], denoted by \(Prox_{\mathbbm {1},\delta }\) henceforward. Instead, choosing Kδ(x,y)= exp(−∥x−y∥2/(2δ2)) gives the IAB index. Regarding Ca, for simultaneous fixes its definition becomes exactly the same as \(Prox_{\mathbbm {1},\delta }\) (using Ca's adaptation to wildlife telemetry data shown in [38]).
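As a minimal sketch (our own helper code, not from the reviewed works), the kernel-based Prox above can be computed for two tracks of paired simultaneous fixes; both the indicator kernel of [5] and the Gaussian kernel of the IAB index [4] are shown:

```python
import numpy as np

def prox(track_a, track_b, delta, kernel="indicator"):
    """Kernel-based proximity index Prox_{K,delta}: mean kernel value
    over the T simultaneous pairs of fixes (formula 1)."""
    d = np.linalg.norm(np.asarray(track_a, float) - np.asarray(track_b, float), axis=1)
    if kernel == "indicator":                 # Prox_{1,delta} of [5]
        k = (d < delta).astype(float)
    elif kernel == "gaussian":                # IAB index of [4]
        k = np.exp(-d**2 / (2 * delta**2))
    else:
        raise ValueError("unknown kernel")
    return k.mean()

# Two short tracks: together for the first three fixes, then apart
a = [(0, 0), (1, 0), (2, 0), (3, 0)]
b = [(0, 1), (1, 1), (2, 1), (10, 0)]
print(prox(a, b, delta=3))                    # 3 of 4 pairs closer than 3 -> 0.75
```

With the indicator kernel the result is the proportion of time spent within δ of each other; the Gaussian kernel returns a smoother average proximity, as discussed above.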
Most of the proximity-related metrics are based on symmetric kernels and depend only on the distance between A and B; therefore, the formula notation (1) can be simplified as: $$ Prox_{K,\delta} = \frac{1}{T} \sum\limits_{t=1}^{T} K_{\delta}\left(X_{t}^{A}, X_{t}^{B}\right) = \frac{1}{T} K_{\delta}^{+}. $$ If the distance between two individuals is below the threshold δ during their whole tracks, \(Prox_{\mathbbm {1},\delta }\) will be 1 (and 0 in the opposite case). \(Prox_{\mathbbm {1},\delta }\) might be interpreted as the proportion of time the two individuals spent together. This interpretation is, of course, threshold dependent. The IAB index provides a smoother measure of the average proximity between two individuals along the trajectory. Proximity is thus dependent on the choice of a δ parameter and of a kernel function. Graphical examples illustrating the differences between \(K_{\delta }(x,y) =\mathbbm {1}_{\{ \| x-y \| < \delta \}}\) and Kδ(x,y)= exp(−∥x−y∥2/(2δ2)) are in Additional file 1. Coefficient of Sociality (Cs) The Coefficient of Sociality (Cs) [26] compares the mean (Euclidean) distance between simultaneous pairs of fixes (DO) against the mean distance between all permutations of all fixes (DE). $$ Cs = \frac{D_{E} - D_{O}}{D_{E} + D_{O}} = 1 - 2 \frac{D_{O}}{D_{E} + D_{O}}, $$ $$D_{O} = \left(\sum\limits_{t=1}^{T} d_{t}^{A,B} \right)/T, $$ $$D_{E} = \left(\sum\limits_{t_{1}=1}^{T}\sum\limits_{t_{2}=1}^{T}d_{t_{1},t_{2}}^{A,B}\right)/T^{2}. $$ Kenward et al. [26] stated that Cs belongs to [−1,1], and it has been used as a symmetrical index since. Nevertheless, that is not true. Cs equals 1 if and only if DO=0 and DE≠0, which occurs only when the two individuals always share the exact same locations. However, Cs equals −1 if and only if DE=0 and DO≠0, which is impossible; Cs could only asymptotically approach −1 when DO becomes much larger than DE. Cs equals 0 when DO=DE.
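As an illustrative sketch (our own code, following the definitions of DO and DE above), Cs can be computed directly from the two sets of simultaneous fixes:

```python
import numpy as np

def cs(track_a, track_b):
    """Coefficient of Sociality: Cs = (DE - DO) / (DE + DO)."""
    a = np.asarray(track_a, float)
    b = np.asarray(track_b, float)
    # DO: mean distance between simultaneous pairs of fixes
    do = np.linalg.norm(a - b, axis=1).mean()
    # DE: mean distance over all T^2 permuted pairs of fixes
    de = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).mean()
    return (de - do) / (de + do)

# Identical tracks (distinct locations): DO = 0, so Cs = 1
a = [(0, 0), (1, 0), (2, 0)]
print(cs(a, a))   # 1.0
```

Note that two parallel tracks moving in lockstep but far apart give Cs near 0, not a negative value, illustrating the interpretation issues discussed next.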
If all simultaneous fixes are very proximal but not in the same locations, Cs will approach 1 (how close to 1 depends on the value of DE, as illustrated in the right-hand side of Eq. 3). Moreover, Cs can take a negative value only if DE<DO. For Cs to take a largely negative value, the difference in the numerator should be very large compared to the sum in the denominator; in Additional file 2 we show how implausible that situation is and how sensitive it is to the length of the series. The latter makes Cs values from dyads of different lengths difficult to compare, because their real ranges of definition would differ. This fact is mentioned neither in the work that introduced the metric [26] nor in the ones that evaluated this and other metrics [38, 41], even though in those works no value lower than −0.1 was obtained. Indeed, [26] assumed that the permutation of all fixes is a way to represent locations of independent individuals. While this is questionable, some modified versions, such as the one proposed by [62], use correlated random walks as null models and simulate independent trajectories under these models to replace DE by a more realistic reference value. Thus, a generalized version of Cs would be: $$ Cs = \frac{ D_{chance} - D_{O}}{D_{chance} + D_{O}}, $$ where Dchance is defined through a user-chosen movement model for independent trajectories. The Half-weight Association Index (HAI) The Half-weight Association Index (HAI) proposed by [10] measures the proportion of fixes where individuals are close to each other (within a user-defined threshold). By that definition, HAI is exactly the same as \(Prox_{\mathbbm {1},\delta }\).
However, HAI was popularized by [2] in another form that does not consider all fixes for the computation of the metric, but uses counts with respect to a reference area (called the overlapping zone in the original paper): $$ HAI = \frac{K_{\delta}^{+}}{K_{\delta}^{+} + \frac{1}{2}(n_{A0}+n_{0B})} $$ where nAB (resp. nA0; n0B; n00) is the number of simultaneous occurrences of A and B in the reference area SAB (resp. simultaneous presence of A and absence of B; simultaneous absence of A and presence of B; simultaneous absence of A and absence of B), and where \(K_{\delta }^{+}\) is computed over the reference area. It is worth noting that the HAI adaptation proposed by [2] does not correctly account for spatial joint movement, as would a \(Prox_{\mathbbm {1},\delta }\) version constrained to the reference area; i.e. the denominator should be equal to nAB + nA0 + n0B, which is the total number of simultaneous fixes where at least one individual is in the reference area. The dependence on the definition of an overlapping zone or reference area is discussed in the following subsection dedicated to LixnT, which also relies on the definition of a static reference area. If the individuals remain together (i.e. in the reference area and closer than δ) all the time, HAI is close to 1, and 0 in the opposite case. An example of the computation of HAI under [2]'s definition is given in Fig. 2. Two examples of the derivation of LixnT and HAI. LixnT was computed using expected frequencies. HAI was computed with \(K_{\delta }(t) = \mathbbm {1}{\{ d_{t}^{A,B} < 5 \}}\). Circles and squares represent locations of two different individuals. The numbers inside as well as the arrows represent the time sequence of both tracks. Grey lines correspond to the distances between simultaneous fixes; their values are shown.
The dashed lines circle an arbitrary reference area Coefficient of Interaction (Lixn and LixnT) Minta [42] proposed a Coefficient of Interaction (Lixn) that assesses how simultaneously a reference area SAB is used and avoided by two individuals: $$ L_{ixn} = \ln{\left(\frac{n_{AB}/p_{AB} + n_{00}/p_{00}}{n_{A0}/p_{A0} + n_{0B}/p_{0B} }\right)}, $$ where pAB is the probability, under some reference null model, of finding A and B simultaneously in SAB (the same interpretation as for n when a subscript is 0; see the HAI subsection). Attraction between individuals would cause greater simultaneous use of SAB than solitary use, which would give positive values of Lixn. Conversely, avoidance would translate into negative values of Lixn, since use of SAB would be mostly solitary. A logistic transformation of the metric (LixnT) produces values between 0 (avoidance) and 1 (attraction), making the interpretation easier: $$ L_{ixn}T = logistic(L_{ixn})= \frac{1}{1+e^{-L_{ixn}}}. $$ Minta [42] proposed two different approaches for computing the associated probabilities conditionally on the fact that the reference area is known (see examples in Fig. 2 and the Table in Additional file 3). In both cases, the probabilities are estimated under the assumptions of independence in movement between the individuals and of uniform utilization of the space. Indeed, this latter assumption can be relaxed and pAB can be derived from any kind of utilization distribution (see for instance [20] for the estimation of utilization distributions). HAI and LixnT (and thus Lixn as well) rely heavily on a static reference area (either known or estimated) and on the probabilities of presence within this reference area. The static reference area could be defined, for instance, as the intersection of the respective home ranges of A and B. However, there are many approaches for estimating home ranges, each one relying on particular assumptions about the spatial behaviour of the studied populations [9].
Thus, SAB is not a simple tuning parameter. The way it is defined may completely modify the output. If the reference area is equal to the whole area of movement of the two individuals, then both the numerator and the denominator in the logarithm are equal to infinity and LixnT cannot be derived. That problem could arise for extremely mobile individuals, such as tuna, turtles and seabirds [8], or fishing vessels [6], and avoiding it would require the computation of multiple dynamic reference areas. Therefore, LixnT may be better used for specific cases where the definition of the reference area relies on a deep knowledge of the spatial behaviour of the populations. Joint Potential Path Area (jPPA) Long et al. [39] computed the relative size of the potential encounter area at each time step of two individuals' tracks. Assuming a speed limit ϕ, the potential locations visited between two consecutive fixes define an ellipse (Additional file 4). Then, the potential encounter area corresponds to the intersection between the ellipses of the two individuals (at simultaneous time steps; see Fig. 3). The overall potential meeting area is given by the spatial union of all those potential encounter areas. This area is then normalized by the surface of the spatial union of all the computed ellipses to produce the joint Potential Path Area (jPPA) metric ranging from 0 to 1 (see formula in Table 1). jPPA values close to 0 indicate no potential spatio-temporal overlap, while values close to 1 indicate a strong spatio-temporal match. Example of the derivation of the joint potential path area (when ϕ=10). Circles and squares represent locations of two different individuals; the numbers inside represent the time sequence. The grey scales of the ellipses correspond to the time intervals used for their computation: from light grey for the [1,2] interval to dark grey for the [3,4] interval. 
The black regions with white dashed borders correspond to the potential meeting areas Several issues can be discussed here. First, no movement model is assumed, and therefore the method assigns the same probability of presence to every subspace within the ellipse regions. This is clearly unrealistic, as individuals are more likely to occupy the central part of the ellipse because they cannot always move at ϕ, i.e. at maximal speed. Second, the computation of the ellipses relies strongly on the ϕ parameter. If ϕ is unrealistically small, it would be impossible to obtain the observed displacements and the ellipses could not be computed. By contrast, if ϕ is too large, the ellipses would occupy such a large area that the intersected areas would also be very large (hence a large jPPA value). Alternatively, [36] proposed a dynamic computation of ϕ as a function of the activity performed by the individual at each fix. Within this approach, additional information or knowledge (i.e. other data sources or models) would be required for the computation of ϕ. Cross sampled entropy (CSE and CSEM) Cross sampled entropy (CSE) [51] comes from the time-series analysis literature and is used for comparing pairs of motion variables (e.g. [3, 18]). It evaluates the similarity between the dynamical changes registered in two series of any given movement measure. Here we present a simplification of the CSE for simultaneous fixes and position series. A segment of track A is said to be m-similar to a segment of track B if the distance between paired fixes from A and B remains below a certain threshold during m consecutive time steps. If we define Nm as the number of m-similar segments within the series, then CSE can be defined as (the negative natural logarithm of) the ratio of Nm+1 over Nm, and might be understood as (the negative natural logarithm of) the probability for an m-similar segment to also be (m+1)-similar.
Formally, CSE is defined as: $$ \begin{aligned} CSE_{\delta}(m) &= -\ln\left\{\frac{ \sum\nolimits_{t=1}^{T-m} \mathbbm{1}{\left\{ \left(\max_{k \in [0,m]} \left| X_{t+k}^{A} - X_{t+k}^{B} \right| \right) < \delta \right\} } }{\sum\nolimits_{t=1}^{T-m} \mathbbm{1}{\left\{ \left(\max_{k \in [0,m-1]} \left| X_{t+k}^{A} - X_{t+k}^{B} \right| \right) < \delta \right\} }}\right\}\\ &=-\ln\frac{N_{m+1}}{N_{m}}, \end{aligned} $$ A large value of CSE corresponds to greater asynchrony between the two series, while a small value corresponds to greater synchrony. CSE relies on an ad hoc choice of both m and δ. In practice, it is expected that the movement series of A and B will not be constantly synchronous and that, for a large value of m, Nm could be equal to 0, in which case CSE would tend to ∞. Therefore, the largest value of m such that Nm>0, i.e. the length of the longest similar segment, could be an alternative indicator of similarity between the series (not to be confused with the longest common subsequence, LCSS; see [60]). We propose to use this measure (standardized by T−1 to get a value between 0 and 1) as an alternative index of joint movement (formula 9), which we denote by CSEM. An example of a dyad and the computation of its CSEs and CSEM is shown in Fig. 4. $$ CSEM = \frac{\max \left\{m; N_{m} >0\right\} }{T-1}, $$ Example of the derivation of CSE and CSEM when the compared features correspond to the positions of the individuals and δ=3. Circles and squares represent positions of two different individuals. The grey scales and arrows represent the time sequence of both tracks. Dotted lines represent the distances between simultaneous fixes; their values are shown. Values for all steps for the CSEM computation are also shown, with the convention that max{∅}=0.
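A minimal sketch of CSEM follows (our own code; we read max{m; Nm>0} as the length, capped at T−1, of the longest run of consecutive simultaneous fixes whose distance stays below δ, with the max{∅}=0 convention):

```python
import numpy as np

def csem(track_a, track_b, delta):
    """CSEM = max{m; N_m > 0} / (T - 1): longest delta-similar run,
    normalized to lie between 0 and 1 (formula 9)."""
    d = np.linalg.norm(np.asarray(track_a, float) - np.asarray(track_b, float), axis=1)
    longest = run = 0
    for close in d < delta:
        run = run + 1 if close else 0     # current run of within-delta fixes
        longest = max(longest, run)
    T = len(d)
    return min(longest, T - 1) / (T - 1)  # cap so a fully similar dyad gives 1

# Dyad of 5 fixes: close for the first 3, apart afterwards
a = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
b = [(0, 1), (1, 1), (2, 1), (3, 9), (4, 9)]
print(csem(a, b, delta=3))                # 3 / 4 = 0.75
```

The cap at T−1 is our convention for the boundary case where the whole series is δ-similar; it matches the standardization by T−1 proposed above.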
Correlations (rV) Pearson and Spearman correlations between variables such as longitude, latitude, distance, velocity, acceleration and turning angles from pairs of tracks have been used as measures of synchrony in several studies (e.g. [16]). Correlations are easy to interpret. Pearson correlation coefficients (Table 1) assess linear correlations, while Spearman correlation coefficients, based on rank statistics, capture any monotonic correlation. The correlation in a given variable V between the members of a dyad is denoted by rV. Dynamic Interaction (DI, DId and DIθ) Long and Nelson [37] argued that it is necessary to separate movement patterns into direction and displacement (i.e. distance between consecutive fixes, or step length), instead of computing a correlation of locations [55], which may carry a mixed effect of both components. To measure interaction in displacement, at each time step the displacements of simultaneous fixes are compared (formula 10). $$ g_{t}^{\beta} = 1- \left(\frac{\left| d_{t,t+1}^{A} - d_{t,t+1}^{B} \right|}{d_{t,t+1}^{A} + d_{t,t+1}^{B}}\right)^{\beta} $$ where β is a scaling parameter meant to give more or less weight to similarity in displacement when accounting for dynamic interaction. As β increases, \(g_{t}^{\beta }\) becomes less sensitive to larger differences in displacement. Its default value is 1. When \(d_{t,t+1}^{A} = d_{t,t+1}^{B}\), \(g_{t}^{\beta }=1\); and when the difference in displacement between A and B at time t is large, \(g_{t}^{\beta }\) approaches zero. For \(g_{t}^{\beta }\) to be 0, one (and only one) of the individuals in the dyad should not move; for a sum of \(g_{t}^{\beta }\) values to be equal to zero, at every time t one of the two individuals should not move. Interaction in direction is measured by $$ f_{t} = \cos\left(\theta_{t,t+1}^{A} - \theta_{t,t+1}^{B}\right) $$ where θt,t+1 is the direction of an individual between time t and t+1.
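The two per-step components and their averages can be sketched as follows (our own code; treating a step where neither individual moves as perfectly similar in displacement is our assumption, since \(g_{t}^{\beta}\) is then 0/0):

```python
import numpy as np

def dynamic_interaction(track_a, track_b, beta=1.0):
    """Return (DI_d, DI_theta, DI): averages of g_t^beta, f_t and g_t^beta * f_t."""
    step_a = np.diff(np.asarray(track_a, float), axis=0)
    step_b = np.diff(np.asarray(track_b, float), axis=0)
    d_a = np.linalg.norm(step_a, axis=1)               # displacements of A
    d_b = np.linalg.norm(step_b, axis=1)               # displacements of B
    with np.errstate(invalid="ignore"):
        g = 1 - (np.abs(d_a - d_b) / (d_a + d_b)) ** beta
    g = np.where(d_a + d_b == 0, 1.0, g)               # assumption: both static -> g = 1
    f = np.cos(np.arctan2(step_a[:, 1], step_a[:, 0])  # absolute angles theta
               - np.arctan2(step_b[:, 1], step_b[:, 0]))
    return g.mean(), f.mean(), (g * f).mean()

# Parallel, equally paced tracks: fully cohesive movement
a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 1), (1, 1), (2, 1)]
print(dynamic_interaction(a, b))   # DI_d = DI_theta = DI = 1
```

Reversing one track's direction flips the sign of f_t and hence of DIθ and DI, while DId is unchanged, which is the separation of components argued for above.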
ft is equal to 1 when movement segments have the same orientation, 0 when they are perpendicular, and −1 when they go in opposite directions. Long and Nelson [37] proposed three indices of dynamic interaction: 1) DId, dynamic interaction in displacement (the average of all \(g_{t}^{\beta }\)); 2) DIθ, dynamic interaction in direction (the average of all ft); and 3) DI, overall dynamic interaction, defined as the average of \(g_{t}^{\beta } \times f_{t}\) (Table 1). DId ranges from 0 to 1, DIθ from −1 to 1, and DI from −1 (opposing movement) to 1 (cohesive movement). Figure 5 shows an example of the three indices. Example of a dyad for which correlations in longitude, latitude and an average of both (rLon, rLat and rLonlat, respectively), DId, DIθ and DI are derived. Circles and squares represent locations of two different individuals; the numbers inside represent the time sequence. Displacement lengths and absolute angle values are also shown Conclusions on the theoretical properties of the metrics Practical use (C1): While each metric concerns a concrete aspect of joint-movement behaviour, some of them, such as Cs and DI, are harder to interpret. DI mixes the coordination in displacement with that in direction. When DI is close to 1, it is certainly explained by high values in both components. When it is close to −1, it is an indication of overall high displacement coordination but in opposite directions. With values around zero, however, it is impossible to know whether this is because of displacement, direction or both. For Cs, because obtaining values close to −1 is extremely rare, values around zero and, more particularly, slightly negative values are difficult to interpret. In addition, the minimum attainable value depends on the length of the series, which is likely to vary from dyad to dyad (Additional file 2). Dependence on parameters (C2): Almost every metric depends on the ad hoc definition of a parameter or component, as summarized in Table 1.
This is consistent with the fact that, since there is no consensus on the definition of behaviour [34], and much less on that of collective behaviour, its study depends heavily on the definition that the researcher gives to it. It should be noted that behind each choice of a parameter value there is also an underlying assumption (e.g. that a distance below a δ value means proximity); the difference is that parameters can be tuned, and a variety of values can easily be tested. HAI and LixnT make the critical assumption of a static reference area, and its definition, which may be tricky for highly mobile individuals, is a key issue for the computation of both metrics. On the other hand, rV and DIθ are the only metrics that do not depend on parameter tuning or assumptions for their derivation, except for the assumptions of correlations being linear and of linear movement between two successive positions when deriving directions, respectively. In this section we used schematic, simple and contrasting case scenarios to evaluate the ability of the metrics to assess joint movement, in terms of proximity and coordination. To build the case scenarios, we considered three levels of dyad proximity (high, medium and low); coordination was decomposed into two aspects, direction (same, independent and opposite) and speed (same or different). Eighteen case scenarios were thus built, with one example of a dyad per scenario (Fig. 6; metrics in Additional file 5). The dyads for each case scenario were deliberately composed of a small number of fixes (∼10 simultaneous fixes, as in [37]) to facilitate the interpretation of the metric values and the graphical representation of the arbitrarily constructed tracks (online access to the tracks in the github repository; see the Availability of data and materials section). To assess the sensitivity of the metrics to changes in patterns of proximity and coordination, the case scenarios were grouped according to the categories in Table 2.
One example of a dyad for each case scenario representing contrasting patterns of proximity and coordination (in direction and speed, CDirection and CSpeed, respectively). Numbers correspond to the scenario ID in Table 2. Solid lines represent the two trajectories, and the solid points correspond to the start of the trajectories. The black dashed circumferences represent arbitrary reference areas; two circumferences correspond to the absence of a common reference area Due to the simplicity of its interpretation, Prox was defined as \(Prox_{\mathbbm {1},\delta }\). Three distance thresholds δ of 1, 2 and 3 distance units were used for Prox, HAI and CSEM, denoted for instance Prox1, Prox2 and Prox3. For Cs, the original definition (Eq. 3) was used. For jPPA, ϕ was arbitrarily fixed to 10. Regarding dynamic interaction, β was fixed to 1. The V variables for Pearson correlations (Table 1) were longitude (rLon), latitude (rLat) and speed (rSpeed). An average of the correlations in longitude and latitude, denoted by rLonlat, was also computed. Boxplots of each metric were derived for each proximity and coordination category (Figs. 7, 8 and 9). Boxplots of each metric by category of proximity. Green, orange and purple correspond to case scenarios of high, medium and low proximity. For each category, the solid horizontal bar corresponds to the median, the lower and upper limits of the box correspond to the first and the third quartiles, and the solid vertical line joins the minimum to the maximum values. The green and purple boxplots are shifted to the left and right, respectively, to distinguish them better in case of overlap. X-axis: the metrics ranging from 0 to 1 are on the left (up to DId) while those ranging from −1 to 1 are on the right Boxplots of each metric by category of direction coordination. Green, orange and purple correspond to case scenarios of same, independent and opposite direction.
For each category, the solid horizontal bar corresponds to the median, the lower and upper limits of the box correspond to the first and the third quartiles, and the solid vertical line joins the minimum to the maximum values. The green and purple boxplots are shifted to the left and right, respectively, to distinguish them better in case of overlap. X-axis: the metrics ranging from 0 to 1 are on the left (up to DId) while those ranging from −1 to 1 are on the right Boxplots of each metric by category of speed coordination. Green and orange correspond to case scenarios of same and different speed. For each category, the solid horizontal bar corresponds to the median, the lower and upper limits of the box correspond to the first and the third quartiles, and the solid vertical line joins the minimum to the maximum values. The green boxplots are shifted to the left to distinguish them better in case of overlap. X-axis: the metrics ranging from 0 to 1 are on the left (up to DId) while those ranging from −1 to 1 are on the right The values taken by Prox, jPPA, CSEM and, to a lesser degree, Cs showed sensitivity to the level of proximity (Fig. 7). Conversely, no association was revealed between the proximity scenarios and the metrics based on correlation, dynamic interaction and reference-area occupation. Changes in direction were reflected in the values taken by the correlation metrics on location (rLonlat, rLon and rLat) and two dynamic interaction metrics, DI and DIθ (Fig. 8). Cs took lower values in scenarios of opposite direction, but independent and same direction scenarios showed no distinction for this metric. High correlation in speed was found for scenarios of opposite and same direction, while a large variability was found when direction was independent. rSpeed showed differences when direction was independent between dyads, but no distinction was captured by the metric between same and opposite direction scenarios.
The other metrics did not show distinguishable patterns related to changes in direction coordination. Concerning coordination in speed, the most sensitive metric was DId, which measures similarity in the distances covered by individuals at simultaneous fixes (Fig. 9). rSpeed took a wide range of values when speed was not coordinated, while it was equal to 1 when perfectly coordinated. DId is more sensitive to changes in the values of speed (similar to step length because of the regular step units) than rSpeed, which characterizes variations in the same sense (correlation) rather than correspondence in values. HAI and LixnT showed slight differences in their ranges of values with changes in speed-coordination scenarios. When analysing combined categories of proximity and speed coordination, and proximity and direction coordination, less distinctive patterns were found, probably due to the higher number of categories, each containing fewer observations (Figure in Additional file 6). Overall, Prox, jPPA, CSEM, rLonlat, rSpeed, DId, DIθ and DI were highly sensitive to changes in patterns of either proximity or coordination. For proximity scenarios, the variance of some metrics within each category was also sensitive to the δ chosen; i.e. for larger δ, the variance of Prox and CSEM decreased in high-proximity cases, while it increased for low-proximity cases. This pattern did not hold for HAI, probably due to the strong dependence of this metric on the arbitrary choice of the reference area. Cs showed a slight sensitivity to changes in direction and proximity scenarios, although the values taken for each type of case scenario did not show a clear separation. Table 3 summarizes the theoretical and case-scenario analyses. Most metrics reflected marked properties of dyadic joint movement, evidenced both theoretically and through the case-scenario assessment. The exceptions were Cs, HAI and LixnT.
Cs was sensitive to the null model for the distance expected by chance (Dchance; formula 4); it did not attain its whole range of definition, turned out to be asymmetric and dependent on the length of the series (Additional file 2), and was less sensitive than the other metrics to changes in patterns of joint movement. Perhaps a change in the null model for Dchance could improve Cs's power to assess joint movement, though the new null model would need to be justified. HAI and LixnT, dependent on the reference area definition, were even less sensitive to changes in joint movement patterns. This supports our earlier statement that LixnT and HAI should only be used when a reference area exists and is known. Alternatively, Prox works as a simpler metric and is highly sensitive to changes in proximity. The only drawback of Prox is the need to choose a distance threshold parameter, ideally based on prior knowledge of the spatial dynamics of the population. Otherwise, a set of values can be tested, as shown here. jPPA presents the advantage of not requiring knowledge of a reference area, but it still relies on assumptions of equal probability of presence within an ellipse, which strongly depends on a ϕ parameter whose tuning is not obvious.
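Prox's simplicity can be illustrated with a short sketch. Assuming, for illustration only, that Prox is computed as the proportion of simultaneous fixes whose inter-individual distance falls below the threshold δ (the kernel-based variants mentioned in Additional file 1 are omitted, and `prox` and its arguments are hypothetical names):

```python
import numpy as np

def prox(xy1, xy2, delta):
    """Proportion of simultaneous fixes closer than the distance threshold delta.

    xy1, xy2: (n, 2) arrays of planar coordinates at simultaneous fixes.
    """
    dist = np.hypot(xy1[:, 0] - xy2[:, 0], xy1[:, 1] - xy2[:, 1])
    return float(np.mean(dist < delta))
```

The single parameter δ is exactly the "distance threshold" discussed above: sweeping it over a set of values, as done in the case scenarios, shows how the metric's sensitivity depends on that choice.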
Table 3 Evaluation of the two criteria for each metric (C1: practical use, covering attainable range, interpretation for joint movement, and sensitivity to proximity (P), coordination in direction (Cdirection) and coordination in speed (Cspeed); C2: dependence on parameters/assumptions).
- Prox: from always distant (0) to always close (1); user tractable (ad hoc definition of distance threshold).
- Cs: difficult to interpret — i) negative values close to 0 are difficult to interpret, ii) series-length dependent; not user tractable (null hypothesis of independent movement).
- HAI: from always distant and out of SAB for at least one individual (0) to always close and in SAB (1); not user tractable (reference area and distance threshold).
- LixnT: same as HAI; not user tractable (reference area).
- jPPA: from no (0) to permanent (1) potential overlap; user tractable (maximum velocity).
- CSEM: from highly synchronous (0) to asynchronous (1); user tractable (distance threshold).
- rV: from anticorrelated (-1) to correlated (1); high sensitivity*; no dependence on parameters.
- DId: from opposite (-1) to cohesive (1) movement in displacement; user tractable (weighting coefficient for similarity in displacement).
- DIθ: from opposite (-1) to cohesive (1) movement in azimuth.
- DI: from opposite (-1) to cohesive (1) movement in both mixed displacement and azimuth effects.
Note: P = proximity, Cspeed = coordination in speed, Cdirection = coordination in direction, SAB = reference area. *Depending on v (see section on case scenarios). Text in bold corresponds to positive attributes.
CSEM evaluates the similarity between the dynamical changes in movement patterns within a δ bandwidth and, because of that, was expected to be more sensitive to changes in proximity than in coordination. It remains to be assessed whether using other variables for deriving CSEM (i.e. using the generic definition in [51]) could make it more sensitive to coordination than to proximity. As with Prox, it is in the hands of the user to tune the threshold parameter.
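The cross sampled entropy underlying CSEM counts templates from one series that match templates of the other within a tolerance. A minimal sketch following the generic definition in [51] is below; the standardization step that turns CSE into CSEM is omitted, and `cross_sample_entropy` with its parameter names (`m`, `r`) are illustrative, not this paper's exact implementation:

```python
import numpy as np

def cross_sample_entropy(u, v, m=2, r=0.2):
    """Cross sampled entropy between series u and v.

    Counts templates of length m (and m + 1) from u matching templates of v
    within tolerance r (Chebyshev distance); CSE = -ln(A / B).
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    n = min(len(u), len(v))

    def matches(mm):
        count = 0
        for i in range(n - mm):
            for j in range(n - mm):
                if np.max(np.abs(u[i:i + mm] - v[j:j + mm])) <= r:
                    count += 1
        return count

    a, b = matches(m + 1), matches(m)
    # low values indicate synchrony; no matches at all yields infinity
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

Since every (m + 1)-length match is also an m-length match, A ≤ B and the result is non-negative, with low values for highly synchronous series, consistent with the 0-to-asynchronous interpretation given in Table 3. The quadratic loop over template pairs also makes the cost reported for CSEM on long series unsurprising.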
Because we were using locations as the analysed series (so the dynamical changes assessed were in fact changes in distance), we used exactly the same threshold values as for Prox. By contrast, correlations in location (rLon, rLat, rLonlat) did show sensitivity to changes in coordination, as expected. The same occurred with DIθ and DI. Correlation in speed was sensitive to changes in both coordination components, showing high variance when there was no coordination (independent direction or speed). DId, on the other hand, was only sensitive to changes in speed. Because the time step was regular, identical speed was equivalent to identical covered distance (at simultaneous fixes), which explains why DId was equal to 1 in those scenarios. While DI behaved more similarly to DIθ, its definition makes it impossible to separate the effects of coordination in displacement and in azimuth, which makes it more difficult to interpret than DId and DIθ taken independently. We also analysed the computational cost associated with these metrics. We simulated 50,000 dyads with trajectories following a Brownian motion, each composed of 100 fixes. Using a parallelization procedure, we found low CPU times for all metrics (<1 s) except jPPA (∼68 s). CPU times for jPPA and CSEM increased when we increased the number of fixes to 1000, to ∼161 and ∼94 s, respectively. It should be noted that for jPPA, the areas of intersection and union of the ellipses were approximated by grid cells, so for smaller cell sizes (i.e. more accurate jPPA estimation), the computational cost would increase. Researchers with long series of trajectories and a large number of dyads should take this into consideration (results for the computational cost and more details on its calculation are in Additional file 7). Although this review is directed at trajectory data (i.e.
time series of locations that allow for movement path reconstruction) and the metrics presented here were defined for simultaneous fixes at regular time steps, technically speaking, some of these metrics could be computed based only on the identification of individuals simultaneously observed in a certain area (e.g. LixnT). These cases, which may be extremely sensitive to the spatial accuracy and the time intervals between observed fixes, are outside the scope of this review. For the case scenarios built to illustrate the metrics, we assumed that the granularity was correct, i.e. that the temporal and spatial resolution of the data were coherent with respect to the dyadic behavioural patterns under study. Likewise, for practical uses of the metrics, researchers should 1) make sure that the spatiotemporal data they are analysing allow reconstructing the movement paths of a dyad, and 2) make sure that the sampled (discretised) versions of these paths are characterized by locations estimated with high precision, with time steps small enough that movement between two points can be assumed to be linear, so that the derived distances, speeds and turning angles are reliable. Further discussions on the importance of scale and granularity in the analysis of movement patterns can be found in [14, 30, 31]. We expected to obtain a binary classification of the metrics into proximity and coordination, based on the theoretical and case scenario evaluations. This was not so straightforward, and we ended up instead with a 3-dimensional space representation (Fig. 10). Prox and CSEM are the most proximity-like indices. jPPA would be the third one, due to its sensitivity to changes in proximity in the case scenario evaluation. Cs would be somewhere between Prox and direction coordination because it showed some sensitivity to both. HAI and LixnT are almost at the origin but slightly related to speed coordination.
Theoretically, both metrics should account for proximity, since when two individuals are together in the same area, they are expected to be at a relative proximity; in practice, this was not reflected in sensitivity to proximity for HAI and LixnT. Still, HAI is represented in the graphic slightly above LixnT, since its formulation specifically accounts for proximity in solitary use of the reference area. They are both graphically associated with the speed-coordination axis because the case scenario results reflected that being in the same area at the same time requires some degree of synchrony. DId was the metric most sensitive to speed coordination, followed by rSpeed. DIθ and rLonlat are the most strongly linked to direction coordination, followed by DI, which is also related to speed coordination. A principal component analysis (PCA) using the values obtained for the case scenarios gave very similar results to those in Fig. 10 (Additional file 8), but this schematic representation is more complete because: 1) the theoretical and case-scenario assessments were both taken into account; and 2) the PCA was performed without LixnT and HAI, which had missing values for case scenarios with no common reference area (data imputation as in [25] was not appropriate for this case). [Fig. 10: Representation of metrics in terms of their distance relative to proximity and coordination.] Figure 10 and Table 3 could be used as guidelines for choosing the right metrics depending on the user's case study. For instance, in an African lion joint-movement study [4], proximity was the focus; in that case, the IAB (Prox) metric was used. For similar studies several proximity-related metrics could be chosen; the choice would depend on the assumptions that the researcher is willing to make. In other cases, researchers may want to assess collective behaviour in tagged animals (e.g.
birds or marine mammals) that do not remain in close proximity during their foraging or migration trips. Then, the collective behaviour component that could be evaluated would be coordination. Whether it is in direction or speed would depend on the researcher's hypotheses. Coordination, or synchrony, has already been observed in animal species such as northern elephant seals (e.g. [17]) and sea turtles (e.g. [46]), among others. The use of the metrics presented here would allow a quantification of the pairwise behavioural patterns observed, a first step towards a quantitative analysis of the factors explaining those behaviours (e.g. physiological traits, personality or environmental conditions). The metrics presented here are applicable to any organism with tracking data (not necessarily georeferenced). If the aim is to evaluate all three joint-movement dimensions, we advise considering, for each dimension, at least one metric that is highly sensitive to it, rather than a metric that is weakly related to two or three. The complementarity of the metrics (i.e. a multivariate approach) has not been studied here and should be the focus of a future study. The assessment of a 'lagged-follower' behaviour, where one individual follows the other, was outside the scope of this work and should be addressed in the future. The study of this type of interaction is rather challenging, since the lag in the following behaviour is probably not static and could vary between tracks and also within tracks. A few works use entropy-based measures similar to CSE (transfer entropy [54] or causation entropy [40]) to measure how much the movement dynamics of one individual (called the source individual, or the leader) influences the transition probabilities in the movement dynamics of another individual [45, 61].
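The transfer entropy mentioned here [54] quantifies how much knowing the source's past improves prediction of the target's next state beyond the target's own past. A minimal plug-in estimator for discrete symbol sequences with first-order histories is sketched below; this is an illustration of the general idea, not the estimators used in [45, 61], and `transfer_entropy` is a hypothetical name:

```python
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Plug-in transfer entropy TE(source -> target), in bits,
    for equal-length discrete symbol sequences, first-order histories."""
    triples = Counter(zip(target[1:], target[:-1], source[:-1]))
    hist_pairs = Counter(zip(target[:-1], source[:-1]))
    self_pairs = Counter(zip(target[1:], target[:-1]))
    self_singles = Counter(target[:-1])
    n = len(target) - 1
    te = 0.0
    for (t_next, t_now, s_now), c in triples.items():
        p_joint = c / n                                              # p(t_next, t_now, s_now)
        p_full = c / hist_pairs[(t_now, s_now)]                      # p(t_next | t_now, s_now)
        p_self = self_pairs[(t_next, t_now)] / self_singles[t_now]   # p(t_next | t_now)
        te += p_joint * log2(p_full / p_self)
    return te
```

Because the estimate is a conditional mutual information of the empirical distribution, it is non-negative and is asymmetric in source and target, which is what makes it suitable for leader-follower questions: a lagged copy of the source yields a high value, while an independent target yields a value near zero.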
Some other works have focused on this type of interaction by regarding it as a delay between trajectories and transforming the problem into one of similarity between trajectories, where one is delayed relative to the other [22, 27]. Metrics based on the Fréchet distance [1, 21] or the edit distance [33] are common choices for measuring those similarities in computer science studies. In terms of computational cost, assessing following behaviour should be much more expensive than assessing joint movement. This study focused on dyadic joint movement. The next step would be to identify metrics to characterize collective behaviour with more than two individuals. A pragmatic approach to investigate this more complex issue could be to identify, within large groups of individuals, the ones that move together for each given segment of trajectories (as dyads, triads or larger groups), and to study those dynamics. A similar procedure could then be used to spot following behaviour and leadership. Movement could then be regarded as spatio-temporal sequences of joint, following, hybrid and independent movement with one or more partners. Dhanjal-Adams et al. [15] present a hidden Markov modelling approach to identify joint-movement states using metrics of direction and amplitude of flight synchronization in long-distance migratory birds (and assuming proximity between individuals). A similar approach could be used to identify more stages of collective behaviour, using several metrics as observed variables in the movement process. Finally, a robust assessment of the different patterns of collective behaviour (e.g. proximal joint movement, coordinated movement, follower movement) at multiple scales would provide realistic inputs for including group dynamics into movement models, which until now have relied on strong assumptions about collective behaviour in the few cases where it was taken into account [23, 29, 44, 47, 53], mostly due to the lack of understanding of collective motion.
The increasing availability of telemetry data for movement studies allows exploring patterns of collective movement. Here we reviewed metrics for assessing dyadic joint movement. We showed that some of the metrics were better suited for assessing proximity, others for coordination in direction or speed, and some others were not very sensitive to any of those aspects of joint movement. The results shown in this review offer guidelines to readers for choosing metrics depending on which aspect of joint movement they would like to describe or incorporate into movement models. This study also contributes to highlighting the movement assumptions behind each metric, as well as the parameters that need tuning. Users need to be able to decide whether these assumptions are realistic for their case studies, and to understand the consequences of their choice of parametrization. An accurate interpretation of movement patterns (here, dyadic movement) relies on understanding the tools used for obtaining those patterns, in this case, the metrics. Though the present work only concerns dyadic movement, further studies should address the identification of larger groups moving together, where the size of the group would change in time, and metrics that would account for more than two individuals.
Abbreviations: C1: Practical use criterion; C2: Dependence on parameters criterion; Coefficient of association; Cs: Coefficient of sociality; CSE: Cross sampled entropy; CSEM: Standardized cross sampled entropy; DI: Dynamic interaction index; DIθ: Dynamic interaction in direction; DId: Dynamic interaction in displacement; HAI: Half-weight association index; jPPA: Joint potential path area; Lixn: Coefficient of interaction; LixnT: Transformed coefficient of interaction; Prox: Proximity index; rV: Correlation (on variable V); SAB: Reference area. The authors would like to warmly thank Mathieu Basille for constructive comments on the manuscript, Angela Blanchard for proofreading and Criscely Lujan for help with GitHub. This work has received funding from the French region Pays de la Loire and the research project COSELMAR, the French research network PathTIS and the European Union's Horizon 2020 research and innovation programme under grant agreement No 633680, Discardless. Code with an example, and the dyad tracks arbitrarily created for the case scenarios, are accessible from: https://github.com/rociojoo/MetricsDyadJM/ All analyses were performed in R [48]. Distances between fixes were computed using the pdist package [63]. For jPPA calculations, the ellipses were computed as in [35], and intersection and union areas were approximated by gridding the space via the packages polyclip [24] and geoR [49]. For LixnT and HAI, SDMTools [59] was used to identify points in and out of the reference area. Parallel calculations to run the simulations and measure CPU time were done using the packages parallel and pbmcapply [28]. The PCA in Additional file 8 was performed with the FactoMineR package [32]. RJ conceived the research idea and performed the case scenario evaluation. The review of the theoretical properties of the metrics was mostly performed by RJ and MPE, as was the computational cost assessment. RJ led the manuscript writing, but all authors actively participated in discussions and editing.
All authors read and approved the final manuscript. None of the authors have any competing interests in the manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Additional file 1 Graphical examples of two kernel functions for Proximity metrics. (PDF 89 kb) Additional file 2 Cs1 requirements to take large negative values. (PDF 359 kb) Additional file 3 Lixn: Table for computing probabilities. (PDF 119 kb) Additional file 4 How to define the ellipse of the potential path area. (PDF 62 kb) Additional file 5 Metrics derived for each case scenario. (PDF 63 kb) Additional file 6 Summary figures for proximity-speed and proximity-coordination scenarios. (PDF 59 kb) Additional file 7 Computational cost of each metric. (PDF 91 kb) Additional file 8 Principal component analysis of the metrics for the case scenarios. (PDF 190 kb) Author affiliations: Department of Wildlife Ecology and Conservation, Fort Lauderdale Research and Education Center, University of Florida, 3205 College Avenue, Davie, Florida, 33314, USA; IFREMER, Ecologie et Modèles pour l'Halieutique, BP 21105, Nantes Cedex 03, 44311, France; Univ Rennes, Agrocampus Ouest, CNRS, IRMAR - UMR 6625, F-35000, Rennes, France; MARBEC, IRD, Ifremer, CNRS, Univ Montpellier, Sète, France. Aronov B, Har-Peled S, Knauer C, Wang Y, Wenk C. Frechet Distance for Curves, Revisited. Algorithms - ESA 2006. 2006:52–63. arXiv:1504.07685. Accessed 03 2019. Atwood TC, Weeks HP.
Spatial home-range overlap and temporal interaction in eastern coyotes: the influence of pair types and fragmentation. Can J Zool. 2003; 81(9):1589–97. https://doi.org/10.1139/z03-144. Barnabe L, Volossovitch A, Duarte R, Ferreira AP, Davids K. Age-related effects of practice experience on collective behaviours of football players in small-sided games. Hum Mov Sci. 2016; 48:74–81. https://doi.org/10.1016/j.humov.2016.04.007. Benhamou S, Valeix M, Chamaillé-Jammes S, Macdonald DW, Loveridge AJ. Movement-based analysis of interactions in African lions. Anim Behav. 2014; 90:171–80. https://doi.org/10.1016/j.anbehav.2014.01.030. Bertrand MR, DeNicola AJ, Beissinger SR, Swihart RK. Effects of parturition on home ranges and social affiliations of female white-tailed deer. J Wildl Manag. 1996; 60(4):899–909. Bertrand S, Diaz E, Lengaigne M. Patterns in the spatial distribution of Peruvian anchovy (Engraulis ringens) revealed by spatially explicit fishing data. Prog Oceanogr. 2008; 79(2-4):379–89. https://doi.org/10.1016/j.pocean.2008.10.009. Biro D, Sasaki T, Portugal SJ. Bringing a Time-Depth Perspective to Collective Animal Behaviour. Trends Ecol Evol. 2016; 31(7):550–62. https://doi.org/10.1016/j.tree.2016.03.018. Block BA, Jonsen ID, Jorgensen SJ, Winship AJ, Shaffer SA, Bograd SJ, Hazen EL, Foley DG, Breed GA, Harrison A-L, Ganong JE, Swithenbank A, Castleton M, Dewar H, Mate BR, Shillinger GL, Schaefer KM, Benson SR, Weise MJ, Henry RW, Costa DP. Tracking apex marine predator movements in a dynamic ocean. Nature. 2011; 475:86–90. https://doi.org/10.1038/nature10082. Börger L, Dalziel BD, Fryxell JM. Are there general mechanisms of animal home range behaviour? A review and prospects for future research. Ecol Lett. 2008; 11(6):637–50.
https://doi.org/10.1111/j.1461-0248.2008.01182.x. Brotherton PN, Pemberton JM, Komers PE, Malarky G. Genetic and behavioural evidence of monogamy in a mammal, Kirk's dik-dik (Madoqua kirkii). Proc Biol Sci R Soc. 1997; 264(1382):675–81. https://doi.org/10.1098/rspb.1997.0096. Camazine S, Deneubourg J-L, Franks NR, Sneyd J, Bonabeau E, Theraula G. Self-organization in Biological Systems, vol 7. United States of America: Princeton University Press; 2003. Cole LC. The Measurement of Interspecific Association. Ecology. 1949; 30(4):411–24. Conradt L, List C. Group decisions in humans and animals: A survey. Philos Trans R Soc B Biol Sci. 2009; 364(1518):719–42. https://doi.org/10.1098/rstb.2008.0276. De Solla SR, Bonduriansky R, Brooks RJ. Eliminating autocorrelation reduces biological relevance of home range estimates. J Anim Ecol. 1999; 68(2):221–34. https://doi.org/10.1046/j.1365-2656.1999.00279.x. Dhanjal-Adams KL, Bauer S, Emmenegger T, Hahn S, Lisovski S, Liechti F. Spatiotemporal Group Dynamics in a Long-Distance Migratory Bird. Curr Biol. 2018; 28(17):2824–303. https://doi.org/10.1016/j.cub.2018.06.054. Accessed 14 Nov 2018. Dodge S, Weibel R, Forootan E. Revealing the physics of movement: Comparing the similarity of movement characteristics of different types of moving objects. Comput Environ Urban Syst. 2009; 33(6):419–34. https://doi.org/10.1016/j.compenvurbsys.2009.07.008. Duarte CM, Riker P, Srinivasan M, Robinson PW, Gallo-Reynoso JP, Costa DP. Sonification of Animal Tracks as an Alternative Representation of Multi-Dimensional Data: A Northern Elephant Seal Example. Frontiers Mar Sci. 2018; 5. https://doi.org/10.3389/fmars.2018.00128. Accessed 10 June 2018. Duarte R, Araújo D, Correia V, Davids K, Marques P, Richardson MJ.
Competing together: Assessing the dynamics of team-team and player-team synchrony in professional association football. Hum Mov Sci. 2013; 32(4):555–66. https://doi.org/10.1016/j.humov.2013.01.011. Duranton C, Gaunet F. Behavioural synchronization from an ethological perspective: overview of its adaptive value. Adapt Behav. 2016. https://doi.org/10.1177/1059712316644966. Fleming CH, Fagan WF, Mueller T, Olson KA, Leimgruber P, Calabrese JM. Estimating where and how animals travel: an optimal framework for path reconstruction from autocorrelated tracking data. Ecology. 2016; 97(3):576–82. Frechet M. Sur L'Ecart de Deux Courbes et Sur Les Courbes Limites. Trans Am Math Soc. 1905; 6(4):435–49. Giuggioli L, McKetterick TJ, Holderied M. Delayed Response and Biosonar Perception Explain Movement Coordination in Trawling Bats. PLoS Comput Biol. 2015; 11(3):1–21. https://doi.org/10.1371/journal.pcbi.1004089. Haydon DT, Morales JM, Yott A, Jenkins DA, Rosatte R, Fryxell JM. Socially informed random walks: incorporating group dynamics into models of population spread and growth. Proc Biol Sci R Soc. 2008; 275(1638):1101–9. https://doi.org/10.1098/rspb.2007.1688. Johnson A. Polyclip: Polygon Clipping. 2015. Ported to R by Adrian Baddeley and Brian Ripley. R package version 1.3-2. http://CRAN.R-project.org/package=polyclip. Josse J, Husson F, Pagès J. Gestion des données manquantes en Analyse en Composantes Principales. J Soc Fr Stat. 2009; 150(2):28–51. Kenward RE, Marcström V, Karlbom M. Post-nestling behaviour in goshawks, Accipiter gentilis: II. Sex differences in sociality and nest-switching. 1993. https://doi.org/10.1006/anbe.1993.1199.
Konzack M, McKetterick T, Ophelders T, Buchin M, Giuggioli L, Long J, Nelson T, Westenberg MA, Buchin K. Visual analytics of delays and interaction in movement data. Int J Geogr Inf Sci. 2017; 31(2):320–45. https://doi.org/10.1080/13658816.2016.1199806. Kuang K, Napolitano F. Pbmcapply: Tracking the Progress of Mc*pply with Progress Bar. 2018. R package version 1.3.0. https://CRAN.R-project.org/package=pbmcapply. Langrock R, Hopcraft JGC, Blackwell PG, Goodall V, King R, Niu M, Patterson TA, Pedersen MW, Skarin A, Schick RS. Modelling group dynamic animal movement. Methods Ecol Evol. 2014; 5(2):190–9. https://doi.org/10.1111/2041-210X.12155. arXiv:1308.5850v1. Laube P, Purves RS. How fast is a cow? Cross-Scale Analysis of Movement Data. Trans GIS. 2011; 15(3):401–18. https://doi.org/10.1111/j.1467-9671.2011.01256.x. Laube P, Dennis T, Forer P, Walker M. Movement beyond the snapshot - Dynamic analysis of geospatial lifelines. Comput Environ Urban Syst. 2007; 31(5):481–501. https://doi.org/10.1016/j.compenvurbsys.2007.08.002. Lê S, Josse J, Husson F. FactoMineR: A package for multivariate analysis. J Stat Softw. 2008; 25(1):1–18. https://doi.org/10.18637/jss.v025.i01. Levenshtein VI. Binary codes capable of correcting deletions, insertions, and reversals. Sov Phys Dokl. 1966; 10(8):707–10. Levitis DA, Lidicker WZ, Freund G. Behavioural biologists don't agree on what constitutes behaviour. Anim Behav. 2009; 78(1):103–10. https://doi.org/10.1016/j.anbehav.2009.03.018. Long J. wildlifeDI: Calculate Indices of Dynamic Interaction for Wildlife Telemetry Data. 2014. R package version 0.2. https://CRAN.R-project.org/package=wildlifeDI. Long J, Nelson T. Home range and habitat analysis using dynamic time geography. J Wildl Manag.
2015; 79(3):481–90. https://doi.org/10.1002/jwmg.845. Long JA, Nelson TA. Measuring Dynamic Interaction in Movement Data. Trans GIS. 2013; 17(1):62–77. https://doi.org/10.1111/j.1467-9671.2012.01353.x. Long JA, Nelson TA, Webb SL, Gee KL. A critical examination of indices of dynamic interaction for wildlife telemetry studies. J Anim Ecol. 2014; 83:1216–33. https://doi.org/10.1111/1365-2656.12198. Long JA, Webb SL, Nelson TA, Gee KL. Mapping areas of spatial-temporal overlap from wildlife tracking data. Mov Ecol. 2015; 3(1):38. https://doi.org/10.1186/s40462-015-0064-3. Lord WM, Sun J, Ouellette NT, Bollt EM. Inference of Causal Information Flow in Collective Animal Behavior. IEEE Trans Mol Biol Multi-Scale Commun. 2016; 2(1):107–16. https://doi.org/10.1109/TMBMC.2016.2632099. Accessed 21 Nov 2018. Miller JA. Using Spatially Explicit Simulated Data to Analyze Animal Interactions: A Case Study with Brown Hyenas in Northern Botswana. Trans GIS. 2012; 16(3):271–91. https://doi.org/10.1111/j.1467-9671.2012.01323.x. Minta SC. Tests of Spatial and Temporal Interaction Among Animals. Ecol Appl. 1992; 2(2):178–88. Nathan R, Getz WM, Revilla E, Holyoak M, Kadmon R, Saltz D, Smouse PE. A movement ecology paradigm for unifying organismal movement research. PNAS. 2008; 105(49):19052–9. Niu M, Blackwell PG, Skarin A. Modeling interdependent animal movement in continuous time. Biometrics. 2016; 72(2):315–24. https://doi.org/10.1111/biom.12454. Orange N, Abaid N. A transfer entropy analysis of leader-follower interactions in flying bats. Eur Phys J Spec Top. 2015; 224(17):3279–93. https://doi.org/10.1140/epjst/e2015-50235-9.
Accessed 14 Nov 2018. Plot V, de Thoisy B, Blanc S, Kelle L, Lavergne A, Roger-Bérubet H, Tremblay Y, Fossette S, Georges J-Y. Reproductive synchrony in a recovering bottlenecked sea turtle population. J Anim Ecol. 2012; 81(2):341–51. https://doi.org/10.1111/j.1365-2656.2011.01915.x. Potts JR, Mokross K, Lewis MA. A unifying framework for quantifying the nature of animal interactions. J R Soc Interface. 2014; 11(96):20140333. https://doi.org/10.1098/rsif.2014.0333. arXiv:1402.1802. R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2015. https://www.R-project.org/. Ribeiro Jr PJ, Diggle PJ. geoR: Analysis of Geostatistical Data. 2015. R package version 1.7-5.1. http://CRAN.R-project.org/package=geoR. Rice JC, Rochet MJ. A framework for selecting a suite of indicators for fisheries management. ICES J Mar Sci. 2005; 62(3):516–27. https://doi.org/10.1016/j.icesjms.2005.01.003. Richman JS, Moorman JR. Physiological time-series analysis using approximate entropy and sample entropy. Am J Physiol Heart Circ Physiol. 2000; 278(6):2039–49. Rochet M-J, Trenkel VM. Which community indicators can measure the impact of fishing? A review and proposals. Can J Fish Aquat Sci. 2003; 60(1):86–99. https://doi.org/10.1139/f02-164. Russell JC, Hanks EM, Haran M. Dynamic Models of Animal Movement with Spatial Point Process Interactions. J Agric Biol Environ Stat. 2016; 21(1):22–40. https://doi.org/10.1007/s13253-015-0219-0. Accessed 07 June 2018. Schreiber T. Measuring Information Transfer. Phys Rev Lett. 2000; 85(2):461–4. https://doi.org/10.1103/PhysRevLett.85.461. Accessed 19 Nov 2018. Shirabe T.
Correlation Analysis of Discrete Motions. In: Raubal M, Miller HJ, Frank AU, Goodchild MF, editors. Geographic Information Science. Volume 4197 of the Series Lecture Notes in Computer Science. Münster: Springer-Verlag: 2006. p. 370–82. Sumpter DJT, Mann RP, Perna A. The modelling cycle for collective animal behaviour. Interface Focus. 2012; 2:764–73. https://doi.org/10.1098/rsfs.2012.0031. Travassos B, Davids K, Araujo D, Esteves PT. Performance analysis in team sports: Advances from an Ecological Dynamics approach. Int J Perform Anal Sport. 2013; 13(1):83–95. Van Strien AJ, Soldaat LL, Gregory RD. Desirable mathematical properties of indicators for biodiversity change. Ecol Indic. 2012; 14(1):202–8. https://doi.org/10.1016/j.ecolind.2011.07.007. VanDerWal J, Falconi L, Januchowski S, Shoo L, Storlie C. SDMTools: Species Distribution Modelling Tools: Tools for Processing Data Associated with Species Distribution Modelling Exercises. 2014. R package version 1.1-221. https://CRAN.R-project.org/package=SDMTools. Vlachos M, Kollios G, Gunopulos D. Discovering similar multidimensional trajectories. In: Proceedings 18th International Conference on Data Engineering. Washington, DC: IEEE Computer Society: 2002. p. 673–684. https://doi.org/10.1109/ICDE.2002.994784. Wang XR, Miller JM, Lizier JT, Prokopenko M, Rossi LF. Quantifying and Tracing Information Cascades in Swarms. PLoS ONE. 2012; 7(7):e40084. https://doi.org/10.1371/journal.pone.0040084. Accessed 14 Nov 2018. White PCL, Harris S. Encounters between Red Foxes (Vulpes vulpes): Implications for Territory Maintenance, Social Cohesion and Dispersion. J Anim Ecol. 1994; 63(2):315–27. Wong J. Pdist: Partitioned Distance Function. 2013. R package version 1.2. http://CRAN.R-project.org/package=pdist.
A longitudinal assessment of retinal function and structure in the APP/PS1 transgenic mouse model of Alzheimer's disease. Dana Georgevsky, Stephanie Retsas, Newsha Raoufi, Olga Shimoni & S. Mojtaba Golzan (ORCID: orcid.org/0000-0002-4479-3917). Translational Neurodegeneration volume 8, Article number: 30 (2019). A great body of evidence suggests that retinal functional and structural changes occur in Alzheimer's disease (AD). However, whether such changes are primary or secondary remains to be elucidated. We studied a range of retinal functional and structural parameters in association with AD-specific pathophysiological markers in the double transgenic APP/PS1 and control mice across age. Electroretinogram (ERG) and optical coherence tomography (OCT) were performed in APP/PS1 and wild-type (WT) control mice every 3 months from 3 to 12 months of age. For functional assessment, the a- and b-waves of the ERG, the amplitude of the oscillatory potentials (OP) and the positive scotopic threshold response (pSTR) were quantified at each time point. For structural assessment, the inner and outer retinal thickness was segmented and measured from OCT scans. Episodic memory was evaluated at 6, 9 and 12 months of age using the novel object recognition test. Amyloid beta (Aβ) distribution in the hippocampus and the retina was visualised at 3, 6 and 12 months of age. Inter- and intra-group analyses were performed to study the rate of change for each parameter between the two groups. Inter-group analysis revealed a significant difference in the b-wave and OPs of APP/PS1 mice compared to WT controls starting from 3 months (p < 0.001). There was also a significant difference in the amplitude of the pSTR between the two groups starting from 6 months (p < 0.001). Furthermore, a significant difference in inner retinal thickness between the two groups was observed starting from 9 months (p < 0.001).
We observed an age-related decline in retinal functional and structural parameters in both APP/PS1 and WT controls; however, inter-group analysis revealed that inner retinal functional and structural decline is exacerbated in APP/PS1 mice, and that retinal functional changes precede structural changes in this strain. Further studies are required to confirm whether this phenomenon occurs in humans and whether studying retinal functional changes can aid in the early assessment of AD.
Alzheimer's disease (AD) is a chronic and progressive neurodegenerative disease affecting nearly 44 million people worldwide. It is the most common form of dementia, with cognitive impairment as the main clinical manifestation [1]. Currently, diagnosis is primarily based on patient history, neuropsychological testing and, in some cases, neuroimaging, including positron emission tomography (PET) scans and magnetic resonance imaging (MRI). Neuroimaging methods are expensive, invasive and still not completely conclusive [2]. The challenge with a pre-mortem diagnosis is the current inability to determine when the pathological hallmarks of AD first become detectable. Furthermore, the pathological changes occurring in the brain from the early stages through to disease progression cannot be monitored. Extracellular amyloid-beta (Aβ) plaques and intracellular neurofibrillary tangles are the hallmark proteins of AD and are currently only visible on post-mortem histopathological analysis [3]. However, by the time patients present with complaints of cognitive decline, substantial damage has typically already accumulated. There is evidence in the literature that the initial pathophysiological processes commence as early as 20 years prior to clinical symptoms, indicating a large latency period from pre-symptomatic to symptomatic AD [4]. As a result, it has been reported that up to 15% of patients have received an incorrect diagnosis of "probable" AD [5].
Clearly, the fact that a definitive diagnosis is only possible following post-mortem histological analysis urgently warrants further investigation into identifying potential early biomarkers of pre-symptomatic AD [6]. A growing body of evidence suggests sensory disturbances as a common complaint in patients with AD [7]. The visual system, and more specifically the retina, has received much attention in identifying AD-specific pathogenesis that can aid in staging the very early phases of the disease [8,9,10,11]. Retinal dysfunction in both human studies of AD patients as well as animal models of AD suggests a plausible link between the retina and brain [12,13,14,15]. We have previously reported retinal changes in 75 participants with subjective memory complaint and found a significant correlation between thinning of the retinal nerve fibre layer (RNFL), vascular dysfunction and higher neocortical Aβ scores [16]. These results were consistent with other reports which have also shown retinal structural changes, such as RNFL thinning and retinal ganglion cell (RGC) loss, in AD patients compared with age-matched controls [17, 18]. A study using 13–16 month old APP/PS1 mice reported an increase in amyloid deposition in the retina, specifically in close proximity to retinal microvessels, which corresponded with the levels detected in the brain [15]. The same study reported an inner retinal functional deficit in the AD mice compared to control mice. The origin and impact of the retinal abnormalities seen in AD are still unclear. However, it has been suggested that the structural and functional changes observed in the retina, such as RNFL thinning, RGC loss and compromised inner retinal function, may be due to neurotoxic effects of the Aβ plaques [19, 20].
With the majority of these studies reporting their findings in participants/animals with established disease pathology, it remains unclear whether retinal manifestations precede cerebral pathology, or occur in parallel with or following AD onset. There is limited evidence in the literature with regard to the timing of the retinal changes observed in AD, with one study showing amyloid plaques in the retina as early as 2.5 months of age compared with 5 months in the brain [8]. The current study aims to explore this further by investigating markers of retinal structure and function in association with behavioural and neuropathological changes of the amyloid precursor protein-presenilin 1 (APP/PS1) mice from 3 to 12 months of age. More specifically, we studied an array of electroretinogram parameters and retinal thickness (both inner and outer retina) in association with progressive AD-specific neuropathological changes in Aβ levels. Furthermore, we investigated the potential link between these parameters and levels of cognitive impairment, assessed by the novel object recognition test. Ultimately, our aim is to clarify whether retinal changes are observed prior to, in parallel with, or after neuropathological changes in the APP/PS1 mouse model of AD.
APP/PS1 mice were re-derived at the Australian Phenomics Facility (ANU, Canberra). Genotyping was performed, and wild type (WT) littermates were used as controls. The genotype ratio was approximately 50:50 with mixed genders. At 12 weeks of age, the mice were transported to the University of Technology Sydney and housed in a temperature-controlled facility. Animals were on a 12-h light/dark cycle and were given ad libitum access to food and water as per the facility's standard operating procedures. All animal experiments were conducted in accordance with the Australian code of practice for the care and use of animals for scientific purposes.
Experimental work was approved by the UTS Animal Ethics Committee prior to the commencement of experiments (ETH16–0246). APP/PS1 and WT littermates (n = 70, mixed sexes, roughly 50:50) were followed longitudinally from 3 to 12 months, with time-point data captured every 3 months (i.e. 3, 6, 9 and 12 months). For all experiments, animals were anaesthetised with an intraperitoneal injection of ketamine (75 mg/kg) and xylazine (10 mg/kg). The effects of xylazine were reversed with a subcutaneous injection of atipamezole (0.75 mg/kg). Retinal structural changes were assessed at each time-point using spectral domain OCT (Wasatch Photonics, USA; field of view: > 40°, central wavelength: 800 nm, axial resolution: 3.9 μm, transverse resolution: 4 μm, imaging rate: 20 Hz). Animals were anaesthetised as described previously and a drop of 1% tropicamide (Alcon Laboratories, Australia) was applied to each eye to dilate the pupil. Animals were placed on a heating pad during imaging to maintain ambient body temperature. Ophthalmic gel was applied to the surface of the eyes to prevent dryness from anaesthesia as well as to maintain contact between the cornea and the camera lens.
Scanning protocol
The scanning protocol included a 3 × 3 mm perimeter square image of the fundus. The device or the animal was adjusted in such a way that the optic nerve head (ONH) was centred on the image. To obtain a raster scan of the ONH, the fundus image was transected across the centre of the ONH to produce a B-scan image of the retinal layers. A further two parallel B-scans on either side of the original B-scan (125 μm apart) were obtained (Fig. 1a). OCT of the mouse optic nerve head.
Top Left) five B-scans obtained across the optic nerve head, Top Right) a sample radial OCT scan, Lower Right) results of segmenting the OCT scan into inner and outer retina
A modified version of the segmentation algorithm developed by Chiu SJ et al. [21], based on graph theory, was used to segment the retina into two sections: inner and outer retina. We observed a segmentation error of less than 5% across all the scans, which we corrected manually. The mean thickness of the five B-scans was used to measure inner retinal thickness (between the ganglion cell layer (GCL) and inner nuclear layer (INL)) and outer retinal thickness (between the outer plexiform layer (OPL) and retinal pigment epithelium (RPE)) (Fig. 1).
Electroretinogram (ERG)
Animals were dark-adapted overnight prior to the recordings. A dim red light was used to conduct the experiments the following day. Animals were anaesthetised as described previously and placed on a thermostatically controlled heat pad. Both eyes were dilated using a drop of 1% tropicamide following general anaesthesia. The animals were placed on the heat pad of the ERG instrument (Ocuscience, USA) and two silver corneal electrodes were gently placed on the cornea, kept in position by placing a contact lens on top. Stainless steel reference electrodes were inserted subcutaneously on the forehead of the mouse and at the base of the tail. The scotopic threshold response (STR) was measured first and consisted of the mean of 30 dim flashes (3 cd.s.m-2) with 2-s intervals. This was followed by a single bright flash ERG (30 cd.s.m-2). The positive STR (pSTR) amplitude was measured from the baseline to the maximum peak of the waveform at the flash intensity. The b-wave amplitude was measured from the trough of the a-wave to the peak of the b-wave (Fig. 2). ERG measurements.
Right) STR trace and pSTR measurement, Left) ERG trace and a-, b-wave and OP measurements
Measurement of oscillatory potentials (OPs)
The OPs were isolated from the intact ERG recordings from each eye. To mitigate the effects of a-wave contaminants in early OPs, a digital band-pass filter (60–235 Hz) was used (MATLAB). The OP response was determined from the summed amplitude of each OP trace (OP1, OP2, OP3) by a blinded examiner. The amplitude (in microvolts) of each OP was characterised by the difference between the peak and trough. The overall OP response was determined through the summation of each of the OP amplitudes (i.e. OP = OP1 + OP2 + OP3).
A subgroup of animals was euthanized at 3, 6 and 12 months using an intraperitoneal injection of 50% diluted Lethabarb and saline. The eyes were enucleated and placed in 4% paraformaldehyde (PFA) overnight for fixation. The brain was dissected and followed the same regime as the eyes. The fixed tissues were removed from PFA the following morning and washed in phosphate-buffered saline (PBS) 3 × 5 min. The tissues were then placed in 70% ethanol and were processed and embedded in paraffin wax within a week. The paraffin blocks were cut using a microtome at 5 μm thickness. 1% and 2% Thioflavin S stains were applied to the sections to detect Aβ in the hippocampal region of the brain and the retina (within 200 μm of the optic nerve head), respectively. All slides were imaged using an upright fluorescent microscope at 460 nm excitation.
Novel object recognition
The mice underwent the novel object recognition behavioural assessment at the 6, 9 and 12 month time-points. The mice were acclimatized to the testing equipment daily for 1 week prior to the experiment. The animals were placed for 5 min a day in the test box in their holding room to become comfortable with the equipment prior to testing, to limit unwanted behavioural responses from fear/anxiety.
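The OP isolation step above can be sketched in code. Below is a minimal, idealised FFT-based band-pass illustration in Python (the authors used a digital band-pass filter in MATLAB; the function names, sampling rate and the peak/trough index inputs here are hypothetical):

```python
import numpy as np

def isolate_ops(erg_trace, fs, low=60.0, high=235.0):
    """Isolate oscillatory potentials from an ERG trace by zeroing
    spectral content outside the 60-235 Hz band (an idealised
    band-pass filter that removes the slow a-/b-wave components)."""
    spectrum = np.fft.rfft(erg_trace)
    freqs = np.fft.rfftfreq(len(erg_trace), d=1.0 / fs)
    keep = (freqs >= low) & (freqs <= high)
    return np.fft.irfft(spectrum * keep, n=len(erg_trace))

def summed_op_amplitude(op_trace, peak_idx, trough_idx):
    """Overall OP response: sum of peak-to-trough amplitudes of the
    individual OPs, i.e. OP = OP1 + OP2 + OP3 for three index pairs."""
    return sum(op_trace[p] - op_trace[t] for p, t in zip(peak_idx, trough_idx))
```

Because the a- and b-wave energy lies well below 60 Hz, the filtered trace retains only the fast oscillatory ripple, from which the individual OP peak-to-trough amplitudes can then be read off and summed.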
The experiments took place in the holding room to avoid stress caused by new/different smells and sounds in unfamiliar rooms. Two identical objects were placed in a black box at opposite ends, and each mouse was allowed 3 min to explore the objects (familiarisation phase). The objects and box were cleaned with 80% ethanol between each animal to alleviate smells that may distract the next animal from the test. One hour later, one of the objects was replaced with a new "novel" object (test phase). Again, each mouse was given 3 min to explore the objects. The recordings were then watched by the same person to limit data variability. The D2 discrimination index was used to present the results. The D2 index is a common measure used to discriminate novel and familiar objects, and is considered a reliable measure as it corrects for the total exploratory activity of each animal [22]. It is calculated based on the following equation: $$ D2=\frac{\left(novel-familiar\right)}{\left(novel+familiar\right)} $$
Statistical analysis was performed using GraphPad Prism 7 (San Diego, CA, USA). Data are presented as mean ± standard deviation (SD). Within each group (intra-group analysis), a one-way analysis of variance (ANOVA) was applied to assess the overall change. Tukey's post-hoc analysis was used for multiple comparisons within each group. Analysis of covariance (ANCOVA) using linear regression was applied to assess whether the slope of change over time differed between the two groups. An independent Student's t-test was used to compare the mean difference for each parameter between APP/PS1 and WT mice at each time point. Statistically significant differences were considered at p < 0.05 (95% CI).
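The D2 index above is straightforward to compute; a minimal Python sketch (the function name and the example exploration times are hypothetical):

```python
def d2_index(novel, familiar):
    """D2 discrimination index: (novel - familiar) / (novel + familiar).

    novel, familiar: time (e.g. in seconds) spent exploring the novel
    and the familiar object during the test phase. Returns a value in
    [-1, 1]; values above 0 indicate a preference for the novel object.
    The denominator corrects for each animal's total exploratory activity.
    """
    total = novel + familiar
    if total == 0:
        raise ValueError("no exploration recorded for either object")
    return (novel - familiar) / total
```

For example, 30 s on the novel object and 10 s on the familiar one gives d2_index(30, 10) = 0.5, while equal exploration of both objects gives 0.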
Retinal functional decline is exacerbated in APP/PS1 mice starting from 3 months
The ERG can be used to assess the functionality of various cells in the retina through an evaluation of the electrical impulse recorded at the cornea in response to a flash of light. The major known parameters of an ERG are the a- and b-wave, the scotopic threshold response (STR) and the oscillatory potentials (OP). The a-wave, the initial negative wave recorded following a bright stimulus, is generated as a result of photoreceptor phototransduction. The b-wave, the positive wave following the a-wave, is generated as a result of rod bipolar cell depolarization [23] in the dark-adapted retina. The positive STR (pSTR) is known as the most sensitive response of the dark-adapted ERG [24]. pSTRs originate from the inner retina and are specifically used to study RGC functionality. In addition to the STR, the oscillatory potentials (OP) of the ERG are also shown to originate from the inner retina, mainly the inner plexiform layer (IPL) [25]. We assessed and quantified all these parameters (i.e. a-, b-wave, pSTR and total OP) in both of the animal groups across age. The intra-group analysis showed a significant decline in the b-wave, pSTR and total OP amplitude and a non-significant increase in the a-wave amplitude of both APP/PS1 and WT controls from 3 to 12 months (one-way ANOVA). A summary of these measurements and their statistical significance is shown in Table 1. Table 1 Age-associated retinal functional changes in APP/PS1 and WT mice* We then assessed whether the slope of change in each parameter over the given time period was significantly different between the APP/PS1 and WT mice. The ANCOVA analysis revealed that for the a-, b-wave and OPs, if the overall slopes were identical, there would be a 34.7, 8.3 and 24.2% chance, respectively, of randomly choosing data points with slopes this different. This suggests that the differences between the slopes (i.e.
rate of change) are not significant (p = 0.3 [a-wave], p = 0.08 [b-wave], p = 0.2 [OP]). However, a similar analysis applied to the pSTR revealed that there is a 0.51% chance of randomly choosing data points with slopes this different, suggesting that the difference between the slopes is very significant (p < 0.001) (Fig. 3). Slope analysis (ANCOVA) for all functional parameters. A significant difference in the slope of decline was only observed in the pSTR, suggesting that this parameter's decline is exacerbated in the APP/PS1 group (n = 15 per group, per time-point)
We studied differences in each parameter between the two groups at any given time point. The inter-group analysis revealed a significant difference in the b-wave and OPs of APP/PS1 compared to WT controls at 3 months (p < 0.001). There was also a significant difference in the pSTR between the two groups starting from 6 months (p < 0.001). No difference was observed in the a-wave between the two groups at any of the time-points (p = 0.1).
Retinal structural decline is exacerbated in APP/PS1 mice starting from 9 months
We studied changes to the inner (NFL/INL) and outer retinal (OPL/RPE) thickness separately to determine whether variations in each structure are associated with the functional changes reported earlier. The intra-group analysis revealed a significant decline in the inner retinal thickness of APP/PS1 and WT control mice from 3 to 12 months. A similar significant change in the outer retinal thickness of APP/PS1 mice was observed; however, the decline in the outer retinal thickness of WT mice was non-significant. A summary of these measurements and their statistical significance is shown in Table 2. Table 2 Age-associated retinal structural changes in APP/PS1 and WT mice* Similar to retinal function, we assessed whether the slope of change in inner and outer retinal thickness over the given time period was significantly different between the APP/PS1 and WT mice.
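The ANCOVA slope comparison used throughout amounts to fitting a linear model with a group × time interaction term, whose coefficient equals the difference between the two groups' slopes. A minimal numpy-only sketch (function name and data are hypothetical; a complete ANCOVA, as performed in GraphPad Prism, would also compute the standard error and p-value of this coefficient):

```python
import numpy as np

def slope_difference(time, value, group):
    """Least-squares fit of value ~ intercept + time + group + time*group.

    time, value: 1-D arrays of ages and measurements pooled over all
    animals; group: 0/1 indicator (e.g. 0 = WT, 1 = APP/PS1). Returns
    the interaction coefficient, i.e. how much the slope of 'value'
    over 'time' differs between the two groups.
    """
    time = np.asarray(time, dtype=float)
    group = np.asarray(group, dtype=float)
    X = np.column_stack([
        np.ones_like(time),  # intercept
        time,                # common slope
        group,               # group offset
        time * group,        # slope difference (interaction term)
    ])
    coef, *_ = np.linalg.lstsq(X, np.asarray(value, dtype=float), rcond=None)
    return coef[3]
```

A zero interaction coefficient corresponds to parallel slopes (identical rates of change); testing whether it differs from zero is the homogeneity-of-slopes test reported in this section.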
For inner and outer retinal thickness, results from the ANCOVA analysis revealed that if the overall slopes were identical, there would be a 0.09 and 0.5% chance, respectively, of randomly choosing data points with slopes this different. This suggests that the differences between the slopes (i.e. rate of change) are significant (p < 0.0001 [inner retina], p < 0.01 [outer retina]) (Fig. 4). Slope analysis (ANCOVA) for inner and outer retinal thickness. A significant difference in the slope of decline was observed in both outer and inner retinal thickness, suggesting that structural damage is exacerbated in the APP/PS1 group (n = 15 per group, per time-point)
Finally, the inter-group analysis revealed a significant difference in the inner retinal thickness of APP/PS1 compared to WT controls starting at 9 months (p < 0.001). There was also a significant difference in the outer retinal thickness between the two groups at 12 months (p < 0.01).
Aβ plaques were observed in the hippocampus and retina of APP/PS1 starting from 3 and 6 months, respectively
At 3, 6 and 12 months, five animals from each group were randomly selected and euthanized, and sagittal sections of the retina and hippocampus were imaged to determine Aβ distribution. Aβ was present in the hippocampus from 3 months and appeared in the retina, mainly in the inner nuclear layer (INL), from 6 months of age. Aβ distribution increased with ageing in the hippocampus and inner retina, mainly the ganglion cell layer (GCL), in the APP/PS1 mice (Fig. 5). In contrast, no Aβ was observed in the WT group (Additional file 1: Figure S1). Aβ distribution in the hippocampus and retina of the APP/PS1 mice at 3, 6 and 12 months of age (the dentate gyrus, CA1-CA3 regions of the hippocampus and retinal layers are shown); scale bar for hippocampus and retina is 100 μm and 20 μm, respectively. An increase in Aβ levels in both regions was observed.
Retinal Aβ was observed in the inner/outer retinal regions from 6 months and was more pronounced in the GCL by 12 months of age. (Images for WT in Additional file 1: Figure S1)
No significant change in cognition from 6 to 12 months of age
We observed a non-significant decline in the D2 index of APP/PS1 and a non-significant increase in the D2 index of WT mice from 6 to 12 months (Fig. 6). ANCOVA analysis also revealed a non-significant difference between the slopes of the two groups (Additional file 1: Figure S2). Episodic memory assessment of APP/PS1 and WT mice from 6 to 12 months of age. A non-significant change in the D2 discrimination index was observed in both groups of animals (n = 15 per group, per time-point, one-way ANOVA)
In this study we investigated an array of retinal functional and structural changes in the APP/PS1 double transgenic mouse model of AD and WT control mice over 12 months. Our results suggest that there is an overall age-dependent decline in retinal functional and structural parameters; however, inner retinal functional and structural damage seems to be accelerated in the APP/PS1 group. More specifically, we observed a significant difference in pSTR amplitude between APP/PS1 and WT controls starting from 6 months, with significant inner retinal thinning following in the APP/PS1 from 9 months. To our knowledge, this is the first study to characterise major functional and structural parameters in the retina of APP/PS1 mice longitudinally. The ERG is used to assess various retinal cellular responses to a light stimulus. Consistent with previous reports [26, 27], we found a significant age-dependent decline in ERG parameters in both groups. A significant reduction in pSTR amplitude and ERG latency has also been reported in the symptomatic stage of the same strain previously [15, 28].
We studied ERG parameters in both groups over 12 months and found that the rate of decline differed significantly only in the pSTR of the APP/PS1 compared to WT controls. This suggests that the disease phenotype has significantly altered RGC functionality. This is not entirely surprising, as RGC loss is a known retinal pathology of AD [29]. When ERG parameters were compared between the two groups at any given time-point, we found a significant difference in the outer retinal parameters early on in the disease process (i.e. 3 months), with inner retinal functional decline reaching significance only from 6 months. While a reduction in outer retinal ERG amplitude has been reported in AD patients [30], we could not find any other study that has specifically characterised ERG changes in early AD or during mild cognitive impairment. This led us to hypothesise that the significant difference in outer retinal parameters may be attributed to a different starting baseline recording. There are several studies that have investigated retinal structural changes, and more specifically retinal nerve fibre layer (RNFL) thickness, in AD [31,32,33]. Assessed by optical coherence tomography (OCT), a non-invasive and readily available tool, most studies have reported a significant reduction in mean RNFL thickness in mild cognitive impairment and AD compared to healthy controls [34]. Results from animal studies have been inconclusive, with Perez et al. [35] reporting a non-significant neuronal cell loss in the retina based on retinal layer thickness of APP/PS1 mice, while Liu et al. [36] reported a significant decrease in the thickness of the retina in Tg2576 mice in comparison with WT controls (P = 0.0086). Consistent with the latter, our results showed that there was a significant age-related reduction in the inner retinal thickness of APP/PS1 and WT control mice from 3 to 12 months.
Furthermore, there was a significant difference between the inner and outer retinal thickness of APP/PS1 and WT mice starting from 9 and 12 months, respectively. Retinal Aβ deposition in 12-month-old APP/PS1 mice has been reported before [37]. More interestingly, Koronyo et al. have shown that deposition of retinal plaques precedes that in the brain in APP/PS1 mice (2.5 months vs 5 months) [8]. In this study, we observed retinal Aβ starting from 6 months (vs 3 months in the hippocampus); however, the discrepancy between our findings and the aforementioned report may be due to the different Aβ staining methods used. Koronyo et al. developed and used a curcumin-based compound to visualise Aβ (and its isoforms), whereas we have used the more common Thioflavin S approach, which can only visualise neuritic plaques. Thioflavin S, a homogenous dye, selectively binds to the beta-sheets of proteins and, as a result, undergoes a shift in its emission wavelength upon binding to oligomers. Thioflavin S cannot bind to monomeric isoforms, which therefore cannot be detected using fluorescence imaging [38, 39]. We initially observed Thioflavin S-positive Aβ in the outer plexiform and inner nuclear layers of the retina. By 12 months of age, Aβ appeared in all of the layers of the inner retina, with the majority observed in the GCL. Previous reports have shown cognitive impairment as a late occurrence in APP/PS1 mice, with Morris water maze deficits starting from 7 to 8 months [40, 41]. We assessed cognitive impairment using the NOR test. While Zhang et al. [42] have shown that both the NOR and the Morris water maze work equally well in the cognitive evaluation of APP/PS1 mice, results from a recent meta-analysis [43] suggest that there is no significant association between experimental evaluation of cognitive deficits and quantified cerebral Aβ levels in these animals. Our results also confirm this; we found a non-significant decrease in the recognition index of the APP/PS1 mice from 6 to 12 months of age.
Simultaneously, there was a non-significant increase in the recognition index of the WT control mice over the same period. Our study has a few limitations. First, the animal model used in this study, the double transgenic APP/PS1 mouse model of AD, represents the familial subform of AD, and some would argue that findings in this strain can only be generalised to the familial subform. However, our study was aimed at investigating the association between physiological retinal structural and functional parameters and progressive AD-specific cerebral pathophysiological changes. Therefore, our results may be generalised to AD broadly. Furthermore, the number of animal models that replicate sporadic AD is limited, with the Octodon degus as the only animal model of natural AD-specific neuropathology reported in the literature [44]. Second, as the main aim of our study was to identify early retinal changes in AD, we only aged the animals up to 12 months. Given that neuronal loss, based on the phenotype, occurs starting at around 17 months, it might be interesting to age these animals further to determine whether the changes we observed are accelerated past the 17-month time-point. However, most of the current literature has already reported on retinal dysfunction and structural damage in established aged animal models of AD and human participants, reducing the need for a study past the 17-month time-point. Third, the inter-group analysis revealed an exacerbation of retinal functional decline and structural damage in the APP/PS1 group. Given that there was an age-associated decline in both groups, there is a potential for marginal bias in our selected analysis and modelling. Finally, the OCT device we used in this study does not have tracking abilities and, therefore, intensity/reflectance variability over time may have altered the OCT scans.
To address this and ensure OCT scans were taken at exactly the same location, we adopted a few strategies: 1) we segmented the optic nerve head manually on the en face fundus image and used the B-scan that transected the ONH exactly through its middle as a reference; 2) the distance between the B-scans obtained on either side of the ONH and the reference B-scan was measured in ImageJ to ensure consistency at each time point; and 3) to adjust for artefacts due to reflectance, we normalized the intensity measurements of the inner and outer retina to the RPE intensity in each animal at each time point. However, the RPE intensity at each time point was very similar, suggesting that reflectance artefacts had minimal (if any) effect on our measurements (p = 0.2). While the manual segmentation of the ONH may itself lead to subjective errors, the combination of strategies employed (i.e. averaging, consistency in measurement locations, and intensity normalization) minimizes the risk of artefacts altering the results.
In conclusion, we show for the first time that retinal dysfunction precedes retinal structural damage in the APP/PS1 mouse model of AD. A significant decline in the b-wave of the ERG commenced from 3 months, coinciding with hippocampal Aβ deposition. This suggests that retinal dysfunction occurs in parallel to pathological changes in the brain in AD. Further, we observed a dysfunction of the inner retina starting from 6 months, with structural damage to the same region following from 9 months. We did not observe any cognitive deficit in our cohort; however, this may be due to the fact that this particular animal model does not show signs of cognitive dysfunction parallel to increased amyloid pathology.
The dataset supporting the conclusions of this article is available upon a formal and reasonable request from the corresponding author and after the necessary clearances have been obtained from the University of Technology Sydney's technology transfer office.
APP/PS1: Amyloid Precursor Protein-Presenilin 1; Aβ: Amyloid Beta; ERG: Electroretinogram; GCL: Ganglion Cell Layer; INL: Inner Nuclear Layer; MRI: Magnetic Resonance Imaging; NOR: Novel Object Recognition; OCT: Optical Coherence Tomography; OP: Oscillatory Potential; OPL: Outer Plexiform Layer; RGC: Retinal Ganglion Cells; RNFL: Retinal Nerve Fibre Layer; RPE: Retinal Pigment Epithelium; STR: Scotopic Threshold Response; WT: Wild Type
Förstl H, Kurz A. Clinical features of Alzheimer's disease. Eur Arch Psychiatry Clin Neurosci. 1999;249:288–90. Johnson KA, Fox NC, Sperling RA, Klunk WE. Brain imaging in Alzheimer disease. Cold Spring Harb Perspect Med. 2012;2:a006213. Bloom GS. Amyloid-β and tau: the trigger and bullet in Alzheimer disease pathogenesis. JAMA Neurol. 2014;71:505. Sperling R, Mormino E, Johnson K. The evolution of preclinical Alzheimer's disease: implications for prevention trials. Neuron. 2014;84:608–22. Mühlbacher A, Johnson FR, Yang J-C, Happich M, Belger M. Do you want to hear the bad news? The value of diagnostic tests for Alzheimer's disease. Value Health. 2016;19:66–74. Fiandaca MS, Mapstone ME, Cheema AK, Federoff HJ. The critical need for defining preclinical biomarkers in Alzheimer's disease. Alzheimers Dement. 2014;10:S196–212. Albers MW, et al. At the interface of sensory and motor dysfunctions and Alzheimer's disease. Alzheimers Dement. 2015;11:70–98. Koronyo-Hamaoui M, et al. Identification of amyloid plaques in retinas from Alzheimer's patients and noninvasive in vivo optical imaging of retinal plaques in a mouse model. Neuroimage. 2011;54:S204–17. Lee AG, Martin CO. Neuro-ophthalmic findings in the visual variant of Alzheimer's disease. Ophthalmology. 2004;111:376–80. Sivak JM, et al. The aging eye: common degenerative mechanisms between the Alzheimer's brain and retinal disease. Invest Ophthalmol Vis Sci. 2013;54:871–80. Iseri PK, Altinaş O, Tokay T, Yüksel N. Relationship between cognitive impairment and retinal morphological and visual functional abnormalities in Alzheimer disease.
J Neuroophthalmol. 2006;26:18–24. Javaid FZ, Brenton J, Guo L, Cordeiro MF. Visual and ocular manifestations of Alzheimer's disease and their use as biomarkers for diagnosis and progression. Front Neurol. 2016;7:55. Parnell M, Guo L, Abdi M, Cordeiro MF. Ocular manifestations of Alzheimer's disease in animal models. Int J Alzheimers Dis. 2012;2012:1–13. Gupta V, et al. BDNF impairment is associated with age-related changes in the inner retina and exacerbates experimental glaucoma. Biochim Biophys Acta. 2014;1842:1567–78. Gupta VKVB, et al. Amyloid β accumulation and inner retinal degenerative changes in Alzheimer's disease transgenic mouse. Neurosci Lett. 2016;623:52–6. Golzan SM, et al. Retinal vascular and structural changes are associated with amyloid burden in the elderly: ophthalmic biomarkers of preclinical Alzheimer's disease. Alzheimers Res Ther. 2017;9:13. La Morgia C, et al. Melanopsin retinal ganglion cell loss in Alzheimer disease. Ann Neurol. 2016;79:90–109. Lu Y, et al. Retinal nerve fiber layer structure abnormalities in early Alzheimer's disease: evidence in optical coherence tomography. Neurosci Lett. 2010;480:69–72. Hart NJ, Koronyo Y, Black KL, Koronyo-Hamaoui M. Ocular indicators of Alzheimer's: exploring disease in the retina. Acta Neuropathol. 2016;132:767–87. Colligris P, Perez de Lara MJ, Colligris B, Pintor J. Ocular manifestations of Alzheimer's and other neurodegenerative diseases: the prospect of the eye as a tool for the early diagnosis of Alzheimer's disease. J Ophthalmol. 2018;2018:1–12. Chiu SJ, et al. Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation. Opt Express. 2010;18:19413. Lueptow LM. Novel object recognition test for the investigation of learning and memory in mice. J Vis Exp. 2017. https://doi.org/10.3791/55718. Dong C, Agey P, Hare WA. Origins of the electroretinogram oscillatory potentials in the rabbit retina. Vis Neurosci. 2004;21:533–43.
Saszik SM, Robson JG, Frishman LJ. The scotopic threshold response of the dark-adapted electroretinogram of the mouse. J Physiol. 2002;543:899–916. Ogden TE. The oscillatory waves of the primate electroretinogram. Vis Res. 1973;13:1059–74. Li C, Cheng M, Yang H, Peachey NS, Naash MI. Age-related changes in the mouse outer retina. Optom Vis Sci. 2001;78:425–30. Gresh J, Goletz PW, Crouch RK, Rohrer B. Structure-function analysis of rods and cones in juvenile, adult, and aged C57bl/6 and Balb/c mice. Vis Neurosci. 2003;20:211–20. Leinonen H, Lipponen A, Gurevicius K, Tanila H. Normal amplitude of electroretinography and visual evoked potential responses in AβPP/PS1 mice. J Alzheimers Dis. 2016;51:21–6. Blanks JC, Torigoe Y, Hinton DR, Blanks RH. Retinal pathology in Alzheimer's disease. I. Ganglion cell loss in foveal/parafoveal retina. Neurobiol Aging. 1996;17:377–84. Parisi V, et al. Morphological and functional retinal impairment in Alzheimer's disease patients. Clin Neurophysiol. 2001;112:1860–7. Kirbas S, Turkyilmaz K, Anlar O, Tufekci A, Durmus M. Retinal nerve Fiber layer thickness in patients with Alzheimer disease. J Neuro Ophthalmol. 2013;33:58–61. O'Bryhim BE, Apte RS, Kung N, Coble D, Van Stavern GP. Association of preclinical Alzheimer disease with optical coherence tomographic angiography findings. JAMA Ophthalmol. 2018;136(11):1242–8. https://doi.org/10.1001/jamaophthalmol.2018.3556. Paquet C, et al. Abnormal retinal thickness in patients with mild cognitive impairment and Alzheimer's disease. Neurosci Lett. 2007;420:97–9. Thomson KL, Yeo JM, Waddell B, Cameron JR, Pal S. A systematic review and meta-analysis of retinal nerve fiber layer change in dementia, using optical coherence tomography. Alzheimer's Dement Diagn Assess Dis Monit. 2015;1:136–43. Perez SE, Lumayag S, Kovacs B, Mufson EJ, Xu S. β-Amyloid deposition and functional impairment in the retina of the APPswe/PS1ΔE9 transgenic mouse model of Alzheimer's disease. Investig Opthalmol Vis Sci. 
2009;50:793. Liu B, et al. Amyloid-peptide vaccinations reduce {beta}-amyloid plaques but exacerbate vascular deposition and inflammation in the retina of Alzheimer's transgenic mice. Am J Pathol. 2009;175:2099–110. Koronyo Y, Salumbides BC, Black KL, Koronyo-Hamaoui M. Alzheimer's Disease in the Retina: Imaging Retinal Aβ Plaques for Early Diagnosis and Therapy Assessment. Neurodegener Dis. 2012;10:285-93. https://doi.org/10.1159/000335154. Sun A, Nguyen XV, Bing G. Comparative analysis of an improved Thioflavin-S stain, Gallyas silver stain, and immunohistochemistry for neurofibrillary tangle demonstration on the same sections. J Histochem Cytochem. 2002;50:463–72. Ly PTT, Cai F, Song W. Detection of neuritic plaques in Alzheimer's disease mouse model. J Vis Exp. 2011. https://doi.org/10.3791/2831. Serneels L, et al. Gamma-secretase heterogeneity in the Aph1 subunit: relevance for Alzheimer's disease. Science. 2009;324:639–42. Radde R, et al. Abeta42-driven cerebral amyloidosis in transgenic mice reveals early and robust pathology. EMBO Rep. 2006;7:940–6. Zhang R, et al. Novel object recognition as a facile behavior test for evaluating drug effects in AβPP/PS1 Alzheimer's disease mouse model. J Alzheimers Dis. 2012;31:801–12. Foley AM, Ammar ZM, Lee RH, Mitchell CS. Systematic review of the relationship between amyloid-β levels and measures of transgenic mouse cognitive deficit in Alzheimer's disease. J Alzheimers Dis. 2015;44:787–95. Deacon RMJ, et al. Natural AD-like neuropathology in Octodon degus: impaired burrowing and Neuroinflammation. Curr Alzheimer Res. 2015;12:314–22. The authors would like to acknowledge Mr. Venkata Allam's assistance with post experimental animal monitoring. OS and SMG are supported by a National Health and Medical Research Council- Australian Research Council Dementia research fellowship. This work was supported by a Mason Foundation project grant. 
The funding bodies had no role in the design of the study and collection, analysis, and interpretation of data and also in writing the manuscript. Vision Science group, Graduate School of Health (Orthoptics Discipline), University of Technology Sydney, 15 Broadway, Ultimo, Sydney, NSW, 2007, Australia Dana Georgevsky, Stephanie Retsas, Newsha Raoufi & S. Mojtaba Golzan Institute of Biomedical Materials & Devices (IBMD), Faculty of Science, University of Technology Sydney, 15 Broadway, Ultimo, Sydney, NSW, 2007, Australia Newsha Raoufi & Olga Shimoni Dana Georgevsky Stephanie Retsas Newsha Raoufi Olga Shimoni S. Mojtaba Golzan DG: All animal experiments, data collection, analysis and manuscript preparation. SR: Data analysis and interpretation. NR: Data analysis and interpretation. OS: Data interpretation and manuscript preparation. SMG: Animal experiments, data collection, analysis, interpretation and manuscript preparation. All authors read and approved the final manuscript. Correspondence to S. Mojtaba Golzan. All animal experimental work done in the current study was approved by University of Technology Sydney's Animal Care and Ethics Committee prior to commencement of experiments (ETH16–0246). Figure S1. No Aβ was observed in the hippocampus and retina of the WT group for the same time-points. Figure S2. Slope analysis (ANCOVA) for Novel Object Recognition test from 6 to 12 months. We observed no significant difference between the two animal groups and across age. (DOCX 1024 kb) Georgevsky, D., Retsas, S., Raoufi, N. et al. A longitudinal assessment of retinal function and structure in the APP/PS1 transgenic mouse model of Alzheimer's disease. Transl Neurodegener 8, 30 (2019). https://doi.org/10.1186/s40035-019-0170-z Accepted: 19 August 2019 DOI: https://doi.org/10.1186/s40035-019-0170-z Retinal function Retinal structure Early assessment
CommonCrawl
Pore saturation model for capillary imbibition and drainage pressures James E. Laurinat ORCID: orcid.org/0000-0002-6704-7909 Environmental Systems Research volume 7, Article number: 18 (2018) Leaching and transport of radionuclides from cementitious waste forms and from waste tanks is a concern at the Savannah River Site and other Department of Energy sites. Computer models are used to predict the rate and direction of migration of these radionuclides through the surrounding soil. These models commonly utilize relative permeability and capillary pressure correlations to calculate migration rates in the vadose (unsaturated) zone between the surface and the water table. The most commonly used capillary pressure models utilize two parameters to relate the pressure to the relative saturation between the wetting (liquid) and nonwetting (gas) phases. The correlation typically takes the form of a power law relation or an exponential equation. A pore saturation model is used to derive the secondary drainage pressure and the bounding imbibition pressure as functions of a characteristic pore pressure and the liquid saturation. The model utilizes singularity analyses of the total energies of the liquid and gas to obtain residual saturations for the two phases. The model successfully correlates a selected set of laboratory imbibition and drainage data for sand. The capillary pressure model utilizes a single fitting parameter, a characteristic pore pressure, which is related to a characteristic pore diameter by the Laplace equation. This pore diameter approximately equals the diameters predicted by two different geometric pore models based on the particle diameter. A general parametric model for secondary drainage and bounding imbibition pressures for quasi-steady state flow is proposed.
The model is derived by equating changes in the potential energy associated with capillary forces to the energy required for flow. The model does not include transient effects and therefore does not address scanning curves between the bounding pressures for imbibition and drainage. The proposed model differs from most previous capillary models in that it is based on a single parameter, a characteristic capillary pore pressure. The pore pressure can be related to the mean particle diameter through the Laplace equation and geometric pore models. Previously, capillary pressures have been evaluated using either parametric correlations or pore-scale models. Parametric correlations typically express the capillary pressure, normalized with respect to a characteristic pore pressure, as a function of the liquid saturation, normalized with respect to the difference between the residual liquid saturation and full saturation. Most capillary pressure correlations utilize two or more parameters to relate the pressure to the relative saturation. As in the proposed model, the characteristic pore pressure serves as one of the parameters. Additional parameters are added as fitting constants. The most commonly used parametric correlations are those of Brooks and Corey (1964) and van Genuchten (1980). Other correlations include the Brutsaert (1967), lognormal (Kosugi 1994, 1996), and Gardner–Russo (Russo 1988; Gardner 1958) correlations. The Brooks and Corey correlation is a two-parameter power law correlation. Both the van Genuchten and the Brutsaert correlations consist of asymptotic power law relationships; the Brutsaert correlation has two parameters and the van Genuchten correlation has three. The lognormal correlation takes a logarithmic form when solved for the normalized saturation as a function of the pressure and has two parameters.
The Gardner–Russo correlation is a single-parameter correlation that is expressed as a mixed exponential function when solved for the saturation. Chen et al. (1999) present a comprehensive comparison of these parametric correlations. Because these parametric correlations, as originally formulated, assume that the liquid can completely saturate the pores, in a strict sense they apply only to primary drainage from a completely saturated porous material. Modifications are required to address the hysteresis between drainage and imbibition and between primary drainage and secondary drainage from a state of residual gas saturation. Along these lines, Parker and Lenhard (1987) developed a model that includes effects of gas entrapment during imbibition. Their model defines limiting maximum capillary heads for drainage and minimum capillary heads for imbibition. They state that the actual pressure must lie at some value between these limits that depends on the flow history. Another way to incorporate hysteresis, applied by White and Oostrom (1996), is simply to define separate limiting capillary pressures for imbibition and drainage. Pore-scale models encompass pore network models and lattice Boltzmann methods. Pore network models, introduced by Fatt (1956), use Monte Carlo methods to calculate pressures and flows in a network of capillary pores with a specified coordination number, loosely defined as the number of neighboring pores connected to each pore. In lattice Boltzmann methods, the fluid flow through the porous network is modeled as the flow of a stream of particles in a lattice of pores. In an early implementation of the lattice Boltzmann method, Hassanizadeh and Gray (1990, 1993) included an interfacial area to enable the modeling of hysteresis (Porter et al. 2009). This concept was extended to a porous network model by Reeves and Celia (1996). 
A complicating factor in modeling the capillary pressure is the existence of fractures through which flow preferentially occurs. To model fractured media, Klavetter and Peters (1986) and Nitao (1988) used dual porosity functions that assign different capillary pressure–saturation relations to the fractures and to the low permeability solid matrix. Their models equated the fracture and solid matrix pressures. Di Donato and Blunt (2004) incorporated dual porosities into a streamline flow model. In this analysis, a probabilistic pore pressure model and a mechanical energy balance replace the semi-empirical approach of previous parametric correlations. The energy balances are solved for singularities in the driving forces for flow of the wetting and nonwetting phases, to obtain the residual wetting and nonwetting saturations, respectively. The pore pressure models are then used to calculate the capillary pressure as a function of saturation. An adjusted saturation is defined to account for preferential flow through fractures or fissures. Results are compared to selected capillary pressure measurements. Derivation of energy balance for capillary pressure model The singularity analyses for the residual saturations are based on comparisons of the potential energy associated with the difference between the pressures of the wetting and nonwetting phases in the porous material (hereafter referred to as the liquid and gas phases, respectively) and the work required for flow of each phase. The analysis is loosely analogous to the Gibbs' adsorption theorem, which relates work done by surface forces to the chemical potential of a system. The model replaces the chemical potential with a mechanical potential based on the stored interfacial energy and substitutes a flow work term for the work needed to extend an interface. 
The model is applied to an adiabatic control volume within a larger volume of the porous material, from which the only energy transfer occurs by flow of gas and liquid in or out. The model evaluates changes of the potential energy within the control volume and changes in the amount of flow work performed by the control volume on its surroundings. The energy balance for the control volume equates changes in the enthalpy, ΔH, to changes in the potential energy associated with saturation of the porous material, ΔGs, and the work required for flow, ΔW: $$\Delta H = \Delta G_{s} + \Delta W$$ The energy balance is formulated in terms of the enthalpy because the control volume is an open system in which liquid and gas may cross the volume boundaries. Because the model assumes that the soil is adiabatic, the enthalpy term is zero, and $$\Delta G_{s} = - \Delta W$$ The mechanical potential and flow work terms can be expressed in terms of pressures, much as the chemical potential in the Gibbs' adsorption equation. For a control volume initially comprised of one mole of gas, these expressions are $$\Delta G_{s} = - R_{g} T\,\Delta \ln \left( P_{pot} \right)$$ $$\Delta W = - R_{g} T\,\Delta \ln \left( P_{flow} \right)$$ where Rg is the ideal gas constant, T is the temperature, Ppot is the potential pressure associated with partial saturation of capillary pores, and Pflow is a characteristic pressure arising due to capillary flow. Derivation of pore pressure model The model employs different flow pressures for imbibition and drainage. The flow pressure for imbibition is based on liquid phase flow, while the flow pressure for drainage is based on gas phase flow. To derive the pressures for the potential energy and the flow work, a model for the pore pressure as a function of saturation must be developed. The pore pressure model assumes that each pore is filled with either a liquid wetting or a gaseous nonwetting phase.
The capillary pore pressure is distributed by gas–liquid interfaces between the pores. The distribution of gas- and liquid-filled pores and the connections between pores are assumed to be random. The difference between the average pore pressure and the liquid pressure is equal to the product of the capillary pore pressure, Pc, and the probability that a liquid-filled pore is in contact with a gas-filled pore and not with liquid in another pore. This probability, in turn, is the gas saturation, 1 − s, where s is the liquid saturation. Thus, the difference between the average pore pressure, Pa, and the liquid pressure, Pl, is given by $$P_{a} - P_{l} = P_{c} \left( 1 - s \right)$$ Likewise, the difference between the gas pressure, Pg, and the average pore pressure is equal to the product of the capillary pressure and the liquid saturation, or $$P_{g} - P_{a} = P_{c} s$$ The potential energy is calculated from the difference between Pa and the reference pressure for flow through the soil, Po. For an influx of liquid into the porous material (imbibition), the reference pressure is the liquid phase pressure, and $$P_{a} - P_{o} = P_{c} \left( 1 - s \right)$$ For flow of gas (drainage), the reference pressure is the gas phase pressure, so that $$P_{a} - P_{o} = - P_{c} s$$ These pressure differences apply only to the fractions of the pore volume occupied by liquid and gas, respectively. Consequently, for either liquid or gas flow, the potential energy is based on a pressure differential given by $$P_{pot} = P_{c} s\left( 1 - s \right)$$ The pressure differential is defined to be positive for both imbibition and drainage.
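As a quick numerical illustration of the relation above, the potential pressure differential vanishes at full liquid or gas saturation and peaks at half saturation. A minimal sketch (Python, with Pc normalized to unity; an illustration, not code from the original study):

```python
def potential_pressure(s, p_c=1.0):
    """Potential pressure differential P_pot = P_c * s * (1 - s).

    s   : liquid saturation, 0 <= s <= 1
    p_c : characteristic capillary pore pressure (normalized to 1 here)
    """
    return p_c * s * (1.0 - s)

print(potential_pressure(0.0))  # 0.0 at full gas saturation
print(potential_pressure(0.5))  # 0.25, the maximum
print(potential_pressure(1.0))  # 0.0 at full liquid saturation
```

The symmetry of the differential in s and 1 − s reflects the statement in the text that the same positive differential drives both imbibition and drainage.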
Substitution of this pore pressure in the equation for the potential energy yields $$\Delta G_{s} = - R_{g} T\,\Delta \ln \left( P_{c} s\left( 1 - s \right) \right)$$ Analysis of residual saturations At the residual wetting phase saturation, swr, the resistance to flow of the liquid becomes infinite, and, in an overall energy balance, the resistance to flow of the gas can be ignored. It may be stated, then, that for a given change in the saturation, s, the change in the potential energy counteracts the change in the work needed to cause the liquid to flow. In other words, at s = swr, $$\frac{{\rm d}G_{s}}{{\rm d}s} = - \frac{{\rm d}W_{l}}{{\rm d}s}$$ The liquid work function is derived from the Darcy equation for flow of two immiscible phases. The Darcy equation for the liquid phase velocity, vw, is: $$\varepsilon s v_{w} = - \frac{k k_{w}}{\mu_{w}}\frac{{\rm d}P_{l}}{{\rm d}z} = - \frac{k k_{w}}{\mu_{w}} P_{c} \frac{{\rm d}s}{{\rm d}z}$$ $$v_{w} = - \frac{k k_{w}}{\mu_{w} \varepsilon } P_{c} \frac{{\rm d}\ln \left( s \right)}{{\rm d}z}$$ where ɛ is the porosity, k is the Darcy permeability coefficient, kw is the relative liquid permeability, μw is the liquid viscosity, and z is the direction of flow. From this expression, the pressure acting on a given volume of liquid is seen to be Pc ln (s). The change in the corresponding work function is given by $$\Delta W_{l} = - R_{g} T\,\Delta \ln \left( - P_{c} \ln \left( s \right) \right)$$ As with the potential energy, this expression gives the change in the work function for small changes in the saturation.
Substitution of this work term and the change in the potential energy in the energy balance yields $$- R_{g} T\frac{{\rm d}\ln \left( P_{c} s\left( 1 - s \right) \right)}{{\rm d}s} = R_{g} T\frac{{\rm d}\ln \left( - P_{c} \ln \left( s \right) \right)}{{\rm d}s}$$ $$\ln \left( s \right) = \frac{s - 1}{1 - 2s}$$ This equation is satisfied for swr = 0.236. A similar line of reasoning is used to calculate the residual nonwetting phase saturation, snr. Here, the resistance to the gas flow is infinite, and the resistance to flow of the liquid can be ignored. Thus, $$\frac{{\rm d}G_{s}}{{\rm d}s} = - \frac{{\rm d}W_{g}}{{\rm d}s}$$ The Darcy equation for the gas phase velocity, vn, is: $$\varepsilon \left( 1 - s \right)v_{n} = - \frac{k k_{n}}{\mu_{n}}\frac{{\rm d}P_{g}}{{\rm d}z} = - \frac{k k_{n}}{\mu_{n}} P_{c} \frac{{\rm d}s}{{\rm d}z}$$ $$v_{n} = - \frac{k k_{n}}{\mu_{n} \varepsilon}\left( - P_{c} \frac{{\rm d}\ln \left( 1 - s \right)}{{\rm d}z} \right)$$ where kn is the relative gas permeability and μn is the gas viscosity. From this expression, the pressure acting on a given volume of gas is seen to be − Pc ln (1 − s). The capillary pressure is subtracted from this pressure. The capillary pressure is supplied to the gas to convert it from a stationary state in which it is in contact with a pore wall to a mobile state in which it is surrounded by liquid.
The total pressure supplied to the gas phase is, therefore, − Pc(1 + ln (1 − s)), and the corresponding change in the work function is: $$\Delta W_{g} = - R_{g} T\,\Delta \ln \left( - P_{c} \left( 1 + \ln \left( 1 - s \right) \right) \right)$$ Substitution of this work term and the change in the potential energy in the energy balance gives $$- R_{g} T\frac{{\rm d}\ln \left( P_{c} s\left( 1 - s \right) \right)}{{\rm d}s} = R_{g} T\frac{{\rm d}\ln \left( - P_{c} \left( 1 + \ln \left( 1 - s \right) \right) \right)}{{\rm d}s}$$ $$\ln \left( 1 - s \right) = \frac{3s - 1}{1 - 2s}$$ This equality is satisfied for snr = 0.884. Measured residual saturations for a uniform sand (Demond and Roberts 1991; Morel-Seytoux and Khanji 1975; Morel-Seytoux et al. 1973) agree almost exactly with the derived values; the measured wetting saturation, estimated by graphical interpolation, was 0.230, and the measured nonwetting saturation was 0.884. Calculation of capillary pressures for imbibition and drainage The following section describes a capillary pressure model for imbibition and drainage, based on the residual nonwetting saturation. For imbibition, the liquid supplies the motive force for displacement of the gas. Consequently, it may be argued that the effective capillary pressure within the porous material is the integral of the pressure gradient for the liquid. During imbibition, the force to displace the gas is applied over the entire surface of the porous material, but the porous material is aerated so that the air within the capillary pores remains at atmospheric pressure. A hydraulic advantage accompanies the force applied to the liquid in the pores, then, in inverse proportion to the liquid saturation. It follows that the measured liquid pressure is the integral of the liquid phase pressure gradient, divided by the saturation.
For the measured capillary pressure for imbibition, Pc,i, this gives: $$\frac{P_{c,i}}{P_{c}} = \frac{P_{c,i,0}}{P_{c}} - \frac{\ln \left( s \right)}{s}$$ where Pc,i,0 represents the minimum pressure required for imbibition of liquid into a saturated porous material containing non-displaceable gas, i.e., the entry head. During drainage, both liquid and gas displace the liquid that leaves the porous material. Hence it may be argued that the effective capillary pressure is the volume average of the integrals of the pressure gradients for the gas and the liquid. Again, because the air remains at atmospheric pressure during drainage, there is a hydraulic advantage applied to the measured pressure, so the effective pressure is this volume-average pressure integral, divided by the saturation. At the maximum residual nonwetting phase saturation, the measured suction pressure drops by a step change from its value for drainage to its value for imbibition. This pressure change can be attributed to the change from a continuous gas phase at atmospheric pressure to a continuous liquid phase for which, at static equilibrium, the outside pressure equals the liquid phase pressure. Since the liquid phase pressure is less than the gas phase pressure by a pressure difference equal to the characteristic capillary pore pressure, the suction pressure for imbibition must be less than the pressure for drainage by just this pressure, divided by the saturation to account for the effective hydraulic advantage.
These considerations, with adjustments to account for differences between the integration constants for the liquid and volume-average pressure gradients, give, for the capillary pressure for drainage, Pc,d, $$\begin{aligned} \frac{P_{c,d}}{P_{c}} & = \frac{P_{c,i,0}}{P_{c}} + \frac{1}{s_{nr}} - \frac{\ln \left( s_{nr} \right)}{s_{nr}} + \frac{\left( 1 - s \right)\left( 1 + \ln \left( 1 - s \right) \right) - s\ln \left( s \right)}{s} \\ & \quad - \frac{\left( 1 - s_{nr} \right)\left( 1 + \ln \left( 1 - s_{nr} \right) \right) - s_{nr} \ln \left( s_{nr} \right)}{s_{nr}} \\ \end{aligned}$$ The entry head most likely is a function of the capillary pore pressure. The most plausible explanation that fits the measured data is that an excess capillary force is required to displace air from an array of pores located on the surface in all three directions (one perpendicular to the surface and two in transverse directions). The required force is the three-dimensional vector sum of the forces required for unidirectional displacement, one component of which corresponds to the pressure. According to this interpretation, the entry head is the pore pressure multiplied by the square root of three: $$\frac{P_{c,i,0}}{P_{c}} = \sqrt{3}$$ The entry head is applied at the outer surface of the porous material, so this pressure difference is not normalized with respect to the liquid saturation.
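The residual saturations and the two bounding pressure curves can be reproduced numerically. The sketch below (Python; a simple bisection search, with Pc normalized to unity — an illustration, not the fitting code used in this study) recovers the quoted roots and the step change at the residual nonwetting saturation:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Bisection root finder; assumes f(lo) and f(hi) differ in sign."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0 or hi - lo < tol:
            return mid
        if (flo < 0.0) == (fmid < 0.0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Residual wetting saturation: ln(s) = (s - 1)/(1 - 2s), root on (0, 0.5)
s_wr = bisect(lambda s: math.log(s) - (s - 1.0) / (1.0 - 2.0 * s), 0.05, 0.45)
# Residual nonwetting saturation: ln(1 - s) = (3s - 1)/(1 - 2s), root on (0.5, 1)
s_nr = bisect(lambda s: math.log(1.0 - s) - (3.0 * s - 1.0) / (1.0 - 2.0 * s),
              0.55, 0.99)
print(round(s_wr, 3), round(s_nr, 3))  # 0.236 0.884

ENTRY_HEAD = math.sqrt(3.0)  # P_{c,i,0} / P_c

def p_imbibition(s):
    """Bounding imbibition pressure, normalized by P_c."""
    return ENTRY_HEAD - math.log(s) / s

def p_drainage(s):
    """Secondary drainage pressure, normalized by P_c."""
    def g(x):  # volume-averaged pressure integral term
        return ((1.0 - x) * (1.0 + math.log(1.0 - x)) - x * math.log(x)) / x
    return ENTRY_HEAD + (1.0 - math.log(s_nr)) / s_nr + g(s) - g(s_nr)

# At s = s_nr the two curves differ by the step change 1/s_nr described above
print(round(p_drainage(s_nr) - p_imbibition(s_nr), 3))
```

For intermediate saturations the drainage curve lies above the imbibition curve, consistent with the hysteresis described in the text.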
A combination of the capillary pressure equations for imbibition and drainage yields a critical saturation, scr, where these two pressures coincide: $$\frac{1 - \left( 1 - s_{nr} \right)\left( 1 + \ln \left( s_{nr} \right) + \ln \left( 1 - s_{nr} \right) \right)}{s_{nr}} = - \frac{\left( 1 - s_{cr} \right)\left( 1 + \ln \left( s_{cr} \right) + \ln \left( 1 - s_{cr} \right) \right)}{s_{cr}}$$ Substitution of snr in this expression gives scr = 0.301. Below this saturation, the suction pressure that develops during drainage is less than the suction pressure that accompanies imbibition. The logical conclusion is that the porous material will not drain to a saturation below this critical value, unless the source of liquid is removed from contact with the porous material or there is an external gas flow. Comparison of model predictions to capillary pressure data Data from the UNsaturated SOil hydraulic DAtabase (UNSODA) maintained by the George E. Brown, Jr., Salinity Laboratory of the U. S. Department of Agriculture (Nemes et al. 1999, 2001) was selected for comparison with the model. The comparison is limited to data sets with paired imbibition and drainage pressures that exhibited a significant entry head Pc,i,0 and that included particle size measurements. There were three such data sources, namely Shen and Jaynes (1988), Stauffer and Dracos (1986), and Poulovassilis (1970). An additional data source not included in UNSODA, that of Smiles et al. (1971) and Vachaud and Thony (1971), also is analyzed. All these data sources contained measurements made in sand. The fitting procedure entailed a calculation of the capillary pore pressure Pc, computed as the average of the measured capillary pressures divided by the capillary pressures predicted by the model at the relative saturations for each measurement.
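The critical-saturation condition derived above can be checked with a short root search (Python sketch; the bracketing interval (0.1, 0.5) is an assumption, chosen around the quoted root):

```python
import math

S_NR = 0.884  # residual nonwetting saturation from the singularity analysis

def rhs(s):
    """Right-hand side of the condition: -(1 - s)(1 + ln(s) + ln(1 - s)) / s."""
    return -(1.0 - s) * (1.0 + math.log(s) + math.log(1.0 - s)) / s

lhs = (1.0 - (1.0 - S_NR) * (1.0 + math.log(S_NR) + math.log(1.0 - S_NR))) / S_NR

# Bisection on (0.1, 0.5); rhs(s) - lhs changes sign once on this interval
lo, hi = 0.1, 0.5
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if (rhs(lo) - lhs < 0.0) == (rhs(mid) - lhs < 0.0):
        lo = mid
    else:
        hi = mid
s_cr = 0.5 * (lo + hi)
print(round(s_cr, 3))  # 0.301
```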
To eliminate measurements near saturation with either liquid or gas, the calculation was limited to relative liquid saturations between 0.4 and 0.75. Figure 1 compares these data with model predictions. As this figure shows, the Shen and Jaynes data and the Stauffer and Dracos data have minimum measured saturations about half the predicted critical minimum saturation scr, while the Poulovassilis and Smiles et al. data have minimum measured saturations approximately equal to scr. Fig. 1 Variation of imbibition and drainage pressures with relative saturation. The model closely fits the Poulovassilis and Smiles et al. data. The most logical explanation for the apparent discrepancy between the minimum measured saturations for the Shen and Jaynes and the Stauffer and Dracos data and the model prediction is that the porous material develops fissures that fill with gas during drainage. According to this interpretation, the fissures are larger than the pores by a sufficient magnitude that they do not offer any flow resistance but rather simply conduct fluid among the pores. The minimum saturation for a fissured material, smin, can be derived by applying a jump condition for the development of fissures. Following the logic used in the analysis of residual wetting saturations, the pressure applied to the liquid phase across such a jump, ΔPjump, is $$\Delta P_{jump} = P_{c} \left( \left( 1 - s_{\text{min}} \right) - \left( 1 - s_{cr} \right) \right) = P_{c} \left( s_{cr} - s_{\text{min}} \right)$$ The minimum saturation is the saturation that will minimize the free energy associated with this jump, ΔGjump.
This free energy, evaluated after the appearance of the fissures, is $$\Delta G_{jump} = - R_{g} T\ln \left( P_{c} s_{\text{min}} \left( s_{cr} - s_{\text{min}} \right) \right)$$ Because the argument of the logarithm, smin(scr − smin), is maximized at the midpoint of the interval, this condition is satisfied by $$s_{\text{min}} = 0.5s_{cr}$$ A similar analysis, combined with the observation that the maximum saturation for a homogeneous porous material is the residual nonwetting saturation, gives for the maximum saturation for a fissured porous material, smax, $$s_{\text{max}} = 0.5 + 0.5s_{nr}$$ The capillary pressure model describes the saturation for just the pores of a fissured material. Because the fissure volume merely conducts pressure among the pores, the overall saturation for the total volume of porous material is a linear function of the pore saturation. If the maximum saturation is defined to be equal to the residual nonwetting saturation to conform with the model, then a linear interpolation between this saturation and the minimum saturation yields the following expression for the overall saturation of a fissured material, s*, as a function of s, snr, and scr: $$s^{*} = \frac{0.5s_{cr} s_{nr} - s_{cr} s + s_{nr} s}{s_{nr} - 0.5s_{cr}}$$ Figure 2 compares the capillary pressure model with the Shen and Jaynes data and the Stauffer and Dracos data defined by s*. The model fits the adjusted imbibition data closely, but the Stauffer and Dracos drainage data deviate significantly from the model. The difference between the measured and predicted drainage pressures remains approximately constant as the saturation increases. This suggests the presence of a metastable equilibrium that originates from the upstream (dry) side of the capillary pressure gradient.
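The linear map between pore saturation and overall saturation can be written as a one-line function (Python sketch, using the characteristic saturations derived above; the end-point checks follow from the algebra):

```python
S_NR, S_CR = 0.884, 0.301  # residual nonwetting and critical saturations

def adjusted_saturation(s, s_nr=S_NR, s_cr=S_CR):
    """Map a saturation s of a fissured material onto the adjusted
    saturation scale s* used to compare data with the pore model."""
    return (0.5 * s_cr * s_nr - s_cr * s + s_nr * s) / (s_nr - 0.5 * s_cr)

# End points: the fissured minimum 0.5*s_cr maps to the critical pore
# saturation s_cr, and the residual nonwetting saturation maps to itself.
print(round(adjusted_saturation(0.5 * S_CR), 3))  # 0.301
print(round(adjusted_saturation(S_NR), 3))        # 0.884
```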
Fig. 2 Variation of imbibition and drainage pressures with relative saturation, corrected for fissure effect. The most plausible explanation for the overshoot and apparent metastability phenomena is that at the minimum liquid saturation, the interstitial gas in the fissures begins to exert pressure on the residual liquid in the pores. As explained in the capillary pressure model derivation, this pressure would be the product of the capillary pore pressure Pc and the fraction of the volume occupied by the interstitial gas, 0.5(1 − 0.5scr), divided by the residual pore saturation, 0.5scr. Thus, the overshoot pressure, ΔPc,ov, is given by $$\frac{\Delta P_{c,ov}}{P_{c}} = \frac{1 - 0.5s_{cr}}{s_{cr}}$$ According to this interpretation, the apparent metastability results when the overshoot in the capillary pressure at minimum saturation propagates downstream in the nonwetting (gas) phase. The fraction of the overshoot pressure that propagates would be equal to the fraction of the pore volume occupied by gas, 1 − scr. If this overshoot pressure propagates, then the metastable drainage pressure, Pc,d,ms, would be given by $$\frac{P_{c,d,ms}}{P_{c}} = \frac{P_{c,d}}{P_{c}} + \frac{\left( 1 - s_{cr} \right)\left( 1 - 0.5s_{cr} \right)}{s_{cr}}$$ As shown by Fig. 2, the predicted overshoot pressure approximately equals the maximum measured pressures, and the Stauffer and Dracos data closely follow the predicted metastable drainage pressure curve. It appears that the Stauffer and Dracos drainage data are not constrained to the maximum residual nonwetting phase saturation but instead extend to complete saturation. This suggests that the additional capillary pressure at the metastable equilibrium is sufficient to overcome the capillary pore pressure.
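Evaluating the overshoot and metastable-pressure expressions above at scr = 0.301 gives the following normalized magnitudes (a short Python sketch):

```python
S_CR = 0.301  # critical saturation

# Overshoot pressure at the minimum saturation, normalized by P_c
dp_overshoot = (1.0 - 0.5 * S_CR) / S_CR

# Additional pressure on the metastable drainage curve: the fraction
# (1 - s_cr) of the overshoot that propagates in the gas phase
dp_metastable = (1.0 - S_CR) * dp_overshoot

print(round(dp_overshoot, 2))   # 2.82
print(round(dp_metastable, 2))  # 1.97
```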
Evaluation of characteristic pore pressures The final step in development of the capillary pressure model is to relate the characteristic pore pressure Pc derived from the data fit to the particle size of the porous material. One may recall that, for each data set that was regressed, the pressure multiplier is \(\frac{1}{{P_{c} }}\). This pore pressure is related to the capillary rise pressure or capillary head Pc,i,0 by Eq. 25. The Laplace equation is used to relate the capillary head to the effective pore diameter, dpore, the interfacial tension, σ, and the liquid wetting angle, θ: $$d_{pore} = \frac{4\sigma \cos \left( \theta \right)}{{P_{c,i,0} }}$$ Two geometric models have been developed to relate the effective pore diameter to the particle diameter and the porosity. From an analysis of the capillary rise in a powder, based on the specific surface area of the powder, White (1982) derived the following relationship between the particle diameter, dpart, and the pore diameter: $$d_{pore} = \frac{2\varepsilon }{{3\left( {1 - \varepsilon } \right)}}d_{part}$$ The Revil-Glover-Pezard-Zamora (RGPZ) model (Glover et al. 2006) was developed to correlate the permeability as a function of particle size based on the limiting flow rate through the necks connecting adjacent pores. According to the RGPZ model, for spherical particles, the effective pore diameter is related to the particle diameter by: $$d_{pore} = \frac{{2\sqrt 3 \varepsilon^{1.5} }}{3}d_{part}$$ The RGPZ model was developed to describe flows in low-porosity oil sands and therefore contains only the leading term for the porosity. The laminar term in the Ergun equation for flow through packed beds (Ergun 1952) indicates that, at low flow rates, the particle diameter scales as \(\frac{{\varepsilon^{1.5} }}{1 - \varepsilon }\) with respect to pressure. 
This suggests that a modified RGPZ model for higher porosity materials should take the form: $$d_{pore} = \frac{{2\sqrt 3 \varepsilon^{1.5} }}{{3\left( {1 - \varepsilon } \right)}}d_{part}$$ Both the White and RGPZ models were developed for a material composed of uniform-size spheres. For this study, it is assumed that the mean particle diameter calculated from sieve tray data is representative of the particle diameter in the models. The particle diameter data are binned by tray mesh size. Accordingly, mean particle diameters were calculated using an interval-censored method, based on an assumed lognormal size distribution. The interval-censored averaging was performed using JMP® statistical software (see So et al. 2010). For each data set, the void fraction was either given, calculated from the bulk density of the sand, or estimated by fitting the residual nonwetting saturation to the capillary pressure data. The Laplace equation for the pore diameter was evaluated using the properties of water in contact with air. The surface tension of water at room temperature is 0.0722 N/m (Lide 1994). The measured wetting angle is highly variable and depends on the measurement method. The calculated pore diameter is based on a dynamic wetting angle of 45°, measured for air-dried sand (Weisbrod et al. 2009). Table 1 compares pore diameters calculated using the White model, the RGPZ model, and the modified RGPZ model to the characteristic pore diameter for the capillary pressure model presented in this study. As may be noted, pore diameters calculated from the White and modified RGPZ models agree relatively closely with the effective pore diameter based on the capillary pressures, whereas the original RGPZ model underestimates the effective pore diameter.
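The three geometric relations and the Laplace equation can be collected into a short script (a sketch; the particle diameter and porosity below are illustrative values, not entries from Table 1):

```python
import math

def d_pore_laplace(P_ci0, sigma=0.0722, theta_deg=45.0):
    """Effective pore diameter from the Laplace equation,
    d = 4*sigma*cos(theta)/P, using the water/air properties cited above."""
    return 4.0 * sigma * math.cos(math.radians(theta_deg)) / P_ci0

def d_pore_white(d_part, eps):
    """White (1982): d_pore = 2*eps/(3*(1 - eps)) * d_part."""
    return 2.0 * eps / (3.0 * (1.0 - eps)) * d_part

def d_pore_rgpz(d_part, eps):
    """RGPZ (Glover et al. 2006), spherical particles:
    d_pore = 2*sqrt(3)*eps**1.5/3 * d_part."""
    return 2.0 * math.sqrt(3.0) * eps**1.5 / 3.0 * d_part

def d_pore_rgpz_mod(d_part, eps):
    """Modified RGPZ proposed here: the RGPZ result divided by (1 - eps)."""
    return d_pore_rgpz(d_part, eps) / (1.0 - eps)

# Illustrative values only (not from Table 1): 0.3 mm sand, 37.5% porosity.
d_part, eps = 3.0e-4, 0.375
for model in (d_pore_white, d_pore_rgpz, d_pore_rgpz_mod):
    print(model.__name__, model(d_part, eps))
```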
Table 1 Comparison of pore diameters based on particle diameters with pore diameters from capillary pressure measurements

Statistical comparison of pore diameter models

It is informative to compare the three geometric pore diameter models in terms of how closely they fit the calculated results from the capillary pressure model. Figure 3 plots the results from Table 1 in terms of the ratio of the pore diameter to the average particle diameter. The data points in this figure are calculated by dividing the average pore diameter for each of the five analyzed tests by the pore diameter calculated from the model and the Laplace equation.

Fig. 3 Comparison of pore diameters from fit of capillary pressure data to pore diameters calculated from mean particle diameters

A statistical comparison of the three geometric models indicates that the White and modified RGPZ models fit the pore diameters calculated from the capillary pressure measurements significantly better than the original RGPZ model. To compare the models, the Bayesian Information Criterion (BIC) (Schwarz 1978) was applied to the prediction of the pore diameter to particle diameter ratio given the porosity and to the prediction of the porosity given the diameter ratio. Equal weights were given to each of these comparisons. The modified RGPZ model provides the best fit; the BIC scores for the original RGPZ model and the White model relative to the modified RGPZ model were 8.9 and 0.9, respectively. According to the criteria proposed by Kass and Raftery (1995), the BIC score for the original RGPZ model provides a strong indication that this is not the correct model. The BIC scores for the modified RGPZ and White models do not differ sufficiently to distinguish which is the superior model.

A model has been developed that uses a characteristic pore pressure to predict residual wetting and nonwetting saturations and capillary imbibition and drainage pressures.
This model has a closed form solution that does not contain any empirical or dimensional constants. The model is based on an energy balance that equates changes in potential surface energy to changes in pressure–volume work. A simple probabilistic distribution expresses capillary forces as a function of the relative saturation and the characteristic pore pressure. The model implicitly assumes that the porous material is homogeneous and isotropic. The model predicts a residual wetting saturation, swr, of 0.236 and a residual nonwetting saturation, snr, of 0.884. These predicted residual saturations are in almost exact agreement with measurements. In addition, the model predicts that there is a critical saturation, scr, of 0.301, below which the imbibition pressure exceeds the drainage pressure. A porous material will not drain to any lower saturation, provided that the porous material remains in contact with liquid and there is no external gas flow. The capillary pressure model successfully correlates limiting imbibition and drainage pressures for selected laboratory tests that used uniformly packed, sieved sand, although for certain tests an adjusted saturation is required to account for fissures. For the data that were correlated, the characteristic pore diameter from the capillary pressure model approximately equals pore diameters calculated from the average particle diameter using two different geometric models. This agreement suggests that, for uniformly packed sands with a relatively narrow particle size distribution, a capillary pressure curve can be estimated from a measured particle size distribution. The applicability of the model to soils and rocks with widely varying pore size distributions has not been demonstrated. JMP is a registered trademark of The SAS Institute of Cary, North Carolina, USA.
Abbreviations

BIC: Bayes information criterion
RGPZ: Revil-Glover-Pezard-Zamora (pore diameter model)
UNSODA: UNsaturated SOil hydraulic DAtabase

References

Brooks RH, Corey AT (1964) Hydraulic properties of porous media. Hydrol Pap 3. Civ Eng Dept, Colorado State University, Fort Collins
Brutsaert W (1967) Some methods of calculating unsaturated permeability. Trans ASAE 10(3):400–404
Chen J, Hopmans JW, Grismer ME (1999) Parameter estimation of two-fluid capillary pressure—saturation and permeability functions. Adv Water Resour 22(5):479–493
Demond AH, Roberts PV (1991) Effect of interfacial forces on two-phase capillary pressure-saturation relationships. Water Resour Res 27(3):423–437
Di Donato G, Blunt MJ (2004) Streamline-based dual-porosity simulation of reactive transport and flow in fractured reservoirs. Water Resour Res 40(4):W04203
Ergun S (1952) Fluid flow through packed columns. Chem Eng Prog 48(2):89–94
Fatt I (1956) The network model of porous media: I. Capillary pressure characteristics. Petrol Trans AIME 207:144–159
Gardner WR (1958) Some steady state solutions of unsaturated moisture flow equations with application to evaporation from water table. Soil Sci 85(4):228–232
Glover PWJ, Zadjali II, Frew KA (2006) Permeability prediction from MICP and NMR data using an electrokinetic approach. Geophysics 71(4):F49–F60
Hassanizadeh SM, Gray WG (1990) Mechanics and thermodynamics of multiphase flow in porous media including interphase boundaries. Adv Water Resour 13(4):169–186
Hassanizadeh SM, Gray WG (1993) Thermodynamic basis of capillary pressure in porous media. Water Resour Res 29(10):3389–3404
Kass RE, Raftery AE (1995) Bayes factors. J Am Stat Assoc 90(430):773–795
Klavetter EA, Peters RR (1986) Estimation of hydrological properties of unsaturated fractured rock mass. U.S. DOE Sandia National Laboratory Report SAND84-2642
Kosugi K (1994) Three-parameter lognormal distribution model for soil water retention. Water Resour Res 30(4):891–901
Kosugi K (1996) Lognormal distribution model for unsaturated soil hydraulic properties. Water Resour Res 32(9):2697–2703
Lide DR (ed) (1994) CRC handbook of chemistry and physics, 75th edn. CRC Press, Boca Raton
Morel-Seytoux HJ, Khanji J (1975) Prediction of imbibition in a horizontal column. Soil Sci Soc Am J 39(4):613–617
Morel-Seytoux HJ, Khanji J, Vachaud G (1973) Prediction errors due to uncertainties in the measurement and extrapolation of the diffusivity function. Civil Engineering Report, Colorado State University, Fort Collins, Colorado 80523, CEP72–73, HJM48
Nemes A, Schaap MG, Leij FJ (1999) The UNSODA unsaturated soil hydraulic database, version 2.0. U. S. Salinity Laboratory, U. S. Department of Agriculture, Agricultural Research Service, Riverside
Nemes A, Schaap MG, Leij FJ, Wosten JHM (2001) Description of the unsaturated soil hydraulic database UNSODA version 2.0. J Hydrol 251(3–4):151–162
Nitao JJ (1988) Numerical modeling of the thermal and hydrological environment around a nuclear waste package using the equivalent continuum approximation: horizontal emplacement. U.S. DOE Lawrence Livermore National Laboratory Report UCID-2144
Parker JC, Lenhard RJ (1987) A model for hysteretic constitutive relations governing multiphase flow: 1. Saturation-pressure relations. Water Resour Res 23(12):2187–2196
Porter ML, Schaap MG, Wildenschild D (2009) Lattice-Boltzmann simulations of the capillary pressure—saturation—interfacial area relationship for porous media. Adv Water Resour 32(11):1632–1640
Poulovassilis A (1970) Hysteresis of pore water in granular porous bodies. Soil Sci 109(1):5–12
Reeves PC, Celia MA (1996) A functional relationship between capillary pressure, saturation, and interfacial area as revealed by a pore-scale network model. Water Resour Res 32(8):2345–2358
Russo D (1988) Determining soil hydraulic properties by parameter estimation: on the selection of a model for the hydraulic properties. Water Resour Res 24(3):453–459
Schwarz GE (1978) Estimating the dimension of a model. Ann Stat 6(2):461–464
Shen R, Jaynes DB (1988) Effect of soil water hysteresis on simulation, infiltration, and distribution. Shuili Xuebao J Hydr Eng (China) 10:11–20
Smiles DE, Vachaud G, Vauclin M (1971) A test of the uniqueness of the soil moisture characteristic during transient, nonhysteretic flow of water in a rigid soil. Soil Sci Soc Am J 35(4):534–539
So Y, Johnston G, Kim SH (2010) Analyzing interval-censored data with SAS® software. In: Proceedings, SAS Global Forum 2010, paper 257-2010, Seattle, Washington, April 11–14
Stauffer F, Dracos T (1986) Experimental and numerical study of water and solute infiltration in layered porous media. J Hydrol 84(1–2):9–34
Vachaud G, Thony J-L (1971) Hysteresis during infiltration and redistribution in a soil column at different initial water contents. Water Resour Res 7(1):111–127
van Genuchten MT (1980) A closed-form equation for predicting the hydraulic conductivity of unsaturated soils. Soil Sci Soc Am J 44(5):892–898
Weisbrod N, McGinnis T, Rockhold ML, Niemet MR, Selker JS (2009) Effective Darcy-scale contact angles in porous media imbibing solutions of various surface tensions. Water Resour Res 45(4):W00D39
White LR (1982) Capillary rise in powders. J Colloid Interf Sci 90(2):536–538
White MD, Oostrom M (1996) STOMP, subsurface transport over multiple phases: theory guide. US DOE Pacific Northwest National Laboratory Report PNNL-11217, UC-2010

The author read and approved the final manuscript.

Authors' information

James E. Laurinat PE is a Principal Engineer at the Savannah River National Laboratory with over thirty years combined experience in computer modeling, actinide chemical processing, including ion exchange and solvent extraction, nuclear reactor thermal hydraulics, and two-phase flow. Dr. Laurinat holds BS, MS, and PhD degrees in chemical engineering. He is author or coauthor of nine journal publications.
The author declares no competing interests. Version 2.0 of the UNSODA database has been made publicly available by the US Department of Agriculture National Agricultural Library in Riverside, California. The website for the database can be accessed by entering the following address into a search engine: https://data.nal.usda.gov/dataset/unsoda (Accessed June 2018). This work was funded by the US Department of Energy Office of Environmental Management under contract number DE-AC09-08SR22470. Savannah River National Laboratory, Savannah River Site, Aiken, SC, 29808, USA James E. Laurinat Correspondence to James E. Laurinat. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Laurinat, J.E. Pore saturation model for capillary imbibition and drainage pressures. Environ Syst Res 7, 18 (2018). https://doi.org/10.1186/s40068-018-0120-2 Accepted: 11 July 2018 Capillary pressure Imbibition
Dual space

Contents
1 Vectors and covectors
2 Operators and naturality
3 Change of basis

Vectors and covectors

What is the relation between $2$, the counting number, and "doubling", the function $f(x)=2\cdot x$? Linear algebra helps one appreciate this seemingly trivial relation. Indeed, the answer is a linear operator $$D : {\bf R} \rightarrow L({\bf R},{\bf R}),$$ from the reals to the vector space of all linear functions. In fact, it's an isomorphism! More generally, suppose $V$ is a vector space. Let $$V^* = \{ \alpha \colon V \rightarrow {\bf R}, \alpha {\rm \hspace{3pt} linear}\}.$$ It's the set of all linear "functionals", also called covectors, on $V$. It is called the dual of $V$. An illustration of a vector in $V={\bf R}^2$ and a covector in $V^*$. Here a vector is just a pair of numbers, while a covector is a correspondence of each unit vector with a number. The linearity is visible. Note. If $V$ is a module over a ring $R$, the dual space is still the set of all linear functionals on $V$: $$V^* = \{ \alpha \colon V \rightarrow R, \alpha {\rm \hspace{3pt} linear}\}.$$ The results below apply equally to finitely generated free modules. In the above example, it is easy to see a way of building a vector from this covector. Indeed, let's pick the vector $v$ such that the direction of $v$ is that of the one that gives the largest value of the covector $w$ (i.e., $2$), and the magnitude of $v$ is that value of $w$. So the result is $v=(2,2)$. Moreover, covector $w$ can be reconstructed from this vector $v$. (Cf. gradient and norm of a linear operator.) This is how the covector above can be visualized: It is similar to an oil spill. Fact 1: $V^*$ is a vector space. Just as any set of linear operators between two given vector spaces, in this case $V$ and ${\bf R}$. We define the operations for $\alpha, \beta \in V^*, r \in {\bf R}$: $$(\alpha + \beta)(v) = \alpha(v) + \beta(v), v \in V,$$ $$(r \alpha)(w) = r\alpha(w), w \in V.$$ Exercise. Prove it.
Start with indicating what $0, -\alpha \in V^*$ are. Refer to theorems of linear algebra, such as the "Subspace Theorem". Below we assume that $V$ is finite dimensional. Fact 2: Every basis of $V$ corresponds to a dual basis of $V^*$, of the same size, built as follows. Given $\{u_1,\ldots,u_n\}$, a basis of $V$. Define a set $\{u_1^*,\ldots,u_n^*\} \subset V^*$ by setting: $$u_i^*(u_j)=\delta _{ij} ,$$ or $$u_i^*(r_1u_1+\ldots+r_nu_n) = r_i, i = 1,\ldots,n.$$ Exercise. Prove that $u_i^* \in V^*$. Example. Dual bases for $V={\bf R}^2$: Theorem. The set $\{u_1^*,\ldots,u_n^*\}$ is linearly independent. Proof. Suppose $$s_1u_1^* + \ldots + s_nu_n^* = 0$$ for some $s_1,\ldots,s_n \in {\bf R}$. This means that $$s_1u_1^*(u)+\ldots+s_nu_n^*(u)=0 \hspace{7pt} (1)$$ for all $u \in V$. We choose $u=u_i, i=1,\ldots,n$ here and use $u_j^*(u_i)=\delta_{ij}$. Then we can rewrite (1) with $u=u_i$ for each $i=1,\ldots,n$ as: $$s_i=0.$$ Therefore $\{u_1^*,\ldots,u_n^*\}$ is linearly independent. $\blacksquare$ Theorem. $\{u_1^*,\ldots,u_n^*\}$ spans $V^*$. Proof. Given $u^* \in V^*$, let $r_i = u^*(u_i) \in {\bf R},i=1,...,n$. Now define $$v^* = r_1u_1^* + \ldots + r_nu_n^*.$$ Consider $$v^*(u_i) = r_1u_1^*(u_i) + \ldots + r_nu_n^*(u_i) = r_i.$$ So $u^*$ and $v^*$ match on the elements of the basis of $V$. Thus $u^*=v^*$. $\blacksquare$ Conclusion 1: $$\dim V^* = \dim V = n.$$ So by the Classification Theorem of Vector Spaces, we have Conclusion 2: $$V^* \simeq V.$$ A two-line version of the proof: $V$ with basis of $n$ elements $\simeq {\bf R}^n$. Then $V^* \simeq {\bf M}(1,n)$. But $\dim {\bf M}(1,n)=n$, etc. Even though a space is isomorphic to its dual, their behavior is not "aligned" (with respect to linear operators), as we show below. In fact, the isomorphism is dependent on the choice of basis.
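Numerically, a dual basis can be computed by matrix inversion (a NumPy sketch added for illustration; the example basis is arbitrary): if the basis vectors $u_1,\ldots,u_n$ form the columns of a matrix $U$, the rows of $U^{-1}$ represent $u_1^*,\ldots,u_n^*$, since the condition $u_i^*(u_j)=\delta_{ij}$ is exactly $U^{-1}U = I$.

```python
import numpy as np

# An arbitrary example basis of R^2, written as the columns of U.
U = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# The rows of U^{-1} represent the dual basis: row i, acting by the dot
# product, extracts the i-th coordinate relative to {u_1, ..., u_n}.
U_star = np.linalg.inv(U)

# u_i^*(u_j) = delta_ij is exactly U^{-1} U = I.
assert np.allclose(U_star @ U, np.eye(2))

# Example: for v = 3 u_1 + 5 u_2, the dual basis recovers the coordinates.
v = 3 * U[:, 0] + 5 * U[:, 1]
assert np.isclose(U_star[0] @ v, 3.0)
assert np.isclose(U_star[1] @ v, 5.0)
```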
Note 1: The relation between a vector space and its dual can be revealed by looking at vectors as column-vectors (as always) and covectors as row-vectors: $$V = \left\{ x=\left[ \begin{array}{} x_1 \\ \vdots \\ x_n \end{array} \right] \right\}, V^* = \{y=[y_1,\ldots,y_n]\}.$$ This way we can multiply the two as matrices: $$xy=[y_1,\ldots,y_n] \left[ \begin{array}{} x_1 \\ \vdots \\ x_n \end{array} \right] = [y_1,\ldots,y_n][x_1,\ldots,x_n]^T =x_1y_1+...+x_ny_n.$$ The result is their dot product which can also be understood as a linear operator $y\in V^*$ acting on $x\in V$. Note 2: $$\dim \mathcal{L}(V,U) = \dim V \cdot \dim U,$$ if the spaces are finite dimensional. Exercise. Find and picture the duals of the vector and the covectors depicted in the first section. Exercise. Find the dual of ${\bf R}^2$ for two different choices of basis. Operators and naturality That's not all. We need to understand what happens to a linear operator $$A:V \rightarrow W$$ under duality. The answer is uncomplicated but also unexpected, the corresponding dual operator goes in the opposite direction: $$A^*:W^* \rightarrow V^*!$$ And not just because this is the way we chose to define it: $$A^*(f)=f \circ A.$$ A dual counterpart of $A$ really can't be defined in any other way. Consider: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} & V & \ra{A} & W \\ & _{g \in V^*} & \searrow & \da{f \in W^*} \\ & & & {\bf R} \end{array} $$ If this is understood a commutative diagram, the relation between $f$ and $g$ is given by the equation above. So, we get $g$ from $f$ by $g=fA$, but not vice versa. Note. The diagram also suggests that the reversal of the arrows has nothing to do with linearity. The issue is "functorial". Theorem. For finite dimensional $V,W$, the matrix of $A^*$ is the transpose of that of $A$: $$A^*=A^T.$$ Proof. Exercise. 
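A quick numerical check of this theorem (a NumPy sketch, not part of the original article): represent a covector $f \in W^*$ by a vector acting through the dot product; then $(f \circ A)(x)$ agrees with $(A^T f)\cdot x$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # A : V = R^2 -> W = R^3
f = rng.standard_normal(3)        # covector f in W*, acting by the dot product
x = rng.standard_normal(2)        # an arbitrary vector in V

# A*(f) = f o A.  In coordinates, the row vector f^T A is the same
# as the column vector A^T f, so the matrix of A* is A transposed.
lhs = f @ (A @ x)      # (f o A)(x)
rhs = (A.T @ f) @ x    # (A^T f) applied to x
assert np.isclose(lhs, rhs)
```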
$\blacksquare$ The composition is preserved but in reverse: Theorem. $(AB)^*=B^*A^*.$ Proof. $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} & V & \ra{A} & W & \ra{B} & U\\ & _{g \in V^*} & \searrow & \da{f \in W^*} & \swarrow & _{h \in U^*} \\ & & & {\bf R} \end{array} $$ Finish (Exercise.) $\blacksquare$ As you see, the dual $A^*$ behaves very much like but is not to be confused with the inverse $A^{-1}$. Of course, the former is much simpler! The isomorphism $D$ between $V$ and $V^*$ is very straight-forward: $$D_V(u_i)=u_i^*,$$ where $\{u_i\}$ is a basis of $V$ and $\{u^*_i\}$ is its dual. However, because of the reversed arrows the isomorphism isn't "natural". Indeed, if $f:V \rightarrow U$ is linear, the diagram below does not commute, in general: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} & V & \ra{f} & U \\ & \da{D_V} & \ne & \da{D_U} \\ & V^* & \la{f^*} & U^*\\ \end{array} $$ Exercise. Why not? However, the isomorphism with the "second dual" $V^{**}=(V^*)^*$ is natural: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} & V & \ra{f} & U \\ & \da{D_V} & & \da{D_U} \\ & V^* & & U^*\\ & \da{D_{V^*}} & & \da{D_{U^*}} \\ & V^{**} & \ra{f^{**}} & U^{**}\\ \end{array} $$ Exercise. Prove that. 
Demonstrate that it is independent of the choice of basis. When the dot product above is replaced with a particular choice of inner product, we have an identical effect. A general term related to this is adjoint operator. A major topological application of the idea is in chains vs cochains.

Change of basis

The reversal of arrows also reveals that a change of basis of $V$ affects the coordinate representations of vectors and covectors differently.

Retrieved from "https://calculus123.com/index.php?title=Dual_space&oldid=990"
How can solar wind be supersonic?

I was reading this Wikipedia article on Heliosphere and was confused a bit:

"this supersonic wind must slow down to meet the gases in the interstellar medium"

How can sonic speeds exist in space? The space is almost vacuum and sound cannot exist there. Also, isn't the heliosphere about electromagnetic wind, not atomic wind?

physics space-weather solar-wind
TildalWave yanpas

Space is indeed almost vacuum, but that's the keyword. Also, can you clarify what you mean by an electromagnetic wind? – Nathan Tuggy Oct 24 '15 at 23:19

This is actually pretty tricky to explain. See Eugene Parker's model of hydrodynamic expansion and the solar wind theory (PDF), or Justin C. Kasper's (CfA) The Solar Wind (PDF) presentation for the 2012 Heliophysics Summer School. The latter is the most intuitive way of explaining it that I've come across. And no, solar wind is not merely EM radiation, you're probably confusing it with radiation pressure. Solar wind is a charged particles flux. – TildalWave Oct 24 '15 at 23:33

@NathanTuggy I mean that Sun emits in space mostly electrons and protons, not heavy particles like Helium, Hydrogen. Am I wrong? – yanpas Oct 24 '15 at 23:39

@yanpas: Hydrogen is only fractionally heavier than a single proton, you know. About .05%, give or take. – Nathan Tuggy Oct 24 '15 at 23:41

The speed of sound in space has multiple meanings because space is not a vacuum (though the plasma in Earth's magnetosphere can be ~6-12 orders of magnitude more tenuous than the best vacuums produced in labs): it is full of ionized particles, neutral and charged dust. In the interplanetary medium or IPM, there are five relevant speeds that can all be considered a type of sound in a way, because each is related to the speed of information transfer in the medium.
The "speeds of sound"

Sound Speed

Since a plasma can act collectively like a fluid, it can have a sound speed in the classic form of $C_{s}^{2} = \partial P/\partial \rho$, where $P$ is the thermal pressure and $\rho$ is the mass density. In a plasma, this takes the slightly altered form of: $$ C_{s}^{2} = \frac{ k_{B} \left( Z_{i} \ \gamma_{e} \ T_{e} + \gamma_{i} \ T_{i} \right) }{ m_{i} + m_{e} } $$ where $k_{B}$ is Boltzmann's constant, $Z_{s}$ is the charge state of species $s$, $\gamma_{s}$ is the adiabatic or polytrope index of species $s$, $m_{s}$ is the mass of species $s$, and $T_{s}$ is the average temperature of species $s$. In a tenuous plasma, like that found in the IPM, it is often assumed that $\gamma_{e}$ = 1 (i.e., isothermal) and $\gamma_{i}$ = 2 or 3, or that $\gamma_{e}$ = 1 and $T_{e} \gg T_{i}$. The above form of the sound speed is known as the ion-acoustic sound speed because it is the phase speed at which linear ion-acoustic waves propagate. Thus, $C_{s}$ is a legitimate type of sound speed in space.

Alfvén Speed

The Alfvén speed is defined as: $$ V_{A} = \frac{ B_{o} }{ \sqrt{ \mu_{o} \ \rho } } $$ where $B_{o}$ is the magnitude of quasi-static, ambient magnetic field, $\mu_{o}$ is the permeability of free space, and $\rho$ is the plasma mass density (which is roughly equivalent to the ion mass density unless it's a pair plasma). This speed is typically associated with transverse Alfvén waves, but the speed is relevant to information transfer in plasmas.

Magnetized Sound Waves

In a magnetized fluid like a plasma, there are fluctuations that are compressive whereby they compress the magnetic field in phase with the density, called magnetosonic or fast mode waves.
The phase speed for a fast mode wave is given by: $$ 2 \ V_{f}^{2} = \left( C_{s}^{2} + V_{A}^{2} \right) + \sqrt{ \left( C_{s}^{2} + V_{A}^{2} \right)^{2} + 4 \ C_{s}^{2} \ V_{A}^{2} \ \sin^{2}{\theta} } $$ where $\theta$ is the angle of propagation with respect to $\mathbf{B}_{o}$.

Thermal Speeds

There are also the thermal speeds one often thinks about in regard to gases. In a plasma, there is a thermal speed for each particle species, e.g., electrons and ions. The one-dimensional rms speed is given by: $$ V_{Ts}^{rms} = \sqrt{\frac{ k_{B} \ T_{s} }{ m_{s} }} $$ where $s$ can be $e$ (electrons) or $i$ (ions). The three-dimensional most probable speed is given by: $$ V_{Ts}^{mps} = \sqrt{\frac{ 2 \ k_{B} \ T_{s} }{ m_{s} }} $$

The reason the solar wind becomes supersonic is actually a bit of a mystery and a large part of the motivation for the Solar Probe Plus mission. One of the original theories was that the change in the ratio of the $B_{o}/n_{o}$ with increasing distance from the solar surface caused an effect similar to that of a de Laval nozzle. There are several other aspects that complicate things (i.e., multiple non-Maxwellian particle distributions), but more recently, Alfvén waves have been proposed as the possible acceleration mechanism. Why subpopulations of particles in a plasma can become supersonic is less of a mystery. There are several mechanisms from wave-particle interactions, Fermi acceleration, different types of shock acceleration, etc. Once supersonic, a gas does not necessarily have any reason to slow down unless it encounters an obstacle, whether a solid body (e.g., asteroid) or a slower flowing fluid. Without some impeding force/resistance, there is no reason that a gas cannot maintain a supersonic speed. Shock waves in a collisional fluid, like Earth's atmosphere, do not last long because they "run into" slower moving fluid, which impedes their propagation.

honeste_vivere
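For a sense of scale, the characteristic speeds above can be evaluated for representative solar-wind conditions near 1 AU (the numbers below are illustrative values assumed for this sketch, not figures from the answer):

```python
import math

# Representative solar-wind conditions near 1 AU (assumed example values):
kB  = 1.380649e-23        # Boltzmann constant, J/K
mu0 = 4.0e-7 * math.pi    # permeability of free space
m_p = 1.6726e-27          # proton mass, kg
m_e = 9.109e-31           # electron mass, kg
n   = 5.0e6               # proton number density, m^-3  (5 cm^-3)
B   = 5.0e-9              # magnetic field, T            (5 nT)
Te = Ti = 1.0e5           # electron and ion temperatures, K

rho  = n * m_p
V_A  = B / math.sqrt(mu0 * rho)                             # Alfven speed
C_s  = math.sqrt(kB * (1 * 1 * Te + 3 * Ti) / (m_p + m_e))  # ion-acoustic (Z=1, gamma_e=1, gamma_i=3)
V_Tp = math.sqrt(kB * Ti / m_p)                             # 1D proton thermal speed

u_sw = 400.0e3            # typical solar-wind bulk speed, m/s

for name, v in [("V_A", V_A), ("C_s", C_s), ("V_Tp", V_Tp)]:
    print(f"{name} ~ {v / 1e3:.0f} km/s")   # each a few tens of km/s

# The bulk flow exceeds every characteristic speed, hence "supersonic".
assert u_sw > max(V_A, C_s, V_Tp)
```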
Session 68 -- Solar Activity Oral presentation, Thursday, January 13, 2:15-3:45, Salons A/B Room (Crystal City Marriott) [68.01] A Multiwavelength Study of Solar Ellerman Bombs Tamara E. W. Payne (U. S. Air Force Phillips Laboratory) Solar Ellerman bombs (also known as moustaches) are small ($\approx 1$ arcsec) bright structures which appear in solar active regions. It was the purpose of this study to determine their fundamental character, i.e. are Ellerman bombs flare-like phenomena or do they represent an in-situ, driven release of energy? In order to answer this question and others, several different aspects of bomb behavior were analyzed. Photometric analysis of their optical emission indicates that bombs are constrained to a 500 km region in the lower chromosphere ranging from $\approx 600 - 1100$ km above the $\tau_{5000} = 1$ level. They have lifetimes of 15 minutes and exhibit time profiles of rapid rise and rapid decay that are not convincingly similar to flares (which generally show rapid rise and slow decay). Simultaneous optical, microwave, and soft x-ray observations detected no coronal or upper chromospheric emission associated with Ellerman bombs, thereby constraining the bomb emission to the lower chromosphere and virtually eliminating the possibility of a triggering mechanism high in the atmosphere. The microwave (2 and $3.6$ cm) and soft x-ray observations, however, did detect faint microwave ``twinklings'' which appeared to be cospatial with the footpoint of a coronal soft x-ray loop but which were not associated with an Ellerman bomb. High spatial resolution H$\alpha - 1\,\AA$ movies show some bombs moving radially out from the outer edge of the penumbra into the surrounding undisturbed granulation pattern. High spatial resolution H$\alpha - 1\,\AA$ images, deconvolved using a quasi-Wiener filter, revealed that elliptical bombs appear to consist of two or more emission structures on scales $\leq .5$ arcsec.
The optical energy output of a typical Ellerman bomb with a lifetime of 840 seconds and an area of $10^{16} \,cm^{2}$ was estimated to be a minimum of $3.2 \times 10^{27} \,ergs$. This is shown to be on the order of the energy in a photospheric magnetic field of 1000 G contained in a volume of 1000 km x 1000 km x 500 km (the volume of a typical Ellerman bomb). This was also shown to be on the order of the upper limit of the energy output of non-thermal gyro-synchrotron-producing electrons located at a height of 2000 km where the $\tau_{2cm} = 1$.
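The order-of-magnitude comparison in the abstract can be reproduced with a short calculation (a sketch in CGS units; the field, volume, and energy values are those quoted above):

```python
import math

B = 1000.0                        # photospheric field strength, G
u = B**2 / (8.0 * math.pi)        # CGS magnetic energy density, erg/cm^3

# Volume 1000 km x 1000 km x 500 km, converted to cm (1 km = 1e5 cm):
V = (1000.0e5) * (1000.0e5) * (500.0e5)    # 5e23 cm^3

E_mag  = u * V          # magnetic energy in the bomb volume, erg
E_bomb = 3.2e27         # quoted minimum optical energy output, erg

# Same order of magnitude, consistent with the abstract's claim.
assert 1.0 <= E_mag / E_bomb <= 10.0
```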
Linear Relationship Definition

Reviewed by Michael J Boyle. Michael Boyle is an experienced financial professional with more than 10 years working with financial planning, derivatives, equities, fixed income, project management, and analytics.

Fact checked by Timothy Li. Timothy Li is a consultant, accountant, and finance manager with an MBA from USC and over 15 years of corporate finance experience. Timothy has helped provide CEOs and CFOs with deep-dive analytics, providing beautiful stories behind the numbers, graphs, and financial models.

What Is a Linear Relationship?

A linear relationship (or linear association) is a statistical term used to describe a straight-line relationship between two variables. Linear relationships can be expressed either in a graphical format where the variable and the constant are connected via a straight line or in a mathematical format where the independent variable is multiplied by the slope coefficient, added by a constant, which determines the dependent variable. A linear relationship may be contrasted with a polynomial or non-linear (curved) relationship.

A linear relationship (or linear association) is a statistical term used to describe a straight-line relationship between two variables. Linear relationships can be expressed either in a graphical format or as a mathematical equation of the form y = mx + b. Linear relationships are fairly common in daily life.

The Linear Equation Is:

Mathematically, a linear relationship is one that satisfies the equation:

$$y = mx + b$$

where $m$ = slope and $b$ = y-intercept.

In this equation, "x" and "y" are two variables which are related by the parameters "m" and "b". Graphically, y = mx + b plots in the x-y plane as a line with slope "m" and y-intercept "b." The y-intercept "b" is simply the value of "y" when x=0.
The slope "m" is calculated from any two individual points $(x_1, y_1)$ and $(x_2, y_2)$ as:

$$m = \frac{y_2 - y_1}{x_2 - x_1}$$

What Does a Linear Relationship Tell You? There are three sets of necessary criteria an equation has to meet in order to qualify as a linear one: an equation expressing a linear relationship can't consist of more than two variables, all of the variables in the equation must be to the first power, and the equation must graph as a straight line. A commonly used linear relationship is a correlation, which describes how closely one variable changes in linear fashion relative to changes in another variable. In econometrics, linear regression is an often-used method of generating linear relationships to explain various phenomena. It is commonly used in extrapolating events from the past to make forecasts for the future. Not all relationships are linear, however. Some data describe relationships that are curved (such as polynomial relationships), while still other data cannot be parameterized. Mathematically similar to a linear relationship is the concept of a linear function. In one variable, a linear function can be written as

$$f(x) = mx + b$$

where $m$ is the slope and $b$ is the y-intercept. This is identical to the given formula for a linear relationship except that the symbol f(x) is used in place of y. This substitution is made to highlight the meaning that x is mapped to f(x), whereas the use of y simply indicates that x and y are two quantities related by the parameters m and b. In the study of linear algebra, the properties of linear functions are extensively studied and made rigorous.
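The slope and intercept calculations above can be sketched in a few lines of Python (the sample points are arbitrary):

```python
# Slope of the line through two points, and the resulting y = mx + b.

def slope(p1, p2):
    """m = (y2 - y1) / (x2 - x1)"""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

def line_through(p1, p2):
    """Return (m, b) so that y = m*x + b passes through both points."""
    m = slope(p1, p2)
    x1, y1 = p1
    b = y1 - m * x1  # b is the value of y when x = 0
    return m, b

m, b = line_through((1, 3), (3, 7))
print(m, b)  # 2.0 1.0
```

Note that `slope` raises a `ZeroDivisionError` for a vertical line (x1 == x2), which has no slope-intercept form.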
Given a scalar $c$ and two vectors $A$ and $B$ from $\mathbb{R}^N$, the most general definition of a linear function states that $f$ is both additive and homogeneous:

$$f(A + B) = f(A) + f(B) \quad\text{and}\quad f(c \times A) = c \times f(A)$$

Examples of Linear Relationships. Linear relationships are pretty common in daily life. Let's take the concept of speed, for instance. The formula we use to calculate speed is as follows: the rate of speed is the distance traveled over time. If someone in a white 2007 Chrysler Town and Country minivan is traveling between Sacramento and Marysville in California, a 41.3-mile stretch on Highway 99, and the journey ends up taking 40 minutes, she will have been traveling just under 62 mph. While there are more than two variables in this equation, it's still a linear equation because one of the variables will always be a constant (distance). A linear relationship can also be found in the equation distance = rate × time. Because distance is a positive number (in most cases), this linear relationship would be expressed in the top right quadrant of a graph with an x- and y-axis. If a bicycle made for two was traveling at a rate of 30 miles per hour for 20 hours, the rider will end up traveling 600 miles. Represented graphically, with the distance on the y-axis and time on the x-axis, a line tracking the distance over those 20 hours would travel straight out from the convergence of the x- and y-axis. In order to convert Celsius to Fahrenheit, or Fahrenheit to Celsius, you would use the equations below.
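The two speed examples can be checked directly with the linear relationship distance = rate × time:

```python
# Checking the two speed examples in the text.

def speed_mph(distance_miles, time_minutes):
    """Rate of speed: distance traveled over time."""
    return distance_miles / (time_minutes / 60)

# Sacramento to Marysville: 41.3 miles in 40 minutes.
print(round(speed_mph(41.3, 40), 2))  # 61.95 -- just under 62 mph

# Tandem bicycle: 30 mph for 20 hours.
distance = 30 * 20
print(distance)  # 600 miles
```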
These equations express a linear relationship on a graph:

$$\degree C = \frac{5}{9}(\degree F - 32)$$

$$\degree F = \frac{9}{5}\degree C + 32$$

Assume that the independent variable is the size of a house (as measured by square footage), which determines the market price of a home (the dependent variable) when it is multiplied by the slope coefficient of 207.65 and then added to the constant term $10,500. If a home's square footage is 1,250, then the market value of the home is (1,250 × 207.65) + $10,500 = $270,062.50. Graphically, and mathematically, it appears as follows: Image by Julie Bang © Investopedia 2019. In this example, as the size of the house increases, the market value of the house increases in a linear fashion. Some linear relationships between two objects can be called a "proportional relationship." This relationship appears as

$$Y = k \times X$$

where $k$ is a constant and $Y$ and $X$ are proportional quantities. When analyzing behavioral data, there is rarely a perfect linear relationship between variables. However, trend lines can be found in data that form a rough version of a linear relationship. For example, you could look at the daily sales of ice cream and the daily high temperature as the two variables at play in a graph and find a crude linear relationship between the two. Google Maps. "Sacramento, California to Marysville, California." Accessed Aug. 10, 2020.
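Both the temperature conversions and the illustrative house-price model are single linear equations, so they translate directly to code:

```python
# Temperature conversion and the illustrative house-price model
# price = 207.65 * sqft + 10,500 from the text.

def c_to_f(c):
    return 9 / 5 * c + 32

def f_to_c(f):
    return 5 / 9 * (f - 32)

def market_price(sqft, m=207.65, b=10_500):
    """Linear price model: slope m dollars per sq ft, intercept b."""
    return m * sqft + b

print(c_to_f(100))                     # 212.0
print(f_to_c(32))                      # 0.0
print(round(market_price(1_250), 2))   # 270062.5
```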
why complex plane closed Author Topic: why complex plane closed (Read 514 times)

caowenqi: Why is the complex plane both open and closed? Why is it closed? By definition, a set is called closed if it contains its boundary, and I don't see how the complex plane contains its boundary.

Kuba Wernerowski, Re: why complex plane closed: I used to be really confused by this until I really looked at the definitions of closed and open sets. By definition, a set $S$ is closed if $S^c$ is open, and vice versa. The empty set $\emptyset$ trivially (vacuously) has nothing but interior points, and also trivially includes its boundary. This took me a while to internalize, but I think it's easiest to just ¯\_(ツ)_/¯ and accept it. Since $\emptyset$ is open, $\emptyset^c = \mathbb{C}$ is closed. And since $\emptyset$ is closed, $\emptyset^c = \mathbb{C}$ is open. Therefore, $\mathbb{C}$ is both open and closed.

Lubna Burki: You can also use the different properties to argue that both the empty set and the complex plane are open, and by complementarity both are closed. Referring specifically to those listed in JB's lecture as attached: (1) says that a set S is open iff the intersection between S and its boundary is the empty set. For the empty set itself, the intersection between the empty set and its boundary (also the empty set) is the empty set, so the empty set is open. By (2), the complex plane is a neighborhood of every element of the complex plane by definition, so (2) is satisfied. By (4), since the complement of the empty set is the entire plane, the entire complex plane must also be closed. Argue vice versa to conclude that both the empty set and the complex plane are open and closed at the same time.

Quote from: caowenqi on October 01, 2020, 05:26:28 PM — And what is the boundary of $\mathbb{C}$?
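To the quoted question: the boundary of $\mathbb{C}$ is empty, which follows directly from the definition of the boundary as closure minus interior:

```latex
\partial S \;=\; \overline{S} \setminus \operatorname{int}(S),
\qquad
\overline{\mathbb{C}} = \mathbb{C},
\quad
\operatorname{int}(\mathbb{C}) = \mathbb{C}
\;\;\Longrightarrow\;\;
\partial \mathbb{C} \;=\; \mathbb{C} \setminus \mathbb{C} \;=\; \emptyset \subseteq \mathbb{C}.
```

Since $\emptyset \subseteq \mathbb{C}$ holds trivially, $\mathbb{C}$ does contain its (empty) boundary and is closed by the definition in the original question.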
Prenatal prediction and typing of placental invasion using MRI deep and radiomic features. Rongrong Xuan, Tao Li, Yutao Wang, Jian Xu & Wei Jin (ORCID: orcid.org/0000-0002-6844-4324). To predict placental invasion (PI), determine the subtype according to the degree of implantation, and help physicians develop appropriate therapeutic measures, a prenatal prediction and typing method for placental invasion using MRI deep and radiomic features was proposed. The placental tissue of the abdominal magnetic resonance (MR) image was segmented to form the regions of interest (ROI) using U-net. The radiomic features were subsequently extracted from the ROI. Simultaneously, a deep dynamic convolution neural network (DDCNN) with a codec structure was established, which was trained by an autoencoder model to extract the deep features from the ROI. Finally, combining the radiomic features and deep features, a classifier based on the multi-layer perceptron model was designed. The classifier was trained to predict prenatal placental invasion as well as determine the invasion subtype. The experimental results show that the average accuracy, sensitivity, and specificity of the proposed method are 0.877, 0.857, and 0.954, respectively, and the area under the ROC curve (AUC) is 0.904, which outperforms traditional radiomics-based auxiliary diagnostic methods. This work not only labeled the placental tissue of MR images of pregnant women automatically but also realized objective evaluation of placental invasion, thus providing a new approach for the prenatal diagnosis of placental invasion. Placental invasion (PI) is a phenomenon in which placental villi directly invade the myometrium due to abnormal hyperplasia of the decidua [1]. According to the degree of implantation, it can be divided into three types: placenta accreta (PA), placenta increta (PC), and placenta percreta (PP) [2].
It is called PA if the placental villi are directly attached to the myometrium and manual placental dissection is required during delivery; when the placental villi penetrate deep into the uterine myometrium, it is called PC. In the most severe case, if the placental villi reach the serosal layer, or even penetrate it to reach the bladder or rectum, it is called PP. PI may cause different degrees of damage to pregnant women according to its severity: in mild cases, the placenta is difficult to peel away during delivery; in severe cases, it leads to postpartum hemorrhage, amniotic fluid embolism, and disseminated intravascular coagulation (DIC), and may seriously endanger the life of the pregnant woman. There are many high-risk factors for PI, including a history of cesarean section, placenta previa, multiple abortions and curettage, hysteromyomectomy and other uterine-related operations, and advanced maternal age [3, 4]. Among these factors, a history of cesarean section is the main risk factor for placental implantation. In recent years, with the increase in cesarean sections and abortion operations, the incidence of PI has been rising year by year [5]. Because the clinical symptoms of placental invasion are not obvious and lack specificity before delivery, it is very difficult to diagnose from clinical manifestations alone. At present, color Doppler ultrasound (US) and magnetic resonance imaging (MRI) are commonly used in the clinical diagnosis and classification of prenatal placenta implantation. Among them, ultrasound examination has many advantages, such as low cost, wide availability, and harmlessness to mother and child; it is the preferred imaging method for the diagnosis of placental implantation.
However, the detection rate of PI by ultrasound is reduced, and detection may even become difficult, when the placenta is located at the fundus or posterior wall of the uterus, or when there are interfering factors such as intestinal gas. Moreover, ultrasound examination is of limited value in assessing the degree of placental invasion [6]. MRI examination of the placenta is not affected by maternal body size, intestinal gas, or placental position, and has a large field of view and high soft-tissue resolution. It can be an important complementary imaging method when ultrasound diagnosis of placental implantation is uncertain or limited, especially for evaluating the degree of placental implantation and its infiltration into the organs around the uterus [6, 7]. At present, the diagnosis of placental invasion with MRI mainly depends on the visual interpretation of clinicians. This method not only relies on the experience of clinicians but is also easily influenced by various subjective and objective factors, and its efficiency is low. In computer-aided diagnosis of PI based on radiomics, professional radiologists must label the ROI manually, high-throughput features are then extracted from the ROI, and PI is finally identified by traditional machine learning. Sun et al. [8] analyzed 9 pregnant women with pathologically confirmed placental invasion and 56 patients with simple placenta previa using a radiomics approach. They initially extracted texture features from the patients' original MRI images and Laplacian-of-Gaussian (LoG) filtered MRI images. Then, PI was predicted from the extracted texture features using an automated machine learning algorithm. Finally, the intra-placental texture features were shown to be highly effective in predicting placental invasion after 24 weeks of gestation. Similarly, Romeo et al. [9] explored whether MRI texture features could help to assess the presence of PI in patients with placenta previa.
They first manually located the ROI on sagittal or coronal T2-weighted images. Then, texture features in the ROI region were extracted by radiomics. Finally, machine learning models were trained and tested with the extracted features. Among all machine learning algorithms, the k-nearest neighbor algorithm had the highest accuracy, 98.1%. The experimental results suggest that machine learning analysis using MRI-derived texture features is a feasible tool for identifying placental tissue abnormalities in patients with placenta previa. Although radiomics methods are widely used, they also have shortcomings. It is difficult for such methods to obtain high-quality annotation data, and the extracted features lack the ability to express higher-order semantic information, resulting in high rates of missed diagnosis and misdiagnosis when faced with complex cases [10]. Recently, deep learning methods have been widely used in medical image analysis, such as segmentation and computer-aided diagnosis of disease [11,12,13,14,15,16,17]. In particular, related technology represented by U-net has achieved excellent performance in medical segmentation [18]. The U-net consists of a contracting path and an expansive path. In the contracting path, image features of different levels are extracted by repeated convolution and pooling. In the expansive path, through up-sampling and convolution operations symmetrical to the contracting path, and by combining the shallow positioning information and the deep classification information of the objects using skip connections, the U-net can fully acquire multi-level features of the image while simultaneously achieving accurate object positioning. Experiments show that U-net-based methods have been successful in many medical image segmentation tasks [19,20,21,22].
On the other hand, owing to the advantages of local receptive fields, weight sharing, and temporal and spatial sub-sampling, a deep convolution neural network (CNN) can achieve invariance to displacement, scale, and deformation to some extent, and can mine the semantic information contained in images. Therefore, based on the deep features extracted by CNNs, deep learning has been widely used in medical image-aided diagnosis. At present, in medical image processing, deep learning and radiomics are also showing a trend of mutual integration and collaborative development. A common kind of method is feature-level fusion, which combines the deep features and radiomic features into a new feature vector for subsequent disease classification and prediction [15, 23]. This scheme has been applied to tasks such as detection and classification of lung nodules [24, 25], image attribute analysis of tumors [26], and prediction of cancer survival rates [27]. In July 2019, Zhu et al. [28] used the MRI of 181 patients with meningioma to establish a deep radiomics model to classify meningioma in a non-invasive manner. The results show that the combined deep learning and radiomics model has outstanding quantification ability in non-invasive, individualized meningioma grade prediction. Although models combining deep learning and radiomics are playing an increasingly important role in medical imaging diagnosis, there is still a lack of relevant research in the auxiliary diagnosis of placental invasion. Based on the above analysis, deep learning and radiomics were combined in this paper to carry out MRI-based prenatal diagnosis of placental invasion. Firstly, we train the U-net to segment the placental region of the magnetic resonance (MR) image and extract the radiomic features.
Then, we construct a deep dynamic convolution neural network via autoencoder learning to extract the deep features of the placental tissue that characterize the status of placental invasion. Finally, a multi-layer perceptron network is constructed and trained to realize the prenatal diagnosis of placental invasion and determine the subtypes of placental invasion. The main contributions of this work are as follows: A new method for the detection of placental invasion based on MRI is proposed. The placental tissue of pregnant women was marked automatically by deep segmentation instead of manually, which provides the regions of interest (ROI) for the subsequent diagnosis of placental invasion. The influence of different degrees of placental boundary dilation on the prediction accuracy of placental invasion was quantitatively analyzed. A deep dynamic convolution neural network (DDCNN) with a codec structure was established to better characterize the semantic information of different placental implantation types. Prenatal diagnosis and accurate typing of placental invasion were realized by fusing the deep and radiomic features. This paper is organized as follows. In this section, we introduced the definition of placental invasion and related work, and briefly described the characteristics of the proposed methods. In the next four sections, we first present the experimental configuration and evaluation metrics in the "Results" section and analyze the experimental results. It is followed by a "Conclusion" section where we conclude and discuss future work. After that, we discuss the methods of this paper and summarize their advantages and disadvantages in the "Discussion" section. Finally, we provide a detailed description of the dataset and methods in the "Methods" section. All the experiments were conducted on an AMD Ryzen 7 3800X @ 3.89 GHz with 32 GB RAM.
Unless otherwise specified, for all deep learning models, we initialized the weights with random values, set the batch size to 8, set the learning rate to 0.001, and trained for 200 epochs on an 11 GB NVIDIA RTX 2080Ti GPU, with SGD as the optimizer. The automatic segmentation of placental tissue in MR images is realized by the trained U-net. The performance of the segmentation method was evaluated quantitatively using four widely used evaluation metrics, i.e., segmentation Accuracy (ACC), Precision (PRE), Recall (REC), and F1 score (F1). These evaluation metrics were calculated as follows: $$\begin{aligned} {\text{ACC}}&= \frac{{\text{TP}}+{\text{TN}}}{{\text{TP}}+{\text{TN}}+{\text{FP}}+{\text{FN}}} \end{aligned}$$ $$\begin{aligned} {\text{PRE}}&= \frac{{\text{TP}}}{{\text{TP}}+{\text{FP}}} \end{aligned}$$ $$\begin{aligned} {\text{REC}}&= \frac{{\text{TP}}}{{\text{TP}}+{\text{FN}}} \end{aligned}$$ $$\begin{aligned} F_1&= \frac{2*{\text{PRE}}*{\text{REC}}}{{\text{PRE}}+{\text{REC}}} \end{aligned}$$ where TP (True Positive) was the number of placenta pixels that were correctly identified as placenta and FP (False Positive) was the number of background pixels that were incorrectly identified as placenta. FN (False Negative) was the number of placenta pixels that were incorrectly identified as background and TN (True Negative) was the number of background pixels that were correctly identified as background.
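These pixel-level metrics, and the macro-averaged per-class variants used later for invasion typing, reduce to a few lines of Python; the confusion counts below are illustrative, not the paper's results:

```python
# Segmentation metrics (ACC, PRE, REC, F1) from pixel-level confusion counts,
# plus the macro average over the four invasion classes.

def seg_metrics(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * pre * rec / (pre + rec)
    return acc, pre, rec, f1

def macro_average(per_class):
    """per_class: list of (tp, tn, fp, fn) tuples, one per invasion type."""
    n = len(per_class)
    aacc = sum((tp + tn) / (tp + tn + fp + fn) for tp, tn, fp, fn in per_class) / n
    asen = sum(tp / (tp + fn) for tp, tn, fp, fn in per_class) / n
    aspe = sum(tn / (fp + tn) for tp, tn, fp, fn in per_class) / n
    return aacc, asen, aspe

acc, pre, rec, f1 = seg_metrics(tp=94, tn=94, fp=4, fn=8)
print(round(acc, 3), round(pre, 3))  # 0.94 0.959
```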
To evaluate the performance of various methods for predicting and typing placental invasion, three evaluation metrics, i.e., Average Accuracy (AACC), Average Sensitivity (ASEN), and Average Specificity (ASPE), were calculated as follows: $$\begin{aligned} {\text{AACC}}&= \frac{1}{4} * \sum _{i=0}^3{\frac{{\text{TP}}_i+{\text{TN}}_i}{{\text{TP}}_i+{\text{TN}}_i+{\text{FP}}_i+{\text{FN}}_i }} \end{aligned}$$ $$\begin{aligned} {\text{ASEN}}&= \frac{1}{4} * \sum _{i=0}^3{\frac{{\text{TP}}_i}{{\text{TP}}_i+{\text{FN}}_i}} \end{aligned}$$ $$\begin{aligned} {\text{ASPE}}&= \frac{1}{4} * \sum _{i=0}^3{\frac{{\text{TN}}_i}{{\text{FP}}_i+{\text{TN}}_i}} \end{aligned}$$ where i = 0, \(\ldots \), 3 indicates the type of no placental invasion, placenta accreta, placenta increta, and placenta percreta, respectively. \( {\text{TP}}_i \) is the number of patients who belong to the ith type of placental invasion and were classified exactly as the ith type by the classifier. \( {\text{FP}}_i \) is the number of patients who do not belong to the ith type of placental invasion but were classified as the ith type by the classifier. The definitions of \( {\text{TN}}_i \) and \( {\text{FN}}_i \) follow this pattern. Automatic segmentation of placenta. We selected 490 T2-sequence MR images with the placenta marked by the radiologists as training samples, including 165 transverse, 182 sagittal, and 143 coronal images, to train U-net. The trained U-net was then used to segment the placenta automatically on abdominal T2WI MR images. The segmentation results of placental tissue for some test images are shown in Fig. 1, where Fig. 1a–c show the segmentation results of the transverse, sagittal, and coronal planes, respectively. Comparison between U-net automatic segmentation and expert segmentation: in the figure, the red border is the boundary of placental tissue delineated by radiologists, and the green border is the boundary of placental tissue automatically segmented by the trained U-net.
It can be seen that the U-net segmentation results of the transverse and sagittal planes are consistent with the radiologists' segmentation results, while for the coronal plane, the inconsistency between the U-net and the radiologists is more obvious than for the transverse and sagittal planes. In general, the segmentation errors of U-net are all within a controllable range and have little impact on subsequent tasks. To evaluate the performance of the ROI extraction network, we calculated the quantitative indicators of the segmentation results of the test images and give a statistical box plot, as shown in Fig. 2 (box plot of accuracy, precision, recall, and F1 score of the placenta segmentation model). The digits in Fig. 2 represent the median of the corresponding metrics. It can be seen that the accuracy and recall of segmentation are both 0.940, the precision is 0.954, and the F1 score is 0.945. Besides, we calculated the inference speed of U-net, whose average single-image computation time is 29.640 ms. The above results show that the model has high segmentation accuracy and can, to a certain extent, replace radiologists in segmenting the placental tissue region of the MR image. The influence of different pixel extensions of the placental region on the accuracy of predicting placental invasion: normally, the placenta and uterine myometrium are separated by the basal decidua. If the basal decidua is lost for various reasons, the placental villi will adhere directly to the uterine myometrium in the absence of basal decidua, which leads to placental invasion, and the depth to which the placental villi invade the myometrium determines the severity of placental implantation. Therefore, we extend the placental region by different numbers of pixels to form ROIs for the subsequent diagnosis of placental invasion.
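The border extension can be thought of as repeated binary dilation of the placental mask. A toy sketch on a small grid (pure Python, 4-neighborhood; a stand-in for the paper's actual morphological implementation):

```python
# Extend a binary mask outward by n pixels via iterative 4-neighborhood
# dilation -- a toy illustration of the ROI border-extension step.

def dilate(mask, n=1):
    h, w = len(mask), len(mask[0])
    for _ in range(n):
        grown = [row[:] for row in mask]
        for i in range(h):
            for j in range(w):
                if mask[i][j]:
                    # Switch on the 4 direct neighbours of each foreground pixel.
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            grown[ni][nj] = 1
        mask = grown
    return mask

seed = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
roi = dilate(seed, n=1)
print(sum(map(sum, roi)))  # 5 pixels: the centre plus its 4 neighbours
```

In practice, a library routine such as `scipy.ndimage.binary_dilation` with an `iterations` argument would replace this loop.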
To optimize the extension size, we extended the placental region of the T2WI MRI image into the surrounding area by 10, 20, 40, and 60 pixels to form ROIs, and extracted the radiomic features and deep features from the ROIs. The classification model described in the prenatal prediction and typing of placental invasion was used to predict the type of placental invasion, and the performance of the different boundary expansions was evaluated in terms of AACC, ASEN, and ASPE. The experimental results are shown in Table 1 (performance analysis of the model with different extended pixels). It can be seen from the table that the model trained on samples with the 40-pixel extension of the placental region performed better on all quantitative indicators than the other extension models; the AACC, ASEN, and ASPE are 0.877, 0.857, and 0.954, respectively. The results show that extending the placental tissue region to form ROIs is beneficial for detecting placental invasion, and that a border extension of 40 pixels is comparatively good. The following experiments on the computer-assisted diagnosis of placental invasion are all based on the ROIs formed after the 40-pixel extension of the placental region. The diagnostic ability of the proposed approach: to objectively evaluate the clinical application potential of the proposed method in the assisted diagnosis of placental invasion, we tested the model with the test set. The confusion matrix of the diagnosis results for the test set is shown below (Table 2: confusion matrix of the diagnosis results of the proposed method for typing of placental invasion). It can be seen from Table 2 that the model has a strong ability to distinguish between non-placental invasion (normal placenta) and placenta percreta, which is consistent with the signs of placental invasion in MR images. The image features of non-placental invasion and placenta percreta are more obvious than those of placenta accreta and increta.
Because the differences in clinical manifestations between placenta accreta and placenta increta are ambiguous, the model has room for improvement in its ability to distinguish placenta increta from placenta accreta. On the other hand, a large number of studies have also confirmed that there is little difference between placenta accreta and increta, and it is difficult to distinguish them completely by imaging alone. To address this difficulty, other clinical information about the patients can be considered in the next step to further improve the discriminatory ability of the model. Comparisons with other approaches: various approaches can be used for the diagnosis of placental invasion, each with its respective accuracy level. Here, the proposed approach was compared with traditional methods based on machine learning (ML) and deep learning (DL). The comparison with traditional machine learning methods includes random forest (RF), decision tree (DT), and logistic regression (LR), which use only radiomic features for placental invasion identification and typing. For the random forest, univariate feature selection was used to screen the radiomic features, and the 20 highest-scoring features were selected to train the random forest. All machine learning models are built on Scikit-learn (version 0.23, download link: https://scikit-learn.org/stable/index.html). The approaches based on deep learning combine deep features with radiomic features to type placental invasion. Specifically, the proposed approach uses the dynamic convolutional neural network to extract deep features, while the comparison method uses a standard convolutional neural network (CNN) to extract deep features. We also compare with current deep learning methods, including ResNet50 (RN) [29] and SENet (SEN) [30]. The performance of the different approaches was evaluated in terms of AUC (area under the ROC curve), AACC, ASEN, and ASPE. The results are shown in Table 3.
Table 3 Comparison of different approaches for typing placental invasion. As can be seen from the table, compared with other traditional machine learning approaches, the random forest has better performance, with an average accuracy of 0.767, an average sensitivity of 0.709, and an average specificity of 0.919. The average accuracy, sensitivity, and specificity of the traditional deep learning approach were 0.840, 0.805, and 0.937, respectively, improving on the traditional machine learning approaches. The average accuracy, sensitivity, and specificity of the proposed approach reached 0.877, 0.857, and 0.954, respectively, the best on all evaluation metrics. In terms of AUC, the proposed approach achieves 0.904, which is superior to all other approaches, followed by ResNet50 and SENet with 0.879 and 0.874, respectively. The AUC of the standard convolution neural network is 0.870, while the approaches based on traditional machine learning models are comparatively poor. This indicates that it is beneficial to introduce deep features into the computer-assisted diagnosis of placental invasion, and that the dynamic convolutional neural network proposed in this paper can more effectively mine the high-level semantic information contained in MR images, so as to improve the ability to distinguish pathological subtypes of placental invasion. To evaluate the computational efficiency of the different methods, the total inference time was calculated for the 163 MRI images in the test set. Then the mean time for placental invasion staging on a single MRI image was calculated for each method. The average computation times of the various methods are listed in the last column of Table 3. It can be seen that, overall, the inference time of the machine learning methods is faster than that of the deep learning methods. For the machine learning methods, the fastest is LR, with a single-sample inference time of 0.003 ms, and the slowest is RF, with 0.037 ms.
For the deep learning methods, ResNet50 took the longest time to process a single MRI image, 17.845 ms, followed by this paper's method and CNN, at 16.140 and 13.306 ms, respectively. Note that, except for ResNet and SENet, the methods mentioned above rely on the results of U-net segmentation, so the inference time of U-net should also be taken into account in practical use. Besides, the machine learning methods do not need a GPU for computing, while the deep learning methods must use GPU acceleration to obtain acceptable speed. Overall, the method proposed in this paper offers higher performance and acceptable efficiency for placental invasion detection compared to competing methods. Placental invasion is a common emergency in obstetrics, mainly caused by traumatic endometrial defects and primary decidual hypoplasia. Patients may suffer severe postpartum hemorrhage, postpartum placenta retention, uterine perforation, and secondary infection, which seriously endanger the lives of pregnant women and fetuses. According to the severity of placental villi invading the myometrium, placental invasion can be divided into placenta accreta, placenta increta, and placenta percreta. Accurate prediction of the degree of placental invasion helps to provide more effective treatment for patients. At present, MRI has been widely used in the diagnosis and subtyping of placental invasion (PI), and the effectiveness of MRI in the diagnosis of PI has been verified by a large body of research [7, 31]. Although some researchers have carried out computer-aided diagnosis of placental invasion, most of the existing work requires radiologists to manually segment the placental region in advance, which not only relies on expert experience but is also inefficient.
In this paper, a U-net model is trained to automatically segment placental tissue, providing a scalable medical image segmentation method with high segmentation accuracy and a reliable ROI for the detection and typing of placental invasion. To the best of our knowledge, most current studies on placental invasion are limited to the use of radiomic features [8, 9, 32,33,34]. Although radiomic features are readily interpretable, their low-level nature makes it difficult to mine the deep health semantic information of images. In this paper, deep learning is introduced to extract the deep features of the ROI, which are combined with radiomic features to improve the accuracy of placental invasion prediction. On the other hand, in general, the deeper the placental invasion, the more severely the placental villi invade the myometrium; that is, the prediction of placenta accreta depends more on the characteristics of the placental boundary area. In this paper, we extend the placental region by different numbers of pixels to form ROIs, and the optimal extension size is determined experimentally. The results show that the proposed method greatly improves prediction performance. Although the model in this paper has advantages in the staging of placental invasion, it still has shortcomings. Firstly, since it integrates multiple deep models, it requires more hardware resources and training time than machine learning methods. Besides, the performance of the U-net segmentation model relies on a large number of segmentation labels, so radiologists are still required to annotate the ROI regions when training the segmentation model.
In summary, future work will focus on optimizing model efficiency, striking a better trade-off between efficiency and accuracy, and improving the generalization ability of the model so that the method can be extended to other medical image processing tasks. With the increasing number of cesarean sections and other intrauterine operations, placental invasion has become a common and frequently occurring condition in obstetrics. Different types of placental invasion cause different degrees of injury to pregnant women, and placental invasion often causes no clinical symptoms, or only nonspecific ones, before delivery, making it difficult to diagnose from clinical manifestations. At present, the diagnosis of placental invasion usually requires professional radiologists to mark the ROI first, which is hard and tedious work, and the segmentation standard is difficult to unify. To solve these problems, this paper adopts deep learning to automatically segment the placental region of the MR image to form the ROI; placental invasion is then diagnosed and typed according to the degree of implantation by combining the radiomic and deep features of the ROI. The results show that the proposed approach has the potential to predict different degrees of placental invasion and can serve as an auxiliary tool for the clinical diagnosis of placental invasion. To fuse the deep and radiomic features of MRI images and establish an automatic prenatal prediction and typing model for placental invasion, this research mainly comprises data collection, ROI extraction, deep and radiomics feature extraction, and classification network training. The process flow diagram is shown in Fig. 3. The process flow diagram of this study In the figure, the U-net is first trained using the ROI data marked by the radiologists, giving it the ability to segment placental tissue from the original MRI image.
Then, the trained U-net is used to automatically extract placental tissue, and ROI expansion is performed to determine a relatively better ROI from which the deep and radiomics features can be extracted. Finally, a multilayer perceptron network combining radiomics and deep features is established to realize the prenatal diagnosis of placental invasion. Collecting the necessary MR images and related clinical materials is fundamental to the prenatal evaluation of placental invasion. The MR images and clinical materials were collected from the Affiliated Hospital of Medical College of Ningbo University and Ningbo Women's and Children's Hospital, spanning January 2017 to November 2020. All included cases were suspected of placental invasion by ultrasonography or clinical examination. Meanwhile, the surgical (delivery) and postoperative pathological data of the relevant patients were obtained. All MRI examinations were performed by radiologists with more than 5 years of work experience using 1.5 Tesla units with 8- or 16-channel array sensitivity-encoded abdominal coils. The imaging equipment of the Affiliated Hospital of Medical College of Ningbo University is a GE Signa TwinSpeed 1.5T superconducting dual-gradient magnetic resonance scanner with an 8-channel body phased-array coil. The imaging equipment of Ningbo Women's and Children's Hospital is a Philips Achieva Noval Dual 1.5T superconducting dual-gradient magnetic resonance scanner with a 16-channel body phased-array coil. Before the MR examination, patients were asked to moderately fill the bladder and to practice respiratory training. During scanning, patients lay supine, head first. All sequences were scanned in three directions: transverse, sagittal, and coronal. The scanned images are stored in the hospital's Picture Archiving and Communication System (PACS) in the Digital Imaging and Communications in Medicine (DICOM) format.
The inclusion criteria were as follows: (1) patients who underwent MRI examination after 30 weeks of gestation with a T2-weighted image (T2WI) sequence; (2) patients with definite placental invasion or pathological records after cesarean section; (3) patients with good image quality meeting the diagnostic requirements. The exclusion criteria were as follows: (1) patients without T2WI MRI data; (2) patients without clinical or surgical pathology confirmation; (3) patients with severe image artifacts due to fetal movement or poor cooperation of the pregnant woman. According to these criteria, we collected data from 352 patients at the Affiliated Hospital of Medical College of Ningbo University and Ningbo Women's and Children's Hospital: 147 cases without placental invasion and 205 cases with placental invasion. Among the 205 cases of placental invasion, 66 were placenta accreta, 117 placenta increta, and 22 placenta percreta. The data were divided into a training set of 189 cases and a test set of 163 cases. The training set contained 84 cases without placental invasion, 34 cases of placenta accreta, 60 cases of placenta increta, and 11 cases of placenta percreta; the test set contained 63 cases without placental invasion, 32 cases of placenta accreta, 57 cases of placenta increta, and 11 cases of placenta percreta. The main signs of placental invasion on MR imaging are as follows: an uneven placental signal on T2WI (low, slightly high, or mixed high signal shadows within the placenta); local irregular thinning or disappearance of the moderately or slightly hyperintense myometrium on T2WI; localized abnormal protrusion of the placenta and/or uterus on T2WI; and low-signal strip shadows within the placenta on T2WI [32]. T2WI is therefore the main reference imaging sequence for the clinical diagnosis of placenta accreta.
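The split reported above is internally consistent with the overall cohort; a quick arithmetic check (counts taken directly from the text):

```python
# Per-class case counts from the text: training set and test set
# (no invasion, placenta accreta, placenta increta, placenta percreta).
train = {"none": 84, "accreta": 34, "increta": 60, "percreta": 11}
test = {"none": 63, "accreta": 32, "increta": 57, "percreta": 11}

n_train = sum(train.values())                      # 189 cases
n_test = sum(test.values())                        # 163 cases
totals = {k: train[k] + test[k] for k in train}    # per-class cohort totals
```

The per-class totals recover the stated cohort composition (147 without invasion; 66 accreta, 117 increta, 22 percreta).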
In this study, transverse, coronal, and sagittal T2 sequences were selected to study the auxiliary diagnosis of placental invasion. ROI extraction Extraction of the ROI is the basis of computer-aided diagnosis of placental invasion. Based on U-net, we established a placental tissue segmentation model to extract the ROI automatically. Firstly, a number of MR images were selected, and two radiologists with more than 5 years of working experience annotated the placental tissue region and outlined the placental boundary. The annotation software was ITK-SNAP (version 3.6.0, download website: http://www.itksnap.org/). To ensure the accuracy of the annotated placental region, the two radiologists annotated each image separately, and the intersection of the labeled areas was taken. If the annotated regions of the two radiologists diverged significantly, another radiologist with more than 10 years of working experience was invited to evaluate the labeling results, and the final result was given after negotiation among them. Figure 4 shows the placental tissue boundary on T2WI labeled by the radiologists; figures (a), (b), and (c) show the transverse, sagittal, and coronal planes, respectively. Examples of the T2WI MRI images and labels used in the present study In this study, 490 T2WI images were selected, and their annotated placental regions served as the ground truth for training U-net. U-net has recently been applied successfully in image segmentation, especially medical image segmentation. By end-to-end training from very few images, U-net can obtain accurate target boundary locations. The U-net consists of two paths: down-sampling and up-sampling. The down-sampling path encodes image semantic information through level-by-level convolution and pooling, while the up-sampling path decodes the spatial and multi-scale information by step-by-step de-convolution to acquire multi-level features of the image and simultaneously achieve target segmentation.
To make up for the loss of spatial and boundary information in the encoding stage, the corresponding feature maps of the encoder and decoder are fused by concatenation through skip connections. By fusing low-level spatial information with high-level semantic information, the decoder of the U-net obtains more high-resolution information during up-sampling, recovers the details of the original image more faithfully, and thus improves segmentation accuracy. Although U-net can segment the placental area accurately, for subsequent placental invasion typing the placental tissue alone cannot fully characterize the relationship between the placenta and neighboring tissues and organs. This is because placental invasion is related not only to the characteristics within the placental region but also to the characteristics of the boundary between the placenta and the uterine myometrium [33]. In addition, according to relevant reports, placental tissue infiltrating the bladder and other tissues and organs adjacent to the placenta is a specific sign for the diagnosis of penetrating placental invasion [35]. To evaluate the discriminative power of peri-placental pixels for the placental invasion property, the placental region segmented by U-Net was extended by 10, 20, 40, and 60 pixels to form ROIs for subsequent placental invasion diagnosis. A reasonable extension size was determined through the subsequent placental invasion diagnosis experiments. Based on the ROI, the deep and radiomic features were extracted to construct the evaluation model of placental invasion typing. Figure 5 shows an example of the placental tissue segmented by U-Net and the ROIs formed by extending the boundary by different sizes. The ROIs formed with different radial extensions on the T2WI MRI sagittal plane.
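Extending the segmented placental mask by a fixed number of pixels is a binary dilation of the U-Net output. A self-contained sketch using iterative 4-neighbour dilation (in practice a library routine such as scipy.ndimage.binary_dilation would serve the same purpose):

```python
import numpy as np

def expand_roi(mask, pixels):
    """Grow a binary mask outward by `pixels` steps of 4-neighbour dilation."""
    out = mask.astype(bool).copy()
    for _ in range(pixels):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # grow downward
        grown[:-1, :] |= out[1:, :]   # grow upward
        grown[:, 1:] |= out[:, :-1]   # grow rightward
        grown[:, :-1] |= out[:, 1:]   # grow leftward
        out = grown
    return out

# A single segmented pixel expanded by 2 becomes a diamond of
# Manhattan radius 2 (13 pixels); on a real mask this adds a
# peri-placental band of the requested width.
seed = np.zeros((7, 7), dtype=bool)
seed[3, 3] = True
roi = expand_roi(seed, 2)
```

Running this with 10, 20, 40, or 60 iterations reproduces the four candidate ROI sizes compared in the experiments.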
The placental tissue is denoted in red; the colored circles denote different radial extensions of the placental tissue. Extraction of radiomics features After segmenting the ROI, we use PyRadiomics (version 3.0, download address: https://github.com/Radiomics/pyradiomics) [36] to extract radiomic features to train the auxiliary diagnosis model of placental invasion. The extracted features fall into three categories: (1) intensity-based features; (2) shape-based features; (3) texture-based features. Intensity-based features reduce the ROI to a single histogram describing the distribution of pixel intensities and derive basic statistics from it (such as energy, entropy, kurtosis, and skewness). Shape-based features describe the geometric structure of the ROI, which is useful because the shape of placental tissue is highly correlated with placental invasion [32, 37]. Texture-based features are the most informative, especially for the issue of tissue heterogeneity, because they capture the spatial relationships between adjacent pixels [8, 9, 34]. In this paper, we use the gray-level co-occurrence matrix (GLCM), gray-level run length matrix (GLRLM), gray-level size zone matrix (GLSZM), and related matrices to calculate various texture features. We extracted 100 image features in total: 18 intensity-based, 9 shape-based, and 73 texture-based. Extraction of deep features Radiomic features can describe the gray-level distribution, shape, texture, and other characteristics of placental tissue in MR images, but they can hardly describe accurately the overall structural relationship between the lesion and surrounding tissues, which is of great significance for diagnosing placental invasion and evaluating the degree of placenta implantation.
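With PyRadiomics, extraction reduces to configuring a `RadiomicsFeatureExtractor` and calling its `execute` method on an image/mask pair. The sketch below instead reimplements a few first-order (intensity-based) features in plain numpy to make explicit what such values measure; the bin count and the demo data are illustrative, and it assumes a non-constant ROI (non-zero standard deviation):

```python
import numpy as np

def intensity_features(roi, bins=32):
    """Simplified first-order features of the kind derived from the ROI histogram."""
    x = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()                 # normalized intensity histogram
    p_nz = p[p > 0]                       # drop empty bins for the entropy term
    mu, sigma = x.mean(), x.std()         # assumes sigma > 0
    return {
        "energy": float(np.sum(x ** 2)),
        "entropy": float(-np.sum(p_nz * np.log2(p_nz))),
        "skewness": float(np.mean(((x - mu) / sigma) ** 3)),
        "kurtosis": float(np.mean(((x - mu) / sigma) ** 4)),
    }

# Demo ROI: two equally frequent intensities, so the histogram has two
# equal peaks (entropy 1 bit) and the distribution is symmetric (skewness 0).
feats = intensity_features(np.array([1.0] * 50 + [3.0] * 50))
```

Texture features such as GLCM statistics additionally depend on the spatial arrangement of intensities, which a histogram discards; that is why the paper treats them as the most informative class.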
In recent years, image features extracted by deep convolutional neural networks (DCNN) have proven effective in improving the accuracy of image classification, segmentation, and retrieval [38]. We cast the prenatal prediction and typing of placental invasion as a classification problem. Accordingly, based on the characteristics of placental invasion in MR images, a deep dynamic convolution neural network (DDCNN) is designed to extract the deep features. The structure of the DDCNN is shown in Fig. 6. As can be seen from Fig. 6, the backbone of the DDCNN is a multi-layer auto-encoding network composed of an encoder and a decoder with a symmetrical structure [39]. During network training, we crop the original MR image to the smallest bounding rectangle of the extracted ROI and use the crop as the input of the DDCNN. In the encoding stage, the input MR image undergoes 5 stages of Group Model A convolution and pooling operations and is mapped into a feature vector that represents the semantic information of the placenta. In the decoding stage, the feature vector output by the encoder undergoes 5 stages of Group Model B up-sampling and deconvolution operations, with reconstruction of the original input image as the training target. After DDCNN training is completed, we remove the decoder, fix the encoder parameters, and feed in the smallest bounding rectangle containing the ROI of the MR image; the encoder projects it into a low-dimensional feature space, realizing deep feature extraction. With this structure, the DDCNN can extract the deep features of MR images through unsupervised training, thus avoiding the problems of traditional CNNs in extracting deep features from MR images, such as the difficulty of obtaining labeled samples and the tendency to overfit. The structure of DDCNN The Group Model A and Group Model B of the DDCNN are shown in Fig. 7.
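The encode-then-discard-the-decoder scheme can be illustrated with a toy linear autoencoder; the dimensions and the random linear maps below are illustrative stand-ins for the 5-stage convolutional encoder and decoder, not the DDCNN itself:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_code = 64, 8                       # input size, code (deep-feature) size
W_enc = rng.normal(size=(d_code, d_in)) / np.sqrt(d_in)
W_dec = rng.normal(size=(d_in, d_code)) / np.sqrt(d_code)

def encode(x):
    """Project an ROI crop into the low-dimensional deep-feature space."""
    return W_enc @ x

def decode(z):
    """Reconstruct the input; only used to define the training objective."""
    return W_dec @ z

x = rng.normal(size=d_in)                  # stand-in for a cropped ROI
z = encode(x)                              # deep feature vector kept after training
recon_error = float(np.mean((x - decode(z)) ** 2))  # unsupervised training loss
```

After training to minimize the reconstruction error, only `encode` is retained, exactly as the DDCNN discards its decoder before feature extraction.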
Each level of the Group Model is mainly composed of a dynamic convolutional layer [40], a ReLU activation layer [41], a BN layer [42], and a max-pooling or up-sampling layer. The structure of the Group module It can be seen from Fig. 7 that in the Group Model, the traditional convolution operation is replaced by dynamic convolution. Dynamic convolution replaces the traditional fixed convolution kernel with a dynamic convolution kernel that adapts to the input image. Specifically, it uses a mechanism of multiple convolution kernels and introduces a lightweight squeeze-and-excitation module [30] to build an attention model. Through model training, the respective weights of the multiple convolution kernels are obtained, and the dynamic convolution kernel obtained by weighted superposition of the individual kernels participates in the convolution operation. Suppose the multiple convolution kernels introduced in a certain convolutional layer are \({\text{conv}}_1,{\text{conv}}_2,\ldots ,{\text{conv}}_N\), and their respective weights are \(w_1,w_2,\ldots ,w_N\). The squeeze-and-excitation mechanism introduced in the model is shown in the attention module in Fig. 7. The input is processed by average pooling, fully connected layers, and ReLU, and finally mapped to the outputs \(w_1,w_2,\ldots ,w_N\) by softmax [40], where: $$\begin{aligned} w_1+w_2+\cdots +w_N=1 \end{aligned}$$ and N denotes the number of convolution kernels.
After the weight of each convolution kernel is obtained, the dynamic convolution kernel involved in feature extraction is constructed through weighted superposition [40]: $$\begin{aligned} {\text{conv}}=\sum _{i=1}^{N}{w_i*{\text{conv}}_i} \end{aligned}$$ where \({\text{conv}}_i\) denotes the ith convolutional kernel, \(w_i\) is the corresponding weight generated by the attention module, N is the number of convolutional kernels, and conv is the synthesized convolutional kernel, i.e., the final convolutional kernel involved in the operation in Group Model A. With this structure, the convolution kernel adapts to the input image during the convolution operation, better accommodating the structural heterogeneity of placental tissue across patients and across types of placental invasion, so that the extracted deep features can effectively describe the pathological information contained in the placental tissue. Prenatal prediction and typing of placental invasion Based on the above features, we train a multi-layer perceptron classifier to divide patients into four types according to the T2WI images: no placental invasion, placenta accreta, placenta increta, and placenta percreta, thereby realizing the prenatal prediction and typing of placental invasion. The constructed classifier is shown in Fig. 8. As shown in Fig. 8, the input of the classifier consists of the radiomic features extracted from the ROI and the deep features extracted by the DDCNN encoder. To maintain balance between the two types of features, both dimensions are set to 100. When training the classifier, the results confirmed by clinical or surgical pathology are used as the supervision information. The classifier consists of four layers, with 200, 100, 20, and 4 neurons, respectively.
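The attention-weighted kernel synthesis described above can be written out directly. A numpy sketch with illustrative layer sizes (the real attention module in Fig. 7 operates on convolutional feature maps inside the network):

```python
import numpy as np

def attention_weights(feature_map, W1, W2):
    """Squeeze-and-excitation-style attention: global average pool ->
    FC -> ReLU -> FC -> softmax over the N candidate kernels."""
    squeezed = feature_map.mean(axis=(1, 2))        # global average pooling per channel
    hidden = np.maximum(W1 @ squeezed, 0.0)         # FC + ReLU
    logits = W2 @ hidden                            # FC, one logit per kernel
    e = np.exp(logits - logits.max())               # numerically stable softmax
    return e / e.sum()                              # w_1..w_N, sums to 1

def dynamic_kernel(kernels, w):
    """conv = sum_i w_i * conv_i (weighted superposition of the kernels)."""
    return np.tensordot(w, kernels, axes=1)

# Illustrative shapes: C input channels, N candidate 3x3 kernels.
rng = np.random.default_rng(1)
C, N = 4, 4
fmap = rng.normal(size=(C, 16, 16))
W1 = rng.normal(size=(8, C))
W2 = rng.normal(size=(N, 8))
kernels = rng.normal(size=(N, 3, 3))

w = attention_weights(fmap, W1, W2)
conv = dynamic_kernel(kernels, w)                   # input-dependent kernel
```

Because the weights depend on the input feature map, a different image yields a different synthesized kernel, which is the adaptivity the text attributes to dynamic convolution.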
The activation function of the middle layers is ReLU, and the output of the last layer of the classifier is a 4-dimensional feature vector, which is activated by softmax, as is common in multi-classification problems. Classification framework that combines deep with radiomic features The softmax [40] first enhances the differences between the input values by a nonlinear exponential operation and then maps the outputs of the neurons to values in the (0,1) interval, normalized to a probability distribution, so as to perform multi-classification, as shown in formula (10). $$\begin{aligned} S_i=\frac{e^{y_i}}{\sum _{j=0}^{3}e^{y_j}},\quad 0\le i\le 3 \end{aligned}$$ where \(y_i\) is the ith output of the classifier and \(S_i\) is the probability that the patient belongs to each of the four types (no placental invasion, placenta accreta, placenta increta, and placenta percreta); the type with the highest probability is taken as the final prediction result. As these data involve patients' personal privacy, they have not yet been made public.
Abbreviations
PI: Placental invasion
MRI: Magnetic resonance imaging
ROI: Region of interest
DDCNN: Deep dynamic convolution neural network
AUC: Area under the ROC curve
PA: Placenta accreta
Placenta increta
Placenta percreta
DIC: Diffuse intravascular coagulation
Color Doppler ultrasound
CNN: Convolutional neural network
PACS: Picture archiving and communication system
DICOM: Digital imaging and communications in medicine
T2WI: T2-weighted image
GLCM: Gray-level co-occurrence matrix
GLRLM: Gray-level run length matrix
GLSZM: Gray-level size zone matrix
PRE: Precision
REC: Recall
F1: F1 score
AACC: Average accuracy
ASEN: Average sensitivity
ASPE: Average specificity
ML: Machine learning
DL: Deep learning
RF: Random forest
ROC: Receiver operating characteristic curve
Teo TH, Law YM, Tay KH, Tan BS, Cheah FK. Use of magnetic resonance imaging in evaluation of placental invasion. Clin Radiol. 2009;64(5):511–6. https://doi.org/10.1016/j.crad.2009.02.003. Silver RM, Barbour KD. Placenta accreta spectrum.
Obstetr Gynecol Clin N Am. 2015;42(2):381–402. https://doi.org/10.1016/j.ogc.2015.01.014. Kilcoyne A, Shenoy-Bhangle AS, Roberts DJ, Sisodia RC, Gervais DA, Lee SI. MRI of placenta accreta, placenta increta, and placenta percreta: pearls and pitfalls. Am J Roentgenol. 2017;208(1):214–21. https://doi.org/10.2214/ajr.16.16281. Khong TY. The pathology of placenta accreta, a worldwide epidemic. J Clin Pathol. 2008;61(12):1243–6. https://doi.org/10.1136/jcp.2008.055202. Gielchinsky Y, Rojansky N, Fasouliotis SJ, Ezra Y. Placenta accreta-summary of 10 years: a survey of 310 cases. Placenta. 2002;23(2–3):210–4. https://doi.org/10.1053/plac.2001.0764. Rahaim NSA, Whitby EH. The MRI features of placental adhesion disorder and their diagnostic significance: systematic review. Clin Radiol. 2015;70(9):917–25. https://doi.org/10.1016/j.crad.2015.04.010. Baughman WC, Corteville JE, Shah RR. Placenta accreta: spectrum of US and MR imaging findings. RadioGraphics. 2008;28(7):1905–16. https://doi.org/10.1148/rg.287085060. Sun H, Qu H, Chen L, Wang W, Liao Y, Zou L, Zhou Z, Wang X, Zhou S. Identification of suspicious invasive placentation based on clinical MRI data using textural features and automated machine learning. Eur Radiol. 2019;29(11):6152–62. https://doi.org/10.1007/s00330-019-06372-9. Romeo V, Ricciardi C, Cuocolo R, Stanzione A, Verde F, Sarno L, Improta G, Mainenti PP, Darmiento M, Brunetti A, Maurea S. Machine learning analysis of MRI-derived texture features to predict placenta accreta spectrum in patients with placenta previa. Magn Reson Imag. 2019;64:71–6. https://doi.org/10.1016/j.mri.2019.05.017. Afshar P, Mohammadi A, Plataniotis KN, Oikonomou A, Benali H. From handcrafted to deep-learning-based cancer radiomics: challenges and opportunities. IEEE Signal Process Mag. 2019;36(4):132–60. https://doi.org/10.1109/msp.2019.2900993. Jiang X, Li J, Kan Y, Yu T, Chang S, Sha X, Zheng H, Luo Y, Wang S. 
MRI based radiomics approach with deep learning for prediction of vessel invasion in early-stage cervical cancer. IEEE/ACM Trans Comput Biol Bioinform. 2020. https://doi.org/10.1109/tcbb.2019.2963867. Wu T, Sun X, Liu J. Segmentation of uterine area in patients with preclinical placenta previa based on deep learning. In: 2019 6th International conference on information science and control engineering (ICISCE). 2019; pp. 541–4 . https://doi.org/10.1109/ICISCE48695.2019.00114. Bano S, Vasconcelos F, Shepherd LM, Vander Poorten E, Vercauteren T, Ourselin S, David AL, Deprest J, Stoyanov D. Deep placental vessel segmentation for fetoscopic mosaicking. In: Martel AL, Abolmaesumi, P, Stoyanov D, Mateus D, Zuluaga MA, Zhou SK, Racoceanu D, Joskowicz L (eds.) Medical image computing and computer assisted intervention—MICCAI 2020. 2020; pp. 763–73. Springer, Cham. https://doi.org/10.1007/978-3-030-59716-0. Lao J, Chen Y, Li ZC, Li Q, Zhang J, Liu J, Zhai G. A deep learning-based radiomics model for prediction of survival in glioblastoma multiforme. Sci Rep. 2017. https://doi.org/10.1038/s41598-017-10649-8. Wang X, Zhang L, Yang X, Tang L, Zhao J, Chen G, Li X, Yan S, Li S, Yang Y, Kang Y, Li Q, Wu N. Deep learning combined with radiomics may optimize the prediction in differentiating high-grade lung adenocarcinomas in ground glass opacity lesions on CT scans. Eur J Radiol. 2020. https://doi.org/10.1016/j.ejrad.2020.109150. Lynch CJ, Liston C. New machine-learning technologies for computer-aided diagnosis. Nat Med. 2018;24(9):1304–5. https://doi.org/10.1038/s41591-018-0178-4. Liu J, Wu T, Peng Y, Luo R. Grade prediction of bleeding volume in cesarean section of patients with pernicious placenta previa based on deep learning. Front Bioeng Biotechnol. 2020. https://doi.org/10.3389/fbioe.2020.00343. Taghanaki SA, Abhishek K, Cohen JP, Cohen-Adad J, Hamarneh G. Deep semantic segmentation of natural and medical images: a review. Artif Intell Rev. 2020. 
https://doi.org/10.1007/s10462-020-09854-1. Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF (eds.) Medical image computing and computer-assisted intervention—MICCAI 2015, pp. 234–41. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4. Norman B, Pedoia V, Majumdar S. Use of 2d u-net convolutional neural networks for automated cartilage and meniscus segmentation of knee MR imaging data to determine relaxometry and morphometry. Radiology. 2018;288(1):177–85. https://doi.org/10.1148/radiol.2018172322. Han Y, Ye JC. Framing u-net via deep convolutional framelets: application to sparse-view CT. IEEE Trans Med Imag. 2018;37(6):1418–29. https://doi.org/10.1109/tmi.2018.2823768. Salem M, Valverde S, Cabezas M, Pareto D, Oliver A, Salvi J, Rovira A, Llado X. Multiple sclerosis lesion synthesis in MRI using an encoder-decoder u-NET. IEEE Access. 2019;7:25171–84. https://doi.org/10.1109/access.2019.2900198. Hua W, Xiao T, Jiang X, Liu Z, Wang M, Zheng H, Wang S. Lymph-vascular space invasion prediction in cervical cancer: exploring radiomics and deep learning multilevel features of tumor and peritumor tissue on multiparametric MRI. Biomed Signal Process Control. 2020. https://doi.org/10.1016/j.bspc.2020.101869. Fu L, Ma J, Ren Y, Han YS, Zhao J. Automatic detection of lung nodules: false positive reduction using convolution neural networks and handcrafted features. In: Medical Imaging 2017: computer-aided diagnosis. 2017; vol. 10134, pp. 60–7. https://doi.org/10.1117/12.2253995. Chen S, Qin J, Ji X, Lei B, Wang T, Ni D, Cheng J-Z. Automatic scoring of multiple semantic attributes with multi-task feature leverage: a study on pulmonary nodules in CT images. IEEE Trans Med Imag. 2017;36(3):802–14. https://doi.org/10.1109/tmi.2016.2629462. Kim B, Sung YS, Suk H. Deep feature learning for pulmonary nodule classification in a lung ct. 
In: 2016 4th International winter conference on brain-computer interface (BCI). 2016; pp. 1–3. https://doi.org/10.1109/IWW-BCI.2016.7457462. Paul R, Hawkins SH, Balagurunathan Y, Schabath MB, Gillies RJ, Hall LO, Goldgof DB. Deep feature transfer learning in combination with traditional features predicts survival among patients with lung adenocarcinoma. Tomography. 2016;2(4):388. Zhu Y, Man C, Gong L, Dong D, Yu X, Wang S, Fang M, Wang S, Fang X, Chen X, Tian J. A deep learning radiomics model for preoperative grading in meningioma. Eur J Radiol. 2019;116:128–34. https://doi.org/10.1016/j.ejrad.2019.04.022. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016; pp. 770–8. https://doi.org/10.1109/cvpr.2016.90. Hu J, Shen L, Sun G. Squeeze-and-excitation networks. In: 2018 IEEE/CVF conference on computer vision and pattern recognition. 2018; pp. 7132–41. https://doi.org/10.1109/CVPR.2018.00745. Sala E, Rockall A, Rangarajan D, Kubik-Huch RA. The role of dynamic contrast-enhanced and diffusion weighted magnetic resonance imaging in the female pelvis. Eur J Radiol. 2010;76(3):367–85. https://doi.org/10.1016/j.ejrad.2010.01.026. Lax A, Prince MR, Mennitt KW, Schwebach JR, Budorick NE. The value of specific MRI features in the evaluation of suspected placental invasion. Magn Reson Imag. 2007;25(1):87–93. https://doi.org/10.1016/j.mri.2006.10.007. Ueno Y, Kitajima K, Kawakami F, Maeda T, Suenaga Y, Takahashi S, Matsuoka S, Tanimura K, Yamada H, Ohno Y, Sugimura K. Novel MRI finding for diagnosis of invasive placenta praevia: evaluation of findings for 65 patients using clinical and histopathological correlations. Eur Radiol. 2013;24(4):881–8. https://doi.org/10.1007/s00330-013-3076-7. Chen E, Mar WA, Horowitz JM, Allen A, Jha P, Cantrell DR, Cai K. Texture analysis of placental MRI: can it aid in the prenatal diagnosis of placenta accreta spectrum? 
Abdominal Radiol. 2019;44(9):3175–84. https://doi.org/10.1007/s00261-019-02104-1. Masselli G, Gualdi G. MR imaging of the placenta: what a radiologist should know. Abdominal Imag. 2012;38(3):573–87. https://doi.org/10.1007/s00261-012-9929-8. van Griethuysen JJM, Fedorov A, Parmar C, Hosny A, Aucoin N, Narayan V, Beets-Tan RGH, Fillion-Robin J-C, Pieper S, Aerts HJWL. Computational radiomics system to decode the radiographic phenotype. Cancer Res. 2017;77(21):104–7. https://doi.org/10.1158/0008-5472.can-17-0339. Leyendecker JR, DuBose M, Hosseinzadeh K, Stone R, Gianini J, Childs DD, Snow AN, Mertz H. MRI of pregnancy-related issues: abnormal placentation. Am J Roentgenol. 2012;198(2):311–20. https://doi.org/10.2214/ajr.11.7957. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44. https://doi.org/10.1038/nature14539. Vincent P, Larochelle H, Lajoie I, Bengio Y, Manzagol P-A, Bottou L. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J Mach Learn Res. 2010;11(12):3371–408. Chen Y, Dai X, Liu M, Chen D, Yuan L, Liu Z. Dynamic convolution: attention over convolution kernels. In: 2020 IEEE/CVF conference on computer vision and pattern recognition (CVPR). 2020; pp. 11027–36. https://doi.org/10.1109/CVPR42600.2020.01104. Agarap AF. Deep learning using rectified linear units (relu); 2018. arXiv preprint arXiv:1803.08375. Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift; 2015. arXiv preprint arXiv:1502.03167. The authors are grateful to all study participants. This work was supported in part by the Natural Science Foundation of Zhejiang Province under Grant LY20H180003, the Natural Science Foundation of Ningbo under Grant 2019A610104, the Public Welfare Science and Technology Project of Ningbo under Grant 202002N3104, and in part by the K. C. Wong Magna Fund in Ningbo University. 
Rongrong Xuan and Tao Li contributed equally to this work Affiliated Hospital of Medical School, Ningbo University, Ningbo, 315020, Zhejiang, China Rongrong Xuan & Yutao Wang Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, 315211, Zhejiang, China Tao Li & Wei Jin Ningbo Women's and Children's Hospital, Ningbo, 315012, Zhejiang, China Jian Xu Rongrong Xuan Yutao Wang Wei Jin RX, TL and WJ contributed to the model construction and data analysis. YW, JX and RX contributed to the imaging and clinical data collection. TL contributed to the manuscript drafting. TL, RX and WJ contributed to the revised submission. All authors approved the final manuscript. Correspondence to Wei Jin. This retrospective study was approved by the Ethics Committee of the Affiliated Hospital of Medical School of Ningbo University, and identity information of all patients had been de-identified to protect patient privacy. All authors provided their consent to publish. Xuan, R., Li, T., Wang, Y. et al. Prenatal prediction and typing of placental invasion using MRI deep and radiomic features. BioMed Eng OnLine 20, 56 (2021). https://doi.org/10.1186/s12938-021-00893-5 Radiomics Assistant diagnosis
Dedekind's characterization of modular lattices
Free modular lattices. Dedekind showed that the free modular lattice on 3 elements has 28 elements; its Hasse diagram can be seen in these lecture notes by J.B. Nation (chapter 9, page 100). N.B.: this notion of lattice is meant with respect to the signature $(\wedge, \vee)$; if we include top and bottom constants in the signature, then the free modular lattice on three elements has 30 elements. A paper published by Dedekind in 1900 had lattices as its central topic: he described the free modular lattice generated by three elements, a lattice with 28 elements. See also: modular graph, a class of graphs that includes the Hasse diagrams of modular lattices. Dedekind lattice: a lattice in which the modular law is valid, i.e. if $a \le c$, then $(a + b)c = a + bc$ for any $b$ (writing $+$ for join and juxtaposition for meet). This requirement amounts to saying that the identity $(ac + b)c = ac + bc$ is valid. Examples of modular lattices include the lattices of subspaces of a linear space, of normal subgroups (but not all subgroups) of a group, of ideals of a ring, and of submodules of a module over a ring. Modular lattices are lattices that satisfy the following identity, discovered by Dedekind: $(c \wedge (a \vee b)) \vee b = (c \vee b) \wedge (a \vee b)$. This identity is customarily recast in user-friendlier ways. Richard Dedekind defined modular lattices, which are a weakened form of distributive lattices. He recognised the connection between modern algebra and lattice theory, which provided the impetus for the development of lattice theory as a subject. Later Jónsson, Kurosh, Malcev, Ore, von Neumann, Tarski, and Garrett Birkhoff contributed prominently. The second characteristic part of Dedekind's methodology consists of persistently (and from early on, cf.
Dedekind 1854) attempting to identify and clarify fundamental concepts, including the higher-level concepts just mentioned (continuity, infinity, generalized concepts of integer and of prime number, also the new concepts of ideal, module, lattice, etc.).

A lattice in which every pair of elements is modular is called a modular lattice or a Dedekind lattice. A lattice of finite length is a semi-modular lattice if and only if it satisfies the covering condition: if $x$ and $y$ cover $xy$, then $x+y$ covers $x$ and $y$ (see Covering element; here $xy$ denotes the meet and $x+y$ the join).

The chapter provides examples of lattices that are not modular, or modular but not distributive. All lattices of four elements or less are modular. The smallest lattice which is not modular is the pentagon. The chapter provides a characterization of modular lattices using upper and lower covering conditions.

A modular partial lattice is a partial lattice obtained from a modular lattice as in Definition I.5.12. Show that there is no finite set of identities characterizing the modularity of a partial lattice. (See Exercise I.5.20 for the concept of validity of an identity in a partial lattice.)

The following characterization of the non-distributive and modular lattices is well known (see Theorems 3.5 (Dedekind) and 3.6 (Birkhoff)): a lattice is non-distributive if and only if it contains a sublattice isomorphic to one of the lattices $M_3$ or $N_5$. A lattice is modular if and only if it does not contain a sublattice isomorphic to $N_5$.

Modular Lattices. Dedekind [1900] observed that the additive subgroups of a ring and the normal subgroups of a group form lattices in a natural way (which he called Dualgruppen) and that these lattices have a special property, which was later referred to as the modular law.
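The Dedekind–Birkhoff characterizations quoted above are easy to confirm by brute force on the two small lattices themselves. A minimal sketch in Python (the five-element cover lists for $N_5$ and $M_3$ below are written out by hand from their standard Hasse diagrams):

```python
from itertools import product

def make_lattice(elems, covers):
    """Build the <= relation, meet and join from a list of (lower, upper) cover pairs."""
    le = {(e, e) for e in elems} | set(covers)
    changed = True
    while changed:  # transitive closure of the order relation
        new = {(a, d) for (a, b) in le for (c, d) in le if b == c}
        changed = not new <= le
        le |= new
    def meet(x, y):  # greatest lower bound
        lbs = [z for z in elems if (z, x) in le and (z, y) in le]
        return next(z for z in lbs if all((w, z) in le for w in lbs))
    def join(x, y):  # least upper bound
        ubs = [z for z in elems if (x, z) in le and (y, z) in le]
        return next(z for z in ubs if all((z, w) in le for w in ubs))
    return le, meet, join

def is_modular(elems, le, meet, join):
    # Modular law: x <= b  implies  x v (a ^ b) = (x v a) ^ b
    return all(join(x, meet(a, b)) == meet(join(x, a), b)
               for x, a, b in product(elems, repeat=3) if (x, b) in le)

# Pentagon N5: 0 < x < z < 1 and 0 < y < 1, with y incomparable to x and z
N5 = "0xyz1"
le5, m5, j5 = make_lattice(N5, [("0","x"), ("x","z"), ("z","1"), ("0","y"), ("y","1")])

# Diamond M3: three pairwise incomparable atoms between 0 and 1
M3 = "0abc1"
le3, m3, j3 = make_lattice(M3, [("0","a"), ("0","b"), ("0","c"),
                                ("a","1"), ("b","1"), ("c","1")])

print(is_modular(N5, le5, m5, j5))   # False: the pentagon violates the law
print(is_modular(M3, le3, m3, j3))   # True: M3 is modular (though not distributive)
```

The failing triple in $N_5$ is the classical one: with $x \le z$, $x \vee (y \wedge z) = x$ while $(x \vee y) \wedge z = z$.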
Modularity is a consequence of distributivity; see Dedekind and "modular lattice" in nLab.

Abstract: Modular lattices, introduced by R. Dedekind, are an important subvariety of lattices that includes all distributive lattices. Heitzig and Reinhold developed an algorithm to enumerate, up to isomorphism, all finite lattices up to size 18.

Dedekind showed around 1900 that the submodules of a module form a modular lattice with respect to set inclusion. Many other algebraic structures are closely related to modular lattices: both normal subgroups of groups and ideals of rings form modular lattices; distributive lattices (thus also Boolean algebras) are special modular lattices. Later, it turned out that, in addition to algebra, modular lattices appear in other areas of mathematics.

In this lecture we prove that the integral closure of a Dedekind domain in a finite extension of its fraction field is also a Dedekind domain; this implies, in particular, that the ring of integers of a number field is a Dedekind domain. We then consider the factorization of prime ideals in Dedekind extensions. 5.1 Dual modules, pairings, and lattices.

In the 1930's and 1940's lattice theory was often broken into three subdivisions: distributive lattice theory, modular lattice theory, and the theory of all lattices. A question about lattices could usually be formulated for each of these subdivisions.

Modularity can be characterized by the absence of pentagons. The following characterization of modularity is due to Dedekind (for more details see Birkhoff, Dedekind, Grätzer). Theorem 2.5. A lattice is modular if and only if it does not contain a pentagon ($N_5$) as a sublattice.

...characterization of Dedekind modules, and also will be needed in proving one of the main theorems in this paper. Theorem 3.3. [Mapping between module lattices, Int. Electron. J...]

Definition.
A modular lattice is a lattice $\mathbf{L} = \langle L, \vee, \wedge \rangle$ such that $\mathbf{L}$ has no sublattice isomorphic to the pentagon $N_5$.

In abstract algebra, a Dedekind domain or Dedekind ring, named after Richard Dedekind, is an integral domain in which every nonzero proper ideal factors into a product of prime ideals. It can be shown that such a factorization is then necessarily unique up to the order of the factors. There are at least three other characterizations of Dedekind domains that are sometimes taken as the definition: see below. A field is a commutative ring in which there are no nontrivial proper ideals.

@ymar I think this is known most places as Dedekind's modularity criterion: a lattice is modular iff it does not contain a pentagon like this. There is a counterpart for distributive lattices that I thought was attributed to Birkhoff, but I can't seem to find a reference.

Dedekind studied modular lattices near the end of the nineteenth century, and in 1900 he published a paper showing that the free modular lattice on 3 generators has 28 elements. One reason this is interesting is that the free modular lattice on 4 or more generators is infinite.

Abstract. We extend the notions of Dedekind complete and σ-Dedekind complete Banach lattices to Banach C(K)-modules. As our main result we prove for these modules an analogue of Lozanovsky's well-known characterization of Banach lattices with order continuous norm.

Modular Lattices. A modular lattice M is a lattice that satisfies the modular law for all $x, y, z \in M$: $(x \wedge y) \vee (y \wedge z) = y \wedge [(x \wedge y) \vee z]$. An alternative way to view modular lattices is by Dedekind's Theorem: L is a nonmodular lattice iff $N_5$ can be embedded into L (Figure 4: $N_5$). All distributive lattices are modular lattices. Examples of modular...
Modular lattice - Wikipedia. When Dedekind introduced the notion of a module, he also defined their divisibility and related arithmetical notions (e.g. the LCM of modules). The introduction of notations for these notions allowed Dedekind to state new theorems, now recognized as the modular laws in lattice theory. Observing the dualism displayed by the theorems, Dedekind pursued his investigations on the matter.

Dedekind complete and order continuous Banach $C(K)$-modules. We extend the notions of Dedekind complete and sigma-Dedekind complete Banach lattices to Banach C(K)-modules. As our main result we prove for these modules an analogue of Lozanovsky's well-known characterization of Banach lattices with order continuous norm.

On characterized varieties and quasivarieties of lattices, by Anvar Nurakunov, Institute of Mathematics NAS, Bishkek, Kyrgyzstan. Coauthors: Victor Gorbunov (Institute of Mathematics SB RAS, Novosibirsk, Russia). The classical results in Lattice Theory by Dedekind [1] and Birkhoff [2]: a lattice is modular (distributive) if and only if it does not contain the pentagon $N_5$ (resp. $N_5$ and $M_3$).

Modular lattice - Encyclopedia of Mathematics.

A Modular Characterization of Supersolvable Lattices, Stephan Foldes and Russ Woodroofe. Abstract. We characterize supersolvable lattices in terms of a certain modular type relation. It follows easily from Dedekind's modular identity.

Modular Forms and L-functions, Dedekind's eta. Klingen's theorem on special values of zeta functions of totally real number fields. Example of characterization of objects by universal mapping properties, rather than by construction.
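The "modular laws" that Dedekind extracted from the arithmetic of divisibility can be spot-checked directly in the divisor lattice, where meet is gcd, join is lcm, and the order is divisibility; this lattice is distributive, hence in particular modular. A small brute-force check (the range bound 40 is an arbitrary choice for a quick test):

```python
from math import gcd
from itertools import product

def lcm(a, b):
    return a * b // gcd(a, b)

# Divisibility lattice: meet = gcd, join = lcm, "x <= b" means x divides b.
# Modular law: x <= b  implies  x v (a ^ b) = (x v a) ^ b.
elems = range(1, 41)
ok = all(lcm(x, gcd(a, b)) == gcd(lcm(x, a), b)
         for x, a, b in product(elems, repeat=3)
         if b % x == 0)               # only triples with x | b are constrained
print(ok)  # True
```

Since the divisor lattice is even distributive, the law holds with room to spare; the pentagon of the previous snippets can never embed into it.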
Dedekind's analysis of continuity, the use of Dedekind cuts in the characterization of the real numbers, the definition of being Dedekind-infinite, the formulation of the Dedekind-Peano axioms, the proof of their categoricity, the analysis of the natural numbers as finite ordinal numbers, the justification of mathematical induction and recursion, and most basically, the insistence on...

...algebras, and presents McKenzie's characterization of directly representable varieties, which clearly shows the power of the universal algebraic toolbox.

Dedekind's Contributions to the Foundations of Mathematics. This first volume is divided into three parts. Part I, Topology and Lattices, includes two chapters by Klaus Keimel, Jimmie Lawson and Ales Pultr, Jiri Sichler. Part II, Special Classes of Finite Lattices, comprises four chapters by Gabor Czedli, George Grätzer and Joseph P. S. Kung. Part III.

75. Characterization of Real Closed Fields (507); 76. Hilbert's 17th Problem (512). Chapter XV, Dedekind Domains (519): 77. Integral Elements (519); 78. Integral Extensions of Domains (523); 79. Dedekind Domains (527); 80. Extension of Dedekind Domains (535); 81. Hilbert Ramification Theory (541); 82. The Discriminant of a Number Field (545); 83. Dedekind's Theorem on Ramification.

...where $\eta(\tau)$ is Dedekind's eta function. As such, $\vartheta(\tau;z)$ is the basic function out of which all theta functions on elliptic curves can be constructed. In the arithmetic theory of elliptic curves, it shows up as the Green's function for the elliptic curve $\mathbb{C}/(\mathbb{Z}\tau + \mathbb{Z})$.
Algebraic proof theory for substructural logics: cut-elimination and completions. Agata Ciabattoni (Vienna University of Technology), Nikolaos Galatos (University of Denver), Kazushige Terui (RIMS, Kyoto).

Semi-modular lattice - Encyclopedia of Mathematics. Theorem 13 says (paraphrasing a little): two permutable equivalence relations on the same set form a modular pair in the dual of the partition lattice. The problem is that, according to the index at the end of the book, the notion of permutable relations appears first on that very page.

In the present note we consider the module $\mathcal{E}^{(n)}(\underline{L})$ of elliptic functions of lattice-valued index $\underline{L}$ and degree $n$. We introduce conditions of regularity and cuspidality based on Eichler and Zagier (The theory of Jacobi forms. Progress Math 55, Birkhäuser, Boston, Basel, Stuttgart, 1985) and Ziegler (Abh Math Sem Univ Hamburg).

Lattice theory: first concepts and distributive lattices, George A. Grätzer.

Boolean Lattices and Dense Extensions (63); Distributive Lattices and Dense Extensions (70). III. Extensions in Categories: 1. Injective and Projective Kernels (77); 2. Injective and Projective Orderings (81); 3. Categorical Characterization (86). Bibliography (91).

...modular lattices already account for all distributive lattices.
This is shown by E.T. Schmidt in [Schmidt 1982], and extended by Ralph Freese, who shows in [Freese 1975] that finitely generated modular lattices suffice. (It turns out that the finite distributive lattices...)

AMS Mathematics Subject Classification (2010): Primary: 03F65; Secondary: 06A06, 06A11. Key words and phrases: constructive mathematics, set with apartness, anti-ordered set, Dedekind partial groupoid. 1 Introduction. It is well known that in the classical theory Dedekind's definition of lattices, as algebras with two operations satisfying commutativity, associativity and absorption, is...

Modular Lattices - Introduction to Lattice Theory. Basel: Birkhäuser, 2016, 426 p. Presents a wide range of material, from classical to brand new results. Uses a modular presentation in which core material is kept brief, allowing for a broad exposure to the subject without overwhelming readers with too much information all at once. Introduces topics by examining how they relate to research problems, providing continuity among diverse topics and encouraging readers to explore these problems with research of their own.

In graph theory, the modular product of graphs G and H is a graph formed by combining G and H that has applications to subgraph isomorphism. It is one of several different kinds of graph products that have been studied, generally using the same vertex set but with different rules for determining which edges to include.

In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers do.
A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics.

Advanced Complex Analysis - Part 1: Zeros of Analytic Functions, Analytic Continuation, Monodromy, Hyperbolic Geometry and the Riemann Mapping Theorem. Dr. T.E. Venkata Balaji.

Modular Lattice - an overview, ScienceDirect Topics. Dedekind's footnotes document what material Dirichlet took from Gauss, allowing insight into how Dirichlet transformed the ideas into essentially modern form. Also shown is how Gauss built on a long tradition in number theory—going back to Diophantus—and how it set the agenda for Dirichlet's work.

8. Lattices and Boolean Algebras. 8.1 Partially ordered sets and lattices. The main topics treated are the Jordan–Hölder theorem on semi-modular lattices; again, the subgroup of order q of the finite cyclic group of order r can be displayed as in (13). There is another characterization of this subgroup which is often useful, namely:

Similarly, Dedekind's categoricity result (1888) for second order Peano arithmetic has an extension to... Combinatorial principles from proof complexity: one of the goals of proof theory is to find combinatorial characterizations of sentences provable in... and thus duality for additional structure on lattices and Boolean algebras.

Infinite-dimensional Lie algebras and Dedekind's $\eta$-function. Funktsional. Anal. i Prilozhen., 1974, Volume 8:1, 77-78. Simple-connectivity of a factor-space of the modular Hilbert group. Funktsional. Anal. i Prilozhen., 1974. Characterization of objects dual to locally compact groups. Funktsional. Anal. i Prilozhen., 1974.

This is a joint event of the CUNY Logic Workshop and the Kolchin Seminar in Differential Algebra, as part of a KSDA weekend workshop. Zilber trichotomy principle, differential algebra, model theory.
CUNY Logic Workshop, Friday, May 6, 2016, 2:00 pm, GC 6417.

Characterization of the continuous images of all pseudo-circles. Pacific Journal of Mathematics 23 (1967) 491-513. A family of quantum modular forms. Pacific Journal of Mathematics 274 (2015) 1-25. Fong, Congruence lattices of algebras of fixed similarity type. I. Pacific Journal of Mathematics 82 (1979).

Questions on advanced topics - beyond those in typical introductory courses: higher degree algebraic number and function fields, Diophantine equations, geometry of numbers / lattices, quadratic forms, discontinuous groups and automorphic forms, Diophantine approximation, transcendental numbers, elliptic curves and arithmetic algebraic geometry.

The second part of the book contains new results about free lattices and new proofs of known results, providing the reader with a coherent picture of the fine structure of free lattices. The book closes with an analysis of algorithms for free lattices and finite lattices that is accessible to researchers in other areas and depends only on the first chapter and a small part of the second.

ATLAS of Brauer Characters — Bibliography on pp. xv-xvii and pp. 311-327; cross-referenced collections.

14.11. Modular binomial lattices (247); References (249); Exercises (249); Projects (250). Appendix A, Analysis review (251): A.1 Infinite series; A.2 Power series; A.3 Double sequences and series. Appendix B, Topology review (255): B.1 Topological spaces and their bases; B.2 Metric topologies; B.3 Separation.

Monstrous moonshine relates distinguished modular functions to the representation theory of the Monster M. The celebrated observations that
1 = 1, 196884 = 1 + 196883, and 21493760 = 1 + 196883 + 21296876 illustrate the case of $J(\tau) = j(\tau) - 744$, whose coefficients turn out to be sums of the dimensions of the 194 irreducible representations of M.

It is shown that the integer linear programming problem with a fixed number of variables is polynomially solvable. The proof depends on methods from geometry of numbers.

Modular Forms: Basics and Beyond (Springer Monographs in Mathematics).

This gives a new characterization of amenable groups in terms of cellular automata. Let us mention that, up to now, the validity of the Myhill implication for non-amenable groups is still an open problem.

Harmonic Maass Forms and Mock Modular Forms: Theory and Applications. Kathrin Bringmann, Amanda Folsom, Ken Ono, Larry Rolen.

Norton defined a class of functions known as replicable functions which generalizes the class of Hauptmodules, which in turn generalizes the elliptic modular function, $j(z)$. By generalizing Dedekind's construction of $j(z)$, and working with differential equations, we are able to determine many useful invariants of Hauptmodules.

Various developments in physics have involved many questions related to number theory, in an increasingly direct way. This trend is especially visible in two broad families of problems, namely, field theories, and dynamical systems and chaos. The 14 chapters of this book are extended, self-contained versions of expository lecture courses given.

The group of rotations that preserves the Leech lattice is called the Conway group Co0.
It has 8,315,553,613,086,720,000 elements. It's not simple, because the transformation v ↦ −v lies in the Conway group and commutes with everything else. If you mod out the Conway group by its center, which is just ℤ/2, you get a finite simple group.

Each year, ICTP organizes more than 60 international conferences and workshops, along with numerous seminars and colloquiums. These activities keep the Centre at the forefront of global scientific research and enable ICTP staff scientists to offer Centre associates, fellows and conference participants a broad range of research opportunities.

A positive integer n is the area of a Heron triangle if and only if there exists a non-zero rational number $\tau$ such that the elliptic curve $E^n_\tau: Y^2 = X(X - n\tau)(X + n\tau^{-1})$ has a rational point of order different from two. Such an integer n is called a $\tau$-congruent number. In the first part of this thesis, we give two distribution theorems on $\tau$-congruent numbers; in particular we show...

Linear and Multilinear Algebra, Volume 1, Number 2, 1973. Paul Erdős and Henryk Minc, Diagonals of nonnegative matrices.

[A good counter-example is Z[x,y], where the ideal (x, y^2) is primary, but falls strictly between the prime ideal (x, y) and its square (x, y)^2 = (x^2, xy, y^2).]
A Noetherian integral domain which does have this extra property is called a Dedekind domain; examples include the ring of algebraic integers w.r.t. an arbitrary number field—proving the latter result (which is...

1.6 Modular Lattices. Summary. Modular lattices were defined in Section 1.2, where we also provided some examples and facts. Here we give some more examples. A central result for modular lattices is the isomorphism theorem (Dedekind's transposition principle). One consequence is the Kurosh-Ore theorem.

Congruence lattices of distributive lattices are not considered in this text because their characterization problem is trivial by Lemma 4.5: a lattice is isomorphic to the congruence lattice of a distributive lattice iff it is isomorphic to $I(B)$, where $B$ is a generalized Boolean lattice; $I(B)$, by Theorem 3.13, is characterized as a distributive algebraic lattice in which the compact...

...modular lattices [30], Boolean algebras [39], and Jordan algebras [33]. Apart from their intrinsic interest, all of the latter structures host mathematical models for quantum-mechanical notions such as observables, states, and experimentally testable propositions [11, 40] and thus are pertinent in this regard.

...lattices, free modular lattices, and free Arguesian lattices, since each of these laws can be written as a lattice equation. Theorem 6.1. For any nonempty set X, there exists a free lattice generated by X. The proof uses three basic principles of universal algebra. These correspond for lattices to Theorems 5.1, 5.4, and 5.5 respectively.

[1309.5036] Generating all finite modular lattices of a ...
AMS Session on Lattices, Algebras, and Matrix Functions. 8:30 a.m. Archimedean closed lattice-ordered groups. Yuanqian Chen, Central Connecticut State University (889-06-82). 8:45 a.m. Strongly independent subsets in lattices. Zsolt Lengvarszky, University of South Carolina, Columbia (889-06-761). 9:00 a.m.

Belkhelfa, I.E. Hirica, R. Rosca, L. Verstraelen [6] obtained a complete characterization of surfaces with parallel second fundamental form in 3-dimensional Bianchi-Cartan-Vranceanu (BCV) spaces. In this study, we define the canal surface around a Legendre curve with Frenet frame in BCV spaces.

General Lattice Theory. George Grätzer, with B.A. Davey, R. Freese, B. Ganter, M. Greferath, P. Jipsen, H.A. Priestley, H. Rose, E.T. Schmidt, S.E. Schmidt, F. Wehrung.
All known sieving algorithms for solving SVP require space which (heuristically) grows as $2^ Modular Galois representations p-adically using Makdisi's moduli-friendly forms Moreover, every modular lattice with complementation is both weakly orthomodular and dually weakly orthomodular. The class of weakly orthomodular lattices and the class of dually weakly orthomodular lattices form varieties which are arithmetical and congruence regular Springer_News_August_2006 A Adult An and Before Behavioral Beyond Bolts Care Case Challenging Common Complete Conclusion Consulatation Consultations Delivery Ethica Dedekind (1888) proposed a different characterization, which lacked the formal logical character of Peano's axioms. Dedekind's work, however, proved theorems inaccessible in Peano's system, including the uniqueness of the set of natural numbers (up to isomorphism) and the recursive definitions of addition and multiplication from the successor function and mathematical induction A Concise Introduction to Mathematical Logi Examples of such groups are the modular group (in which a, b, c, Dedekind's work was the culmination of seventy years of investigations of problems related to unique factorization Find the training resources you need for all your activities. Studyres contains millions of educational documents, questions and answers, notes about the course, tutoring questions, cards and course recommendations that will help you learn and learn Btcdirect nl. Ebunos gg log in. Spring Trends 2021. Bad Banks Staffel 2 Folge 5. Naturreservat Partille. Papilly Hemnet. Vilka volvo modeller tillverkas i sverige 2020. Jamie Dimon President. How to trade on Coinbase Pro. Bitcoin is a bubble. Xkcd pie. Predator P3. Define fretwork. Dagcoin 2020. Bitcoin trend. Nygårda Julmust. IOTA News live. Sparbank Göteborg. Cryptos Poznanski Teil 2. Pool liner tjocklek. Lannebo informationsbroschyr. Telia felanmälan företag. Genisys Credit Union checking account. Swedbank spartips. 
New Vegas Casino no deposit codes. Veckomatsedel Coop. Should my business accept cryptocurrency. Reddit cars. Canadian Bitcoins promo code. Klarna Überweisung Daten. Ethos Autoflower. IFRS 17 for dummies. Rendement Bitcoin per jaar. T Mobile Kinderslot. Crypto tax accountant Adelaide. Krav synonym. Morgan Stanley logo. Aktie Raiffeisen Schweiz. Wrapped MIR token.
Equivalent Low-Pass Signal and its Spectral Function

Contents: 1. Motivation for describing in the equivalent low-pass range; 2. Definition in the frequency domain; 3. Description in the time domain; 4. Definition of the "Locality Curve"; 5. Representing with magnitude and phase; 6. Relation between equivalent low-pass signal and band-pass signal; 7. Why multiple representations of the same signal exist; 8. Representation according to real and imaginary part; 9. Determination of the equivalent low-pass signal from the band-pass signal; 10. Power and energy of a band-pass signal; 11. Exercises for the chapter.

Motivation for describing in the equivalent low-pass range

The following figure shows a possible structure of a communication system: Often the low-frequency source signal $q(t)$ is converted into a band-pass signal $s(t)$ ⇒ $\text{Modulation}$. After transmission, the received signal $r(t)$ – possibly distorted compared to the transmitted signal $s(t)$ and impaired by noise – must be converted back to the original frequency range ⇒ $\text{Demodulation}$. The sink signal $v(t)$, which should match the source signal $q(t)$ as closely as possible, is then again a low-pass signal.

[Figure: Block diagram of a band-pass communication system]

Modulation and demodulation are therefore fundamental components of a communication system, which are dealt with in detail in the book "Modulation Methods". A short summary can be found in the first chapter "Principles of Communication" of this book.

The investigation, simulation, optimization, and dimensioning of band-pass systems are mostly done in the $\text{equivalent low-pass range}$, for which the following reasons can be given:

If quality characteristics (bandwidth efficiency, signal-to-noise ratio, bit error rate, power requirements, etc.) of a low-pass system are known, the corresponding values of related band-pass systems can be derived from them relatively easily.
Examples are the digital modulation methods Amplitude Shift Keying $\text{(ASK)}$ and Binary Phase Shift Keying $\text{(BPSK)}$, whose performance variables can be "extrapolated" from the comparable baseband system (i.e., without modulator and demodulator).

Individual subchannels in a so-called Frequency Division Multiplex system, which differ by their carrier frequencies, can often be considered qualitatively equivalent. Therefore, it is sufficient to limit the calculation and dimensioning to a single channel and to perform these investigations in the equivalent low-pass range – i.e. without considering the specific carrier frequency.

It is often the case that the bandwidth of a communication connection is orders of magnitude smaller than the carrier frequency. For example, in the GSM standard the individual channels are located in the frequency range around $900\ \rm MHz$ ("D-Network") and $1800\ \rm MHz$ ("E-Network"), while each channel has only a small bandwidth of $200\ \rm kHz$. Therefore a simulation in the equivalent low-pass range is much less complex than a simulation of the corresponding band-pass signals.

Definition in the frequency domain

We consider a real band-pass signal $x(t)$ with the spectrum $X(f)$. Furthermore we assume:

The band-pass signal $x(t)$ is said to result from the modulation of a low-frequency source signal $q(t)$ with the carrier signal $z(t)$ of frequency $f_{\rm T}$. The type of modulation (whether analog or digital, amplitude or angle modulation, single-sideband or double-sideband) is not specified.

The spectral function $X_+(f)$ of the corresponding analytical signal $x_+(t)$ exists only for positive frequencies and is twice as large as $X(f)$. For the derivation of $X_+(f)$ the carrier frequency $f_{\rm T}$ $($German: "Trägerfrequenz" ⇒ $\rm T)$ of the system does not need to be known.
$\text{Definition:}$ If the spectrum of the analytical signal $x_+(t)$ is shifted to the left by $f_{\rm T}$, the result is called the $\text{equivalent low-pass spectrum}$: $$X_{\rm TP}(f) = X_{\rm +}(f + f_{\rm T}).$$ Note: The identifier $\rm TP$ stands for "low-pass" (German: "Tiefpass") and the identifier $\rm T$ stands for "carrier" (German: "Träger").

In general $X(f)$, $X_+(f)$ and $X_{\rm TP}(f)$ are complex-valued. However, if $X(f)$ is real, then the spectral functions $X_+(f)$ and $X_{\rm TP}(f)$ are real too, because they result from $X(f)$ only by the operations "truncate and double" and "frequency shift". In contrast to $X_+(f)$, the calculation of the equivalent low-pass spectrum $X_{\rm TP}(f)$ requires knowledge of the carrier frequency $f_{\rm T}$; other values of $f_{\rm T}$ yield other low-pass spectra.

If one transforms the above equation into the time domain, one obtains after applying the Shifting Theorem: $$x_{\rm TP}(t) = x_{\rm +}(t)\cdot {\rm e}^{-{\rm j} \hspace{0.05cm}\cdot \hspace{0.05cm}2 \pi \cdot f_{\rm T}\cdot \hspace{0.05cm}t}.$$ The relation $x(t) = \text{Re}\big[x_+(t)\big]$ yields the procedure to determine the actual physical band-pass signal from the equivalent low-pass signal: $$x(t) = {\rm Re}[x_{\rm +}(t)] = {\rm Re}\big[x_{\rm TP}(t)\cdot {\rm e}^{\hspace{0.05cm}{\rm j} \hspace{0.05cm} \cdot 2\pi \hspace{0.05cm}\cdot\hspace{0.05cm} f_{\rm T}\hspace{0.05cm} \cdot \hspace{0.05cm} \hspace{0.05cm}t}\big].$$

$\text{Example 1:}$ The upper figure shows the real spectral function $X(f)$ of a band-pass signal $x(t)$ which is the result of modulating a low-frequency signal $q(t)$ with the carrier frequency $f_{\rm T}$.

[Figure: Construction of the equivalent low-pass spectrum]

Below that, the two likewise real spectral functions $X_+(f)$ and $X_{\rm TP}(f)$ are shown. Due to the asymmetries with respect to the frequency origin $(f = 0)$, the corresponding time functions are complex.
The solid green spectral function $X_{\rm TP}(f)$ is shifted to the left with respect to $X_{+}(f)$ by the carrier frequency $f_{\rm T}$. If $X(f)$ were the modulation result of another source signal $q\hspace{0.05cm}'(t)$ with a different carrier frequency ${f_{\rm T} }\hspace{0.05cm}'$, this would also result in another equivalent low-pass spectrum ${X_{\rm TP} }\hspace{0.05cm}'(f)$. An exemplary spectral function ${X_{\rm TP} }\hspace{0.05cm}'(f)$ is drawn in the graphic with green dashed lines. Description in the time domain To simplify the presentation we now assume a line spectrum, so that the analytical signal can be represented as a "pointer group" ⇒ a sum of complex rotating pointers: $$X_{+}(f) = \sum_{i=1}^{I} {A_i} \cdot {\rm e}^{-{\rm j}\hspace{0.05cm}\cdot \hspace{0.05cm} \varphi_i}\cdot\delta (f - f_i) \hspace{0.3cm}\bullet\!\!-\!\!\!-\!\!\!-\!\!\circ\, \hspace{0.3cm} x_{+}(t) = \sum_{i=1}^{I} A_i \cdot {\rm e}^{{\rm j}\hspace{0.05cm}\cdot \hspace{0.05cm}( 2 \pi \hspace{0.05cm}\cdot \hspace{0.05cm}f_i\hspace{0.05cm}\cdot \hspace{0.05cm} t \hspace{0.05cm}-\hspace{0.05cm} \varphi_i)}.$$ By shifting the frequency by $f_{\rm T}$ to the left, the equivalent low-pass signal in the frequency and time domain is thus: $$X_{\rm TP}(f) = \sum_{i=1}^{I} {A_i} \cdot {\rm e}^{-{\rm j}\hspace{0.05cm}\cdot \hspace{0.05cm} \varphi_i}\cdot\delta (f - \nu_i)\hspace{0.3cm}\bullet\!\!-\!\!\!-\!\!\!-\!\!\circ\, \hspace{0.3cm} x_{\rm TP}(t) = \sum_{i=1}^{I} A_i \cdot {\rm e}^{{\rm j}\hspace{0.05cm}\cdot \hspace{0.05cm}( 2 \pi \hspace{0.05cm}\cdot \hspace{0.05cm} \nu_i \hspace{0.05cm}\cdot \hspace{0.05cm} t \hspace{0.05cm}-\hspace{0.05cm} \varphi_i)}.$$ The following relation is valid between the frequency values $f_i$ and $\nu_i$ $(i = 1, \ \text{...} \ , I)$: $$\nu_i = f_i - f_{\rm T} .$$ These equations can be interpreted as follows: At time $t = 0$ the equivalent low-pass signal is identical to the analytical signal: $$x_{\rm TP}(t = 0) = x_{\rm +}(t = 0)= \sum_{i=1}^{I} A_i
\cdot {\rm e}^{-{\rm j}\hspace{0.05cm}\cdot \hspace{0.05cm} \varphi_i}.$$ At this time, the "pointer group" is thus defined by the $I$ amplitude parameters $A_i$ and the $I$ phase positions $\varphi_i$ alone. All pointers of the analytical signal $x_+(t)$ rotate for $t > 0$ counterclockwise, corresponding to the (always positive) frequencies $f_i$. For the equivalent low-pass signal, the rotation speeds are lower. Pointers with $\nu_i > 0$ turn in the mathematically positive direction (counterclockwise), those with $\nu_i < 0$ in the opposite direction (clockwise). If the frequency parameter for a pointer is $\nu_i = 0$, this pointer rests in the complex plane corresponding to its initial position. $\text{Example 2:}$ We consider a spectrum $X_+(f)$ consisting of three spectral lines at $40\,\text{kHz}$, $50\,\text{kHz}$ and $60\,\text{kHz}$. With the amplitude and phase parameters recognizable from the graphic you obtain the analytical signal $x_+(t)$ corresponding to the lower left sketch. Construction of the equivalent low-pass signals in the time domain The snapshot of the lower left graph ⇒ "analytical signal" $x_+(t)$ applies to the time $t = 0$. All pointers then turn counterclockwise at a constant circular velocity. The blue pointer rotates with $60000$ rotations per second (it is the fastest pointer). The green pointer is the slowest; it rotates with the circular frequency $\omega_{40} = 2\pi \cdot 40000 \hspace{0.1cm} 1/\text{s}$. The violet sum pointer of all three pointers moves for $t > 0$ in the complex plane in a complicated manner, for the above numerical values first roughly in the direction drawn. The graphics on the right describe the "equivalent low-pass signal" in the frequency domain (top) and in the time domain (bottom), valid for the carrier frequency $f_{\rm T} = 50\,\text{kHz}$. The carrier is now at $f = 0$ and the corresponding red rotating pointer does not move.
The blue pointer ("upper sideband") rotates here counterclockwise with $\omega_{10} = 2\pi \cdot 10000 \hspace{0.1cm}1/\text{s}$. The green pointer ("lower sideband") rotates clockwise at the same angular velocity ($-\omega_{10}$). Definition of the "Locality Curve" $\text{Definition:}$ As $\text{Locality Curve}$ we call the curve on which the $\text{equivalent low-pass signal}$ $x_{\rm TP}(t)$ moves in the $\text{complex plane}$. Notes: In other technical literature the term "locality curve" is rarely used. Therefore, initially, an example is given. Definition of the locality curve $\text{Example 3:}$ We consider the equivalent low-pass signal $x_{\rm TP}(t)$ of $\text{Example 2}$, consisting of the resting (red) pointer of length $3$, the (blue) pointer with the complex value $\rm j$ at $t = 0$, rotating with $\omega_{10} = 2\pi \cdot 10000 \hspace{0.1cm} 1/\text{s}$ in the mathematically positive direction, and the (green) pointer of length $2$, which at $t = 0$ points in the direction of the negative imaginary axis; this rotates with the same circular velocity $\omega_{10}$ as the blue pointer, but in the opposite direction ($-\omega_{10}$). The blue and the green pointer each require exactly one period duration $T_0 = 100 \,{\rm µ}\text{s}$ for one rotation. The further course of the process can be seen in the above illustration: The violet pointer sum at time $t = 0$ equals $x_{\rm TP}(t=0) = 3 - \text{j}$. After $t = T_0/4 = 25 \,{\rm µ}\text{s}$ the resulting pointer group has the value "zero", since now the two rotating pointers point in the opposite direction to the carrier and compensate it exactly. After one period $(t = T_0 = 100 \,{\rm µ}\text{s})$ the initial state is reached again: $x_{\rm TP}(t = T_0) = x_{\rm TP}(t=0) = 3 - \text{j}$. In this example the locality curve is an ellipse, which is traversed by the equivalent low-pass signal once per period.
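The pointer sum of $\text{Example 3}$ can be reproduced in a few lines (a sketch with our own variable names). With $x_{\rm TP}(t) = 3 + {\rm j}\,{\rm e}^{{\rm j}\omega_{10}t} - 2{\rm j}\,{\rm e}^{-{\rm j}\omega_{10}t}$, the real and imaginary parts are $3 - 3\sin(\omega_{10}t)$ and $-\cos(\omega_{10}t)$, i.e. an ellipse centered at $3$ with semi-axes $3$ and $1$:

```python
import numpy as np

T0 = 100e-6                      # period duration 100 µs
w10 = 2 * np.pi / T0             # circular frequency of both rotating pointers
t = np.arange(400) / 400 * T0    # one full period, includes t = T0/4 exactly

# red resting carrier pointer (3) + blue pointer (j, rotating with +w10)
# + green pointer (-2j, rotating with -w10)
x_tp = 3 + 1j * np.exp(1j * w10 * t) - 2j * np.exp(-1j * w10 * t)

x_tp_0 = x_tp[0]                 # 3 - j at t = 0
x_tp_q = x_tp[100]               # at t = T0/4 the rotating pointers cancel the carrier
# ellipse check: ((Re - 3)/3)^2 + Im^2 = 1 for all t
ellipse = ((x_tp.real - 3) / 3) ** 2 + x_tp.imag ** 2
```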
The representation applies to the Double Sideband Amplitude Modulation with Carrier of a sinusoidal $10\ \rm kHz$ signal with a cosinusoidal carrier of any frequency, where the upper sideband (blue pointer) is attenuated. If the lengths of the blue and the green rotating pointer were equal, the locality curve would be a horizontal straight line on the real axis - see Exercise 4.5. In the chapter Envelope Demodulation the locality curves of different system variants are treated in detail. Representation with magnitude and phase The equivalent low-pass signal of the band-pass signal $x(t)$ is generally complex and can therefore be expressed in the form $$x_{\rm TP}(t) = a(t) \cdot {\rm e}^{{\rm j}\hspace{0.05cm}\cdot \hspace{0.05cm} \phi(t)}.$$ Note the plus sign in the argument of the exponential function, which differs from the "complex Fourier series". This is because the equation with the positive sign for the phase is usually also used to describe the modulation methods for the physical signal: $$x(t) = a(t) \cdot {\cos} ( 2 \pi f_{\rm T} t + \phi(t)).$$ In many textbooks this equation is used with plus or minus signs depending on the application, but always with the same "phase identifier". By using two different symbols $(\varphi$ and $\phi)$ we try to avoid this ambiguity in our learning tutorial $\rm LNTwww$. $\text{Example 4:}$ The same prerequisites apply as in $\text{Example 2}$ and $\text{Example 3}$. However, instead of the complex function $x_{\rm TP}(t)$, the two real functions $a(t)$ and $\phi(t)$ are now displayed in the graphic.
Magnitude and phase of the equivalent low-pass signal It should be noted with regard to this representation: The $\text{magnitude function}$ shows the time dependence of the pointer length: $$a(t)= \vert x_{\rm TP}(t)\vert =\sqrt{ {\rm Re}\left[x_{\rm TP}(t)\right]^2 + {\rm Im}\left[x_{\rm TP}(t)\right]^2 }.$$ The magnitude function $a(t)$ in this example is, like the complex equivalent low-pass signal $x_{\rm TP}(t)$, periodic with $T_0$ and takes values between $0$ and $6$. The $\text{phase function}$ describes the time-dependent angle of the equivalent low-pass signal $x_{\rm TP}(t)$, related to the coordinate origin: $$\phi(t)= {\rm arc} \left[x_{\rm TP}(t)\right]= {\rm arctan} \hspace{0.1cm}\frac{ {\rm Im}\left[x_{\rm TP}(t)\right]}{ {\rm Re}\left[x_{\rm TP}(t)\right]}.$$ Here are some numerical results for the phase values: The phase at the start time is $\phi (t = 0) =\hspace{0.1cm} -\arctan (1/3) ≈ \hspace{0.1cm} -18.43^{\circ} = \hspace{0.1cm}-0.32\,\text{rad}$. At $t = 25\,{\rm µ}\text{s}$, and at all times an integer multiple of $T_0 = 100 \,{\rm µ}\text{s}$ later, $x_{\rm TP}(t) = 0$ holds, so that at these times the phase function $\phi(t)$ changes abruptly from $-\pi /2$ to $+\pi /2$. At the time $t = 60\,{\rm µ}\text{s}$ the phase has a slightly positive value. Relation between equivalent low-pass signal and band-pass signal A band-pass signal $x(t)$ resulting from the modulation of a low-frequency source signal $q(t)$ with a carrier signal $z(t)$ of frequency $f_{\rm T}$ can be represented as follows: $$x(t) = a(t) \cdot {\cos} ( 2 \pi f_{\rm T} t + \phi(t)) \hspace{0.3cm}\Rightarrow\hspace{0.3cm} x_{\rm TP}(t) = a(t) \cdot {\rm e}^{{\rm j}\hspace{0.05cm}\cdot \hspace{0.05cm} \phi(t)}.$$ It should be noted here: $a(t)$ is the "time-dependent amplitude" ⇒ magnitude function or envelope curve. This is equal to the magnitude $|x_{\rm TP}(t)|$ of the equivalent low-pass signal. $\phi(t)$ is the phase function, i.e.
the "time-dependent phase", which can also be determined from the equivalent low-pass signal as the angle to the coordinate origin of the complex plane. In the physical (band-pass) signal $x(t)$, the phase function $\phi(t)$ can be recognized by the "zero crossings". For $\phi(t) > 0$, the zero crossings of $x(t)$ in the vicinity of time $t$ occur earlier than those of the carrier signal $z(t)$. In contrast, $\phi(t) < 0$ means a shift of the zero crossings to a later time. $\text{Definitions:}$ One speaks of $\text{Amplitude Modulation}$ if all information about the source signal is contained in the magnitude function $a(t)$ while $\phi(t)$ is constant. Conversely, with $\text{Phase Modulation}$ the phase function $\phi(t)$ contains all information about the source signal, while $a(t)$ is constant. $\text{Example 5:}$ The upper part of the following figure describes the Double-Sideband Amplitude Modulation $\text{(DSB-AM)}$ with carrier: The equivalent low-pass signal $x_{\rm TP}(t)$ is here always real ⇒ the locality curve is a horizontal straight line. Therefore the zero crossings of the blue DSB-AM signal $x(t)$ correspond exactly to those of the red carrier signal $z(t)$. This means: The phase function $\phi(t)$ is identical to zero ⇒ the magnitude function $a(t)$ contains all information about the message signal. $x_{\rm TP}(t)$ for Double-Sideband Amplitude Modulation and for Phase Modulation However, the lower part of the graphic applies to the Phase Modulation $\text{(PM)}$: The PM signal $y(t)$ always has a constant envelope (magnitude function) ⇒ the locality curve is a circle. At $t \approx 0$, $\phi (t) < 0$ holds ⇒ the zero crossings of $y(t)$ occur later than those of the carrier $z(t)$ ⇒ the zero crossings are "trailers". For positive values of the source signal ⇒ $\phi (t) > 0$ ⇒ the zero crossings occur earlier than those of the carrier signal ⇒ they are "precursors".
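Continuing the numerical sketch of $\text{Example 3}$ (own variable names), magnitude and phase follow directly from `np.abs` and `np.angle`; the values quoted above, $\phi(0) \approx -18.43^{\circ}$ and $0 \le a(t) \le 6$, can be verified:

```python
import numpy as np

T0 = 100e-6
w10 = 2 * np.pi / T0
t = np.arange(400) / 400 * T0
x_tp = 3 + 1j * np.exp(1j * w10 * t) - 2j * np.exp(-1j * w10 * t)

a = np.abs(x_tp)                   # magnitude function a(t) = |x_TP(t)|
phi = np.angle(x_tp)               # phase function phi(t)

phi_deg_at_0 = np.degrees(phi[0])  # -arctan(1/3) in degrees
```

Note that `np.angle` uses `arctan2` internally, so unlike the plain `arctan` quotient it also resolves the quadrant when ${\rm Re}[x_{\rm TP}(t)] < 0$.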
With phase modulation, therefore, all information about the source signal $q(t)$ is contained in the positions of the zero crossings. Why multiple representations of the same signal exist Finally, and hopefully not too late, we want to turn to the question of why the two complex and less comprehensible signals $x_+(t)$ and $x_{\rm TP}(t)$ are necessary to describe the actual band-pass signal $x(t)$. They were not introduced in Communications Engineering in order to unsettle students, but: $\text{Conclusion:}$ The magnitude function $a(t)$ and the phase function $\phi (t)$ can be extracted directly and easily from the physical band-pass signal $x(t)$ only in some special cases. The equivalent low-pass signal $x_{\rm TP}(t)$, which does not exist physically, is a mathematical tool to determine the functions $a(t)$ and $\phi (t)$ by simple geometrical considerations. The analytical signal $x_+(t)$ is an intermediate step in the transition from $x(t)$ to $x_{\rm TP}(t)$. While $x_+(t)$ is always complex, $x_{\rm TP}(t)$ can be real in special cases, for example, with ideal amplitude modulation according to the chapter Double-Sideband Amplitude Modulation $\text{(DSB-AM)}$. The same principle applies here as so often in the natural sciences and technologies: The introduction of $x_+(t)$ and $x_{\rm TP}(t)$ rather complicates simple problems. The advantages of this approach only become apparent in more difficult problems, which could not be solved with the physical band-pass signal $x(t)$ alone, or only with much more effort. For further clarification we provide two interactive applets: Physical and Analytical Signal ⇒ "Pointer Diagram", Physical and Equivalent Low-Pass Signal ⇒ "Locality Curve".
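The distinction between amplitude and phase modulation made above can also be illustrated numerically (a sketch; the modulator parameters are our own choices): for DSB-AM the equivalent low-pass signal stays on the real axis, for PM it stays on a circle of constant radius:

```python
import numpy as np

t = np.linspace(0.0, 1e-3, 1000)          # 1 ms observation window
q = np.cos(2 * np.pi * 5e3 * t)           # 5 kHz source signal

# DSB-AM with carrier: x_TP(t) = 1 + m*q(t) -> real, locality curve = straight line
x_tp_am = (1.0 + 0.5 * q).astype(complex)

# phase modulation: x_TP(t) = A * exp(j*eta*q(t)) -> locality curve = circle of radius A
A, eta = 2.0, 0.8
x_tp_pm = A * np.exp(1j * eta * q)

am_imag_max = np.max(np.abs(x_tp_am.imag))   # 0 for AM: all information is in a(t)
pm_magnitude = np.abs(x_tp_pm)               # constant A for PM: all information is in phi(t)
```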
Representation according to real and imaginary part Especially for the description of Quadrature Amplitude Modulation $\text{(QAM)}$, the representation of the equivalent low-pass signal according to real and imaginary part is suitable: $$x_{\rm TP}(t) = x_{\rm I}(t)+ {\rm j} \cdot x_{\rm Q}(t).$$ In this representation, the real part $x_{\rm I}(t)$ describes the $\text{In-phase component}$ (normal component), whereas the imaginary part $x_{\rm Q}(t)$ describes the $\text{Quadrature component}$ of $x_{\rm TP}(t)$. With the magnitude function $a(t) = |x_{\rm TP}(t)|$ and the phase function $\phi (t) = \text{arc}\,x_{\rm TP}(t)$, the following applies according to the definitions on the previous pages: $$\begin{align*}x_{\rm I}(t) & = {\rm Re}[x_{\rm TP}(t)] = a(t) \cdot \cos (\phi(t)),\\ x_{\rm Q}(t) & = {\rm Im}[x_{\rm TP}(t)] = a(t) \cdot \sin (\phi(t)).\end{align*}$$ Real and imaginary part of the equivalent low-pass signal $\text{Example 6:}$ At the considered time $t_0$, the following applies to the equivalent low-pass signal: $$x_{\rm TP}(t = t_0) = 2\,{\rm V} \cdot {\rm e}^{- {\rm j \hspace{0.05cm}\cdot \hspace{0.05cm} 60 ^\circ} }.$$ With Euler's Theorem, this can be written as: $$x_{\rm TP}(t = t_0) = 2\,{\rm V} \cdot \cos(60 ^\circ) - {\rm j} \cdot 2\,{\rm V} \cdot \sin(60 ^\circ).$$ This yields for the in-phase and quadrature components: $$x_{\rm I}(t = t_0) = 2\,{\rm V} \cdot \cos(60 ^\circ) = 1\,\text{V}, $$ $$x_{\rm Q}(t = t_0) = \hspace{0.05cm} - 2\,{\rm V} \cdot \sin(60^\circ) =\hspace{0.05cm}-1.732\,\text{V}.$$ By applying trigonometric transformations it can be shown that the real (physical) band-pass signal can also be represented in the following way: $$x(t) = a(t) \cdot \cos (2 \pi \cdot f_{\rm T} \cdot t + \phi(t)) = x_{\rm I}(t)\cdot \cos (2 \pi \cdot f_{\rm T} \cdot t )-x_{\rm Q}(t)\cdot \sin (2 \pi \cdot f_{\rm T} \cdot t ). $$ The minus sign results from the use of the phase function $\phi (t)$.
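The numbers of $\text{Example 6}$ and the band-pass identity just stated can be checked directly (a sketch with our own names; the check uses constant $a$ and $\phi$ for simplicity):

```python
import numpy as np

# Example 6: x_TP(t0) = 2V * exp(-j*60°)
x_tp0 = 2.0 * np.exp(-1j * np.radians(60.0))
x_i0, x_q0 = x_tp0.real, x_tp0.imag        # in-phase and quadrature components

# identity x(t) = a*cos(2*pi*f_T*t + phi) = x_I*cos(2*pi*f_T*t) - x_Q*sin(2*pi*f_T*t)
f_T = 10e3
t = np.linspace(0.0, 1e-3, 500)
a, phi = 2.0, np.radians(-60.0)            # constant magnitude and phase for the check
lhs = a * np.cos(2 * np.pi * f_T * t + phi)
rhs = x_i0 * np.cos(2 * np.pi * f_T * t) - x_q0 * np.sin(2 * np.pi * f_T * t)
```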
A comparison with the page Representation with cosine- and sine component in the second main chapter shows that instead of the difference, the sum results when referring to $\varphi (t) = -\phi (t)$. Adapted to our example, you then get $$x(t) = a(t) \cdot \cos (2 \pi \cdot f_{\rm T} \cdot t - \varphi(t)) = x_{\rm I}(t)\cdot \cos (2 \pi \cdot f_{\rm T} \cdot t )+x_{\rm Q}(t)\cdot \sin (2 \pi \cdot f_{\rm T} \cdot t ).$$ The quadrature component $x_{\rm Q}(t)$ thus differs from the above equation in its sign. Determination of the equivalent low-pass signal from the band-pass signal Division of the equivalent low-pass signal into in-phase and quadrature components The figure on the right shows two arrangements for determining the complex low-pass signal, split into in-phase and quadrature components, from the real band-pass signal $x(t)$, for display on an oscilloscope, for example. Let us first look at the upper model: The analytical signal $x_+(t)$ is first generated here by adding the Hilbert transform. Multiplication by the complex exponential function (with negative exponent!) yields the equivalent low-pass signal $x_{\rm TP}(t)$. The sought components $x_{\rm I}(t)$ and $x_{\rm Q}(t)$ are then obtained by forming the real and the imaginary part.
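The upper model can be sketched numerically (our own test-signal parameters; `scipy.signal.hilbert` supplies the Hilbert-transform step):

```python
import numpy as np
from scipy.signal import hilbert

fs, f_T = 8000.0, 1000.0
t = np.arange(8000) / fs                       # 1 s window, integer cycles throughout

a = 2.0 + np.cos(2 * np.pi * 50.0 * t)         # envelope a(t)
phi = 0.5 * np.sin(2 * np.pi * 50.0 * t)       # phase function phi(t)
x = a * np.cos(2 * np.pi * f_T * t + phi)      # real band-pass signal

x_plus = hilbert(x)                            # step 1: add the Hilbert transform
x_tp = x_plus * np.exp(-2j * np.pi * f_T * t)  # step 2: multiply by e^(-j*2*pi*f_T*t)
x_i, x_q = x_tp.real, x_tp.imag                # step 3: real and imaginary part
```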
With the lower (more practical) arrangement, you get for the upper and lower branch after the respective multiplications: $$a(t)\cdot \cos (\omega_{\rm T} t + \phi(t)) \cdot 2 \cdot \cos (\omega_{\rm T} t ) = a(t)\cdot \cos ( \phi(t)) + \varepsilon_{\rm 1}(t),$$ $$a(t)\cdot \cos (\omega_{\rm T} t + \phi(t)) \cdot (-2) \cdot \sin (\omega_{\rm T} t ) = a(t)\cdot \sin ( \phi(t)) + \varepsilon_{\rm 2}(t).$$ The respective second terms lie in the range around twice the carrier frequency and are removed by the low-pass filters with cut-off frequency $f_{\rm T}$: $$\varepsilon_{\rm 1}(t) = a(t)\cdot \cos (2\omega_{\rm T} \cdot t + \phi(t)),$$ $$\varepsilon_{\rm 2}(t) = - a(t)\cdot \sin (2\omega_{\rm T} \cdot t + \phi(t)).$$ A comparison with the above equations shows that the desired components $x_{\rm I}(t)$ and $x_{\rm Q}(t)$ can be tapped at the outputs: $$x_{\rm I}(t) = a(t)\cdot \cos ( \phi(t)) ,$$ $$x_{\rm Q}(t) = a(t)\cdot \sin ( \phi(t)) .$$ Power and energy of a band-pass signal We look at the (blue) band-pass signal $x(t)$ according to the graph, which results for example from Binary Amplitude Shift Keying $\text{(2ASK)}$. This digital modulation method is also known as "On-Off Keying". According to the explanations on the page Energy limited and power limited signals, the signal power related to $1 \,\Omega$ is $$P_x = \lim_{T_{\rm M} \to \infty} \frac{1}{T_{\rm M}} \cdot \int^{+T_{\rm M}/2} _{-T_{\rm M}/2}\hspace{-0.1cm} x^2(t)\,{\rm d}t.$$ If the binary "zeros" and "ones" are equally probable, then the limit and the infinite integration range can be omitted, and you get for the pattern signal sketched above $$P_x = \frac{1}{2T} \cdot \int ^{2T} _{0} x^2(t)\,{\rm d}t = \frac{4\,{\rm V}^2}{2T} \cdot \int^{T} _{0} \cos^2(\omega_{\rm T} \cdot t)\,{\rm d}t= 1\,{\rm V}^2.$$ From the sketch below you can see that by averaging over the squared envelope $a^2(t)$ – i.e.
over the magnitude square of the equivalent low-pass signal $x_{\rm TP}(t)$ – you get a result twice as large. Therefore, the following also holds here: $$P_x = { {1}/{2} \hspace{0.08cm}\cdot }\lim_{T_{\rm M} \to \infty} \frac{1}{T_{\rm M}} \cdot \int^{T_{\rm M}/2} _{-T_{\rm M}/2} |x_{\rm TP}(t)|^2\,{\rm d}t = {{1}/{2} \hspace{0.08cm}\cdot }\lim_{T_{\rm M} \to \infty} \frac{1}{T_{\rm M}} \cdot \int^{T_{\rm M}/2} _{-T_{\rm M}/2} a^2(t)\,{\rm d}t.$$ This result can be generalized and applied to energy limited signals. In this case, according to the page Energy limited and power limited signals, the energy is: $$E_x = \int ^{+\infty} _{-\infty} x^2(t)\,{\rm d}t = { {1}/{2} \hspace{0.08cm}\cdot }\int ^{+\infty} _{-\infty} |x_{\rm TP}(t)|^2\,{\rm d}t = { {1}/{2} \hspace{0.08cm}\cdot }\int ^{+\infty} _{-\infty} a^2(t)\,{\rm d}t.$$ However, this equation only applies exactly if the carrier frequency $f_{\rm T}$ is much larger than the bandwidth $B_{\rm BP}$ of the band-pass signal. $\text{Example 7:}$ We look at the band-pass signal $x(t)$ with $A = 2\,\text{V}$, $B = 1\,\text{kHz}$ and $f_{\rm T} = 10\,\text{kHz}$: Power calculation in the equivalent low-pass range $$x(t) = A \cdot {\rm si}(\pi \cdot B \cdot t) \cdot \cos(2 \pi \cdot f_{\rm T}\cdot \hspace{0.05cm}t + \phi(t)).$$ The magnitude spectrum $\vert X(f) \vert$ belonging to the signal $x(t)$ is displayed in the upper right corner (blue labeling). $X(f)$ is purely real due to the symmetry relations, so that: $$\vert X(f) \vert = X(f).$$ $\vert X(f) \vert$ is thus composed of two rectangles around $\pm f_{\rm T}$.
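The energy relation $E_x = {1}/{2}\cdot\int a^2(t)\,{\rm d}t$ can be checked numerically for the $\text{Example 7}$ pulse $a(t) = A\cdot{\rm si}(\pi B t)$, whose closed-form energy is $A^2/(2B)$ (a sketch; the finite integration limits introduce a small truncation error):

```python
import numpy as np

A, B = 2.0, 1000.0                      # amplitude in V, bandwidth in Hz
t = np.linspace(-0.1, 0.1, 400001)      # +/- 100/B covers practically all pulse energy

# envelope a(t) = A * si(pi*B*t); np.sinc(x) = sin(pi*x)/(pi*x), so si(pi*B*t) = np.sinc(B*t)
a = A * np.sinc(B * t)

# E_x = 1/2 * integral of a^2(t) dt, evaluated as a Riemann sum on the uniform grid
E_x = 0.5 * np.sum(a ** 2) * (t[1] - t[0])
```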
In the range around the carrier frequency, the following applies: $$\vert X(f) \vert = A/(2B) = 10^{-3}\text{V/Hz}.$$ The energy of this band-pass signal could in principle be calculated by the following equation: $$E_x = \int^{+\infty} _{-\infty} A^2 \cdot \frac{ {\rm sin}^2(\pi \cdot B \cdot t)}{ (\pi \cdot B \cdot t)^2}\cdot \cos^2(2 \pi \cdot f_{\rm T}\cdot \hspace{0.05cm}t + \phi(t))\,{\rm d}t .$$ According to the above equations, however, with the envelope $a(t)$ drawn in red at the top left, the following also applies: $$E_x = { {1}/{2} \hspace{0.08cm}\cdot }\int^{+\infty} _{-\infty} a^2(t)\,{\rm d}t= { {1}/{2} \hspace{0.08cm}\cdot }\int^{+\infty} _{-\infty} \vert A \cdot {\rm si}(\pi \cdot B \cdot t)\vert^2\,{\rm d}t $$ $$\Rightarrow \hspace{0.3cm} E_x = A^2\cdot \int^{+\infty} _{0} {\rm si}^2(\pi \cdot B \cdot t)\,{\rm d}t =A^2\cdot \frac {\pi}{2}\cdot \frac {1}{\pi B} = \frac {A^2}{2 B}= 2 \cdot 10^{-3}\,{\rm V}^2/{\rm Hz}.$$ A second solution with the same result is offered by Parseval's theorem: $$\int ^{+\infty} _{-\infty} a^2(t)\,{\rm d}t= \int ^{+\infty} _{-\infty} \vert A(f) \vert ^2\,{\rm d}f \hspace{0.3cm} \Rightarrow \hspace{0.3cm} E_x = {1}/{2}\cdot ( {A}/{B})^2 \cdot B = {A^2}/(2 B).$$ Here it is taken into account that: $\vert A(f) \vert = \vert X_{\rm TP}(f) \vert$ applies. Inside the bandwidth $B$ around the frequency $f = 0$, $X_{\rm TP}(f)$ is twice as large as $X(f)$ around the frequency $f = f_{\rm T}$, namely $A/B$. This is related to the definition of the spectrum $X_+(f)$ of the analytical signal, from which $X_{\rm TP}(f)$ is created by shifting. Exercises for the chapter Exercise 4.5: Locality Curve for DSB-AM Exercise 4.5Z: Simple Phase Modulator Exercise 4.6: Locality Curve for SSB-AM Exercise 4.6Z: Locality Curve for Phase Modulation Retrieved from "http://en.lntwww.de/index.php?title=Signal_Representation/Equivalent_Low-Pass_Signal_and_its_Spectral_Function&oldid=41900"
A fast and efficient count-based matrix factorization method for detecting cell types from single-cell RNAseq data Shiquan Sun1,2,3,4, Yabo Chen1, Yang Liu1 & Xuequn Shang1,2 Single-cell RNA sequencing (scRNAseq) data always involve various unwanted variables, which can mask the true signal needed to identify cell types. A more efficient way of dealing with this issue is to extract low-dimensional information from the high-dimensional gene expression data to represent the cell-type structure. In the past two years, several powerful matrix factorization tools were developed for scRNAseq data, such as NMF, ZIFA, pCMF and ZINB-WaVE. However, the existing approaches are either unable to directly model the raw counts of scRNAseq data or very time-consuming when handling a large number of cells (e.g. n>500). In this paper, we developed a fast and efficient count-based matrix factorization method (single-cell negative binomial matrix factorization, scNBMF) based on the TensorFlow framework to infer the low-dimensional structure of cell types. To demonstrate the scalability of our method, we conducted a series of experiments on three public scRNAseq data sets, brain, embryonic stem, and pancreatic islet. The experimental results show that scNBMF is more powerful for detecting cell types and 10 to 100 times faster than bespoke scRNAseq tools. In this paper, we proposed a fast and efficient count-based matrix factorization method, scNBMF, which is more powerful for detecting cell types. A series of experiments were performed on three public scRNAseq data sets. The results show that scNBMF is a more powerful tool for large-scale scRNAseq data analysis. scNBMF was implemented in R and Python, and the source code is freely available at https://github.com/sqsun. Single-cell RNA-sequencing (scRNAseq) analysis plays an important role in investigating tumour evolution, and is powerful for characterizing intra-tumor cellular heterogeneity [1, 2].
Compared with traditional RNA sequencing (i.e. bulk RNAseq), which measures gene expression levels within a cell population, scRNAseq quantifies gene expression levels within an individual cell [3, 4]. scRNAseq is more likely to reveal the detailed biological processes of cell developmental trajectories and cell-to-cell heterogeneity, providing fresh insights into cell composition, dynamic cell states, and regulatory mechanisms [5–8]. However, there are still several big challenges we have to carefully deal with before analyzing scRNAseq data [9, 10]. The first challenge is that scRNAseq data easily involve unwanted variables [11, 12], e.g. batch effects, confounding factors, etc. Moreover, scRNAseq data sets have their own characteristics: the gene expression matrix is extremely sparse because of the quite small number of mRNAs represented in each cell [13]; current sequencing technologies, e.g. CEL-Seq2 [14] and Drop-seq [15], etc., do not have enough power to quantify the actual concentration of mRNAs (i.e., the well-known "dropout events") [16]; the heavy amplifications may result in strong amplification bias [17]; cell cycle state, cell size or other unknown factors may contribute to cell-cell heterogeneity even within the same cell type [18]. The second important feature of scRNAseq data is their count nature [19]. In most RNA sequencing studies, the number of reads mapped to a given gene or isoform is often used as an intuitive estimate of its expression level. To account for the count nature of RNA sequencing data, and the resulting mean-variance dependence, most statistical methods were developed using discrete distributions for differential expression analysis, e.g., PQLseq [20], edgeR/DESeq [21, 22], and MACAU [23]. Therefore, a natural choice for analyzing scRNAseq data is to develop count-based dimensionality reduction methods.
Although several dimensionality reduction techniques have already been applied to scRNAseq data analysis, such as principal component analysis (PCA) [24], independent component analysis (ICA) [25], diffusion maps [26], partial least squares (PLS) [27, 28], and nonnegative matrix factorization (or factor analysis) [29, 30], gene expression levels are inherently quantified by counts, i.e., the count nature of scRNAseq data [31, 32]. Therefore, the development of bespoke scRNAseq dimensionality reduction methods has been triggered within the last two years. The first factor analysis method, ZIFA, models the drop-out events via a zero-inflated model, but the method does not take into account the count nature of the data [33]; pCMF builds a sparse Gamma-Poisson factor model within the Bayesian framework, but this method does not include covariates [34]; ZINB-WaVE involves both gene-level and sample-level covariates via a hierarchical model, but the method is really time-consuming when the sample size is large [35, 36]. Here, in this paper, we propose a fast and efficient count-based matrix factorization method that utilizes the negative binomial distribution to account for the over-dispersion caused by the count nature of scRNAseq data, single-cell Negative Binomial-based Matrix Factorization, scNBMF. The reason for choosing a negative binomial model instead of a zero-inflated negative binomial model is that not only do most scRNAseq data sets show little technical contribution to zero-inflation (Fig. 1a), but this choice also largely reduces the computational burden of estimating drop-out parameters for each gene. With the stochastic optimization method Adam [37] implemented within the TensorFlow framework, scNBMF is roughly 10 – 100 times faster than the existing count-based matrix factorization methods, such as pCMF and ZINB-WaVE. To demonstrate that the proposed method is scalable, we apply scNBMF to analyze three publicly available scRNAseq datasets.
The results demonstrate that scNBMF is more efficient and powerful than other matrix factorization methods. A simple example to show the parameter effect or optimizer effect on NMI and ARI in scRNA-seq data clustering. a This figure shows the relationship between mean gene expression levels and dropout rates. The black line indicates the observed value, which is computed as the number of unexpressed cells divided by the number of cells; the red line represents the expected value, which is calculated from the negative binomial distribution with mean gene expression levels and dispersion parameter ψ (ψ=mean(ψi)). b This figure shows how optimizers affect the performance of different methods on NMI and ARI. c-d These two figures indicate how the number of factors affects the NMI and ARI, respectively. scNBMF: model and algorithm scNBMF fits the log-likelihood function of a negative binomial model-based matrix factorization. Given n cells and p genes, we denote Y as a gene expression matrix, whose element yij is the count of gene i in cell j. To account for the over-dispersion problem, we model the gene expression level yij as a random variable following the negative binomial distribution with parameters μij and ϕi, i.e., $$y_{ij} \sim NB(\mu_{ij},\phi_{i}) $$ where the rate parameter μij denotes the mean expression level for gene i and cell j; the parameter ϕi captures the variance of gene expression, i.e., the gene-specific over-dispersion; NB is the negative binomial distribution, i.e. $$ \text{Pr}_{NB}(y_{ij}|\mu_{ij},\phi_{i}) \,=\, \left(\! \begin{array}{c} y_{ij} + \phi_{i} - 1\\ y_{ij} \end{array} \!\!\right) \!\!\left(\frac{\mu_{ij}} {\mu_{ij} + \phi_{i} } \!\right)^{y_{ij}}\!\! \left(\frac{ \phi_{i}} {\mu_{ij} + \phi_{i}} \!\right)^{\phi_{i}}\!. $$ For the rate parameter μij, we consider the following regression model $$log(\mu_{ij}) = log(N_{j}) + {\sum}_{k = 1}^{K} W_{ik} X_{kj}.
$$ where Nj is the total read count for the individual cell j (a.k.a. read depth or coverage); Wik are the loadings while Xkj are the factors representing the coordinates of the cells, which can be used for cell type identification; K is the pre-defined number of components. When all ϕi→∞, the negative binomial distribution reduces to the standard Poisson distribution. Therefore, the overall log-likelihood function is $$\begin{array}{*{20}l} \mathcal{L}_{NB}(\mu,\phi| Y) = &\sum\limits_{i = 1}^{p} \sum\limits_{j = 1}^{n} log \text{Pr}_{NB} \left(y_{ij}|\mu_{ij},\phi_{i}\right)\\ = &\sum\limits_{i = 1}^{p} \sum\limits_{j = 1}^{n} y_{ij} log (\mu_{ij}) + \phi_{i} log(\phi_{i}) \\&- (y_{ij} + \phi_{i}) log (\mu_{ij}+\phi_{i}) \\ & + log\left(\begin{array}{c} y_{ij} + {\phi_{i}} - 1\\ y_{ij} \end{array} \right). \end{array} $$ where μ denotes the mean gene expression matrix and its element \(\mu _{ij}=e^{log(N_{j}) + {\sum }_{k = 1}^{K} W_{ik} X_{kj} }\); ϕ is a p-vector, and its element ϕi represents the over-dispersion parameter for gene i. To make our model more interpretable for biological applications, we introduce a sparse penalty (LASSO) on the loading matrix W, since in real-world biological processes some genes are expressed while others are not. Therefore, the objective function of the optimization problem becomes $$\mathcal{L} = \mathcal{L}_{NB}(\mu,\phi|Y) + \lambda \sum\limits_{i = 1}^{p} \left\| W_{i} \right\|_{1} $$ where ∥·∥1 is an l1-norm (i.e. LASSO penalty); λ denotes the penalty parameter. In the above model, we are interested in extracting the factor matrix X for detecting cell types. We first estimate the dispersion parameter ϕi for each gene via edgeR [21] with default parameter settings, then fit the above model using the Adam optimizer within TensorFlow. For the deep learning model, we set the learning rate of the network to 0.001 and the maximum number of iterations to 18,000.
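The penalized log-likelihood above can be written down compactly. The following sketch (our own function names; a plain NumPy/SciPy check rather than the authors' TensorFlow implementation) evaluates it and agrees with SciPy's negative binomial distribution parameterized by size $\phi_i$ and success probability $\phi_i/(\mu_{ij}+\phi_i)$:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import nbinom

def nb_loglik(Y, W, X, logN, phi, lam=0.0):
    """Penalized NB log-likelihood: counts Y (p x n), loadings W (p x K),
    factors X (K x n), log read depths logN (n,), dispersions phi (p,)."""
    mu = np.exp(logN[None, :] + W @ X)              # mean matrix mu_ij
    ph = phi[:, None]
    ll = (Y * np.log(mu) + ph * np.log(ph) - (Y + ph) * np.log(mu + ph)
          + gammaln(Y + ph) - gammaln(ph) - gammaln(Y + 1.0))
    return ll.sum() - lam * np.abs(W).sum()         # LASSO penalty on the loadings

rng = np.random.default_rng(1)
p, n, K = 6, 5, 2
W = rng.normal(0, 0.1, (p, K))
X = rng.normal(0, 0.1, (K, n))
logN = np.log(rng.integers(500, 1000, n).astype(float))
phi = rng.uniform(0.5, 2.0, p)
Y = rng.integers(0, 20, (p, n))

ours = nb_loglik(Y, W, X, logN, phi)
mu = np.exp(logN[None, :] + W @ X)
ref = nbinom.logpmf(Y, phi[:, None], phi[:, None] / (mu + phi[:, None])).sum()
```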
Compared methods and evaluations To benchmark scNBMF, we compared seven existing methods, i.e. PCA, Nimfa, NMFEM, tSNE, ZIFA, pCMF, and ZINB-WaVE, in the experiments. Since PCA and ZIFA are only applicable to normalized gene expression data, we normalized the raw count data following previous recommendations [38]. Specifically, we transformed the count data into continuous data using base 2 and pseudo count 1.0, i.e., log2(Y+1.0). The performance of each method was evaluated by the normalized mutual information (NMI), defined in [39] $$ NMI(L_{e}, L) = \frac{\sum\limits_{k = 1}^{K}\sum\limits_{t = 1}^{K_{e}} \frac{n_{kt}}{n}log \left(\frac{n_{kt}}{n} \right) - \sum\limits_{k = 1}^{K}\frac{n_{k}}{n}log \left(\frac{n_{k}}{n} \right)- \sum\limits_{t = 1}^{K_{e}}\frac{n_{t}}{n}log \left(\frac{n_{t}}{n} \right)} {\sqrt{\sum\limits_{k = 1}^{K}\frac{n_{k}}{n}log \left(\frac{n_{k}}{n} \right)* \sum\limits_{t = 1}^{K_{e}}\frac{n_{t}}{n}log \left(\frac{n_{t}}{n}\right)} }. $$ and the adjusted Rand index (ARI), defined in [40] $$ ARI(L_{e}, L) = \frac{{{\sum}_{kt} {\left(\begin{array}{l} n_{kt}\\ 2 \end{array} \right) - \left({{\sum}_{k} {\left(\begin{array}{l} n_{k}\\ 2 \end{array} \right) {\sum}_{t} {\left(\begin{array}{l} n_{t}\\ 2 \end{array} \right)}} } \right)/\left(\begin{array}{l} n\\ 2 \end{array} \right)}}}{{\frac{1}{2}\left({{\sum}_{k} {\left(\begin{array}{l} n_{k}\\ 2 \end{array} \right) + {\sum}_{t} {\left(\begin{array}{l} n_{t}\\ 2 \end{array} \right)}} } \right) - \left({{\sum}_{k} {\left(\begin{array}{l} n_{k}\\ 2 \end{array} \right){\sum}_{t} {\left(\begin{array}{l} n_{t}\\ 2 \end{array} \right)}} } \right)/\left(\begin{array}{l} n\\ 2 \end{array} \right)}}.
$$ where Le and L are the predicted cluster labels and the true labels, respectively; Ke and K are the predicted and true numbers of clusters, respectively; nk denotes the number of cells assigned to true cluster k (k=1,2,⋯,K); similarly, nt denotes the number of cells assigned to predicted cluster t (t=1,2,⋯,Ke); nkt represents the number of cells shared between clusters k and t; and n is the total number of cells. Public scRNAseq data sets Three publicly available scRNAseq data sets were collected from three studies: The first scRNAseq data set was collected from human brain [41]. There are 420 cells in eight cell types after excluding hybrid cells: fetal quiescent cells (110 cells), fetal replicating cells (25 cells), astrocytes (62 cells), neurons (131 cells), endothelial cells (20 cells), oligodendrocytes (38 cells), microglia (16 cells), and oligodendrocyte precursor cells (OPCs, 16 cells); 16,619 genes remained for testing after filtering out lowly expressed genes. The original data were downloaded from the Gene Expression Omnibus data repository (GEO; GSE67835). The second scRNAseq data set was collected from human pancreatic islet [42]. There are 60 cells in six cell types after excluding undefined cells: alpha cells (18 cells), delta cells (2 cells), pp cells (9 cells), duct cells (8 cells), beta cells (12 cells), and acinar cells (11 cells); 116,414 genes remained for testing after filtering out lowly expressed genes. The original data were downloaded from the Gene Expression Omnibus data repository (GEO; GSE73727). The third scRNAseq data set was collected from human embryonic stem cells [43].
There are 1018 cells belonging to seven known cell subpopulations: neuronal progenitor cells (NPCs, 173 cells), definitive endoderm derivative cells (DEDs), endothelial cells (ECs, 105 cells), trophoblast-like cells (TBs, 69 cells), undifferentiated H1 (212 cells) and H9 (162 cells) ESCs, and foreskin fibroblasts (HFFs, 159 cells); 17,027 genes remained for testing after the filtering step. The original data were downloaded from the Gene Expression Omnibus data repository (GEO; GSE75748). Model selection Our first set of experiments selects the optimization method for the log-likelihood function of the negative binomial matrix factorization model. Without loss of generality, we choose the human brain scRNAseq data set. Five optimization methods were compared for training the network, i.e., Adam, gradient descent, Adagrad, Momentum, and Ftrl. The results show that Adam significantly outperforms the other optimization methods regardless of the criterion chosen (Fig. 1b). Specifically, for NMI, Adam, gradient descent, Adagrad, Momentum, and Ftrl achieve 0.8579, 0.0341, 0.0348, 0.4859, and 0.1251, respectively. Therefore, in the following experiments, we use Adam to optimize the network. Our second set of experiments selects the number of factors for the low-dimensional structure of cell types. Without loss of generality, we again choose the human brain scRNAseq data set and vary the number of factors (k = 4, 6, 10, 15, and 20). The results demonstrate that the number of factors does not affect PCA (Fig. 1c and d; blue line), while the other four methods show an increasing pattern as the number of factors varies from 4 to 20 (Fig. 1c and d). Therefore, we choose the top 20 factors in the following experiments. Our third set of experiments applies scNBMF to three real scRNAseq data sets: human brain, human pancreatic islet, and human embryonic stem cells.
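The NMI and ARI criteria defined earlier can be computed directly from the contingency table of predicted versus true labels. A minimal pure-Python sketch (function names are our own; natural logarithms are used, which cancel in the NMI ratio):

```python
from collections import Counter
from math import comb, log, sqrt

def _contingency(pred, true):
    # n_kt: number of cells with true label k and predicted label t.
    return Counter(zip(true, pred))

def nmi(pred, true):
    # Normalized mutual information: MI(pred, true) / sqrt(H(true) * H(pred)).
    n = len(true)
    joint = _contingency(pred, true)
    nk, nt = Counter(true), Counter(pred)
    mi = sum(c / n * log(n * c / (nk[k] * nt[t])) for (k, t), c in joint.items())
    hk = -sum(c / n * log(c / n) for c in nk.values())
    ht = -sum(c / n * log(c / n) for c in nt.values())
    return mi / sqrt(hk * ht)

def ari(pred, true):
    # Adjusted Rand index from pair counts of the contingency table.
    n = len(true)
    sum_kt = sum(comb(c, 2) for c in _contingency(pred, true).values())
    a = sum(comb(c, 2) for c in Counter(true).values())
    b = sum(comb(c, 2) for c in Counter(pred).values())
    expected = a * b / comb(n, 2)
    return (sum_kt - expected) / (0.5 * (a + b) - expected)
```

Both scores are invariant to relabeling of clusters, so a permuted but perfect clustering still scores 1; the sketch does not guard against the degenerate single-cluster case, where the entropies vanish.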
The cell type information of the three data sets was reported by the original studies. For comparison, we ran seven other methods: PCA, Nimfa, NMFEM, tSNE, ZIFA, pCMF, and ZINB-WaVE. For evaluation, we extracted the low-dimensional structure with the top 10 factors, applied the k-means clustering method in an unsupervised manner, and repeated the procedure 100 times to test how well each method recovers the cell type assignments in terms of NMI and ARI. The first biological data application is performed on the human brain scRNAseq data set. Figure 2 shows the tSNE visualizations for the eight compared clustering methods. scNBMF shows clear cell type patterns consistent with the annotated cell types (Fig. 2h). We also carried out the same analysis using PCA (Fig. 2a), Nimfa (Fig. 2b), NMFEM (Fig. 2c), tSNE (Fig. 2d), ZIFA (Fig. 2e), pCMF (Fig. 2f), and ZINB-WaVE (Fig. 2g). For both NMI and ARI, scNBMF outperforms the other methods. Specifically, for the NMI criterion, PCA, Nimfa, NMFEM, tSNE, ZIFA, pCMF, ZINB-WaVE, and scNBMF achieve 0.582, 0.494, 0.456, 0.712, 0.797, 0.787, 0.892, and 0.901, respectively (Fig. 2i and Table 1); for the ARI criterion, they achieve 0.339, 0.258, 0.264, 0.544, 0.721, 0.788, 0.916, and 0.933, respectively (Fig. 2i and Table 1). Performance evaluation on human brain scRNA-seq data. In this data set there are 420 cells in eight different cell types after the exclusion of hybrid cells. Each color represents a cell type. a-h These eight figures display the two-dimensional tSNE clustering output of the eight matrix factorization methods (PCA, Nimfa, NMFEM, tSNE, ZIFA, pCMF, ZINB-WaVE, and scNBMF).
i This figure shows the NMI and ARI values of the eight compared methods Table 1 Clustering comparison of the matrix factorization-based methods in terms of normalized mutual information (NMI) and adjusted Rand index (ARI) The second biological data application investigates the human pancreatic islet scRNAseq data set. This data set has a small number of cells - only 60 cells in six cell types. Since none of the methods has enough power to detect the cell type clustering patterns, we do not show tSNE plots for this data set. For NMI and ARI, tSNE shows the best performance, while scNBMF achieves the second best (Table 1). Specifically, tSNE achieves 0.973 and 0.652 on NMI and ARI, respectively, while scNBMF achieves 0.716 and 0.472. The third biological data application investigates lineage-specific transcriptomic features at single-cell resolution. To elucidate the distinctions between different lineages, we performed eight matrix factorization methods, i.e., PCA (Fig. 3a), Nimfa (Fig. 3b), NMFEM (Fig. 3c), tSNE (Fig. 3d), ZIFA (Fig. 3e), pCMF (Fig. 3f), ZINB-WaVE (Fig. 3g), and scNBMF (Fig. 3h). scNBMF demonstrates the respective cell-type patterns more clearly than the other methods. The H1 and H9 cell types show a tightly overlapping pattern, indicating the relative homogeneity of human ES cells; these results are also consistent with previous findings [43]. For NMI and ARI, scNBMF outperforms the other methods (Fig. 3i and Table 1). Specifically, for NMI, PCA, Nimfa, NMFEM, tSNE, ZIFA, pCMF, ZINB-WaVE, and scNBMF achieve 0.366, 0.414, 0.741, 0.658, 0.888, 0.822, 0.888, and 0.908, respectively; for ARI, they achieve 0.187, 0.173, 0.614, 0.538, 0.748, 0.659, 0.721, and 0.763, respectively. Performance evaluation on the human embryonic stem scRNA-seq data set, which contains 1018 cells in seven cell types.
Different colors again represent different cell types. a-h These eight figures display the two-dimensional tSNE clustering output of the eight matrix factorization methods (PCA, Nimfa, NMFEM, tSNE, ZIFA, pCMF, ZINB-WaVE, and scNBMF). i This figure shows the NMI and ARI values of the eight compared methods Computation time The last set of experiments compares the computation time of PCA, Nimfa, NMFEM, tSNE, ZIFA, pCMF, ZINB-WaVE, and scNBMF. Without loss of generality, we use the human brain data set to report the computation time of the compared methods (Table 2). Nimfa, NMFEM, ZIFA, pCMF, and ZINB-WaVE are bespoke scRNAseq methods. Compared with the count-based methods ZINB-WaVE and pCMF, scNBMF is roughly 100-fold faster than ZINB-WaVE and 10-fold faster than pCMF. Even compared with the non-count-based methods ZIFA, Nimfa, and NMFEM, scNBMF is still the fastest method. Table 2 Computation times (seconds) of the matrix factorization-based methods on the human brain scRNAseq data set; k represents the number of factors With rapidly developing sequencing technology, large numbers of scRNAseq data sets are easily obtained from different sources, so computation time is one of the big issues for downstream analysis. On the other hand, scRNAseq data have their own characteristics, i.e., count nature, noise, sparsity, etc. These properties have triggered the development of fast and efficient count-based matrix factorization methods. In this paper, we proposed a count-based matrix factorization method (scNBMF) to model the raw count data directly, preventing the information loss incurred by normalizing raw counts. On three public biological scRNAseq data sets, scNBMF provides powerful performance compared with the other seven methods in terms of NMI, ARI, and computation time. A zero-inflated distribution is a more appropriate way to account for dropouts, as in, e.g., ZIFA and ZINB-WaVE.
In the current study, we did not consider the zero-inflated model because the tested data sets do not show too many dropouts. However, this is a necessary step in analyzing some scRNAseq data sets, so we will add a zero-inflated distribution in a future version of scNBMF. Biologically, incorporating all genes in scRNAseq data analysis would probably introduce some unwanted variation, because not all genes are expressed in biological processes. An interesting direction for improving the performance of scNBMF is therefore to first select some informative genes; this step can largely reduce unwanted variation and exclude some redundant genes [44, 45] in the downstream analysis. In addition, gene expression levels are highly affected by other gene-specific annotations, such as GC-content, gene length, and chromatin states [46]. If some interesting variables in the statistical model, such as the "drop-out" parameter, can be inferred from annotation information, the method will probably gain significant power in detecting cell types from scRNAseq data. ARI: Adjusted rand index; DESeq: Differential expression; edgeR: Empirical analysis of digital gene expression data in R; ICA: Independent components analysis; MACAU: Mixed model association for count data via data augmentation; NMI: Normalized mutual information; PCA: Principal component analysis; pCMF: Probabilistic count matrix factorization; PLS: Partial least squares; PQLseq: Penalized quasi-likelihood; scNBMF: Single-cell negative binomial matrix factorization; scRNAseq: Single-cell RNA sequencing; tSNE: t-distributed stochastic neighbor embedding; ZIFA: Zero-inflated factor analysis; ZINB-WaVE: Zero-inflated negative binomial-based wanted variation extraction Alexander J, et al. Utility of single-cell genomics in diagnostic evaluation of prostate cancer. Cancer Res. 2018; 78:348–58. Love JC. Single-cell sequencing in cancer genomics. Cancer Res. 2015; 75:IA14. Conesa A, et al. A survey of best practices for RNA-seq data analysis.
Genome Biol. 2016; 17:13. Vieth B, et al. powsimR: power analysis for bulk and single cell RNA-seq experiments. Bioinformatics. 2017; 33:3486–8. Buettner F, et al. Computational analysis of cell-to-cell heterogeneity in single-cell RNA-sequencing data reveals hidden subpopulations of cells. Nat Biotechnol. 2015; 33:155–60. Jiang L, et al. GiniClust: detecting rare cell types from single-cell gene expression data with Gini index. Genome Biol. 2016; 17:144. Kiselev VY, et al. SC3: consensus clustering of single-cell RNA-seq data. Nat Methods. 2017; 14:483–6. Lonnberg T, et al. Single-cell RNA-seq and computational analysis using temporal mixture modeling resolves T(H)1/T-FH fate bifurcation in malaria. Sci Immunol. 2017; 2:eaal2192. Wills QF, Mead AJ. Application of single-cell genomics in cancer: promise and challenges. Hum Mol Genet. 2015; 24:R74–R84. Yuan GC, et al. Challenges and emerging directions in single-cell analysis. Genome Biol. 2017; 18:84. Ding B, et al. Normalization and noise reduction for single cell RNA-seq experiments. Bioinformatics. 2015; 31:2225–7. Vallejos CA, et al. Normalizing single-cell RNA sequencing data: challenges and opportunities. Nat Methods. 2017; 14:565–71. Li WV, Li JYJ. An accurate and robust imputation method scImpute for single-cell RNA-seq data. Nat Commun. 2018; 9:997. Hashimshony T, et al. CEL-Seq2: sensitive highly-multiplexed single-cell RNA-Seq. Genome Biol. 2016; 17:77. Macosko EZ, et al. Highly parallel genome-wide expression profiling of individual cells using nanoliter droplets. Cell. 2015; 161:1202–14. Ziegenhain C, et al. Comparative analysis of single-cell RNA sequencing methods. Mol Cell. 2017; 65:631–43. Brennecke P, et al. Accounting for technical noise in single-cell RNA-seq experiments. Nat Methods. 2013; 10:1093–5. McDavid A, Finak G, Gottardo R. The contribution of cell cycle to heterogeneity in single-cell RNA-seq data. Nat Biotechnol. 2016; 34:591–3.
Wu AR, Neff NF, Kalisky T, et al. Quantitative assessment of single-cell RNA-sequencing methods. Nat Methods. 2014; 11:41–6. Sun S, Zhu J, Mozaffari S, Ober C, Chen M, Zhou X. Heritability estimation and differential analysis of count data with generalized linear mixed models in genomic sequencing studies. Bioinformatics. 2018. https://doi.org/10.1093/bioinformatics/bty644. Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26:139–40. Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biol. 2010; 11:R106. Sun S, Hood M, Scott L, Peng Q, Mukherjee S, Tung J, Zhou X. Differential expression analysis for RNAseq using Poisson mixed models. Nucleic Acids Res. 2017; 45:e106. Zurauskiene J, Yau C. pcaReduce: hierarchical clustering of single cell transcriptional profiles. BMC Bioinforma. 2016; 17:140. Trapnell C, et al. The dynamics and regulators of cell fate decisions are revealed by pseudotemporal ordering of single cells. Nat Biotechnol. 2014; 32:381–U251. Haghverdi L, Buettner F, Theis FJ. Diffusion maps for high-dimensional single-cell analysis of differentiation data. Bioinformatics. 2015; 31:2989–98. Chen MJ, Zhou X. Controlling for confounding effects in single cell RNA sequencing studies using both control and target genes. Sci Rep. 2017; 7:13587. Sun SQ, Peng QK, Shakoor A. A kernel-based multivariate feature selection method for microarray data classification. PLoS ONE. 2014; 9:e102541. Shao CX, Hofer T. Robust classification of single-cell transcriptome data by nonnegative matrix factorization. Bioinformatics. 2017; 33:235–42. Zhu X, et al. Detecting heterogeneity in single-cell RNA-Seq data by non-negative matrix factorization. PeerJ. 2017; 5:e2888. Miao Z, et al. DEsingle for detecting three types of differential expression in single-cell RNA-seq data. Bioinformatics. 2018; 34:3223–4. Streets AM, Huang YY.
How deep is enough in single-cell RNA-seq? Nat Biotechnol. 2014; 32:1005–6. Pierson E, Yau C. ZIFA: Dimensionality reduction for zero-inflated single-cell gene expression analysis. Genome Biol. 2015; 16:241. Durif G, et al. Probabilistic count matrix factorization for single cell expression data analysis. BioRxiv; 2017. Risso D, et al. A general and flexible method for signal extraction from single-cell RNA-seq data. Nat Commun. 2018; 9:284. Van den Berge K, et al. Observation weights unlock bulk RNA-seq tools for zero inflation and single-cell applications. Genome Biol. 2018; 19:24. Kingma DP, Ba J. Adam: A method for stochastic optimization. In: The 3rd International Conference on Learning Representations. San Diego; 2015. Lin PJ, Troup M, Ho JWK. CIDR: Ultrafast and accurate clustering through imputation for single-cell RNA-seq data. Genome Biol. 2017; 18:59. Ghosh J, Acharya A. Cluster ensembles. Adv Rev. 2011; 4:305–15. Hubert L, Arabie P. Comparing partitions. J Classif. 1985; 2:193–218. Darmanis S, et al. A survey of human brain transcriptome diversity at the single cell level. P Natl Acad Sci USA. 2015; 112:7285–90. Li J, et al. Single-cell transcriptomes reveal characteristic features of human pancreatic islet cell types. EMBO Rep. 2016; 17:178–87. Chu LF, et al. Single-cell RNA-seq reveals novel regulators of human embryonic stem cell differentiation to definitive endoderm. Genome Biol. 2016; 17:173. Feng Z, Wang Y. Elf: extract landmark features by optimizing topology maintenance, redundancy, and specificity. IEEE ACM T Comput BI. 2018; 99:1. Sun S, Peng Q, Zhang X. Global feature selection from microarray data using Lagrange multipliers. Knowl-Based Syst. 2016; 110:267–74. Sun S, Sun X, Zheng Y. Higher-order partial least squares for predicting gene expression levels from chromatin states. BMC Bioinforma. 2018; 19:113. Not applicable.
Publication of this article was sponsored by the Top International University Visiting Program for Outstanding Young Scholars of Northwestern Polytechnical University; the Fundamental Research Funds for the Central Universities (Grant: 3102017OQD098); and the Natural Science Foundation of China (NSFC; Grants: 61772426 and 61332014). scNBMF was implemented in R and Python, and the source code is freely available at https://github.com/sqsun. The three publicly available scRNAseq data sets are available at https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE67835, https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE73727, and https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE75748. This article has been published as part of BMC Systems Biology Volume 13 Supplement 2, 2019: Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): systems biology. The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-13-supplement-2. School of Computer Science, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, People's Republic of China Shiquan Sun, Yabo Chen, Yang Liu & Xuequn Shang Key Laboratory of Big Data Storage and Management, Northwestern Polytechnical University, Ministry of Industry and Information Technology, Xi'an, Shaanxi, 710129, People's Republic of China Shiquan Sun & Xuequn Shang Centre for Multidisciplinary Convergence Computing (CMCC), School of Computer Science, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, People's Republic of China Shiquan Sun Department of Biostatistics, University of Michigan, Ann Arbor, MI 48109, USA Yabo Chen Xuequn Shang SS, YC, YL and XS conceived and wrote the manuscript. SS and YC implemented the software and analyzed the data. All authors read and approved the final manuscript. Correspondence to Xuequn Shang.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Sun, S., Chen, Y., Liu, Y. et al. A fast and efficient count-based matrix factorization method for detecting cell types from single-cell RNAseq data. BMC Syst Biol 13 (Suppl 2), 28 (2019). https://doi.org/10.1186/s12918-019-0699-6 Keywords: Matrix factorization; Read count
Mesochronic classification of trajectories in incompressible 3D vector fields over finite times August 2016, 9(4): 923-958. doi: 10.3934/dcdss.2016035 Marko Budišić 1, Stefan Siegmund 2, Doan Thai Son 3, and Igor Mezić 4, Department of Mathematics, Clarkson University, Potsdam, NY, United States; Center for Dynamics & Institute of Analysis, Department of Mathematics, TU Dresden, Dresden, Germany; Department of Probability and Statistics, Institute of Mathematics, Vietnam Academy of Science and Technology, Hanoi, Vietnam; Department of Mechanical Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States Received October 2015 Revised April 2016 Published August 2016 The mesochronic velocity is the average of the velocity field along trajectories generated by the same velocity field over a time interval of finite duration. In this paper we classify initial conditions of trajectories evolving in incompressible vector fields according to the character of motion of material around the trajectory. In particular, we provide calculations that can be used to determine the number of expanding directions and the presence of rotation from the characteristic polynomial of the Jacobian matrix of the mesochronic velocity. In doing so, we show that (a) the mesochronic velocity can be used to characterize dynamical deformation of three-dimensional volumes, (b) the resulting mesochronic analysis is a finite-time extension of the Okubo–Weiss–Chong analysis of incompressible velocity fields, and (c) the two-dimensional mesochronic analysis from Mezić et al., "A New Mixing Diagnostic and Gulf Oil Spill Movement", Science 330 (2010), 486–489, extends to three-dimensional state spaces.
Theoretical considerations are further supported by numerical computations performed for a dynamical system arising in fluid mechanics, the unsteady Arnold–Beltrami–Childress (ABC) flow. Keywords: Nonautonomous dynamical systems, hyperbolicity, time averaging, volume-preserving flows, finite-time dynamics. Mathematics Subject Classification: Primary: 37N10, 37B55; Secondary: 37Ex. Citation: Marko Budišić, Stefan Siegmund, Doan Thai Son, Igor Mezić. Mesochronic classification of trajectories in incompressible 3D vector fields over finite times. Discrete & Continuous Dynamical Systems - S, 2016, 9 (4): 923-958. doi: 10.3934/dcdss.2016035
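Since the velocity field here is autonomous, the mesochronic velocity at an initial condition x₀ over [0, T] must coincide with the net displacement (φ_T(x₀) − x₀)/T, which gives a built-in consistency check. A minimal sketch for the steady ABC flow (the paper treats the unsteady version; parameter values and function names are our own choices):

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Common ABC parameter choice (our assumption, not tied to the paper's experiments).
A, B, C = 1.0, np.sqrt(2.0 / 3.0), np.sqrt(1.0 / 3.0)

def abc(t, x):
    # Steady Arnold-Beltrami-Childress velocity field on the 3-torus.
    return [A * np.sin(x[2]) + C * np.cos(x[1]),
            B * np.sin(x[0]) + A * np.cos(x[2]),
            C * np.sin(x[1]) + B * np.cos(x[0])]

def mesochronic_velocity(x0, T, n=2001):
    # Average the velocity field along the trajectory started at x0 over [0, T].
    sol = solve_ivp(abc, (0.0, T), x0, dense_output=True, rtol=1e-10, atol=1e-12)
    ts = np.linspace(0.0, T, n)
    vs = np.array([abc(t, sol.sol(t)) for t in ts])
    return trapezoid(vs, ts, axis=0) / T
```

The Jacobian of this averaged field with respect to x₀ (e.g., by finite differences over a grid of initial conditions) is the matrix whose characteristic polynomial drives the classification described in the abstract.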
Discrete & Continuous Dynamical Systems - B, 2018, 23 (4) : 1835-1850. doi: 10.3934/dcdsb.2018094 Shu Dai, Dong Li, Kun Zhao. Finite-time quenching of competing species with constrained boundary evaporation. Discrete & Continuous Dynamical Systems - B, 2013, 18 (5) : 1275-1290. doi: 10.3934/dcdsb.2013.18.1275 Emilija Bernackaitė, Jonas Šiaulys. The finite-time ruin probability for an inhomogeneous renewal risk model. Journal of Industrial & Management Optimization, 2017, 13 (1) : 207-222. doi: 10.3934/jimo.2016012 Tingting Su, Xinsong Yang. Finite-time synchronization of competitive neural networks with mixed delays. Discrete & Continuous Dynamical Systems - B, 2016, 21 (10) : 3655-3667. doi: 10.3934/dcdsb.2016115 Peter Giesl. Construction of a finite-time Lyapunov function by meshless collocation. Discrete & Continuous Dynamical Systems - B, 2012, 17 (7) : 2387-2412. doi: 10.3934/dcdsb.2012.17.2387 Khalid Addi, Samir Adly, Hassan Saoud. Finite-time Lyapunov stability analysis of evolution variational inequalities. Discrete & Continuous Dynamical Systems - A, 2011, 31 (4) : 1023-1038. doi: 10.3934/dcds.2011.31.1023 Gang Tian. Finite-time singularity of Kähler-Ricci flow. Discrete & Continuous Dynamical Systems - A, 2010, 28 (3) : 1137-1150. doi: 10.3934/dcds.2010.28.1137 Thierry Cazenave, Yvan Martel, Lifeng Zhao. Finite-time blowup for a Schrödinger equation with nonlinear source term. Discrete & Continuous Dynamical Systems - A, 2019, 39 (2) : 1171-1183. doi: 10.3934/dcds.2019050 Mickael Chekroun, Michael Ghil, Jean Roux, Ferenc Varadi. Averaging of time - periodic systems without a small parameter. Discrete & Continuous Dynamical Systems - A, 2006, 14 (4) : 753-782. doi: 10.3934/dcds.2006.14.753 Marko Budišić Stefan Siegmund Doan Thai Son Igor Mezić
CommonCrawl
Variational Methods in Image Segmentation Variational Methods in Image Segmentation pp 118-126 | Cite as Semicontinuity Properties of the Hausdorff Measure Jean Michel Morel Sergio Solimini In this chapter, we first define a natural metric on the set of the closed sets of a metric space E: the Hausdorff distance. For this distance, we prove that the set of closed subsets of a compact set is compact. It would be extremely convenient to have the following continuity property: If a sequence of sets An converges for the Hausdorff distance towards a set A, then the Hausdorff measures of the An also converge to the Hausdorff measure of A. We give examples which show that this is in general not true. However, simple and useful conditions can be given on the sequence An in order that the Hausdorff measure is lower semicontinuous, that is, $${{\cal H}^\alpha }(A)\, \le \,\mathop {\lim }\limits_{\,\,\,\,\,\,\,\,\,\,\,\,\,n} \inf \,\,\,\,{{\cal H}^\alpha }({A_n}).$$ We show that this last property is true when the sets An are "uniformly concentrated" (a property which we shall show to be true in Chapter 15 of this book for minimizing sequences of segmentations. © Birkhäuser Boston 1995 1.CEREMADEUniversité Paris-DauphineParis Cedex 16France 2.Srada provinciale Lecce-ArnesanoUniversità degli Studi di LecceLecceItaly Morel J.M., Solimini S. (1995) Semicontinuity Properties of the Hausdorff Measure. In: Variational Methods in Image Segmentation. Progress in Nonlinear Differential Equations and Their Applications, vol 14. Birkhäuser Boston DOI https://doi.org/10.1007/978-1-4684-0567-5_10 Publisher Name Birkhäuser Boston
January 2014, 13(1): 273-291. doi: 10.3934/cpaa.2014.13.273
Global well-posedness of some high-order semilinear wave and Schrödinger type equations with exponential nonlinearity
Tarek Saanouni (University Tunis El Manar, Faculty of Sciences of Tunis, Department of Mathematics, 2092, Tunis, Tunisia)
Received: December 2012; Revised: May 2013; Published: July 2013
Extending previous works [47, 27, 30], we consider in even space dimensions the initial value problems for some high-order semi-linear wave and Schrödinger type equations with exponential nonlinearity. We obtain global well-posedness in the energy space.
Keywords: high-order Schrödinger type equation, high-order wave type equation, Moser-Trudinger inequality, well-posedness, energy estimate.
Mathematics Subject Classification: Primary: 35Q55; Secondary: 35B3.
Citation: Tarek Saanouni. Global well-posedness of some high-order semilinear wave and Schrödinger type equations with exponential nonlinearity. Communications on Pure & Applied Analysis, 2014, 13 (1) : 273-291. doi: 10.3934/cpaa.2014.13.273
References:
S. Adachi and K. Tanaka, Trudinger type inequalities in $R^N$ and their best exponent, Proc. Amer. Math. Society, 128 (1999), 2051. doi: 10.1090/S0002-9939-99-05180-1.
A. Atallah Baraket, Local existence and estimations for a semilinear wave equation in two dimension space, Boll. Unione Mat. Ital. Sez. B Artic. Ric. Mat, 8 (2004), 1. MR204459.
J. Bourgain, Global wellposedness of defocusing critical nonlinear Schrödinger equation in the radial case, J. Amer. Math. Soc., 12 (1999), 145. doi: 10.1090/S0894-0347-99-00283-0.
T. Cazenave, An introduction to nonlinear Schrödinger equations, Textos de Metodos Matematicos, 26 (1996).
T. Cazenave and F. B. Weissler, The Cauchy problem for the critical nonlinear Schrödinger equation in $H^s$, Nonl. Anal. - TMA, 14 (1990), 807. doi: 10.1016/0362-546X(90)90023-A.
J. Colliander, S. Ibrahim, M. Majdoub and N. Masmoudi, Energy critical NLS in two space dimensions, J. Hyperbolic Differ. Equ., 6 (2009), 549. doi: 10.1142/S0219891609001927.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka and T. Tao, Global well-posedness and scattering for the energy-critical nonlinear Schrödinger equation in $R^3$, Ann. Math., 167 (2008), 767. doi: 10.4007/annals.2008.167.767.
G. M. Constantine and T. H. Savitis, A multivariate Faa Di Bruno formula with applications, T. A. M. S., 348 (1996), 503.
M. Keel and T. Tao, Endpoint Strichartz estimates, Amer. J. Math., 120 (1998), 955.
V. A. Galaktionov and S. I. Pohozaev, Blow-up and critical exponents for nonlinear hyperbolic equations, Nonlinear Analysis, 53 (2003), 453. doi: 10.1016/S0362-546X(02)00311-5.
J. Ginibre and G. Velo, The global Cauchy problem for nonlinear Klein-Gordon equation, Math. Z., 189 (1985), 487. doi: 10.1007/BF01168155.
M. Grillakis, Regularity and asymptotic behaviour of the wave equation with a critical nonlinearity, Annal. of Math., 132 (1990), 485. doi: 10.2307/1971427.
E. Hebey and B. Pausader, An introduction to fourth-order nonlinear wave equations, (2008).
S. Ibrahim, M. Majdoub and N. Masmoudi, Global solutions for a semilinear $2D$ Klein-Gordon equation with exponential type nonlinearity, Comm. Pure App. Math., 59 (2006), 1639. doi: 10.1002/cpa.20127.
S. Ibrahim, M. Majdoub and N. Masmoudi, Instability of $H^1$-supercritical waves, C. R. Acad. Sci. Paris, 345 (2007), 133. doi: 10.1016/j.crma.2007.06.008.
S. Ibrahim, M. Majdoub, N. Masmoudi and K. Nakanishi, Scattering for the $2D$ energy critical wave equation, Duke Math., 150 (2009), 287. doi: 10.1215/00127094-2009-053.
V. I. Karpman, Stabilization of soliton instabilities by higher-order dispersion fourth-order nonlinear Schrödinger equations, Phys. Rev. E, 53 (1996), 1336. doi: 10.1103/PhysRevE.53.R1336.
V. I. Karpman and A. G. Shagalov, Stability of soliton described by nonlinear Schrödinger type equations with higher-order dispersion, Phys. D, 144 (2000), 194. doi: 10.1016/S0167-2789(00)00078-6.
C. E. Kenig and F. Merle, Global well-posedness, scattering and blow up for the energy-critical, focusing, nonlinear Schrödinger equation in the radial case, Invent. Math., 166 (2006), 645. doi: 10.1007/s11511-008-0031-6.
J. Kim, A. Arnold and X. Yao, Global estimates of fundamental solutions for higher-order Schrödinger equations, arXiv:0807.0690v2.
J. Kim, A. Arnold and X. Yao, Estimates for a class of oscillatory integrals and decay rates for wave-type equations, arXiv:1109.0452v2.
S. P. Levandosky, Stability and instability of fourth-order solitary waves, J. Dynam. Differential Equations, 10 (1998), 151.
S. P. Levandosky, Decay estimates for fourth-order wave equations, J. Differential Equations, 143 (1998), 360. doi: 10.1006/jdeq.1997.3369.
S. P. Levandosky and W. A. Strauss, Time decay for the nonlinear beam equation, Methods and Applications of Analysis, 7 (2000), 479. Zbl 1212.35476.
H. A. Levine, Instability and nonexistence of global solutions to nonlinear wave equations of the form $Pu_{tt} = -Au + F(u)$, T. A. M. S., 192 (1974), 1.
G. Lebeau, Nonlinear optics and supercritical wave equation, Bull. Soc. R. Sci. Liège, 70 (2001), 267.
G. Lebeau, Perte de régularité pour l'équation des ondes surcritique, Bull. Soc. Math. France, 133 (2005), 145.
J. L. Lions, Une remarque sur les problèmes d'évolution non linéaires dans des domaines non cylindriques, Revue Roumaine Math. Pur. Appl., 9 (1964), 129.
O. Mahouachi and T. Saanouni, Global well posedness and linearization of a semilinear wave equation with exponential growth, Georgian Math. J., 17 (2010), 543. doi: 10.1515/gmj.2010.026.
O. Mahouachi and T. Saanouni, Well and ill posedness issues for a class of $2D$ wave equation with exponential nonlinearity, J. P. D. E., 24 (2011), 361. doi: 10.4208/jpde.v24.n4.7.
M. Majdoub and T. Saanouni, Global well-posedness of some critical fourth-order wave and Schrödinger equation, preprint.
C. Miao, G. Xu and L. Zhao, Global well-posedness and scattering for the focusing energy-critical nonlinear Schrödinger equations of fourth-order in the radial case, arXiv:0807.0690v2.
C. Miao, G. Xu and L. Zhao, Global well-posedness and scattering for the defocusing energy-critical nonlinear Schrödinger equations of fourth-order in dimensions $d\geq9$, arXiv:0807.0692v2.
J. Moser, A sharp form of an inequality of N. Trudinger, Ind. Univ. Math. J., 20 (1971), 1077.
M. Nakamura and T. Ozawa, Nonlinear Schrödinger equations in the Sobolev space of critical order, Journal of Functional Analysis, 155 (1998), 364. doi: 10.1006/jfan.1997.3236.
M. Nakamura and T. Ozawa, Global solutions in the critical Sobolev space for the wave equations with nonlinearity of exponential growth, Math. Z., 231 (1999), 479. doi: 10.1007/PL00004737.
T. Ogawa and T. Ozawa, Trudinger type inequalities and uniqueness of weak solutions for the nonlinear Schrödinger mixed problem, J. Math. Anal. Appl., 155 (1991), 531.
T. Ozawa, On critical cases of Sobolev's inequalities, J. Funct. Anal., 127 (1995), 259. doi: 10.1006/jfan.1995.1012.
B. Pausader, Global well-posedness for energy critical fourth-order Schrödinger equations in the radial case, Dynamics of PDE, 4 (2007), 197.
B. Pausader, The cubic fourth-order Schrödinger equation, Journal of Functional Analysis, 256 (2009), 2473. doi: 10.1016/j.jfa.2008.11.009.
B. Pausader, Scattering and the Levandosky-Strauss conjecture for fourth-order nonlinear wave equations, J. Differential Equations, 241 (2007), 237. doi: 10.1016/j.jde.2007.06.001.
B. Pausader and W. Strauss, Analyticity of the scattering operator for the beam equation, Discrete Contin. Dyn. Syst., 25 (2009), 617. doi: 10.3934/dcds.2009.25.617.
L. Peletier and W. C. Troy, Higher order models in Physics and Mechanics, Prog. in Non. Diff. Eq. and App., 45 (2001).
B. Ruf, A sharp Moser-Trudinger type inequality for unbounded domains in $R^2$, J. Funct. Analysis, 219 (2004), 340. doi: 10.1016/j.jfa.2004.06.013.
B. Ruf and S. Sani, Sharp Adams-type inequalities in $R^n$, Trans. Amer. Math. Soc., 365 (2013), 645. doi: 10.1090/S0002-9947-2012-05561-9.
E. Ryckman and M. Visan, Global well-posedness and scattering for the defocusing energy-critical nonlinear Schrödinger equation in $R^{1+4}$, Amer. J. Math., 129 (2007), 1.
T. Saanouni, Global well-posedness and scattering of a 2D Schrödinger equation with exponential growth, Bull. Belg. Math. Soc., 17 (2010), 441.
T. Saanouni, Decay of solutions to a $2D$ Schrödinger equation, J. Part. Diff. Eq., 24 (2011), 37. doi: 10.4208/jpde.v24.n1.3.
T. Saanouni, Scattering of a $2D$ Schrödinger equation with exponential growth in the conformal space, Math. Meth. Appl. Sci., 33 (2010), 1046. doi: 10.1002/mma.1237.
T. Saanouni, Remarks on the semilinear Schrödinger equation, J. Math. Anal. Appl., 400 (2013), 331. doi: 10.1016/j.jmaa.2012.11.037.
I. E. Segal, The global Cauchy problem for a relativistic scalar field with power interaction, Bull. Soc. Math. France, 91 (1963), 129.
Shangbin Cui and Cuihua Guo, Well-posedness of higher-order nonlinear Schrödinger equations in Sobolev spaces $H^s(\mathbb{R}^n)$ and applications, Nonlinear Analysis, 67 (2007), 687. doi: 10.1016/j.na.2006.06.020.
J. Shatah and M. Struwe, Well-posedness in the energy space for semilinear wave equation with critical growth, IMRN, 7 (1994), 303. doi: 10.1155/S1073792894000346.
W. A. Strauss, On weak solutions of semi-linear hyperbolic equations, Anais Acad. Brasil. Cienc., 42 (1970), 645.
M. Struwe, Semilinear wave equations, Bull. Amer. Math. Soc., 26 (1992), 53.
M. Struwe, The critical nonlinear wave equation in 2 space dimensions, J. European Math. Soc., to appear.
M. Struwe, Global well-posedness of the Cauchy problem for a super-critical nonlinear wave equation in two space dimensions, Math. Ann., 350 (2011), 707. doi: 10.1007/s00208-010-0567-6.
T. Tao, Global well-posedness and scattering for the higher-dimensional energy-critical non-linear Schrödinger equation for radial data, New York J. of Math., 11 (2005), 57.
N. S. Trudinger, On imbedding into Orlicz spaces and some applications, J. Math. Mech., 17 (1967), 473.
M. Visan, The defocusing energy-critical nonlinear Schrödinger equation in higher dimensions, Duke. Math. J., 138 (2007), 281. doi: 10.1215/S0012-7094-07-13825-0.
October 2012, Volume 89, Issue 1–2, pp 87–122 | Cite as
Sequential approaches for learning datum-wise sparse representations
Gabriel Dulac-Arnold, Ludovic Denoyer, Philippe Preux, Patrick Gallinari
In supervised classification, data representation is usually considered at the dataset level: one looks for the "best" representation of the data, assuming it to be the same for all the data in the data space. We propose a different approach, where the representations used for classification are tailored to each datum in the data space. One immediate goal is to obtain sparse datum-wise representations: our approach learns to build a representation specific to each datum that contains only a small subset of the features, thus allowing classification to be fast and efficient. This representation is obtained by way of a sequential decision process that sequentially chooses which features to acquire before classifying a particular point; this process is learned through algorithms based on Reinforcement Learning. The proposed method performs well on an ensemble of medium-sized sparse classification problems. It offers an alternative to global sparsity approaches, and is a natural framework for sequential classification problems. The method extends easily to a whole family of sparsity-related problems which would otherwise require developing specific solutions. This is the case in particular for cost-sensitive and limited-budget classification, where feature acquisition is costly and is often performed sequentially. Finally, our approach can handle non-differentiable loss functions or the combinatorial optimization encountered in more complex feature selection problems.
Keywords: Classification, Feature selection, Sparsity, Sequential models, Reinforcement learning
Editors: Dimitrios Gunopulos, Donato Malerba, and Michalis Vazirgiannis.
1 Introduction
Feature selection is one of the main contemporary issues in Machine Learning and has been approached from many directions.
One modern approach to feature selection in linear models consists in minimizing an L_0-regularized empirical risk. This risk encourages the model to strike a balance between a low classification error and high sparsity (where only a few features are used for classification). As the L_0-regularized problem is combinatorial, many approaches, such as the LASSO (Tibshirani 1994), sidestep the combinatorial difficulty by using more practical norms such as L_1. These classical approaches to sparsity aim at finding a sparse representation of the feature space that is global to the entire dataset. We propose a new approach to sparsity, where the goal is to limit the number of features used per classified datapoint, hence the name datum-wise sparse classification model (DWSM). Our approach allows the choice of features used for classification to vary with each datapoint; data points that are easy to classify can be decided upon with only a few features, while others that require more information can be classified using more features. The underlying motivation is that, while classical approaches balance accuracy and sparsity at the dataset level, our approach optimizes this balance at the individual datum level. This differs from the usual feature selection paradigm, which operates globally on the dataset. We believe that several problems could benefit from our approach. For many applications, a decision could be taken by observing only a few features for most items, whereas other items would require closer examination. In these situations, datum-wise approaches can achieve higher overall sparsity than classical methods. In our opinion, however, this is not the only important aspect of this model, as there are two primary domains where this alternative approach to sparsity can also be beneficial.
First, this is a natural framework for sequential classification tasks, where a decision regarding an item's class is taken as soon as enough information has been gathered on this item (Louradour and Kermorvant 2011; Dulac-Arnold et al. 2011). Second, the proposed framework naturally adapts to a variety of sparsity or feature selection problems that would require new, specific solutions with classical approaches. It also allows for the handling of situations where the loss function is not continuous, situations that are difficult to cope with through classical optimization. It can be easily adapted, for example, to limited-budget or cost-sensitive problems where the acquisition of features is costly, as is often the case in domains such as diagnosis, medicine, biology, or even information extraction (Kanani and McCallum 2012). DWSM also easily handles complex problems where the features admit an inherent structure. In order to solve the combinatorial feature selection problem, we propose to model feature selection and classification as a single sequential Markov Decision Process (MDP). A scoring function associated with the MDP policy returns, at any time, the best possible action. Each action consists either in choosing a new feature for a given datum or in deciding on the class of x once enough information is available to do so. During inference, this scoring function allows us to greedily choose which features to use for classifying each particular datum. Learning the policy is performed using an algorithm inspired by Reinforcement Learning (Sutton and Barto 1998). In this sequential decision process, datum-wise sparsity is obtained by introducing a penalizing reward when the agent chooses to incorporate an additional feature into the decision process. We show that an optimal policy in this MDP corresponds to an optimal classifier w.r.t. the datum-wise loss function, which incorporates a sparsity-inducing term.
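A sketch of the reward structure this paragraph describes (the exact values, including the per-feature penalty λ and the terminal rewards, are illustrative assumptions here, not the paper's definitions):

```python
def reward(action, y_true, lam=0.1):
    """One-step reward in the sketched MDP: acquiring a feature is
    penalized by lam; a terminal classification action is rewarded
    0 if the predicted label is correct and -1 otherwise."""
    kind, arg = action                 # ("feature", i) or ("label", y)
    if kind == "feature":
        return -lam
    return 0.0 if arg == y_true else -1.0

# Episode: acquire two features, then classify correctly.
episode = [("feature", 0), ("feature", 3), ("label", 1)]
total = sum(reward(a, y_true=1) for a in episode)
print(total)  # only the two feature acquisitions are penalized
```

Under such rewards, a policy trading classification errors against feature acquisitions is exactly trading accuracy against datum-wise sparsity.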
The contributions of the paper are as follows:
- We formally introduce the concept of datum-wise sparse classification, and propose a new classifier model whose goal is two-fold: maximize the classification accuracy while using as few features as possible to represent each datum. The model considers classification as a sequential process, where the model's choice of features depends on the current datum's values.
- We formalize this sequential model using a Markov Decision Process. Within this MDP formulation, we show the equivalence between learning to maximize a reward function and minimizing a datum-wise loss function. This new approach results in a model that obtains good classification performance while maximizing datum-wise sparsity, i.e. minimizing the mean number of features used for classifying the whole dataset. Our model also naturally handles multi-class classification problems, solving them by using as few features as possible for all classes combined.
- We propose a series of extensions to our base model that allow us to deal with variants of the feature selection problem, such as hard-budget classification, group features, cost-sensitive classification, and relational features. These extensions aim at showing that the proposed sequential model is more than a classical sparse classifier, and can be easily adapted to many different classification tasks where the feature selection/acquisition process is complex, all while maintaining good classification accuracy.
- We perform a series of experiments on many different corpora: 13 for the base model, plus additional corpora for the extensions. We compare our model with those obtained by optimizing the LASSO problem (Efron et al. 2004), an L_1-regularized SVM, and CART decision trees, and provide a qualitative study of the behavior of our algorithm. Additionally, we perform a series of experiments to demonstrate the potential of the various extensions proposed to our model.
The paper is organized as follows: first, we define the notion of datum-wise sparse classifiers and explain the interest of such models in Sect. 2. We then describe our sequential approach to classification and detail the learning algorithm in Sect. 3. We then explain how the model described in Sect. 3 can be extended to handle more complex problems, such as cost-sensitive classification, in Sect. 4. We discuss the algorithm's complexity and its scalability to large datasets in Sect. 5. We detail experiments and give a qualitative analysis of the behavior of the base model and the extensions in Sect. 6. Finally, a state of the art in sparsity, as well as an overview of related work, is presented in Sect. 7.

2 Datum-wise sparse classifiers

We consider the problem of supervised multi-class classification, where one wants to learn a classification function \(f_{\theta}: \mathcal{X} \rightarrow \mathcal{Y}\). The function f_θ associates one category \(y \in \mathcal{Y}\) to a vector \(\mathbf{x} \in \mathcal{X}\), with \(\mathcal{X} = \mathbb{R}^{n}\), n being the dimension of the input vectors. θ is the set of parameters learned from a training set composed of input/output pairs \(\mathcal{T}_{train} = \{(\mathbf{x}_{\mathbf{i}},y_i)\}_{i \in[1..N]}\). These parameters are commonly found by minimizing the empirical risk defined by:
$$ \theta^*=\mathop{\mathrm{argmin}}_\theta\frac{1}{N}\sum_{i=1}^{N} \varDelta\bigl(f_\theta(\mathbf{x}_{\mathbf{i}}),y_i\bigr), $$
where Δ is the loss associated to a prediction error, defined as:
$$\varDelta(x,y) = \left\{ \begin{array}{l@{\quad}l} 0 & \text{if } x=y,\\ 1 & \text{if } x \neq y. \end{array} \right. $$
This empirical risk minimization problem does not consider any prior assumption or constraint concerning the form of the solution, and can result in overfitting models.
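For concreteness (toy data, not from the paper), the empirical risk of Eq. (1) under this 0-1 loss is simply the misclassification rate:

```python
def empirical_risk(predict, data):
    """Mean 0-1 loss of `predict` over (x, y) pairs."""
    return sum(1 for x, y in data if predict(x) != y) / len(data)

# Toy 1-D dataset and a simple threshold classifier (illustrative only).
data = [([0.2], 0), ([0.4], 0), ([0.6], 1), ([0.9], 1), ([0.3], 1)]
predict = lambda x: int(x[0] > 0.5)
print(empirical_risk(predict, data))  # one point in five misclassified -> 0.2
```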
Moreover, when facing a very large number of features, obtained solutions usually need to perform computations on all the features for classifying each datum, thus negatively impacting the model's classification speed. We propose a different risk minimization problem where we add a penalization term that encourages the obtained classifier to classify using, on average, as few features as possible. In comparison to classical L 0 or L 1 regularized approaches, where the goal is to constrain the number of features used at the dataset level, our approach enforces sparsity at the datum level, allowing the classifier to use different features when classifying different inputs. This results in a datum-wise sparse classifier that, when possible, only uses a few features for classifying easy inputs, and more features for classifying difficult or ambiguous ones. We consider a classifier function that, in addition to predicting a label y given an input x, also provides information about which features have been used for classification. Let us denote \(\mathcal{Z} = \{0;1\}^{n}\). We define a datum-wise classification function f of parameters θ as: $$ f_\theta: \left \{ \begin{array}{ll} \mathcal{X} \rightarrow\mathcal{Y} \times\mathcal{Z}, \\ f_\theta(\mathbf{x}) = (y,\mathbf{z}), \end{array} \right . $$ where y is the predicted output and z is an n-dimensional vector z=(z 1,…,z n ), such that z i =1 implies that feature i has been taken into consideration for computing label y on datum x. By convention, we denote the predicted label as y θ (x) and the corresponding z-vector as z θ (x). Thus, if \(z_{\theta}^{i}(\mathbf{x})=1\), feature i has been used for classifying x into category y θ (x). This definition of datum-wise classifiers has two main advantages: First, as we will see in the next section, because f θ can explain its use of features with z θ (x), we can add constraints on the features used for classification.
This allows us to encourage datum-wise sparsity, which we define below. Second, experimental analysis of z θ (x) gives a qualitative explanation of how the classification decision has been made, which we discuss in Sect. 6. Note that the way we define datum-wise classification is an extension of the usual definition of a classifier. 2.1 Datum-wise sparsity Datum-wise sparsity is obtained by adding a penalization term to the empirical loss defined in Eq. (1) that limits the average number of features used for classifying: $$ \theta^*=\mathop{\mathrm{argmin}}_\theta\frac{1}{N}\sum_{i=1}^{N} \varDelta\bigl(y_\theta( \mathbf{x}_{\mathbf{i}}),y_i\bigr) + \lambda\frac{1}{N} \sum_{i=1}^{N} \bigl\Vert z_\theta( \mathbf{x}_{\mathbf{i}}) \bigr\Vert_0. $$ The term ∥z θ (x i )∥0 is the L 0 norm2 of z θ (x i ), i.e. the number of features selected for classifying x i . In the general case, the minimization of this new risk results in a classifier that on average selects only a few features for classifying, but may use a different set of features depending on the input being classified. We consider this to be the crux of the DWSM model: the classifier takes each datum into consideration differently during the inference process. Note that the optimization of the loss defined in Eq. (2) is a combinatorial problem that quickly becomes intractable. In the next section, we propose an original way to deal with this problem, based on a Markov Decision Process. 3 Datum-wise sparse sequential classification We begin by proposing a model that allows us to solve the problem posed in Eq. (2). We then explain how we can use this model to classify in a sequential manner that allows for datum-wise sparsity. 3.1 Markov decision process We consider a Markov Decision Process (MDP, Puterman 1994)3 to classify an input x∈ℝ n . At first, we have no information about x, that is, we have no attribute/feature values.
Then, step-by-step, we can choose to either acquire a particular feature of x or to classify x. The act of classifying x in the category y ends an "episode" of the sequential process. This process is illustrated with an example MDP in Fig. 1. The classification process is a deterministic process defined by: A set of states \(\mathcal{X} \times\mathcal{Z}\), where the tuple (x,z) corresponds to the state where the agent is considering datum x and has selected features specified by z. The number of currently selected features is thus ∥z∥0. A set of actions \(\mathcal{A}\) where \(\mathcal{A}(\mathbf{x},\mathbf{z})\) denotes the set of possible actions in state (x,z). We consider two types of actions: \(\mathcal{A}_{f}\) is the set of feature selection actions such that, for \(a \in\mathcal{A}_{f}\), choosing action a=a j corresponds to choosing feature f j . Note that the set of possible feature selection actions on state (x,z), denoted \(\mathcal{A}_{f}(\mathbf{x},\mathbf{z})\), is equal to the subset of currently unselected features, i.e. \(\mathcal{A}_{f}(\mathbf{x},\mathbf{z}) = \{a_{j} \in \mathcal{A}_{f}, \text{ s.t. } \mathbf{z}_{j}=0\}\). \(\mathcal{A}_{y}\) is the set of classification actions—one action for each possible class—that correspond to assigning a label to the current datum. Classification actions stop the sequential decision process. A transition function defined only for feature selection actions (since classification actions are terminal): $$\mathcal{T}: \begin{cases} \mathcal{X}\times\mathcal{Z}\times\mathcal{A}_f\rightarrow \mathcal{X}\times\mathcal{Z}, \\ \mathcal{T}((\mathbf{x},\mathbf{z}), a) = (\mathbf{x},\mathbf{z}') \end{cases} $$ where z′ is an updated version of z such that for a=a j , z′=z+e j with e j the vector whose value is 1 for component j and 0 for all other components. Fig. 1: The sequential process for a problem with 4 features (f 1,…,f 4) and 3 possible categories (y 1,…,y 3).
Top: The leftmost circle is the initial state for one particular input x. Classification actions are terminal, and therefore end the decision process. In this example, the classification has been made by sequentially choosing to acquire feature 2, feature 4, feature 3 and then classifying x in category y 2. The bold (orange) arrows correspond to the trajectory made by the current policy. Bottom: The values of z(x) for the different states are illustrated. The value on the arrows corresponds to the immediate reward received by the agent assuming that x belongs to category y 2. At the end of the process, the agent has received a total reward of −3λ. 3.1.1 Policy We define a parameterized policy π θ , which, for each state (x,z), returns the best action as defined by a scoring function s θ (x,z,a): $$\pi_\theta: \mathcal{X} \times\mathcal{Z} \rightarrow \mathcal{A}\quad \text{and}\quad\pi_\theta(\mathbf{x},\mathbf{z})=\mathop{\mathrm{argmax}}_{a} s_\theta(\mathbf{x},\mathbf{z},a). $$ The policy π θ decides which action to take by applying the scoring function to every action possible from state (x,z) and greedily taking the highest scoring action. The scoring function reflects the overall quality of taking action a in state (x,z), which corresponds to the total reward obtained by taking action a in (x,z) and thereafter following policy π θ :4 $$ s_\theta(\mathbf{x},\mathbf{z},a) = r(\mathbf{x}, \mathbf{z},a) + \sum_{t=1}^{T} r_\theta^t \big\vert\bigl((\mathbf{x},\mathbf{z}),a\bigr). $$ Here, \(\sum_{t=1}^{T} r_{\theta}^{t} \vert ((\mathbf{x},\mathbf{z}),a)\) corresponds to the cumulative reward when starting in state (x,z), choosing action a, and following policy π θ until classification, where \(r_{\theta}^{t} \mid((\mathbf{x},\mathbf{z}),a)\) is the reward obtained at step t. Taking the sum of these rewards gives us the total reward from state (x,z) until the end of the episode.
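The state, action set, and transition function of Sect. 3.1, together with the greedy policy of Sect. 3.1.1, can be sketched as follows (a minimal sketch; the function names and the stand-in scoring function are ours, not the paper's):

```python
import numpy as np

def feature_actions(z):
    """A_f(x, z): the feature-selection actions still available,
    i.e. the indices j with z_j = 0."""
    return [("f", j) for j in range(len(z)) if z[j] == 0]

def transition(x, z, j):
    """T((x, z), a_j) = (x, z + e_j): acquiring feature j flips z_j
    to 1; the datum x itself never changes."""
    z_next = z.copy()
    z_next[j] = 1
    return x, z_next

def greedy_policy(x, z, actions, score):
    """pi_theta(x, z) = argmax_a s_theta(x, z, a); `score` is any
    callable standing in for the learned scoring function."""
    return max(actions, key=lambda a: score(x, z, a))
```

Classification actions would simply be a second action type (one per class) that terminates the episode instead of calling `transition`.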
Since the policy is deterministic, we may refer to a parameterized policy using only θ. Note that the optimal parameterization θ ∗ obtained after learning (see Sect. 3.4) is the parameterization that maximizes the expected reward over all possible state-action pairs. In practice, the initial state of such a process for an input x corresponds to an empty z vector where no feature has been selected. The policy π θ sequentially picks, one by one, a set of features pertinent to the classification task, and then chooses to classify once enough features have been considered. 3.1.2 Reward The reward function reflects the immediate quality of taking action a in state (x,z) relative to the problem at hand. We define a reward function \(\mathcal{R}:\mathcal{X}\times\mathcal{Z}\times\mathcal {A}\rightarrow \mathbb{R}\) for \((\mathbf{x}_{\mathbf{i}},y_{i}) \in\mathcal{T}_{train}\): If a corresponds to a feature selection action, i.e. \(a \in \mathcal{A}_{f}\): $$r(\mathbf{x}_{\mathbf{i}},\mathbf{z},a) = -\lambda. $$ If a corresponds to a classification action, i.e. \(a \in \mathcal{A}_{y}\): $$r(\mathbf{x}_{\mathbf{i}},\mathbf{z},a) = \left\{ \begin{array}{l@{\quad}l} 0 & \text{if }a = y_i,\\ -1 & \text{if }a \neq y_i. \end{array} \right. $$ With the reward function defined in this way, correctly classifying an input x i after acquiring f features results in a total reward of −f⋅λ, while an incorrect classification results in a total reward of −f⋅λ−1. A good rule of thumb is to keep λ<1/n in order to avoid situations where classifying incorrectly is a better decision than choosing sufficient features. Of course, depending on the particular application and desired sparsity, one can choose larger values for λ. 3.2 Reward maximization and loss minimization As explained in Sect. 2, our ultimate goal is to find the parameterization θ ∗ that minimizes the datum-wise empirical loss defined in Eq. (2).
Let us therefore show that maximizing the expected reward is equivalent to minimizing the datum-wise empirical loss: $$ \theta^*=\mathop{\mathrm{argmax}}_\theta\frac{1}{N}\sum_{i=1}^{N} \sum_{t=0}^{T_\theta(\mathbf{x}_{\mathbf{i}})} r\bigl(\mathbf{x}_{\mathbf{i}},\mathbf{z}_{\theta}^{(t)},\pi_\theta\bigl(\mathbf{x}_{\mathbf{i}}, \mathbf{z}_{\theta}^{(t)}\bigr)\bigr) = \mathop{\mathrm{argmin}}_\theta\frac{1}{N}\sum_{i=1}^{N} \varDelta\bigl(y_\theta(\mathbf{x}_{\mathbf{i}}),y_i\bigr) + \lambda\frac{1}{N} \sum_{i=1}^{N} \bigl\Vert z_\theta(\mathbf{x}_{\mathbf{i}}) \bigr\Vert_0. $$ Here, \(\pi_{\theta}(\mathbf{x}_{\mathbf{i}}, \mathbf{z}_{\theta}^{(t)})\) is the action taken at time t by the policy π θ for the training example x i and T θ (x i ) is the number of features acquired by π θ before classifying x i . Such an equivalence between risk minimization and reward maximization shows that the optimal classifier θ ∗ corresponds to the optimal policy in the MDP defined previously. This equivalence allows us to use classical MDP resolution algorithms (Sutton and Barto 1998) in order to find the best classifier. We detail the learning procedure in Sect. 3.4. 3.3 Inference and approximated decision processes Due to the infinite number of possible inputs x, the number of states is also infinite. Moreover, the reward function r(x,z,a) is only known for the values of x that are in the training set and cannot be computed for any other datum. For these two reasons, it is not possible to compute the score function for all state-action pairs in a tabular manner, and this function has to be approximated. The scoring function that underlies the policy s θ (x,z,a) is approximated with a linear model: $$s(\mathbf{x},\mathbf{z},a) = \bigl\langle\varPhi(\mathbf{x},\mathbf{z},a) ; \theta\bigr\rangle. $$ The policy defined by such a function consists in taking, in state (x,z), the action a′ that maximizes the scoring function, i.e. \(a' = \operatorname{argmax}_{a\in\mathcal{A}} \langle \varPhi(\mathbf{x},\mathbf{z},a) ; \theta\rangle\). The scoring function s(x,z,a) is thus the function that decides which action to take in any state of the decision process: it decides which features to acquire and when to classify a particular input. In our experiments we have chosen to restrict ourselves to a linear scoring function.
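The reward/loss equivalence of Sect. 3.2 can be checked numerically on a single episode: with the reward function of Sect. 3.1.2, the total episode reward is exactly the negative of the per-datum loss Δ(y θ (x),y) + λ∥z θ (x)∥0 (a minimal sketch with illustrative function names):

```python
def episode_reward(n_selected, correct, lam):
    """Total reward of one episode: -lambda for each acquired feature,
    plus 0 for a correct final classification or -1 for an error."""
    return -lam * n_selected + (0.0 if correct else -1.0)

def per_datum_loss(n_selected, correct, lam):
    """Per-example term of the datum-wise loss in Eq. (2):
    Delta(y_pred, y) + lambda * ||z||_0."""
    return (0.0 if correct else 1.0) + lam * n_selected
```

For every episode, `episode_reward` equals `-per_datum_loss`, so maximizing the mean reward over the training set minimizes the datum-wise empirical loss.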
The choice of a linear function is natural to be able to compare ourselves with the classification performance of L 1-regularized linear models. Our work could easily be extended to non-linear cases by changing the nature of the regression machine. Note that, as explained in Sect. 5, using linear machines allows one to have a very low inference complexity. The state-action pairs are represented in a feature space. We denote by Φ(x,z,a) the featurized representation of the ((x,z),a) state-action pair. Many definitions may be used for this feature representation, but we propose a simple projection. To begin with, let us restrict the representation of x to only the selected features. Let μ(x,z) be the restriction of x according to z: $$\mu(\mathbf{x},\mathbf{z})^i = \left\{ \begin{array}{l@{\quad}l} x^i & \text{if }z^i=1,\\ 0 & \text{elsewhere}. \end{array} \right. $$ To be able to differentiate between an attribute of x that is not yet known, and an attribute of x that is simply equal to 0, we must keep this information present in z. Let ϕ(x,z)=(z,μ(x,z)) be the intermediate representation that corresponds to the concatenation of z with μ(x,z). Now we simply need to encode the action a such that each action can be easily distinguished by a linear classifier. To do this, we use block-vectors (Har-Peled et al. 2002). This consists in projecting ϕ(x,z) into a higher dimensional space, such that the position of ϕ(x,z) inside the global vector Φ(x,z,a) is dependent on action a: $$\varPhi(\mathbf{x},\mathbf{z},a) = \bigl( \mathbf{0}, \ldots, \mathbf{0}, \phi( \mathbf{x},\mathbf{z}), \mathbf{0}, \ldots, \mathbf{0} \bigr). $$ In Φ(x,z,a), the block ϕ(x,z) is at position i a ⋅|ϕ(x,z)|, where i a is the index of action a in the set of all possible actions. Thus, ϕ(x,z) is offset by an amount dependent on the action a. This equates to having a different linear classifier for each possible action in the MDP.
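The block-vector projection described above can be sketched as follows (a minimal sketch; the function names are ours, not the paper's):

```python
import numpy as np

def phi(x, z):
    """phi(x, z) = (z, mu(x, z)): the indicator vector z concatenated
    with x restricted to its selected components."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    return np.concatenate([z, x * z])  # mu(x, z)^i = x^i if z^i = 1, else 0

def Phi(x, z, action_index, n_actions):
    """Block-vector representation (Har-Peled et al. 2002): phi(x, z)
    is placed in the block of the chosen action; all other blocks
    stay zero."""
    block = phi(x, z)
    out = np.zeros(n_actions * block.size)
    out[action_index * block.size:(action_index + 1) * block.size] = block
    return out
```

With a single weight vector θ, the score ⟨Φ(x,z,a);θ⟩ then reads out only the block of weights corresponding to action a, which is what "a different linear classifier for each possible action" means in practice.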
3.4 Learning The goal of the learning phase is to find an optimal policy parameterization θ ∗ which maximizes the expected reward, thus minimizing the datum-wise regularized loss defined in Eq. (2). The combinatorial space consisting of all possible feature subsets for each individual datum in the training set is extremely large. Therefore, we cannot exhaustively explore the state space during training, and thus use a Monte-Carlo approach to sample example states from the learning space. To find this optimal parameterization, we propose to use an Approximate Policy Iteration learning approach based on Rollouts Classification Policy Iteration (RCPI) (Lagoudakis and Parr 2003). Sampling state-action pairs according to a previous policy \(\pi_{\theta^{(t-1)}}\), RCPI consists in iteratively learning a better policy \(\pi_{\theta^{(t)}}\) by improving the estimations of s θ as defined in Eq. (3). The RCPI algorithm is composed of three main steps that are iteratively repeated: The algorithm begins by sampling a set of random states: the x vector is sampled from a uniform distribution over the training set, and z is sampled using a uniform binomial distribution. For each sampled state, the policy \(\pi_{\theta^{(t-1)}}\) is used to compute the expected reward of choosing each possible action from that state. We now have a feature vector Φ(x,z,a) for each state-action pair in the sampled set, and the corresponding expected reward denoted \(R_{\theta^{(t-1)}}(\mathbf{x},\mathbf{z},a)\). The parameters θ (t) of the new policy are then computed using classical linear regression on the set of feature vectors—Φ(x,z,a)—and corresponding expected rewards—\(R_{\theta^{(t-1)}}(\mathbf{x},\mathbf{z},a)\)—as regression targets. This regressor gives an estimated score for state-action pairs even if we have never seen them previously.
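One RCPI iteration, as described by the three steps above, can be sketched as follows (a minimal sketch: `rollout_return` stands in for the Monte-Carlo estimate of the expected reward under the previous policy, and `featurize` for Φ; both names are ours, not the paper's):

```python
import numpy as np

def rcpi_iteration(sampled_states, actions, rollout_return, featurize):
    """One iteration of Rollouts Classification Policy Iteration:
    estimate the return of every sampled (state, action) pair under the
    previous policy, then fit the new scoring parameters theta by
    least-squares regression on featurize(state, action) -> return."""
    features, targets = [], []
    for state in sampled_states:
        for a in actions:
            features.append(featurize(state, a))
            targets.append(rollout_return(state, a))
    theta, *_ = np.linalg.lstsq(np.asarray(features),
                                np.asarray(targets), rcond=None)
    return theta  # new greedy policy: argmax_a <featurize(s, a); theta>
```

Repeating this step with the new θ defining the rollout policy yields the iterative improvement loop; the sketch only shows the regression at the heart of one iteration.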
After a certain number of iterations, the parameterized policy converges to a final policy \(\pi_{\hat{\theta}}\) which is used for inference. Convergence is not guaranteed by Approximate Policy Iteration algorithms, but in practice occurs after only a few iterations. The learning algorithm terminates once the performance of a new policy no longer significantly improves over the previous iteration's policy. RCPI is based on two different hyper-parameters that have to be tuned manually: the number of states used for the Monte-Carlo simulation and the number of rollout trajectories sampled for each state-action pair. These parameters have a direct influence on the performance of the algorithm and the time spent learning. As explained in Lazaric et al. (2010), a good choice consists in a high number of sampled states with only a few rollout trajectories per state. This is the choice we have made for the experiments. 4 Model extensions So far, we have introduced the concept of datum-wise sparsity, and shown how it can be modeled as a sequential decision process. Let us now show how DWSM can be extended to tackle other types of feature selection problems. This section aims to show that the proposed DWSM model is very general and can easily be adapted to many new feature selection problems, while keeping its datum-wise properties. We show how we can address the following classification tasks: hard budget feature selection, cost-sensitive feature acquisition, group feature sparsity, and relational feature sparsity. All of these problems have been derived from real-world applications and have been explored separately in different publications, where each problem is solved by a particular approach. We show that our model allows us to address all these tasks by making only a few changes to the original formulation. We begin by providing an informal description of each of these tasks and describing the corresponding losses that are to be minimized on a training set.
Note that these loss minimization problems are datum-wise variants inspired by losses found in the literature, and are therefore slightly different. We then describe how these losses can be minimized by making simple modifications to the structure of the decision process described in Sect. 3. Experimental results are presented in Sect. 6.3. Here are descriptions of the different problems we address, and the corresponding datum-wise loss minimization problems: Hard budget feature selection (Kapoor and Greiner 2005) considers that there is a fixed budget on feature acquisition, be it during training, during inference, per-datum, or globally. We choose to enforce this constraint as a per-datum hard budget during inference. The goal is to maximize classification accuracy while respecting this strict limit on the feature budget. The corresponding loss minimization problem can be written as: $$ \theta^*=\mathop{\mathrm{argmin}}_\theta\frac{1}{N}\sum_{i=1}^{N} \varDelta\bigl(y_\theta(\mathbf{x}_{\mathbf{i}}),y_i\bigr) + \lambda\frac{1}{N} \sum_{i=1}^{N} \bigl\Vert z_\theta(\mathbf{x}_{\mathbf{i}}) \bigr\Vert_0 \quad\text{subject to } \bigl\Vert z_\theta(\mathbf{x}_{\mathbf{i}}) \bigr\Vert_0 \leq M. $$ Cost-sensitive feature acquisition and classification (Turney 1995; Greiner 2002; Ji and Carin 2007) is an important domain in both feature selection and active learning. The problem is defined by assigning a fixed cost to each feature the classifier can consider. Moreover, the cost of misclassification errors depends on the error made. For example, false positive errors will have a different cost than false negatives. The goal is thus to minimize the overall cost, which is composed of both the misclassification cost and the sum of the costs of all the features acquired for classifying. This task is well-suited for some medical applications where there is a cost associated with each medical procedure (blood test, x-ray, etc.) and a cost depending on the quality of the final diagnosis.
Let ξ denote the vector that contains the cost of each of the possible features; the datum-wise minimization problem can then be written as: $$ \theta^*=\mathop{\mathrm{argmin}}_\theta \frac{1}{N} \sum_{i=1}^{N} \varDelta_{cost}\bigl(y_\theta(\mathbf{x}_{\mathbf{i}}),y_i \bigr) + \frac{1}{N} \sum_{i=1}^{N} \bigl\langle\xi; z_\theta(\mathbf{x}_{\mathbf{i}}) \bigr\rangle, $$ where Δ cost is a cost-sensitive error loss. Let C be a classification cost matrix such that C i,j is the cost of classifying a datum as i when its real class is j. This cost is generally positive5 for i≠j, and negative or zero for i=j. We can thus define Δ cost as: $$ \varDelta_{cost}(i,j) = C_{i,j}. $$ The matrix C is defined a priori by the problem one wants to solve. Group feature selection has been previously considered in the context of the Group Lasso (Yuan and Lin 2006). In this problem, feature selection is considered in the context of groups of features; the classifier can choose to use a certain number of groups of features, but cannot select individual features. Many feature selection tasks present a certain organization in the feature space. For example, a subset of features f s may all be somehow correlated, and need to be selected together. For instance, f s may represent a discretized real variable, or an ensemble of values that correspond to a single physical test. These groups can either be defined relative to a certain structure already present in the data (Jenatton et al. 2011), or can be used to reduce the dimensionality of the problem. Let us consider the set of n features denoted \(\mathcal{F}\) and a set of g groups of features denoted \(\mathcal{F}_{1} \ldots\mathcal{F}_{g}\) such that \(\bigcup^{g}_{i=1} \mathcal{F}_{i} = \mathcal{F}\). Let us define the set of selected features for a particular datum x i as \(\mathcal{Z}_{\theta}(\mathbf{x}_{\mathbf{i}}) = \{j \in \mathcal{F} \text{ s.t. } z^{j}_{\theta}(\mathbf{x}_{\mathbf{i}})=1\}\).
The corresponding datum-wise loss, inspired by the Group Lasso, can now be written as: $$ \theta^*=\mathop{\mathrm{argmin}}_\theta\frac{1}{N}\sum_{i=1}^{N} \varDelta\bigl(y_\theta(\mathbf{x}_{\mathbf{i}}),y_i\bigr) + \lambda\frac{1}{N} \sum_{i=1}^{N} \sum_{t=1}^{g} \mathbb{1}\bigl[\mathcal{F}_t \cap\mathcal{Z}_\theta(\mathbf{x}_{\mathbf{i}}) \neq\emptyset\bigr]. $$ This loss tries to minimize the number of \(\mathcal{F}_{t}\) groups present in the actual set of selected features. We use \(\mathbb{1}[\cdot]\) as a truth function, equal to 1 if its argument holds and 0 otherwise: this allows us to quantify the number of groups that have been chosen in \(\mathcal{Z}_{\theta}(\mathbf{x}_{\mathbf{i}})\), so that we may minimize their number. Relational feature selection: Finally, we consider a more complex problem where features are organized in a complex structure. This problem, which we call relational feature selection, is inspired by structured sparsity (Huang et al. 2009). We imagine two kinds of problems that can fall into this category: Conditional features, where one or a subset of features can only be selected depending on the previously acquired features. Constrained features, where the cost of acquiring a particular feature depends on the previously acquired features. For example, in computer vision, one can constrain a system to acquire values of pixels that are close in an image (see Sect. 6.3.3). Let us define a boolean function that tells us if two features are related: $$ \mathit{Related}{:}\quad \begin{cases} \mathcal{A}_f \times\mathcal{A}_f \rightarrow\{1,0\},\\ \mathit{Related}(f, f') = 1\quad \mbox{if related, else } 0. \end{cases} $$ The relation can be any constraint that can be computed by considering only f and f′. The underlying idea is that acquiring features that are somehow related is less expensive than acquiring features that do not share a relation. The corresponding loss can be written as: $$ \theta^*=\mathop{\mathrm{argmin}}_\theta\frac{1}{N}\sum_{i=1}^{N} \varDelta\bigl(y_\theta(\mathbf{x}_{\mathbf{i}}),y_i\bigr) + \frac{1}{N} \sum_{i=1}^{N} \sum_{f,f' \in\mathcal{Z}_\theta(\mathbf{x}_{\mathbf{i}})} \mathit{Related}\bigl(f,f'\bigr) (\lambda-\gamma) + \gamma. $$ Here, the term Related(f,f′)(λ−γ)+γ equals λ if f and f′ are related, and γ otherwise. In that definition, the cost of acquiring non-related features is γ while the cost of related features is λ. Therefore, to encourage the use of related features, one simply needs to set γ>λ. The different problems are summarized in Table 1.
Table 1: Proposed tasks and corresponding loss minimization problems.
Hard budget: \(\begin{array}{l} \theta^{*}=\mathop{\mathrm{argmin}}_{\theta}\frac{1}{N}\sum_{i=1}^{N} \varDelta(y_{\theta}(\mathbf{x}_{\mathbf{i}}),y_{i}) + \lambda\frac{1}{N} \sum_{i=1}^{N} \Vert z_{\theta}(\mathbf{x}_{\mathbf{i}}) \Vert_{0} \\ \quad\text{subject to } \Vert z_{\theta}(\mathbf{x}_{\mathbf{i}}) \Vert_{0} \leq M \end{array}\)
Cost-sensitive: \(\theta^{*}=\mathop{\mathrm{argmin}}_{\theta}\frac{1}{N}\sum_{i=1}^{N} \varDelta_{cost}(y_{\theta}(\mathbf{x}_{\mathbf{i}}),y_{i}) + \frac{1}{N} \sum_{i=1}^{N} \langle\xi; z_{\theta}(\mathbf{x}_{\mathbf{i}}) \rangle\)
Grouped features: \(\theta^{*}=\mathop{\mathrm{argmin}}_{\theta}\frac{1}{N}\sum_{i=1}^{N} \varDelta(y_{\theta}(\mathbf{x}_{\mathbf{i}}),y_{i}) + \lambda\frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{g} \mathbb{1}[\mathcal{F}_{t} \cap\mathcal{Z}_{\theta}(\mathbf{x}_{\mathbf{i}}) \neq\emptyset]\)
Relational features: \(\begin{array}{ll} \theta^{*}=\mathop{\mathrm{argmin}}_{\theta}\frac{1}{N}\sum_{i=1}^{N} \varDelta(y_{\theta}(\mathbf{x}_{\mathbf{i}}),y_{i})\\[3pt] \qquad{}+ \frac{1}{N} \sum_{i=1}^{N} \sum_{f,f' \in\mathcal{Z}_{\theta}(x_{i})} \mathit{Related}(f,f')(\lambda- \gamma) + \gamma \end{array}\)
4.2 Adapting the datum-wise classifier In the rest of this section, we will show how these different tasks can be easily solved by making slight modifications to the model proposed in Sect. 3. We will first cover the general idea underlying these modifications, and then detail for each of the previously described tasks how they can be handled by the sequential process. This section aims at showing that our approach is not only a novel way to compute sparse models, but also an original and flexible approach that allows one to easily imagine many different models for solving complex classification problems. 4.2.1 General idea Our model is based on the idea6 of proposing a sequential decision process where the long term reward obtained by a policy is equal to the negative loss obtained by the corresponding classifier. With such an equivalence, the optimal policy obtained through learning is thus equivalent to an optimal classifier for the particular loss considered.
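As an illustration, the cost-sensitive risk of Eq. (6), also listed in Table 1, can be sketched as follows (a minimal sketch; the function and array names are ours, not the paper's):

```python
import numpy as np

def cost_sensitive_risk(y_pred, y_true, Z, C, xi):
    """Datum-wise cost-sensitive risk (Eq. 6): mean misclassification
    cost C[pred, true] plus the mean total acquisition cost <xi, z>."""
    C = np.asarray(C, dtype=float)
    xi = np.asarray(xi, dtype=float)
    Z = np.asarray(Z, dtype=float)
    misclassification = np.mean([C[p, t] for p, t in zip(y_pred, y_true)])
    acquisition = np.mean(Z @ xi)  # <xi, z_i> averaged over the dataset
    return misclassification + acquisition
```

For example, with C = [[0, 2], [1, 0]], misclassifying a class-1 example as class 0 costs twice as much as the opposite error, and each row of Z records which features were paid for on that datum.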
In order to deal with the previously proposed classification problems, we simply modify DWSM's MDP so that it corresponds to the new loss function. The main advantage of making only small changes to the structure of the decision process is that we do not need to change the principles of the learning and inference algorithms, thus resulting in new classifiers that are very simple to specify and implement. We believe that this approach is well suited for solving real-world classification tasks that often correspond to complex loss functions. In order to deal with the four different proposed tasks, we have to modify the MDP by changing: the reward function r(⋅,⋅,⋅), the action set \(\mathcal{A}(\cdot, \cdot)\), and/or the transition function \(\mathcal{T}(\cdot,\cdot)\). To adapt our model to these new problems, we do not need to modify the feature projector Φ(⋅,⋅), the actual definition of the state space, the learning algorithm, or the inference algorithm. In the next part, we detail the modifications we have made to the original decision process for each of the four new tasks. We do not detail the derivation of the equivalence between the long term reward and the optimized loss function; these derivations are easy to obtain following the steps presented in Sect. 3.2. A summary of the modifications is given in Table 2.
Table 2: Summary of the modifications made for incorporating the different variants in the original model (decision process modification and its benefit).
Hard budget: \({\mathcal{A}(\mathbf{x},\mathbf{z}) = \left\{ \begin{array}{l} \mathcal{A}_{f}(\mathbf{x},\mathbf{z}) \cup\mathcal{A}_{y}(\mathbf{x},\mathbf{z}) \text{ if } \Vert \mathbf{z}\Vert_{0} < M \\ \mathcal{A}_{y}(\mathbf{x},\mathbf{z}) \text{ if } \Vert\mathbf {z}\Vert_{0} = M \end{array} \right.} \) Allows users to choose a minimum level of sparsity. Reduces training complexity.
Cost-sensitive: \(r(\mathbf{x}_{\mathbf{i}},\mathbf{z},a) = \begin{cases} -\xi_{i} \text{ if } a \in\mathcal{A}_{f} \\ -C_{a,y_{i}} \text{ if } a \in\mathcal{A}_{y} \end{cases} \) Well-suited for features with variable costs.
Grouped features: \(\begin{array}{l} \mathcal{A}_{f} = \mathcal{A}_{group}\\ \mathcal{T}\bigl((\mathbf{x},\mathbf{z}), a_{j}\bigr) = \biggl( \mathbf{x},\mathbf{z} + \sum_{i \in\mathcal{F}_{j}} \mathbf{e}_{\mathbf{i}}\biggr) \end{array} \) Well adapted to features presenting a grouped nature. Complexity is reduced.
Relational features: \(r(\mathbf{x},\mathbf{z},a_{j}) = \begin{cases} -\lambda\text{ if } \forall f \in\mathcal{Z}(\mathbf{x}), \mathit{Related}(f_{j},f) = 1 \\ -\gamma\text{ otherwise} \end{cases} \) Naturally suited for complex feature inter-dependencies.
4.2.2 Hard budget In order to integrate the hard budget constraint into our model, we modify the set of possible actions \(\mathcal{A}(\mathbf{x},\mathbf {z})\) such that: $$\mathcal{A}(\mathbf{x},\mathbf{z}) = \left\{ \begin{array}{l@{\quad}l} \mathcal{A}_f(\mathbf{x},\mathbf{z}) \cup\mathcal {A}_y(\mathbf{x},\mathbf{z}) & \text{if } \Vert\mathbf{z}\Vert_0 < M, \\ \mathcal{A}_y(\mathbf{x},\mathbf{z}) & \text{if } \Vert\mathbf {z}\Vert_0 = M. \end{array} \right. $$ This new action set allows the model to either choose a new feature or classify while the number of selected features ∥z∥0 is less than M. Once M features have been selected, only classification actions may be performed by the classifier. One advantage of this constraint is to reduce the complexity of the training algorithm, since the maximum length of a trajectory in the decision process is now M. This has the effect of limiting the length of each rollout, thus making the training simulation much faster to compute. 4.2.3 Cost-sensitive We express variable feature costs in our model by modifying the reward function. While DWSM uses a −λ reward for all feature actions, we use a variable reward depending on the feature being selected.
Additionally, we modify the reward obtained when classifying to make it dependent on the cost matrix C. The reward function can now be written as: $$r(\mathbf{x}_{\mathbf{i}},\mathbf{z},a) = \left\{ \begin{array}{l@{\quad}l} -\xi_i & \text{if } a \in\mathcal{A}_f, \\ -C_{a,y_i} & \text{if } a \in\mathcal{A}_y, \end{array} \right. $$ where ξ i is the cost of acquiring feature i, as used in Eq. (6), and C is the cost-sensitive error loss defined in Eq. (7). 4.2.4 Group features In the context of our model, implementing group features corresponds only to a slight modification of the action space. We simply define a mapping that allows a specific feature selection action to correspond to the addition of a whole subset of actual datum attributes. To do this, let us define a new set of feature actions \(\mathcal{A}_{group}\) which will replace \(\mathcal{A}_{f}\): $$\mathcal{A}_{group} = \bigl\{a'_1, \ldots,a'_{n'}\bigr\}, $$ where each \(a'_{j}\) corresponds to acquiring the jth group of features—instead of the jth feature. To allow this simultaneous acquisition of features, we modify the transition function \(\mathcal{T}\), and define a new transition function for grouped features: $$\mathcal{T}\bigl((\mathbf{x},\mathbf{z}), a_j\bigr) = \biggl( \mathbf{x},\mathbf{z} + \sum_{i \in\mathcal{F}_j} \mathbf{e}_{\mathbf{i}}\biggr), $$ where \(\mathcal{F}_{j}\) is the jth group of features as defined previously in Sect. 4.1. The new set of possible actions is then the union of \(\mathcal{A}_{group}\) and \(\mathcal{A}_{y}\), allowing the classification process to choose, at each step, to either classify or acquire simultaneously a whole group of features. Because \(\mathcal{A}_{group}\) is smaller than \(\mathcal{A}_{f}\), the new MDP has a learning and inference complexity which is greatly reduced relative to the original problem.
This allows us to deal both with datasets where features are "naturally" grouped, and with datasets with a large number of features, by artificially partitioning the features into groups. We consider both of these approaches experimentally in Sect. 6.3.3. 4.2.5 Relational features Contrary to group features, where the classifier can choose amongst a set of predefined groups of features, constrained features allow the classifier to discover coherent groups of features in the feature space. This type of constraint is most interesting for datasets with a spatial or relational structure. For every action \(a_{j} \in\mathcal{A}_{f}\),7 let us define the reward as: $$r(\mathbf{x},\mathbf{z},a_j) = \left\{ \begin{array}{l@{\quad}l} -\lambda &\text{if } \forall f \in\mathcal{Z}(\mathbf{x}),\ \mathit{Related}(f_j,f) = 1, \\ -\gamma &\text{otherwise}. \end{array} \right. $$ Thus, the classifier is encouraged to choose features that are related to the features previously chosen. Nevertheless, for a certain penalty γ>λ, the classifier can choose to start a new, unrelated feature group. One can use a very high value for γ in order to force the classifier to choose only related features. In that case, the relational constraint can be implemented directly by reducing the set of possible feature acquisition actions, reducing the complexity of the system. This variant is not described in this paper. 4.3 Overview of model extensions As we have seen in this section, our model is easily adaptable to a large set of sparsity-inspired problems. What we believe to be of particular interest is not only that our model can function under complex constraints, but that adapting it to these constraints is relatively straightforward. Indeed, one can imagine many more sparsity-inspired problems requiring constraints that are not easily dealt with by traditional classification approaches, yet that could be easily expressed within our model.
For this reason, we strongly believe that our model's adaptability to more complex problems is one of its strong points. 5 Complexity analysis and scaling Let us now analyze the complexity of the proposed model. We detail the complexity of the initial datum-wise sparse model proposed in Sect. 2, and then the complexity of each proposed extension. We discuss the ability of our approach to deal with datasets with a large number of features and propose different possible extensions for reducing the complexity of the approach. 5.1 Complexity of the datum-wise sparse classifier Inference complexity Inference on an input x consists in sequentially choosing features and then classifying x. Having acquired t features on a dataset with n features and c categories in total, one has to perform (n−t)+c linear computations through the s_θ function in order to choose the best action at each state. The inference complexity is thus O(N_f · n), where N_f is the mean number of features chosen by the system before classifying. In fact, due to the shape of the Φ function presented in Sect. 3.3 and the linear nature of s_θ, the scores of the actions can be efficiently updated incrementally at each step of the process by only adding the contribution of the newly acquired feature to each action's score. This complexity makes the model able to quickly classify very large datasets with many examples. The inference complexity is the same for all the proposed variants. Learning complexity As detailed in Sect. 3.4, the learning method is based on Rollout Classification Policy Iteration. The computational cost of one iteration of RCPI is dominated by a simulation cost, which corresponds to the time spent making Monte Carlo simulations using the current policy. This cost takes the following general form: $$ \mathit{Cost}(\mathit{RCPI}) = N_s\,A \times T\,A.
$$ Here A=(n+c) is the total number of actions in the MDP (feature-selection plus classification actions), and T·A is the cost of evaluating one trajectory of length T, where each of the T state transitions requires querying the value of each of the A actions from the scoring function s_θ. N_s·A×T·A is then the overall cost of executing the Monte Carlo simulation, evaluating one trajectory for each of the A actions of each of the N_s states. The complexity of each iteration is therefore \(\mathcal{O}(N_{s} \cdot T\cdot(n+c)^{2})\), with T≈n. This implies a learning method that is quasi-cubic w.r.t. the number of features; the base variant of our method is thus limited to problems with features numbering in the hundreds. The proposed extensions have different learning complexities, presented in Table 3; this allows some of them to be used with datasets containing thousands of features.

Table 3: Learning complexity of the different variants of the Datum-Wise Model. n is the number of features, c the number of classes, N_s the number of states used for rollouts.
- Sparse model: \(\mathcal{O}(N_{s} \cdot T\cdot(n+c)^{2})\). Limited to hundreds of features, T≈n.
- Hard budget: \(\mathcal{O}(N_{s} \cdot\bar{T}\cdot(n+c)^{2})\). Same complexity as the base model, shorter learning time with the budget \(\bar{T} \ll n\).
- Grouped features: \(\mathcal{O}(N_{s} \cdot T\cdot(\bar{n}+c)^{2})\). Same complexity as the base model, much shorter learning time with the number of groups \(\bar{n} \ll n\), and \(T\approx\bar{n}\).
- Relational features: \(\mathcal{O}(N_{s}\cdot T\cdot(n+c)^{2})\) to \(\mathcal{O}(N_{s} \cdot T\cdot c^{2})\). The complexity depends on the structure of the features; in the extreme case, where features have to be acquired in a fixed order, the complexity^a is \(\mathcal{O}(N_{s} \cdot T\cdot c^{2})\) and the model scales linearly with the number of features.
^a Since there is no choice in the features, the number of actions is only 1+c.

5.2 Scalability While the learning complexity of our model is higher than that of baseline global linear methods, inference is linear.
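The linear-time inference relies on the incremental scoring trick described in Sect. 5.1: with a linear s_θ, acquiring one feature only shifts each action's cached score. A minimal sketch, in which the weight layout (one row of θ per action) is an assumption for illustration:

```python
# Minimal sketch of incremental action scoring: with a linear s_theta,
# acquiring feature i adds theta[a][i] * x[i] to each action's cached
# score, so one step costs O(n + c) additions instead of recomputing
# every dot product from scratch.

def fresh_scores(theta, x, z):
    """Recompute every action's score: s(a) = sum_i theta[a][i] * x[i] * z[i]."""
    return [sum(t * xi * zi for t, xi, zi in zip(row, x, z)) for row in theta]

def incremental_update(scores, theta, x, i):
    """Update the cached scores after acquiring feature i."""
    return [s + row[i] * x[i] for s, row in zip(scores, theta)]

theta = [[0.5, -1.0, 2.0], [1.0, 0.0, -0.5]]  # 2 actions, 3 features
x = [3.0, 1.0, -2.0]
z = [0, 0, 0]
scores = fresh_scores(theta, x, z)             # no features acquired yet
scores = incremental_update(scores, theta, x, 2)
z[2] = 1
assert scores == fresh_scores(theta, x, z)     # cached == recomputed
```

The cached scores after the update match a full recomputation, which is what makes per-step inference cheap.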
In practice, during training, most of the baseline methods select a subset of variables in a couple of seconds to a couple of minutes, whereas our method is an order of magnitude slower. The difficulty encountered is thus the increase in training time with the number of features. Inference, however, is performed at the same speed as the baseline methods, which is in our opinion the important factor. The proposed approach is clearly best suited to complex classification problems, involving for example cost-sensitive features, relational features or features that naturally present some structure. There are nevertheless several ways to handle datasets with many features using our method. First, the complexity of the hard-budget variant is clearly an order of magnitude lower than that of the initial model. This variant is useful when one wants to obtain very high sparsity by limiting the maximum number of used features to ten or twenty. In this case, the model is faster than the DWSM model and can be learned on larger datasets. Second, the grouped-features model has a complexity that depends on the number of groups of features; one possible solution when dealing with many features is therefore to gather them into, say, a hundred groups. These groups can be formed randomly, or by hand if the features are naturally organized in a complex structure. Such a use of the model on a large dataset is illustrated in Table 7. Third, when dealing with sparse datasets, the learning algorithm can easily be adapted to reduce its complexity: the acquisition of features can be restricted (during learning) to the subset of non-null values, strongly reducing the number of possible actions in the MDP to the number of non-null features. Finally, faster reinforcement learning techniques are a possible way to speed up the learning phase. Recent techniques have been developed (Dulac-Arnold et al.
2012) that allow reducing the complexity of our model from O(N_s·(n+c)^3) to O(N_s·log(n+c)), at the price of a final sub-optimal classification policy. These methods will be tested on this task in future work. 6 Experiments The experimental section is organized as follows: first, we present the results obtained by the basic DWSM on 8 binary classification datasets in Sect. 6.1. We analyze the performance and behavior of this algorithm and compare it with state-of-the-art methods. We then present results for multiclass classification with this base model in Sect. 6.2. After that, we describe experiments performed with the four extensions to the base model proposed in Sect. 4, on the binary datasets and on additional corpora. For brevity, we present only representative results in the core article, while providing results obtained on all binary datasets with DWSM and its variants in the Appendix. 6.1 Sparse binary classification The first set of experiments focuses on binary classification. Experiments were run on 8 different UCI (Frank and Asuncion 2010) datasets obtained from the LibSVM website.8 The datasets are described in Table 4. For each dataset, we randomly sampled different training sets by taking 10 %, 20 %, 50 % and 90 % of the examples as training examples, with the remaining examples kept for testing. This sampling was performed 30 times, thus generating 30 train/test pairs for each split of the dataset. We did not use k-fold cross-validation as it imposes the number of possible train/test splits for a given percentage split, and is therefore not applicable in our situation. (Table 4: binary classification dataset characteristics, giving the number of examples for each dataset, e.g. Liver Disorders.) We performed experiments with four different models: LARS was used as a baseline linear model with L_2 loss and L_1 regularization. L_1-SVM is an SVM classifier with L_1 regularization, effectively providing LASSO-like sparsity.9 CART (Breiman et al. 1984) was used as a baseline decision tree model.
Datum-Wise Sequential Model (DWSM) is the datum-wise sparse model presented above. For evaluation, we used a classical accuracy measure that corresponds to 1 − error rate on the test set of each dataset. Sparsity was measured as the proportion of features not used for the LARS and SVM-L_1 models, and as the mean proportion of features not used to classify testing examples for DWSM and CART. Each model was run with many different hyper-parameter values: the C and ϵ values for LARS and SVM-L_1, the pruning value for CART, and λ for DWSM. Concerning our method, the number of rollout states (step 1 of the learning algorithm) is set to ten for each learning example and the number of policy iterations is set to ten.10 Note that experiments with more rollout states and/or more iterations give similar results. Experiments were performed using an α-mixture policy 11 with α=0.7 to ensure the stability of the learning process: a lower α-value involves less stability, while a higher value makes more learning iterations necessary for convergence. The figures in this experimental section present accuracy/sparsity curves averaged over the 30 different splits, together with the variance obtained over the 30 runs. In the case of L_1-SVM and LARS, models have a fixed number of features; however, in the case of DWSM or decision trees, where the number of features is variable, results are actually representative of multiple sparsities within the same experiment. Horizontal error bars are therefore presented for our models. By looking at the different curves presented in the Appendix,12 one can see that DWSM outperforms SVM-L_1 and LARS at equivalent sparsity on 7 of the 8 binary datasets. Figure 2 illustrates two sparsity/accuracy curves comparing L_1 approaches with DWSM, while Fig. 3 also compares results obtained with CART to the L_1 approaches and DWSM. In Fig.
2 (left), corresponding to experiments on the breast cancer dataset, one can see that at a sparsity level of 70 % we obtain 96 % accuracy, while the two baselines obtain about 88 % to 90 %. The same observation also holds for other datasets, as shown in Fig. 2 (right). Looking at all the curves given in the Appendix, one can see that our model tends to outperform the L_1 models, LARS and SVM-L_1, on seven of the eight datasets.13 In these cases, at similar levels of global sparsity, our approach tends to maintain higher accuracy.14 These results show that DWSM is competitive w.r.t. L_1 methods, and can outperform these baseline methods on a number of datasets. DWSM vs L_1 models: accuracy w.r.t. sparsity. In both plots, the left side of the x-axis corresponds to low sparsity, while the right side corresponds to high sparsity. The performance of the models usually decreases as sparsity increases, except in cases of overfitting. We have also compared DWSM with CART. The latter shares some similarities with our sequential approach in the sense that, for both algorithms, only some of the features are considered before classifying a data point. Beyond this point, the two methods differ strongly; in particular, CART builds a global tree with a fixed number of possible paths, whereas DWSM adaptively decides, for each pattern, which features to use. Note that CART does not incorporate a specific sparsity mechanism and has never been fully compared to L_1-based methods in terms of accuracy and sparsity. Figure 3 gives two illustrative results obtained on two datasets. In Fig. 3 (left), one can see that our method outperforms decision trees in terms of accuracy at the same level of sparsity. Moreover, DWSM allows one to easily obtain different models at different levels of sparsity, while this is not the case for CART, where sparsity can only be controlled indirectly through the pruning mechanism.
For some datasets, CART outperforms both DWSM and L_1-based models. An example is given in Fig. 3 (right), where at the 0.9 level of sparsity CART achieves about 90 % accuracy, while DWSM achieves performance similar to the baseline methods. CART's advantage is most certainly linked to its ability to create a highly non-linear decision boundary. DWSM and decision trees: while CART can sometimes obtain better accuracy, control over sparsity is difficult to achieve. Qualitative results Figure 4 gives qualitative results on the breast-cancer dataset obtained by two models (DWSM and LARS) for a sparsity level of 50 %. This figure is representative of the behavior of the methods on most datasets. The left histogram gives the proportion of testing examples classified by DWSM and L_1 models using a particular feature. The right part of the figure represents the proportion of testing examples that have been classified using a given number of features. From the left histogram, one can see that, whereas some features are used for all data with DWSM (as they are for the global LARS), many others are evenly distributed among the samples. This shows a clear difference between the behavior of global and sequential methods, and suggests that global methods do not make the best use of the features. Note that the features used for every datum by DWSM are also features used by the L_1-based approaches. For this example, the sparsity gain w.r.t. the baseline model is obtained through features 1 and 9, which are used in only about 20 % of the DWSM decisions, while they are used in 100 % of the decisions of the LARS model. Qualitative results: Left: the distribution of use of each feature for label prediction. For example, LARS uses feature 2 for classifying 100 % of the test examples while DWSM uses feature 2 for only 88 % of the test examples. Right: the mean proportion of features used for classifying.
For example, DWSM classifies 20 % of the examples using exactly 2 features while LARS uses 5 features for all the test examples. The histogram in Fig. 4 (right) describes the average number of testing examples classified with a particular number of acquired features. For example, DWSM classifies around 60 % of the examples using no more than 3 features. DWSM mainly uses 1, 2, 3 or 10 features, meaning that it identifies two "levels of difficulty": some easy examples can be classified using at most 3 features, while for the hard-to-classify examples all the features have to be used. A deeper analysis of this behavior shows that almost all the classification mistakes are made after looking at all 10 features. Note that DWSM is able to attain better accuracy than LARS at equivalent sparsity levels. This is due to the fact that DWSM is not bound to a strict set of features for classification, whereas LARS is. It can therefore request more features than LARS has available for a particularly ambiguous datum. This allows DWSM to have more information in difficult regions of the decision space, while maintaining sparsity for "simple" data. We have analyzed the correlation between the value of λ and the obtained sparsity (Fig. 5): the higher λ is, the higher the sparsity. It is interesting to note that, when the value of λ is too high, the model chooses to select no features and assigns every input to the largest category. This plot shows the value of λ for all the experiments run with DWSM, and their corresponding sparsity. The average sparsities are calculated for each λ; they are biased towards a sparsity of 1 by the large number of experiments with no selected features, especially for large values of λ. There is an equal number of experiments for each value of λ, with λ∈{0,0.001,0.01,0.1,0.15,0.2,0.3}.
The two yellow hexagons on the left actually represent 2 different values of λ: 0 and 0.001. 6.2 Sparse multiclass classification Multiclass experiments were performed with the same experimental protocol as in Sect. 6.1. The baseline method is a one-vs-all L_1-regularized SVM, and experiments were performed on four different datasets described in Table 5. The experiments show the same general behavior as for the binary datasets. The full results on the four datasets are presented in the Appendix. Figure 6 shows example performance for the Wine and Vowel datasets. Sparsity for the SVM-L_1 model is calculated using the total number of features considered while classifying a datum. DWSM vs L_1 models in multiclass: accuracy w.r.t. sparsity. We can see that as sparsity decreases, performance converges; however, for intermediate sparsities, performance decreases more rapidly with SVM-L_1 models. (Table 5: multiclass classification dataset characteristics.) The sequential model is particularly interesting at high sparsities, as it is able to maintain good accuracy even when few features are used. We can see this in Fig. 6 (left), with DWSM able to maintain ∼90 % accuracy at a sparsity of 0.8, whereas the SVM-L_1 model has already sharply decreased in performance. Additionally, extending the model to the multiclass case is completely natural and requires nothing more than adding classification actions to the MDP. 6.3 Model extensions In this section we present a series of results for the model extensions described in Sect. 4. Contrary to what was done for the base model, we did not attempt extensive comparisons with baseline models, but rather aim to show, using some of the datasets used previously, that the extensions are sound and efficient. The reason for this is twofold: first, there is no single baseline able to cope with all the extensions, so comparison would require a specific algorithm for each problem and each extension.
Second, for some of the extensions, not all the datasets are pertinent. 6.3.1 Hard budget To demonstrate the effects of a hard feature budget, we set hard budget limits on the datasets described in Table 4. All experiments were performed with λ=0, since sparsity in this extension is controlled through the value of M. We ran experiments for M∈{2,5,10,20}. Figure 7 presents the results obtained on two particular datasets; all other results are provided in the Appendix. In many cases, the addition of a hard constraint yields equivalent or better accuracy relative to the original model, particularly at high levels of sparsity. For example, in Fig. 7 (left), with M=5 and a sparsity level of 0.8, the hard budget model achieves 77 % accuracy while LARS is at about 72 % and L_1-SVM at 65 %. The same effect can be seen in Fig. 7 (right). The Hard Budget extension allows for much finer-grained control of sparsity within the model, and this is indeed what these figures express. The Hard Budget model allows us to increase the accuracy of DWSM while reducing its learning and inference complexity: this model is many times faster to learn and to infer with, its complexity being O(M·(n+c)) instead of O(N_f·(n+c)), where M is the budget, n the input dimension, c the number of classes and N_f the mean number of features used by the base DWSM model. Hard budget is thus suitable for datasets with many features when one wants to obtain high sparsity. Hard Budget: Hard Budget models corresponding to different feature budgets M, with λ=0. Hard Budget models with a low value of M outperform DWSM and LARS models at high levels of sparsity while reducing the learning and inference complexity.
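The hard-budget constraint can be sketched as a restriction of the legal action set: once M features have been acquired, only classification actions remain. The integer encoding of actions (features 0..n−1, classes n..n+c−1) is an assumption made for the example.

```python
def legal_actions(z, n_classes, budget):
    """Sketch of the hard-budget constraint: once `budget` features have
    been acquired, only the classification actions (encoded as n + k)
    remain, so sparsity is controlled by M rather than by lambda.
    The integer action encoding is illustrative."""
    n = len(z)
    classes = [n + k for k in range(n_classes)]
    if sum(z) >= budget:
        return classes                      # budget exhausted: must classify
    return [i for i in range(n) if not z[i]] + classes

# With 4 features and 2 classes, a budget of 2 forces classification
# once two features have been acquired.
assert legal_actions([1, 1, 0, 0], 2, budget=2) == [4, 5]
assert legal_actions([1, 1, 0, 0], 2, budget=3) == [2, 3, 4, 5]
```

Because no trajectory can be longer than M steps, both rollouts and inference are bounded by the budget, which is where the O(M·(n+c)) complexity comes from.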
Hard-Budget models also allow more specific tuning of the model's sparsity. 6.3.2 Cost-sensitive feature acquisition For the cost-sensitive extension, we performed experiments on the diabetes dataset, as it has already been used by different authors for cost-sensitive classification tasks. This dataset defines one feature-extraction cost and one misclassification cost; see Sect. 4.2.3. Experiments were performed with the cost table used by Ji and Carin (2007), originally established by Turney (1995).15 We used 5-fold cross-validation, and produced the results in Table 6. Experiments were run with two different misclassification costs16 of 400 and 800. We refer to this cost as "Error Penalty" in the results table. We use "Average Cost" to refer to the average per-datum cost, where the per-datum cost corresponds to the sum of incurred feature-acquisition costs plus the cost of classification. Finally, "Accuracy" is the standard measure of accuracy.17 In this setting, a smaller cost is better. (Table 6: results on the cost-sensitive Pima Diabetes dataset, reporting the error penalty, average cost and accuracy for DWSM and, as a reference, for one of the cost-parameterizations of Ji and Carin.^a) (^a Numerical values from Ji and Carin are estimated graphically from their paper.) The reference results in Table 6 were extracted from page 18 of the article (Ji and Carin 2007). We can see that our cost-sensitive model obtains, in each of the two cases,18 the same average cost as the one obtained by Ji and Carin. Accuracy is also equivalent for both models, showing that DWSM is competitive w.r.t. a cost-sensitive-specific algorithm, while requiring only a slight modification of its MDP.
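The cost-sensitive reward of Sect. 4.2.3 can be sketched directly. The action encoding and the toy cost values below are illustrative assumptions (the actual experiments use the Turney cost table, not these numbers):

```python
# Hedged sketch of the cost-sensitive reward: feature actions pay the
# acquisition cost xi_i, classification actions pay the entry C[a][y]
# of the misclassification cost matrix. The integer action encoding
# (features first, then classes) and the cost values are illustrative.

def reward(action, y, xi, C, n_features):
    if action < n_features:                # feature-acquisition action
        return -xi[action]
    return -C[action - n_features][y]      # classification action

xi = [1.0, 4.0]                  # per-feature acquisition costs
C = [[0.0, 800.0],               # rows: predicted class, cols: true class
     [400.0, 0.0]]
assert reward(1, 0, xi, C, 2) == -4.0     # acquiring feature 1
assert reward(2, 1, xi, C, 2) == -800.0   # predicting class 0 when truth is 1
```

Summing these rewards along a trajectory yields exactly the per-datum cost (feature costs plus classification cost) reported as "Average Cost" above.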
6.3.3 Grouped and relational features In order to test the ability of our approach to deal with grouped and relational features, we performed three sets of experiments. Artificial random groups The first set consists in considering artificial groups of features on the binary classification datasets. These groups of features were generated randomly, and we aim to show that our model is able to learn with grouped features. Although not detailed here, results are generally lower than those of the base models. For example, the best accuracy obtained by Grouped DWSM on the Sonar set is 75 % with 3 groups, which is 5 points lower than both the baseline and standard DWSM. This is not a surprise since the groups of features were chosen randomly and relevant features can be spread across different groups; the model may need to acquire all the groups in order to obtain the necessary features, thus decreasing the sparsity of the obtained model. The advantage of grouped features is to decrease the complexity of the model, which now depends only on the number of groups instead of the number of features, thus allowing the model to be trained on datasets with many features. The resulting complexity is O(N_g·(N_g+c)) in comparison to O(n·(n+c)), where N_g is the number of groups. All these experiments involve only a small number of features. In order to test the ability of the grouped model to handle many features, we performed experiments using the Gisette dataset created by Isabelle Guyon for the NIPS 2003 Feature Selection Challenge (Guyon et al. 2005). This is a 2-class dataset with 5000 features. We used 10 feature groups, each group therefore containing 500 features, to train DWSM on Gisette. Results for varying values of λ can be found in Table 7. Note that we do not aim at comparing ourselves with the state of the art on this dataset (which can be found on the challenge website19 or in the results analysis of Guyon et al.
2005), as the best results there are obtained by complex kernelized methods. From Table 7, we can see that our model obtains good performance, ranging from 90 % to 93.2 % accuracy while using on average 4.5 to 7.4 groups of features. Note that Gisette contains many probe features that do not carry any information about the category. This explains why a classical LASSO model is able to obtain very high sparsity while the grouped model, which learns to select groups that contain relevant features, has lower sparsity: when a group contains one or more relevant features, it also contains many probe features. (Table 7: group-feature results for the Gisette and Adult datasets, reporting the number of groups, the sparsity and the number of features used, e.g. 100 features, 8.75 groups, 95 features.^a) (^a The final accuracy for the Adult dataset represents the default accuracy of putting all test elements into the biggest class.) Grouped features based on the structure of the features One advantage of the grouped-features model is that it can consider datasets where features are naturally organized into groups. This is for example the case for the Adult20 dataset from UCI, which has 14 attributes, some of which are categorical. These categorical attributes have been expanded to continuous features using discretized quantiles, with each quantile represented by a binary feature. The set of features that corresponds to the quantiles of a particular categorical attribute naturally forms a group of features. The continuous dataset is composed of 123 real features grouped into 14 groups. We created a mapping from the expanded dataset back to the original set of features, and ran experiments using this feature grouping. We use the LASSO as a baseline. Table 7 presents the results obtained by our model at three different levels of sparsity and shows that our method achieves 79 % accuracy while selecting on average 4.83 of the 14 groups of features.
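The Adult-style group mapping can be sketched as follows; the attribute arities below are made up for the example, and the helper name is hypothetical:

```python
# Illustrative sketch of the group mapping described above: each
# categorical attribute expands into several binary quantile features,
# and the group F_j collects the indices of the expanded features
# spawned by attribute j. The arities are invented for the example.

def build_groups(arities):
    groups, start = [], 0
    for k in arities:                      # k binary features per attribute
        groups.append(list(range(start, start + k)))
        start += k
    return groups

groups = build_groups([3, 2, 4])  # 3 attributes -> 9 expanded features
assert groups == [[0, 1, 2], [3, 4], [5, 6, 7, 8]]
```

For Adult, the same construction over the 14 attributes yields the 14 groups covering the 123 expanded features.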
Results for the LASSO are also presented, although the LASSO's results are not constrained to respect the group structure; furthermore, its sparsity corresponds to the sparsity obtained on the expanded dataset, not the sparsity over the initial dataset. Relational features on image data Finally, we present an experiment performed on image data, which allows us to consider both group and relational constraints. Experiments were performed on the MNIST (LeCun et al. 1998) dataset with the relational model described in Sect. 4.2.5. We used the following two constraints: (i) first, we put in place a group mapping on the pixels corresponding to a 4×4 grid of blocks; (ii) second, we forced the model to focus on contiguous blocks of pixels, i.e. the cost of acquiring a block of pixels touching a previously acquired block is lower than the cost of acquiring a block that is further away. We thus make use of the spatial relations between groups of pixels. Referring to the relational model in Sect. 4.2.5, the Related(⋅,⋅) function tells us whether two feature blocks are contiguous in the image. A visualization of the feature-acquisition frequencies is presented in Fig. 8. For a given class (a digit 0–9), we compute the mean selection frequency of each block in the image over all correctly identified images. We then represent this frequency overlaid onto a sample image. For a given sample, the brighter a block of pixels, the more it has been used for classifying this digit. Thus, we can see what information the classifier is attempting to access when correctly classifying a datum into a particular class. For example, the digit 1 is quickly identified by looking at just the two central blocks of features, while the system tends to explore all the possible blocks of features when classifying a 5, which is a more complex digit. Average utilization of each feature block for correctly classified data, grouped by class. The corresponding sparsity is given below each image.
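A toy version of the Related(⋅,⋅) predicate used here can be written for the 4×4 block grid. The choice of 4-connectivity (horizontal/vertical neighbours only) is our assumption; the text only states that contiguous blocks are cheaper to acquire.

```python
# Toy Related(.,.) for the MNIST experiment: pixel blocks live on a
# 4x4 grid (indices 0..15, row-major) and two blocks are taken to be
# related when they are horizontally or vertically adjacent.
# 4-connectivity is an assumption made for this sketch.

def related(a, b, width=4):
    ra, ca = divmod(a, width)
    rb, cb = divmod(b, width)
    return abs(ra - rb) + abs(ca - cb) == 1

assert related(0, 1) and related(0, 4)   # right and down neighbours
assert not related(3, 4)                 # end of row 0 vs start of row 1
```

Plugged into the relational reward of Sect. 4.2.5, this predicate makes jumping to a non-adjacent block incur the larger penalty γ, which is what pushes the model toward contiguous regions of the image.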
Lighter squares represent more frequent usage. 7 Related work We begin by providing an overview of feature selection techniques that are close to the datum-wise sparse model presented in Sect. 2. Then, for each of the proposed extensions, we describe work that addresses similar problems. Note that none of the following citations corresponds to a model that can deal with all these classification problems in a unified way. Feature selection and sparsity Datum-Wise Feature Selection positions itself in the field of feature selection, a field that has seen a good share of approaches (Guyon and Elisseeff 2003). Our approach sits between the two main veins of feature selection: embedded approaches and wrapper approaches. Embedded approaches include feature selection as part of the learning machine. These include algorithms solving the LASSO problem (Tibshirani 1994), and other linear models involving a regularizer with a sparsity-inducing norm (L_p-norms with p∈[0;1], such as the Elastic Net, Zou and Hastie 2005, and the group LASSO, Yuan and Lin 2006). These methods are very different from our proposed method, and rely on the direct minimization of a regularized empirical risk. They are very effective at these specific tasks, but are difficult to extend to more complex risks that are neither continuous nor differentiable. Nevertheless, some interesting work has been done along the lines of finding surrogate losses for more structured forms of sparsity (Jenatton et al. 2011). We believe our method is nevertheless more naturally expressive for these types of problems, as its optimization criterion is not subject to any constraint of continuity or differentiability. Wrapper approaches aim at searching the feature space for an optimal subset of features that maximizes the classifier's performance.
Searching the entire feature space quickly becomes intractable, and various recent approaches have therefore been proposed to restrict the search using genetic programming (Girgin and Preux 2008) or UCT-based algorithms (Gaudel and Sebag 2010). We were encouraged by these works, as the use of a learning approach to direct the search for feature subsets in the feature graph is very similar in spirit to our approach. We differentiate our approach from these through its datum-wise nature, which is not considered by either of the aforementioned articles. Regarding the datum-wise nature of our algorithm, the classical model that shares some similarities in terms of inference process is the Decision Tree (Quinlan 1993). During inference with a Decision Tree, feature usage is in effect datum-dependent. In contrast to our method, Decision Trees are highly non-linear and, as far as we know, have never been studied in terms of sparsity. Moreover, their learning algorithm is very different from the one proposed in this paper, and Decision Trees are not easily generalizable to the more complex problems described in Sect. 4. Nevertheless, Decision Trees prove to perform very well in situations where strong sparsity is imposed. Cost-sensitive classification The particular extensions presented in Sect. 4 have been inspired by various recent works. Cost-sensitive classification problems have been studied by Turney (1995) and Greiner (2002). The model proposed by Turney (1995) is an extension of decision trees to cost-sensitive problems, using a heuristic to discourage the use of costly features. Greiner (2002) models the task as a sequential problem. However, the formalism used by Greiner is different from the one we propose, and is restricted to cost-sensitive problems. Hard budget classification Hard Budget classification has been considered before by Kapoor and Greiner (2005), in the context of Active Learning.
Modeling the task as an MDP is suggested in that article, but is not carried out. Hard Budget classification is primarily motivated by its finer-grained ability to tune sparsity, as well as the inherent speedups it provides in the complexity of the learning phase. Grouped and relational features Many datasets inherently provide some form of group or relational structure, and grouped features have recently been proposed as an extension of the LASSO problem called the Group-LASSO (Yuan and Lin 2006). Relational features have also been studied in different papers on structured sparsity (Huang et al. 2009; Jenatton et al. 2011), which also base themselves on LASSO-derived resolution algorithms. These papers consider global sparsity and are not datum-wise: they are based on a continuous convex formulation of the sparse L_1-regularized loss and are thus very different from our approach. DWSM provides much richer expressivity relative to these methods, at the cost of a more complex resolution algorithm. As far as we can tell, these different approaches to our extensions have not previously been brought together under one framework; additionally, many more extensions can be imagined, with the ability to adapt to the fine-grained constraints of real-world problems. Classification as a sequential problem Finally, the idea of using sequential models for classical machine learning tasks has recently seen a surge of interest. For example, sequential models have been proposed for structured classification (Daumé and Marcu 2005; Maes et al. 2009). These methods leverage Reinforcement Learning approaches to solve more 'traditional' Structured Prediction tasks. Although they are specialized in the prediction of structured data, and do not concentrate on aspects of sparsity or feature selection, the general idea of applying RL to ML tasks is in the same vein of work as DWSM. The authors have previously presented an original sequential model for text classification (Dulac-Arnold et al.
2011), and there has been similar work using Reinforcement Learning techniques for self-terminating anytime classification (Póczos et al. 2009). These approaches can be considered as more constrained versions of the problem proposed in this paper, since the only criterion being learned is when to stop asking for more information, but not what information to ask for. Nevertheless, these approaches provide the base intuition for datum-wise approaches. The most similar Reinforcement Learning works are the paper by Ji and Carin (2007) and the (still unpublished) paper by Rückstieß et al. (2011), which propose MDP models for cost-sensitive classification. Both of these papers have formalizations that are similar to ours, yet concentrate on cost-sensitive problems. We compare ourselves to experiments performed by Ji and Carin in Sect. 6.3.2.

In this article we have introduced the concept of datum-wise classification, where we simultaneously learn a classifier and a sparse representation of the data that adapts to each new datum being classified. For solving the combinatorial feature selection problem, we have proposed a sequential approach in which we model the selection–classification problem as an MDP. Learning this MDP is performed using an algorithm inspired by Reinforcement Learning. Solving the MDP is shown to be equivalent to minimizing an L0 datum-wise regularized loss for the classification problem. This base model has then been extended to different families of feature selection problems: cost-sensitive, grouped features, hard budget and structured features. The proposed formalism can easily be adapted to any of these problems and thus provides a fairly general framework for datum-wise sparsity. Experimental results on 12 datasets have shown that the base model is indeed able to learn data-dependent sparse classifiers while maintaining good classification accuracy.
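The sequential inference loop described here can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: a hand-crafted confidence score stands in for the learned Q-function, and the per-feature penalty LAMBDA is an assumed value.

```python
# Datum-wise sequential inference sketch: the state is the set of features
# acquired so far for the current datum; at each step we either acquire one
# more feature (paying a per-feature L0 penalty) or stop and classify.

LAMBDA = 0.25  # sparsity penalty per acquired feature (assumed value)

def classify(x, acquired):
    """Stop action: a stand-in linear classifier over the acquired features."""
    return 1 if sum(x[j] for j in acquired) > 0 else 0

def margin(x, acquired):
    """Confidence proxy for the stop action (placeholder for a learned Q)."""
    return abs(sum(x[j] for j in acquired))

def infer(x):
    """Greedily acquire features while the confidence gain beats the penalty."""
    acquired, remaining = [], list(range(len(x)))
    while remaining:
        best = max(remaining, key=lambda j: abs(x[j]))
        gain = margin(x, acquired + [best]) - margin(x, acquired)
        if gain <= LAMBDA:  # not worth the per-feature penalty: stop
            break
        acquired.append(best)
        remaining.remove(best)
    return classify(x, acquired), acquired

# Each datum ends up with its own sparse representation:
y1, used1 = infer([2.0, 0.1, 0.0])  # one strong feature suffices
y2, used2 = infer([0.9, 0.8, 0.6])  # several moderate features are acquired
```

The point of the sketch is the datum-wise trade-off: the stopping decision compares the marginal confidence gain of one more feature against the L0 penalty, so different data points acquire different numbers of features.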
The potential of the 4 extensions to the base model has been demonstrated on different datasets. All of them solve a specific sparsity problem while requiring only slight changes to the initial model. We believe that this model might be easily adapted to other complex classification problems while requiring only slight changes to the MDP. For inference, the model complexity is similar to that of a classical (non datum-dependent) sparse classifier. Training the MDP remains, however, more costly than for global classification approaches. A couple of directions for future work are being considered: the first consists in using more efficient RL-inspired algorithms such as Fitted Q-Learning, which could greatly reduce the time spent during training. Another possible extension is to remove, during learning, features that are generally judged irrelevant by the system, i.e. features that are never or rarely used for classifying data. In that case, the system only keeps in memory a subset of the possible features and thus reduces the dimensionality of the training space. Finally, a more prospective research direction is to consider a sequential process that is also able to create new features, by combining existing features, thus opening the way to feature construction.

Footnotes:
Note that this includes the binary supervised classification problem as a special case.
The L0 'norm' is not a proper norm, but we will refer to it as the L0 norm in this paper, as is common in the sparsity community.
The MDP is deterministic in our case.
This corresponds to the classical Q-function in Reinforcement Learning. We have however removed the expectation since our MDP is deterministic.
Note that a positive cost implies a penalty in the minimization sense. See Sect. 3.
This corresponds to choosing feature f_j.
http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.
Using LIBLINEAR (Fan et al. 2008).
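The L0 datum-wise regularized loss referred to in the footnotes can be written down directly: average classification loss plus λ times the mean number of features each datum actually used. A small illustration (names and the 0/1 loss are ours, not the paper's exact notation):

```python
# Datum-wise L0-regularized empirical loss: mean classification error plus
# lambda times the mean per-datum count of used features (its "L0 norm").

def datum_wise_loss(predictions, labels, used_features, lam):
    n = len(labels)
    err = sum(1 for p, y in zip(predictions, labels) if p != y) / n
    sparsity = sum(len(s) for s in used_features) / n  # mean L0 per datum
    return err + lam * sparsity

# Three data points, each classified from its own feature subset:
loss = datum_wise_loss(
    predictions=[1, 0, 1],
    labels=[1, 0, 0],
    used_features=[{0}, {0, 2}, {1, 2, 3}],
    lam=0.1,
)
# one error out of three; mean sparsity (1 + 2 + 3) / 3 = 2
```

Unlike a global L0 (or relaxed L1) penalty, the regularizer here is attached to each datum's own representation, which is what the sequential MDP formulation optimizes.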
Stability of the learned policy is usually obtained after 4 to 5 iterations. θ_t is chosen with probability 1−α; otherwise the previous α-mixture policy is used. As said before, only representative results are presented in the core article. Those datasets are: Australian, breast-cancer, diabetes, heart, ionosphere, sonar and liver. DWSM outperforms L1-norm based approaches when the blue curves are above the red and black curves. We did not use Turney's actual costs as they are difficult to compare with our metrics. Misclassification cost corresponds to \(C_{a,y_{i}}\) for a ≠ y_i (see Sect. 4.2.3). Note that the classifier's goal is to optimize per-datum cost as in Eq. (6), and not accuracy. Misclassification cost is 800 or 400. http://www.nipsfsc.ecs.soton.ac.uk/results/?ds=gisette. http://archive.ics.uci.edu/ml/datasets/Adult.

This work was partially supported by the French National Agency of Research (Lampada ANR-09-EMER-007). The authors gratefully acknowledge the many discussions with Dr. Francis Maes.

This appendix provides all the accuracy/sparsity curves obtained for the 8 binary datasets, on 4 different training sizes. The models presented in the curves are: the two L1-based models, LARS and SVM-L1; the base sparse sequential model, DWSM; the CART decision trees; and the Hard Budget model HB with budget values in {2, 5, 10, 20}. For these models, we only report the values obtained with λ=0. For the multiclass datasets, the DWSM model has been run with λ ∈ {0, 0.01, 0.05, 0.1} over the 30 runs. In short, when the blue curves are above the red and black curves, our datum-wise model outperforms classical L1-based models. When the gray curve is above the other curves, CART is the best sparse algorithm. For each dataset, we present 4 figures corresponding to 4 different training sizes.
[Figures: accuracy/sparsity curves for each dataset and training size.]

Breiman, L., Friedman, J., Olshen, R., & Stone, C. (1984). Classification and regression trees. Belmont: Wadsworth.
Daumé, H. III, & Marcu, D. (2005). Learning as search optimization: approximate large margin methods for structured prediction. In Proceedings of ICML (pp. 169–176). New York: ACM.
Dulac-Arnold, G., Denoyer, L., & Gallinari, P. (2011). Text classification: a sequential reading approach. In Lecture Notes in Computer Science: Vol. 6611. Proceedings of ECIR (pp. 411–423). Berlin: Springer.
Dulac-Arnold, G., Denoyer, L., Preux, P., & Gallinari, P. (2012). Fast reinforcement learning with large action sets using error-correcting output codes for MDP factorization. In Proceedings of ECML.
Efron, B., Hastie, T., Johnstone, I., & Tibshirani, R. (2004). Least angle regression. Annals of Statistics, 32(2), 407–499.
Fan, R., Chang, K., Hsieh, C., Wang, X., & Lin, C. (2008). LIBLINEAR: a library for large linear classification. Journal of Machine Learning Research, 9, 1871–1874.
Frank, A., & Asuncion, A. (2010). UCI machine learning repository. University of California, Irvine, School of Information and Computer Sciences. http://archive.ics.uci.edu/ml.
Gaudel, R., & Sebag, M. (2010). Feature selection as a one-player game. In Proceedings of ICML.
Girgin, S., & Preux, P. (2008). Feature discovery in reinforcement learning using genetic programming. In Proceedings of the European Conference on Genetic Programming.
Greiner, R. (2002). Learning cost-sensitive active classifiers. Artificial Intelligence, 139(2), 137–174. doi:10.1016/S0004-3702(02)00209-6.
Guyon, I., & Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research, 3, 1157–1182. doi:10.1162/153244303322753616.
Guyon, I., Gunn, S., & Ben-Hur, A. (2005). Result analysis of the NIPS 2003 feature selection challenge. In Proceedings of NIPS.
Har-Peled, S., Roth, D., & Zimak, D. (2002). Constraint classification: a new approach to multiclass classification. In Proceedings of NIPS.
Huang, J., Zhang, T., & Metaxas, D. (2009). Learning with structured sparsity. In Proceedings of ICML.
Jenatton, R., Audibert, J.-Y., & Bach, F. (2011). Structured variable selection with sparsity-inducing norms. Journal of Machine Learning Research, 12, 2777–2824.
Ji, S., & Carin, L. (2007). Cost-sensitive feature acquisition and classification. Pattern Recognition, 40(5), 1474–1485. doi:10.1016/j.patcog.2006.11.008.
Kanani, P. H., & McCallum, A. K. (2012). Selecting actions for resource-bounded information extraction using reinforcement learning. In Proceedings of WSDM '12 (pp. 253–262). New York: ACM.
Kapoor, A., & Greiner, R. (2005). Learning and classifying under hard budgets. In Proceedings of ECML (pp. 170–181). Berlin: Springer.
Lagoudakis, M. G., & Parr, R. (2003). Reinforcement learning as classification: leveraging modern classifiers. In Proceedings of ICML.
Lazaric, A., Ghavamzadeh, M., & Munos, R. (2010). Analysis of a classification-based policy iteration algorithm. In Proceedings of ICML (pp. 607–614).
LeCun, Y., Bottou, L., & Bengio, Y. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
Louradour, J., & Kermorvant, C. (2011). Sample-dependent feature selection for faster document image categorization. In Proceedings of ICDAR (pp. 309–313).
Maes, F., Denoyer, L., & Gallinari, P. (2009). Structured prediction with reinforcement learning. Machine Learning, 77(2–3), 271–301.
Póczos, B., Abbasi-Yadkori, Y., Szepesvári, C., Greiner, R., & Sturtevant, N. (2009). Learning when to stop thinking and do something! In Proceedings of ICML. doi:10.1145/1553374.1553480.
Puterman, M. L. (1994). Markov decision processes: discrete stochastic dynamic programming. New York: Wiley-Interscience.
Quinlan, J. (1993). C4.5: programs for machine learning. San Mateo: Morgan Kaufmann.
Rückstieß, T., Osendorfer, C., & van der Smagt, P. (2011). Sequential feature selection for classification. In Australasian Conference on Artificial Intelligence.
Sutton, R., & Barto, A. (1998). Reinforcement learning: an introduction. Cambridge: MIT Press.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58, 267–288.
Turney, P. (1995). Cost-sensitive classification: empirical evaluation of a hybrid genetic decision tree induction algorithm. Journal of Artificial Intelligence Research, 2, 369–409.
Yuan, M., & Lin, Y. (2006). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68(1), 49–67. doi:10.1111/j.1467-9868.2005.00532.x.
Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67(2), 301–320. doi:10.1111/j.1467-9868.2005.00503.x.
Affiliations: 1. UPMC, LIP6, Université Pierre et Marie Curie, Paris, France. 2. LIFL (UMR CNRS) & INRIA Lille Nord-Europe, Université de Lille, Villeneuve d'Ascq, France.
Dulac-Arnold, G., Denoyer, L., Preux, P., et al. Mach Learn (2012) 89: 87. https://doi.org/10.1007/s10994-012-5306-7. Received 04 November 2011; accepted 13 June 2012. Publisher: Springer US.
Boundedness in a quasilinear chemotaxis–haptotaxis model of parabolic–parabolic–ODE type

Long Lei & Zhongping Li

This paper deals with the boundedness of solutions to the following quasilinear chemotaxis–haptotaxis model of parabolic–parabolic–ODE type:
$$ \textstyle\begin{cases} u_{t}=\nabla \cdot (D(u)\nabla u)-\chi \nabla \cdot (u\nabla v)-\xi \nabla \cdot (u\nabla w)+\mu u(1-u^{r-1}-w),& x\in \varOmega , t>0, \\ v_{t}=\Delta v-v+u^{\eta },& x\in \varOmega , t>0, \\ w_{t}=-vw, &x\in \varOmega , t>0, \end{cases} $$
under zero-flux boundary conditions in a smooth bounded domain \(\varOmega \subset \mathbb{R}^{n}\) (\(n\geq 2\)), with parameters \(r\geq 2\), \(\eta \in (0,1]\), \(\chi >0\), \(\xi >0\) and \(\mu >0\). The diffusivity \(D(u)\) is assumed to satisfy \(D(u)\geq \delta u^{-\alpha }\) for all \(u>0\) and \(D(0)>0\), with some \(\alpha \in \mathbb{R}\) and \(\delta >0\). It is proved that if \(\alpha <\frac{n+2-2n\eta }{2+n}\), then, for sufficiently smooth initial data \((u_{0},v_{0},w_{0})\), the corresponding initial-boundary value problem possesses a unique global-in-time classical solution which is uniformly bounded.

Chemotaxis is the motion of cells toward higher concentrations of a chemical signal. A classical mathematical model for chemotaxis was proposed by Keller and Segel [9]. Over the past 40 years, a large number of variants of the Keller–Segel system have been proposed and extensively studied; see Hillen and Painter [15], for example. Another important extension of the classical Keller–Segel model to a more complex cell migration mechanism was proposed by Chaplain and Lolas [4, 5] in order to describe cancer cell invasion of surrounding healthy tissue. In addition to random motion, cancer cells bias their movement toward increasing concentrations of a diffusible enzyme as well as along gradients of non-diffusible tissue by detecting matrix molecules such as vitronectin adhered therein.
The latter type of directed migration toward immovable cues is commonly referred to as haptotaxis. Moreover, in this modeling context the cancer cells are usually also assumed to follow a logistic growth law, competing for space with healthy tissue. The enzyme is produced by cancer cells and is subject to diffusion and degradation. The tissue, also called the extracellular matrix, can be degraded by the enzyme upon contact; on the other hand, the tissue might possess the ability to remodel toward the healthy level. In [10, 21, 28, 29, 46], the authors studied the following parabolic–parabolic–ODE chemotaxis–haptotaxis model:
$$ \textstyle\begin{cases} u_{t}=\nabla \cdot (D(u)\nabla u)-\chi \nabla \cdot (u\nabla v)\\ \hphantom{u_{t}=}{}- \xi \nabla \cdot (u\nabla w)+\mu u(1-u-w),& x\in \varOmega ,t>0, \\ v_{t}=\Delta v-v+u,& x\in \varOmega ,t>0, \\ w_{t}=-vw,& x\in \varOmega ,t>0, \\ D(u)\frac{\partial u}{\partial \nu }-\chi u\frac{\partial v}{\partial \nu }-\xi u\frac{\partial w}{\partial \nu }=\frac{\partial v}{\partial \nu }=0,& x\in \partial \varOmega ,t>0, \\ u(x,0)=u_{0}(x),\qquad v(x,0)= v_{0}(x),\qquad w(x,0)= w_{0}(x),& x \in \varOmega , \end{cases} $$
in a smoothly bounded domain \(\varOmega \subset \mathbb{R}^{n}\), \(n\geq 2\), where \(\chi >0\), \(\xi >0\), \(\mu >0\) are parameters, the variables u, v and w represent the density of cancer cells, the enzyme concentration and the density of the extracellular matrix, \(D(u)\) describes the density-dependent motility of cancer cells through the extracellular matrix, χ and ξ represent the chemotactic and haptotactic sensitivities, and μ is the proliferation rate of the cells. For the special case \(D(u)=1\) in (1.1), Tao and Wang [19] proved that model (1.1) possesses a unique global-in-time classical solution for any \(\chi >0\) in one space dimension, or for sufficiently small \(\frac{\chi }{\mu }\) in two and three space dimensions. Later, Tao [17] improved the result of [19] for any \(\mu >0\) in two space dimensions.
Hillen, Painter and Winkler [6] studied the global boundedness and asymptotic behavior of the solution to (1.1) in one space dimension. Tao [18] proved that the model has a unique classical solution which is global-in-time and bounded in two space dimensions. Cao [3] proved that the model has a unique classical solution which is global-in-time and bounded in three space dimensions. Tao and Winkler [26] claimed that if \(n\leq 3\) and \((u,v,w)\) is a bounded global classical solution, then under the fully explicit condition \(\mu >\frac{\chi ^{2}}{8}\) the solution \((u,v,w)\) approaches the spatially uniform state \((1,1,0)\) as time goes to infinity. Then Wang and Ke [30] proved that the model possesses a unique global-in-time classical solution that is bounded in the case \(3\leq n\leq 8\), provided μ is appropriately large. When \(D\in C^{2}([0,\infty ))\), \(D(0)>0\) and \(D(u)\geq \delta u^{-\alpha }\) for all \(u\geq 0\) with some \(\delta >0\), the global existence of a unique classical solution to (1.1) was proved by Tao and Winkler in [21] under the assumption that either \(n\leq 8\) and \(\alpha <\frac{4-n^{2}}{n^{2}+4n}\) or \(n\geq 9\) and \(\alpha <(\sqrt{8n(n+1)}-n^{2}-n-2)/(n^{2}+2n)\). When \(D\in C^{2}([0,\infty ))\), \(D(u)\geq \delta u^{-\alpha }\) for all \(u\geq 0\) with some \(\delta >0\) and \(\alpha <0\), Zheng et al. [46] studied model (1.1) and found that (1.1) possesses a unique global classical solution which is uniformly bounded in the case of non-degenerate diffusion (i.e. \(D(0)>0\)) and possesses at least one nonnegative global weak solution in the case of degenerate diffusion (\(D(0)\geq 0\)) in two space dimensions. Li and Lankeit [10] proved that for sufficiently regular initial data globally bounded solutions exist whenever \(\alpha <\frac{2}{n}-1\) in two, three and four space dimensions.
When \(D\in C^{2}([0,\infty ))\), \(D(u)\geq \delta (u+1)^{-\alpha }\) for all \(u\geq 0\) with some \(\delta >0\), Wang clarified the issue of the global boundedness of solutions of (1.1) without any restriction on the space dimension for \(\alpha <\frac{2-n}{n+2}\) in [28, 29]. When the second PDE in (1.1) is replaced by \(0=\Delta v-v+u\) and \(D(u) = 1\), Tao and Wang [20] proved that model (1.1) possesses a unique global bounded classical solution for any \(\mu >0\) in two space dimensions, and for large \(\mu >0\) in three space dimensions. Tao and Winkler [23] proved that model (1.1) possesses a unique global smooth solution under first-order compatibility conditions in two space dimensions. For all \(n\geq 1\), Tao and Winkler [24] proved that model (1.1) possesses a unique global bounded classical solution for \(\mu >\chi \). In particular, the global solution \((u,v,w)\) approaches the spatially uniform state \((1,1,0)\) as time goes to infinity under an additional assumption on the size of μ and the initial data \(u_{0}\) and \(w_{0}\). Later, Tao and Winkler [25] studied global boundedness for model (1.1) under the condition \(\mu >\frac{(n+2)_{+}}{n}\chi \). Furthermore, under an additional explicit smallness condition, they established the exponential decay of w in the large time limit.
Zheng [41] considered the following chemotaxis–haptotaxis model with generalized logistic source:
$$ \textstyle\begin{cases} u_{t}=\nabla \cdot (D(u)\nabla u)-\chi \nabla \cdot (u\nabla v)\\ \hphantom{u_{t}=}{}- \xi \nabla \cdot (u\nabla w)+u(1- u^{r-1}-w),& x\in \varOmega ,t>0, \\ v_{t}=\Delta v-v+u,& x\in \varOmega ,t>0, \\ w_{t}=-vw,& x\in \varOmega ,t>0, \\ \frac{\partial u}{\partial \nu }=\frac{\partial v}{\partial \nu }=\frac{\partial w}{\partial \nu }=0,& x\in \partial \varOmega ,t>0, \\ u(x,0)=u_{0}(x),\qquad v(x,0)= v_{0}(x),\qquad w(x,0)= w_{0}(x),& x \in \varOmega , \end{cases} $$
in a smoothly bounded domain \(\varOmega \subset \mathbb{R}^{n}\), \(n\geq 3\), where \(\chi >0\), \(\xi >0\), \(r>1\) are parameters. Zheng [41] proved that model (1.2) possesses a unique global classical solution which is uniformly bounded in \(\varOmega \times (0,\infty )\) in the case \(D(u)\geq \delta (u+1)^{-\alpha }\) for all \(u>0\) with some \(\delta >0\) and some
$$ \alpha \textstyle\begin{cases} < \frac{2}{n}-1, &\text{if } 1< r< \frac{n+2}{n}, \\ < -\frac{(n+2-2r)_{+}}{n+2}, &\text{if } \frac{n+2}{2}\geq r \geq \frac{n+2}{n}, \\ \leq 0,&\text{if } r>\frac{n+2}{2}. \end{cases} $$
For the special case \(D(u)=1\) in (1.2) with the logistic source replaced by \(u(a-\mu u^{r-1}-w)\), \(a\in \mathbb{R}\), \(n\geq 1\), \(\mu >0\), Zheng [43] has shown that, when \(r>2\), or
$$ \mu >\mu ^{*}= \textstyle\begin{cases} \frac{(n-2)_{+}}{n}\chi C^{\frac{1}{\frac{n}{2}+1}}_{\frac{n}{2}+1}, &\text{if } r=2 \text{ and } n\leq 4, \\ \text{is appropriately large}, &\text{if } r=2 \text{ and } n\geq 5, \end{cases} $$
the problem (1.2) possesses a global classical solution which is bounded, where \(C^{\frac{1}{\frac{n}{2}+1}}_{\frac{n}{2}+1}\) is a positive constant which corresponds to the maximal Sobolev regularity.
When \(D(u)=1\) in (1.2) and the logistic source is replaced by \(u(a-\mu u^{r-1}-\lambda w)\), \(a\in \mathbb{R}\), \(n\geq 1\), \(\mu >0\), \(\lambda >0\), Zheng [44] has shown that when \(r>2\), or \(r=2\) with \(\mu > \mu ^{\star }=\frac{(n-2)_{+}}{n}(\chi +C_{\beta }) C^{\frac{1}{\frac{n}{2}+1}}_{\frac{n}{2}+1}\), the problem (1.2) possesses a global classical solution which is bounded, where \(C_{\beta }\) and \(C_{\frac{n}{2}+1}\) are positive constants. Many authors have considered the following Keller–Segel system:
$$ \textstyle\begin{cases} u_{t}=\nabla \cdot (D(u)\nabla u)-\nabla \cdot (S(u)\nabla v)+f(u),& x\in \varOmega ,t>0, \\ v_{t}=\Delta v-v+g(u),& x\in \varOmega ,t>0, \\ \frac{\partial u}{\partial \nu }=\frac{\partial v}{\partial \nu }=0,& x\in \partial \varOmega ,t>0, \\ u(x,0)=u_{0}(x),\qquad v(x,0)= v_{0}(x),& x\in \varOmega , \end{cases} $$
in a smoothly bounded domain \(\varOmega \subset \mathbb{R}^{n}\), \(n\geq 2\). Here the positive function \(D(u)\) represents the diffusivity of the cells, and the nonnegative function \(S(u)\) measures the chemotactic sensitivity. The functions \(f(u)\) and \(g(u)\) are the growth of u and the production of v, respectively. For \(D(u)=(1+u)^{-\alpha }\) (\(\alpha \in \mathbb{R}\)), \(S(u)=u(1+u)^{\beta -1}\) (\(\beta \in \mathbb{R}\)) and \(g(u)=u^{\eta }\) (\(\eta >0\)), Tao et al. [16] proved uniform-in-time boundedness of solutions to model (1.3) in the case \(f\equiv 0\), \(\eta \in (0,1]\) and \(\alpha +\beta +\eta <1+\frac{2}{n}\), or in the case \(f(u)=\gamma u-\mu u^{r}\), \(\gamma \in \mathbb{R}\), \(r>1\) and \(\beta +\eta < r\), or \(\beta +\eta =r\) and \(\mu \geq \mu _{0}\) for some \(\mu _{0}>0\); Wang et al.
[27] found that model (1.3) possesses a unique global-in-time classical solution for \(0<\alpha +\beta <\frac{2}{n}\) when \(f(u)=\gamma u-\mu u^{r}\), \(g(u)=u\), \(\gamma \in \mathbb{R}\), \(r>1\); Zheng [39] proved that model (1.3) possesses a unique global-in-time classical solution that is bounded in the case \(0<\alpha +\beta <\max \{r-1+\alpha ,\frac{2}{n}\}\), \(n\geq 1\), or \(\beta =r-1\) and μ large enough. Afterwards, Wang and Liu [31] improved the previous results on the boundedness of solutions to (1.3). When \(f(u)=u(1-u)\), \(g(u)=u\), \(n\geq 3\), Zheng [42] showed that model (1.3) possesses a unique global-in-time classical solution that is bounded in the case \(0<\alpha +\beta <\frac{4}{n+2}\). When \(f(u)=\gamma u-\mu u^{2}\), \(g(u)=u\), \(D(u)\geq \delta u^{-\alpha }\), \(\delta u^{\beta }\leq S(u)\leq \delta _{1} u^{\beta }\), \(\alpha ,\beta \in \mathbb{R}\), \(\delta _{1}>\delta >0\), \(u\geq u_{0}\) with some \(u_{0} > 1\), Cao [2] proved that model (1.3) possesses a unique global-in-time classical solution that is bounded in the case \(\beta <1\). For research on the corresponding quasilinear parabolic-elliptic problems, we refer to [37, 40] and the references therein. For the special case \(f\equiv 0\), \(g(u)=u\) in (1.3), Winkler [32] found that if \(\frac{S(u)}{D(u)}\) grows faster than \(u^{\frac{2}{n}}\) as \(u\rightarrow \infty \) and some further technical conditions are fulfilled, then there exist solutions that blow up in either finite or infinite time. Afterwards, Tao and Winkler [22] proved that solutions of (1.3) remain bounded under the condition that \(\frac{S(u)}{D(u)}\leq cu^{\alpha }\) with \(\alpha < \frac{2}{n}\) and \(c>0\) for all \(u>1\), provided that Ω is a convex domain and \(D(u)\) satisfies some other technical conditions. Then Ishida et al. [8] generalized the result obtained in [22] to non-convex domains.
For the special case \(D(u)=1\), \(S(u)=u\), \(f(u)=0\) and \(g(u)= u^{\eta }\) in (1.3), Liu and Tao [11] showed the global boundedness of solutions when \(0<\eta <\frac{2}{n}\). As to the case \(D(u)=1\), \(S(u)= \chi u\), \(f(u)=u-\mu u^{r}\) and \(g(u)= u(u+1)^{\eta -1}\) in (1.3) for all \(u\geq 0\) with some \(\chi >0\), \(r>1\), \(\eta >0\), Zhuang et al. [47] proved that model (1.3) possesses a globally bounded classical solution if \(r>\eta +1\), or if \(r=\eta +1\) and μ is large enough. When \(D(u)=1\), \(S(u)=\chi u\), \(f(u)=\gamma u-\mu u^{2}\), \(g(u)=u\), Osaki et al. [14] proved that the solutions of (1.3) are globally bounded in two space dimensions regardless of the size of \(\mu > 0\). Later, Winkler [33] found that model (1.3) possesses a global solution that is bounded under the conditions \(n\leq 3\), Ω convex, and \(\mu > 0\) sufficiently large. When the second PDE in (1.3) is replaced by \(0=\Delta v-\frac{1}{|\varOmega |}\int _{\varOmega }u+u\) and \(D(u)=1\), \(S(u)=\chi u\), \(f(u)=\gamma u-\mu u^{r}\), for all \(u\geq 0\) with some \(\chi >0\), \(\gamma \in \mathbb{R}\), \(\mu \geq 0 \), \(r\geq 1\), Winkler [34] found that model (1.3) possesses local-in-time solutions that blow up in finite time for \(r<\frac{3}{2}+ \frac{1}{2n-2}\) in dimension \(n\geq 5\). When the second PDE in (1.3) is replaced by \(0=\Delta v-v+u\) and \(D(u)=1\), \(S(u)=u\), \(f(u)= \gamma u-\mu u^{r}\), \(g(u)= u\), for all \(u\geq 0\) with some \(\gamma \in \mathbb{R}\), \(\mu \geq 0 \), \(r\geq 1\), Winkler [36] showed that (1.3) possesses solutions that blow up in finite time for \(r<\frac{7}{6}\) in dimension \(n=3,4\), or for \(r<1+\frac{1}{2n-2}\) in dimension \(n\geq 5\).
When the second PDE in (1.3) is replaced by \(0=\Delta v-\frac{1}{|\varOmega |}\int _{\varOmega }g(u)+g(u)\) and \(D(u)=1\), \(S(u)=u\), \(f(u)=0\), \(g(u)= u^{\eta }\), for all \(u\geq 0\) with some \(\eta >0 \), Winkler [35] proved the global boundedness of solutions when \(0<\eta <\frac{2}{n}\). Moreover, it is shown in [35] that if Ω is a ball, then there exist initial data such that the corresponding radially symmetric solution blows up in finite time if \(\eta >\frac{2}{n}\); hence \(\eta =\frac{2}{n}\) is critical. In addition, for studies on the parabolic-elliptic version, we refer the reader to the recent papers [7, 12, 38, 45]. Motivated by the above works, we consider the boundedness of solutions to the following quasilinear chemotaxis–haptotaxis model of parabolic–parabolic–ODE type:
$$ \textstyle\begin{cases} u_{t}=\nabla \cdot (D(u)\nabla u)-\chi \nabla \cdot (u\nabla v)\\ \hphantom{u_{t}=}{}- \xi \nabla \cdot (u\nabla w)+\mu u(1-u^{r-1}-w),& x\in \varOmega ,t>0, \\ v_{t}=\Delta v-v+u^{\eta },& x\in \varOmega ,t>0, \\ w_{t}=-vw,& x\in \varOmega ,t>0, \\ D(u)\frac{\partial u}{\partial \nu }-\chi u\frac{\partial v}{\partial \nu }-\xi u\frac{\partial w}{\partial \nu }=\frac{\partial v}{\partial \nu }=0,& x\in \partial \varOmega ,t>0, \\ u(x,0)=u_{0}(x),\qquad v(x,0)= v_{0}(x),\qquad w(x,0)= w_{0}(x),& x \in \varOmega , \end{cases} $$
under zero-flux boundary conditions in a smooth bounded domain \(\varOmega \subset \mathbb{R}^{n}\) (\(n\geq 2\)), with parameters \(r\geq 2\), \(\eta \in (0,1]\), \(\chi >0\), \(\xi >0\) and \(\mu >0\). This paper mainly aims to understand the competition among the nonlinear diffusion, the haptotaxis, the nonlinear logistic source and the nonlinear production.
The functions \(u_{0}\), \(v_{0}\), \(w_{0}\) are supposed to satisfy the smoothness assumptions
$$ \textstyle\begin{cases} u_{0} \in C(\bar{\varOmega }) & \text{with } u_{0}\geq 0 \text{ in } \varOmega \text{ and } u_{0}\neq 0, \\ v_{0} \in W^{1,\infty }(\varOmega ) & \text{with } v_{0}\geq 0 \text{ in } \varOmega , \\ w_{0} \in C^{2+\vartheta }(\bar{\varOmega }) & \text{for some } \vartheta \in (0,1) \text{ with } w_{0}\geq 0 \text{ in } \bar{\varOmega } \text{ and } \frac{\partial w_{0}}{\partial \nu }=0 \text{ on } \partial \varOmega . \end{cases} $$
We furthermore assume that
$$ D\in C^{2}\bigl([0,\infty)\bigr), \quad D(0)>0 $$
and
$$ D(u)\geq \delta u^{-\alpha } \quad \text{for all }u > 0 $$
with some \(\alpha \in \mathbb{R}\) and \(\delta >0 \). The main result of this paper reads as follows.

Theorem 1.1 Let \(n\geq 2 \), \(\chi >0\), \(\xi >0\), \(\mu >0\), \(r\geq 2\) and \(\eta \in (0,1]\), and let D be a function satisfying (1.6) and (1.7) with \(\alpha <\frac{n+2-2n\eta }{2+n}\). Then, for any initial data fulfilling (1.5), the problem (1.4) admits a unique classical solution which is global and bounded in \(\varOmega \times (0,\infty )\).

From our results, it is worth pointing out that the nonlinear production affects the range of nonlinear diffusion that guarantees the global boundedness of the solution to (1.4). Obviously, since (1.6) and (1.7) are equivalent to \(D(u)\geq \delta (u+1)^{-\alpha }\), for \(r=2\) and \(\eta =1\) Theorem 1.1 agrees with Wang [28, 29], who proved the boundedness of the solutions in the case \(n\geq 2\). This paper is structured as follows. In Sect. 2, we collect basic facts which will be used later. Section 3 is devoted to proving global existence and boundedness by using some \(L^{p}\)-estimate techniques and Moser–Alikakos iteration (see e.g. [1] and Lemma A.1 in [22]). We first state one result concerning local-in-time existence of a classical solution to model (1.4).
Lemma 2.1 Let \(\chi >0\), \(\xi >0\) and \(\mu >0 \), and assume that \(u_{0}\), \(v_{0}\) and \(w_{0}\) satisfy (1.5). Then the problem (1.4) admits a unique classical solution
$$ \textstyle\begin{cases} u\in C^{0}(\bar{\varOmega }\times [0,T_{\max }))\cap C^{2,1}(\bar{\varOmega }\times (0,T_{\max })), \\ v\in C^{0}(\bar{\varOmega }\times [0,T_{\max }))\cap C^{2,1}(\bar{\varOmega }\times (0,T_{\max })), \\ w\in C^{2,1}(\bar{\varOmega }\times (0,T_{\max })), \end{cases} $$
with \(u\geq 0 \), \(v\geq 0\) and \(0\leq w \leq \|w_{0}\|_{L^{\infty }(\varOmega )}\) for all \((x,t)\in \varOmega \times [0,T_{\max })\), where \(T_{\max } \) denotes the maximal existence time. In addition, if \(T_{\max } <+\infty \), then
$$ \bigl\Vert u(\cdot ,t) \bigr\Vert _{L^{\infty }(\varOmega )} \rightarrow \infty \quad \textit{as }t\nearrow T_{\max }. $$
The local-in-time existence of a classical solution to model (1.4) is well established by a fixed point theorem in the context of chemotaxis–haptotaxis systems. By the maximum principle, it is easy to obtain \(u\geq 0\) and \(v\geq 0\) for all \((x,t)\in \varOmega \times [0,T_{\max })\). Integrating the third equation in (1.4), it follows from (1.5) and \(v\geq 0\) that \(0\leq w \leq \|w_{0}\|_{L^{\infty }(\varOmega )}\) for all \((x,t)\in \varOmega \times [0,T_{\max })\). The proof is quite standard; for details, we refer the reader to [46]. □

For reference, we begin with Young's inequality, which states, for any positive numbers p and q with \(\frac{1}{p}+\frac{1}{q}=1\), that
$$ ab\leq \frac{a^{p}}{p}+\frac{b^{q}}{q},\quad \forall a,b\geq 0. $$
This immediately yields the so-called Young inequality with ϵ.

(Young's inequality with ϵ) Let p and q be two given positive numbers with \(\frac{1}{p}+\frac{1}{q}=1\). Then, for any \(\epsilon >0\),
$$ ab\leq \epsilon a^{p}+\frac{b^{q}}{(\epsilon p)^{\frac{q}{p}}q}, \quad \forall a,b\geq 0.
$$
In the proof of the main result, we will frequently use the following version of the Gagliardo–Nirenberg inequality; for details we refer the reader to [10].

(Gagliardo–Nirenberg inequality) Let \(\varOmega \subset \mathbb{R}^{n}\) be a bounded smooth domain and \(r\geq 1\), \(0< q\leq p\leq \infty \), \(s>0 \) be such that
$$ \frac{1}{r}\leq \frac{1}{n}+\frac{1}{p}. $$
Then there exists \(c>0\) such that
$$ \Vert u \Vert _{L^{p}(\varOmega )}\leq c\bigl( \Vert \nabla u \Vert ^{a}_{L^{r}(\varOmega )} \Vert u \Vert ^{1-a}_{L^{q}(\varOmega )}+ \Vert u \Vert _{L^{s}(\varOmega )}\bigr) \quad \textit{for all } u\in W^{1,r}(\varOmega )\cap L^{q}(\varOmega ), $$
where
$$ a=\frac{\frac{1}{q}-\frac{1}{p}}{\frac{1}{q}+\frac{1}{n}-\frac{1}{r}}. $$
This can be found in [10, Lemma 2.3]. □

The following lemma provides the basic estimates of solutions to (1.4). Let \((u,v,w)\) be the solution of (1.4). Then there exists \(C>0 \) depending on \(n\), \(\|v_{0}\|_{L^{1}(\varOmega )}\) and \(\|u_{0}\|_{L^{1}(\varOmega )}\) such that
$$ \begin{aligned} &\bigl\Vert u(\cdot ,t) \bigr\Vert _{L^{1}(\varOmega )}\leq C, \qquad \bigl\Vert v(\cdot ,t) \bigr\Vert _{L^{1}(\varOmega )}\leq C,\\ & \bigl\Vert \nabla v(\cdot ,t) \bigr\Vert _{L^{2}(\varOmega )}\leq C \quad \textit{for all } t\in (0,T_{\max }). \end{aligned} $$
(i) Integrating the first equation in (1.4) with respect to \(x\in \varOmega \), we have
$$ \frac{d}{dt} \int _{\varOmega } u \leq \mu \int _{\varOmega }u-\mu \int _{\varOmega }u^{r}. $$
Thus
$$ \frac{d}{dt} \int _{\varOmega } u + \int _{\varOmega }u \leq (\mu +1) \int _{\varOmega }u-\mu \int _{\varOmega }u^{r}, $$
since \(w\geq 0\) by Lemma 2.1. Moreover, by Young's inequality (2.3), we get
$$ \frac{d}{dt} \int _{\varOmega } u + \int _{\varOmega }u\leq \widetilde{C}_{1}, $$
where \(\widetilde{C}_{1}>0\), like all subsequently appearing constants \(\widetilde{C}_{2}>0\) and \(\widetilde{C}_{3}>0\), depends on \(n\), \(\|v_{0}\|_{L^{1}(\varOmega )}\) and \(\|u_{0}\|_{L^{1}(\varOmega )}\).
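The "ODE comparison" step invoked below (and again in parts (ii) and (iii)) is the following elementary observation, spelled out here for completeness with generic constants \(c\) and \(K\):

```latex
% If y'(t) + c\,y(t) \le K on (0,T_{\max}) with c>0, multiply by e^{ct}:
\frac{d}{dt}\bigl(e^{ct}y(t)\bigr) \le K e^{ct},
\qquad\text{so, integrating from } 0 \text{ to } t,\qquad
y(t) \le e^{-ct}y(0) + \frac{K}{c}\bigl(1-e^{-ct}\bigr)
\le \max\Bigl\{y(0),\,\frac{K}{c}\Bigr\}.
```

Applied with \(y(t)=\int _{\varOmega }u\), \(c=1\) and \(K=\widetilde{C}_{1}\), this yields the asserted \(L^{1}\) bound on u.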
Upon ODE comparison, we can prove that \(\|u(\cdot ,t)\|_{L^{1}(\varOmega )}\leq C\). (ii) Integrating the second equation in (1.4) with respect to \(x\in \varOmega \) yields $$ \frac{d}{dt} \int _{\varOmega }v+ \int _{\varOmega }v= \int _{\varOmega }u^{\eta }. $$ Moreover, if \(\eta \in (0,1)\), by Young's inequality (2.3), we get $$ \int _{\varOmega }u^{\eta } \leq \int _{\varOmega }u+(1-\eta ) \vert \varOmega \vert \leq \widetilde{C}_{2}. $$ If \(\eta =1\), we get $$ \frac{d}{dt} \int _{\varOmega }v+ \int _{\varOmega }v= \int _{\varOmega }u \leq C. $$ In summary, upon ODE comparison, we can prove \(\|v(\cdot ,t)\|_{L^{1}( \varOmega )}\leq C\). (iii) Multiplying the second equation in (1.4) by \(-\Delta v\), integrating over Ω and using Young's inequality, we find $$ \frac{1}{2}\frac{d}{dt} \int _{\varOmega } \vert \nabla v \vert ^{2}+ \int _{\varOmega } \vert \Delta v \vert ^{2} + \int _{\varOmega } \vert \nabla v \vert ^{2} =- \int _{\varOmega }u^{\eta } \Delta v \leq \int _{\varOmega } \vert \Delta v \vert ^{2}+ \frac{1}{4} \int _{\varOmega }u ^{2\eta } $$ and thus $$ \frac{d}{dt} \int _{\varOmega } \vert \nabla v \vert ^{2}+2 \int _{\varOmega } \vert \nabla v \vert ^{2} \leq \frac{1}{2} \int _{\varOmega }u^{2\eta }. $$ Combining this with (2.5), we obtain $$ \frac{d}{dt} \int _{\varOmega }\bigl( \vert \nabla v \vert ^{2}+u \bigr)+2 \int _{\varOmega }\bigl( \vert \nabla v \vert ^{2}+u \bigr) \leq \frac{1}{2} \int _{\varOmega }u^{2\eta }+(\mu +2) \int _{ \varOmega }u-\mu \int _{\varOmega }u^{r}. $$ Moreover, by Young's inequality (2.3), we get $$ \frac{d}{dt} \int _{\varOmega }\bigl( \vert \nabla v \vert ^{2}+u \bigr)+2 \int _{\varOmega }\bigl( \vert \nabla v \vert ^{2}+u \bigr) \leq \widetilde{C}_{3}. $$ Upon ODE comparison, we can prove \(\|\nabla v(\cdot ,t)\|_{L^{2}( \varOmega )}\leq C\). □ Let \((u,v,w)\) be the classical solution of (1.4) in \(\varOmega \times (0,T_{\max })\).
Then, for any \(k>1\), $$ - \int _{\varOmega }u^{k-1}\nabla \cdot (u\nabla w)\leq c_{1} \biggl( \int _{ \varOmega }u^{k}+ \int _{\varOmega }u^{k}v+ k \int _{\varOmega }u^{k-1} \vert \nabla u \vert \biggr) $$ with constant \(c_{1}>0\) independent of k. Firstly, we follow the well-known precedent in [18] and give the estimate for Δw. Since the third equation in (1.4) is an ODE, we have $$\begin{aligned}& \begin{gathered} w(x,t)=w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds}, \\ \nabla w(x,t)= \nabla w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds}-w_{0}(x)e^{- \int ^{t}_{0}v(x,s)\,ds} \int ^{t}_{0}\nabla v(x,s)\,ds, \end{gathered} \end{aligned}$$ $$ \begin{aligned} \Delta w(x,t) ={}& \Delta w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds}-2e^{-\int ^{t}_{0}v(x,s)\,ds} \nabla w_{0}(x)\cdot \int ^{t}_{0}\nabla v(x,s)\,ds \\ &{} +w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds}\cdot \biggl\vert \int ^{t}_{0} \nabla v(x,s)\,ds \biggr\vert ^{2} -w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds} \int ^{t}_{0}\Delta v(x,s)\,ds \end{aligned} $$ and hence $$ \begin{aligned}[b] \Delta w(x,t)\geq{} &\Delta w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds}-2e^{-\int ^{t}_{0}v(x,s)\,ds} \nabla w_{0}(x)\cdot \int ^{t}_{0}\nabla v(x,s)\,ds \\ &{} -w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds} \int ^{t}_{0}\Delta v(x,s)\,ds. \end{aligned} $$ Since \(\frac{\partial w_{0}}{\partial \nu }=0\) and \(\frac{\partial v}{\partial \nu }=0\), (2.7) shows that \(\frac{\partial w}{ \partial \nu }=0\). Therefore, the zero-flux boundary condition in (1.4) becomes $$ \frac{\partial u}{\partial \nu }=\frac{\partial v}{\partial \nu }=\frac{ \partial w}{\partial \nu }=0, \quad x\in \partial \varOmega , t>0.
$$ Hence, for any \(k\geq 1\), integrating by parts and using (2.8), we obtain $$ \begin{aligned}[b] &- \int _{\varOmega }u^{k-1}\nabla \cdot (u\nabla w)\,dx \\ &\quad =(k-1) \int _{\varOmega }u^{k-1}\nabla u\cdot \nabla w\,dx \\ &\quad =-\frac{k-1}{k} \int _{\varOmega }u^{k}\Delta w\,dx \\ &\quad \leq \frac{k-1}{k} \int _{\varOmega }u^{k}\biggl(-\Delta w_{0}(x)e ^{-\int ^{t}_{0}v(x,s)\,ds}+2e^{-\int ^{t}_{0}v(x,s)\,ds} \nabla w_{0}(x) \cdot \int ^{t}_{0}\nabla v(x,s)\,ds \\ & \qquad{} +w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds} \int ^{t}_{0}\Delta v(x,s)\,ds\biggr)\,dx \\ &\quad =:J_{1}+J_{2}+J_{3} , \end{aligned} $$ $$\begin{aligned}& J_{1}=-\frac{k-1}{k} \int _{\varOmega }u^{k}\Delta w_{0}(x)e^{-\int ^{t} _{0}v(x,s)\,ds}\,dx, \\& J_{2}=\frac{2(k-1)}{k} \int _{\varOmega }u^{k}e^{-\int ^{t}_{0}v(x,s)\,ds} \nabla w_{0}(x)\cdot \int ^{t}_{0}\nabla v(x,s)\,ds\,dx \end{aligned}$$ $$ J_{3}=\frac{k-1}{k} \int _{\varOmega }u^{k}w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds} \int ^{t}_{0}\Delta v(x,s)\,ds\,dx. 
$$ Now, since \(v\geq 0\), we have $$ -\Delta w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds} \leq \Vert \Delta w_{0} \Vert _{L ^{\infty }(\varOmega )}\quad \text{for all } (x,t)\in \varOmega \times (0,T_{\max }), $$ and thus $$\begin{aligned}& J_{1}\leq \Vert \Delta w_{0} \Vert _{L^{\infty }(\varOmega )} \int _{\varOmega }u^{k}\,dx, \end{aligned}$$ $$\begin{aligned}& \begin{aligned}[b] J_{2} ={}&{-} \frac{2(k-1)}{k} \int _{\varOmega }u^{k}\nabla e^{-\int ^{t}_{0}v(x,s)\,ds} \cdot \nabla w_{0}(x)\,dx \\ ={}&2(k-1) \int _{\varOmega }u^{k-1}\nabla u\cdot \nabla w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds} \,dx \\ &{}+\frac{2(k-1)}{k} \int _{\varOmega }u^{k}\Delta w_{0}(x)e ^{-\int ^{t}_{0}v(x,s)\,ds} \,dx \\ \leq{}& c_{1} k \int _{\varOmega }u^{k-1} \vert \nabla u \vert \,dx+c_{1} \int _{\varOmega }u^{k}\,dx, \end{aligned} \end{aligned}$$ $$ \begin{aligned}[b] J_{3} &= \frac{k-1}{k} \int _{\varOmega }u^{k}w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds} \int ^{t}_{0} \bigl(v_{s}(x,s)+v(x,s)-u^{\eta }(x,s) \bigr)\,ds\,dx \\ &\leq \frac{k-1}{k} \int _{\varOmega }u^{k}w_{0}(x)e^{-\int ^{t}_{0}v(x,s)\,ds} \biggl(v(x,t)-v_{0}(x)+ \int ^{t}_{0} v(x,s)\,ds \biggr)\,dx \\ &\leq c_{1} \int _{\varOmega }u^{k}v\,dx + c_{1} \int _{\varOmega }u^{k}\,dx \end{aligned} $$ for all \((x,t)\in \varOmega \times (0,T_{\max })\), where we have used the facts that \(ze^{-z}\leq \frac{1}{e}\) for all \(z\in \mathbb{R}\) and \(0< e^{-\int ^{t}_{0} v(x,s)\,ds}\leq 1\) thanks to \(v\geq 0\). Inserting (2.10)–(2.12) into (2.9) yields $$ - \int _{\varOmega }u^{k-1}\nabla \cdot (u\nabla w)\,dx\leq c_{1} \int _{ \varOmega }u^{k}\,dx+c_{1} \int _{\varOmega }u^{k}v\,dx+c_{1} k \int _{\varOmega }u ^{k-1} \vert \nabla u \vert \,dx. $$ Let \(n\geq 2\), \(\eta \in (0,1]\), \(\alpha <\frac{n+2-2n\eta }{2+n}\), \(\theta _{1}=\frac{2(k+1)}{1-\alpha }\), \(\theta _{2}= \frac{2(k+1)(m-1)}{k-2\eta +1}\) and \(\kappa _{i}=\frac{\frac{m}{2}-\frac{m}{ \theta _{i}}}{\frac{m}{2}-\frac{1}{2}+\frac{1}{n}}\), \(i=1,2\).
Then, for all sufficiently large \(k>1\), there exists a large \(m>1\) such that the following inequalities are valid: $$ \theta _{i}>2,\qquad m>\frac{n-2}{2n}\theta _{i},\qquad 2m>\max \{\theta _{i}\kappa _{i},k+1\} \quad \textit{for } i=1,2. $$ Since \(2m>\theta _{i}\kappa _{i}\) is equivalent to \(m>\frac{\theta _{i}}{2}-\frac{2}{n}\), it is sufficient to show that if \(\alpha <\frac{n+2-2n\eta }{2+n}\), then, for all sufficiently large \(k>1\), there exists a large \(m>1\) satisfying \(m>\frac{\theta _{i}}{2}- \frac{2}{n}\) (\(i=1,2\)) and \(2m>k+1\), which can be achieved by the fact that \(m>\frac{\theta _{i}}{2}-\frac{2}{n}\) (\(i=1,2\)) is equivalent to \(\frac{k+1}{1-\alpha }-\frac{2}{n}< m<\frac{(k+1)(n+2)}{2n\eta }- \frac{2}{n}\). □ In this section, we are going to establish an iteration step to develop the main ingredient of our result. The iteration depends on a series of a priori estimates. Firstly, based on the estimates in Lemma 2.4, we use test function arguments to derive the bound of u in \(L^{k}(\varOmega )\) and ∇v in \(L^{2m}(\varOmega )\) for all sufficiently large \(k,m>1\), which is the main step towards our proof of Theorem 1.1. Assume that D satisfies (1.6) and (1.7) with \(\alpha <\frac{n+2-2n \eta }{2+n}\). Then, for all large numbers \(m>2\), \(k>1\) as provided by Lemma 2.6, there exists \(C>0\) such that $$ \bigl\Vert u(\cdot ,t) \bigr\Vert _{L^{k}(\varOmega )}\leq C, \qquad \bigl\Vert \nabla v(\cdot ,t) \bigr\Vert _{L^{2m}}\leq C \quad \textit{for all } t\in (0,T_{\max }). $$ Multiplying the first equation in (1.4) by \(ku^{k-1}\) and integrating over Ω, we get $$ \begin{aligned}[b] &\frac{d}{dt} \Vert u \Vert ^{k}_{L^{k}(\varOmega )}+k(k-1) \int _{\varOmega }u^{k-2}D(u) \vert \nabla u \vert ^{2}+ k\mu \int _{\varOmega }u^{k+r-1} \\ &\quad \leq -k\chi \int _{\varOmega }\nabla \cdot (u\nabla v)u^{k-1}-k \xi \int _{\varOmega }\nabla \cdot (u\nabla w)u^{k-1} +k\mu \int _{\varOmega }u^{k} .
\end{aligned} $$ By (1.7), we have $$ \delta k(k-1) \int _{\varOmega }u^{k-2-\alpha } \vert \nabla u \vert ^{2}\leq k(k-1) \int _{\varOmega }u^{k-2}D(u) \vert \nabla u \vert ^{2}. $$ By Young's inequality, the first term on the right-hand side of (3.2) can be estimated as $$ \begin{aligned}[b] & -k\chi \int _{\varOmega }\nabla \cdot (u\nabla v)u^{k-1} \\ & \quad =k\chi \int _{\varOmega }u\nabla v\cdot \nabla u^{k-1} \\ & \quad =k(k-1)\chi \int _{\varOmega }u^{k-1}\nabla u\cdot \nabla v \\ & \quad \leq \frac{\delta k(k-1)}{2} \int _{\varOmega }u^{k-2-\alpha } \vert \nabla u \vert ^{2} +\frac{\chi ^{2}k(k-1)}{2\delta } \int _{\varOmega }u^{k+\alpha } \vert \nabla v \vert ^{2} . \end{aligned} $$ Combining the second term on the right-hand side of (3.2) with (2.6) yields $$ \begin{aligned}[b] &-k\xi \int _{\varOmega }u^{k-1}\nabla \cdot (u\nabla w) \leq c_{1}k \xi \int _{\varOmega }u^{k}+ c_{1}k\xi \int _{\varOmega }u^{k}v+ c_{1}k^{2} \xi \int _{\varOmega }u^{k-1} \vert \nabla u \vert \\ & \quad \leq c_{1}k\xi \int _{\varOmega }u^{k}+ c_{1}k\xi \int _{\varOmega }u^{k}v+\frac{ \delta k(k-1)}{4} \int _{\varOmega }u^{k-2-\alpha } \vert \nabla u \vert ^{2} +\frac{c ^{2}_{1}\xi ^{2}k^{3}}{\delta (k-1)} \int _{\varOmega }u^{k+\alpha } . \end{aligned} $$ Hence, inserting (3.3)–(3.5) into (3.2) yields $$ \begin{aligned}[b] &\frac{d}{dt} \Vert u \Vert ^{k}_{L^{k}(\varOmega )}+\frac{\delta k(k-1)}{4} \int _{\varOmega }u^{k-2-\alpha } \vert \nabla u \vert ^{2} +k\mu \int _{\varOmega }u^{k+r-1} \\ & \quad \leq \frac{\chi ^{2}k(k-1)}{2\delta } \int _{\varOmega }u^{k+\alpha } \vert \nabla v \vert ^{2}+ c_{1}k\xi \int _{\varOmega }u^{k}+ c_{1}k\xi \int _{\varOmega }u^{k}v \\ & \qquad{} +\frac{c^{2}_{1}\xi ^{2}k^{3}}{\delta (k-1)} \int _{\varOmega }u^{k+\alpha } +k\mu \int _{\varOmega }u^{k} .
\end{aligned} $$ Dropping the nonnegative gradient term on the left-hand side of (3.6), we have $$ \begin{aligned}\frac{d}{dt} \Vert u \Vert ^{k}_{L^{k}(\varOmega )}+k\mu \int _{\varOmega }u^{k+r-1} \leq{}& \frac{\chi ^{2}k(k-1)}{2\delta } \int _{\varOmega }u^{k+\alpha } \vert \nabla v \vert ^{2}+ c_{1}k\xi \int _{\varOmega }u^{k}+ c_{1}k\xi \int _{\varOmega }u^{k}v \\ & {} +\frac{c^{2}_{1}\xi ^{2}k^{3}}{\delta (k-1)} \int _{\varOmega }u^{k+ \alpha } +k\mu \int _{\varOmega }u^{k}. \end{aligned} $$ Furthermore, using Young's inequality, we can find $$ \frac{d}{dt} \Vert u \Vert ^{k}_{L^{k}(\varOmega )}+c_{2} \int _{\varOmega }u^{k+1} \leq \frac{\chi ^{2}k(k-1)}{2\delta } \int _{\varOmega }u^{k+\alpha } \vert \nabla v \vert ^{2}+ c_{2} \int _{\varOmega }v^{k+1}+c_{2}, $$ where \(c_{2}>0\), like all subsequently appearing constants \(c_{3},c_{4}, \ldots,c_{16}\), possibly depends on k, m, μ, ξ, r, η, \(|\varOmega |\) and δ. Differentiating the second equation in (1.4), we obtain $$ \frac{d}{dt} \vert \nabla v \vert ^{2}=2\nabla v\cdot \nabla \Delta v-2 \vert \nabla v \vert ^{2}+2 \nabla u^{\eta }\cdot \nabla v , $$ and hence, according to the identity $$ \Delta \vert \nabla v \vert ^{2}=2\nabla v\cdot \nabla \Delta v+2 \bigl\vert D^{2}v \bigr\vert ^{2}, $$ we obtain $$ \frac{d}{dt} \vert \nabla v \vert ^{2}=\Delta \vert \nabla v \vert ^{2}-2 \bigl\vert D^{2}v \bigr\vert ^{2}+2 \nabla u^{\eta }\cdot \nabla v.
$$ Testing this by \(m|\nabla v|^{2m-2} \) yields $$ \begin{aligned}[b] &\frac{d}{dt} \int _{\varOmega } \vert \nabla v \vert ^{2m}+m(m-1) \int _{\varOmega } \vert \nabla v \vert ^{2m-4} \bigl\vert \nabla \vert \nabla v \vert ^{2} \bigr\vert ^{2}\\ &\qquad {} +2m \int _{ \varOmega } \vert \nabla v \vert ^{2m-2} \bigl\vert D^{2}v \bigr\vert ^{2}+2m \int _{\varOmega } \vert \nabla v \vert ^{2m} \\ & \quad \leq 2m \int _{\varOmega } \vert \nabla v \vert ^{2m-2}\nabla u^{\eta }\cdot \nabla v +m \int _{\partial \varOmega }\frac{\partial \vert \nabla v \vert ^{2}}{\partial \nu } \vert \nabla v \vert ^{2m-2}. \end{aligned} $$ On the other hand, based on the estimate of Mizoguchi–Souplet [13], the Gagliardo–Nirenberg inequality and boundedness of ∇v in \(L^{2}(\varOmega )\), we can conclude that $$ m \int _{\partial \varOmega }\frac{\partial \vert \nabla v \vert ^{2}}{\partial \nu } \vert \nabla v \vert ^{2m-2} \leq c_{3} \biggl( \int _{\varOmega } \bigl\vert \nabla \vert \nabla v \vert ^{m} \bigr\vert ^{2} \biggr)^{b}+c_{3} $$ with some \(b\in (0,1)\). Therefore, combining (3.8) with (3.9) and applying Young's inequality, we have $$ \begin{aligned}[b] &\frac{d}{dt} \int _{\varOmega } \vert \nabla v \vert ^{2m}+ \frac{m(m-1)}{2} \int _{ \varOmega } \vert \nabla v \vert ^{2m-4} \bigl\vert \nabla \vert \nabla v \vert ^{2} \bigr\vert ^{2} \\ &\qquad {}+2m \int _{\varOmega } \vert \nabla v \vert ^{2m-2} \bigl\vert D^{2}v \bigr\vert ^{2}+2m \int _{\varOmega } \vert \nabla v \vert ^{2m} \\ & \quad \leq 2m \int _{\varOmega } \vert \nabla v \vert ^{2m-2}\nabla u^{\eta }\cdot \nabla v+c _{4} \end{aligned} $$ due to \(\int _{\varOmega }|\nabla v|^{2m-4}|\nabla |\nabla v|^{2}|^{2} = \frac{4}{m}\int _{\varOmega }|\nabla |\nabla v|^{m}|^{2}\). 
Hence, using the pointwise identity \(\nabla |\nabla v|^{2m-2}=(m-1)| \nabla v|^{2m-4}\nabla |\nabla v|^{2}\) and the inequality \(|\Delta v|^{2}\leq n|D ^{2}v|^{2}\), integrating by parts in the first term on the right-hand side of (3.10), and applying Young's inequality, we have $$\begin{aligned} &2m \int _{\varOmega } \vert \nabla v \vert ^{2m-2}\nabla u^{\eta }\cdot \nabla v \\ &\quad =-2m(m-1) \int _{\varOmega }u^{\eta } \vert \nabla v \vert ^{2m-4}\nabla v\cdot \nabla \vert \nabla v \vert ^{2} -2m \int _{\varOmega }u^{\eta } \vert \nabla v \vert ^{2m-2} \Delta v \\ &\quad \leq \frac{m(m-1)}{4} \int _{\varOmega } \vert \nabla v \vert ^{2m-4} \bigl\vert \nabla \vert \nabla v \vert ^{2} \bigr\vert ^{2} +4m(m-1) \int _{\varOmega }u^{2\eta } \vert \nabla v \vert ^{2m-2} \\ &\qquad{} +\frac{m}{n} \int _{\varOmega } \vert \nabla v \vert ^{2m-2} \vert \Delta v \vert ^{2} +mn \int _{\varOmega }u^{2\eta } \vert \nabla v \vert ^{2m-2} \\ &\quad \leq \frac{m(m-1)}{4} \int _{\varOmega } \vert \nabla v \vert ^{2m-4} \bigl\vert \nabla \vert \nabla v \vert ^{2} \bigr\vert ^{2} +\bigl(4m(m-1)+mn\bigr) \int _{\varOmega }u^{2\eta } \vert \nabla v \vert ^{2m-2} \\ &\qquad{} +m \int _{\varOmega } \vert \nabla v \vert ^{2m-2} \bigl\vert D^{2}v \bigr\vert ^{2}. \end{aligned}$$ Hence, inserting (3.11) into (3.10) yields $$ \begin{aligned}[b] &\frac{d}{dt} \int _{\varOmega } \vert \nabla v \vert ^{2m}+(m-1) \int _{\varOmega } \bigl\vert \nabla \vert \nabla v \vert ^{m} \bigr\vert ^{2} +2m \int _{\varOmega } \vert \nabla v \vert ^{2m} \\ & \quad \leq \bigl(4m(m-1)+mn\bigr) \int _{\varOmega }u^{2\eta } \vert \nabla v \vert ^{2m-2}+c_{4}.
\end{aligned} $$ Hence combining (3.7) with (3.12) and using Young's inequality, we can find $$ \begin{aligned}[b] &\frac{d}{dt} \int _{\varOmega }\bigl(u^{k}+ \vert \nabla v \vert ^{2m}\bigr) +c_{5} \int _{\varOmega }\bigl( \bigl\vert \nabla \vert \nabla v \vert ^{m} \bigr\vert ^{2}+ \vert \nabla v \vert ^{2m}\bigr) +c_{5} \int _{\varOmega }u^{k+1} \\ & \quad \leq c_{6} \int _{\varOmega }u^{k+\alpha } \vert \nabla v \vert ^{2}+ c_{6} \int _{ \varOmega }u^{2\eta } \vert \nabla v \vert ^{2m-2} +c_{6} \int _{\varOmega }v^{k+1}+c_{6} \\ & \quad \leq \frac{c_{5}}{2} \int _{\varOmega }u^{k+1}+ c_{7} \int _{\varOmega }\bigl( \vert \nabla v \vert ^{\theta _{1}}+ \vert \nabla v \vert ^{\theta _{2}}\bigr) +c_{6} \int _{\varOmega }v ^{k+1}+c_{6} \end{aligned} $$ with \(\theta _{i}\) (\(i=1,2\)) as shown in Lemma 2.6. According to the Gagliardo–Nirenberg inequality, (2.4) and Lemma 2.6, we have $$ \begin{aligned}[b] c_{7} \int _{\varOmega } \vert \nabla v \vert ^{\theta _{i}} &=c_{7} \bigl\Vert \vert \nabla v \vert ^{m} \bigr\Vert ^{\frac{\theta _{i}}{m}}_{L^{\frac{\theta _{i}}{m}}} \\ &\leq c_{8} \bigl( \bigl\Vert \nabla \vert \nabla v \vert ^{m} \bigr\Vert ^{\kappa _{i}}_{L^{2}( \varOmega )} \bigl\Vert \vert \nabla v \vert ^{m} \bigr\Vert ^{1-\kappa _{i}}_{L^{\frac{2}{m}}(\varOmega )} + \bigl\Vert |\nabla v|^{m} \bigr\Vert _{L^{\frac{2}{m}}(\varOmega )} \bigr)^{\frac{\theta _{i}}{m}} \\ & \leq c_{9} \bigl\Vert \nabla \vert \nabla v \vert ^{m} \bigr\Vert ^{ \frac{\theta _{i}\kappa _{i}}{m}}_{L^{2}(\varOmega )} +c_{9} \\ & \leq \frac{c_{5}}{2} \bigl\Vert \nabla \vert \nabla v \vert ^{m} \bigr\Vert ^{2}_{L^{2}(\varOmega )} +c_{10}. 
\end{aligned} $$ Due to the boundedness of \(\|v\|_{W^{1,2}(\varOmega )} \) (see Lemma 2.4) and Lemma 2.6, and by the Sobolev inequality and Young's inequality, we can find $$\begin{aligned} c_{6} \int _{\varOmega }v^{k+1} \leq& c_{11} \Vert v \Vert ^{k+1}_{L^{\infty }(\varOmega )} \leq c_{12} \Vert v \Vert ^{k+1}_{L^{n+1}(\varOmega )}+c_{12} \\ \leq& c_{13} \Vert v \Vert ^{k+1}_{L^{2m}(\varOmega )}+c_{12} \leq \frac{c_{5}}{2} \int _{\varOmega } \vert \nabla v \vert ^{2m}+c_{14}. \end{aligned}$$ Hence substituting (3.14) and (3.15) into (3.13) yields $$ \frac{d}{dt} \int _{\varOmega }\bigl(u^{k}+ \vert \nabla v \vert ^{2m}\bigr)+\frac{c_{5}}{2} \int _{\varOmega }\bigl(u^{k+1}+ \vert \nabla v \vert ^{2m}\bigr) \leq c_{15}, $$ and then, by Young's inequality, we can find $$ \frac{d}{dt} \int _{\varOmega }\bigl(u^{k}+ \vert \nabla v \vert ^{2m}\bigr)+\frac{c_{5}}{2} \int _{\varOmega }\bigl(u^{k}+ \vert \nabla v \vert ^{2m}\bigr) \leq c_{16} $$ for sufficiently large \(k>1\), \(m>1\). Consequently, \(y(t):=\int _{\varOmega }(u ^{k}+|\nabla v|^{2m})\) satisfies \(y'(t)+\frac{c_{5}}{2}y(t)\leq c_{16}\). Upon an ODE comparison argument, we have \(y(t)\leq \max \{y(0),\frac{2c _{16}}{c_{5}}\} \) for all \(t\in (0,T_{\max })\). The proof of Lemma 3.1 is complete. □ Since \(\|u(\cdot ,t)\|_{L^{k}(\varOmega )}\) is bounded for any large k, by the fundamental estimates for the Neumann semigroup (see [8, Lemma 2.1]) or the standard regularity theory of parabolic equations, we immediately have the following corollary. Corollary 3.1 Let \(\chi >0\), \(\xi >0 \) and \(\mu >0\), and assume that \((u_{0},v_{0},w _{0})\) satisfy (1.5). Then there exists \(C>0\) such that the solution \((u,v,w)\) of (1.4) satisfies $$ \bigl\Vert v(\cdot ,t) \bigr\Vert _{W^{1,\infty }(\varOmega )}\leq C \quad \textit{for all } t\in (0,T_{\max }). $$ Now we can prove our main result. The derivation of the following statement can be obtained by a well-established Moser–Alikakos iteration technique (see e.g.
[1] and Lemma A.1 in [22]). We choose (3.6) as a starting point for our proof. Under the same assumptions as in Lemma 3.1, there exists \(C>0\) such that the solution \((u,v,w)\) of (1.4) satisfies $$ \bigl\Vert u(\cdot ,t) \bigr\Vert _{L^{\infty }(\varOmega )}\leq C \quad \textit{for all } t\in (0,T_{\max }). $$ We begin with (3.6), $$ \begin{aligned} &\frac{d}{dt} \Vert u \Vert ^{k}_{L^{k}(\varOmega )}+\frac{\delta k(k-1)}{4} \int _{\varOmega }u^{k-2-\alpha } \vert \nabla u \vert ^{2} +k\mu \int _{\varOmega }u^{k+r-1} \\ & \quad \leq \frac{\chi ^{2}k(k-1)}{2\delta } \int _{\varOmega }u^{k+\alpha } \vert \nabla v \vert ^{2}+ c_{1}k\xi \int _{\varOmega }u^{k}+ c_{1}k\xi \int _{\varOmega }u^{k}v \\ & \qquad{} +\frac{c^{2}_{1}\xi ^{2}k^{3}}{\delta (k-1)} \int _{\varOmega }u^{k+\alpha } +k\mu \int _{\varOmega }u^{k}, \end{aligned} $$ which, along with (3.16), implies that $$ \begin{aligned} &\frac{d}{dt} \Vert u \Vert ^{k}_{L^{k}(\varOmega )}+\frac{\delta k(k-1)}{4} \int _{\varOmega }u^{k-2-\alpha } \vert \nabla u \vert ^{2} +k\mu \int _{\varOmega }u^{k+r-1} \\ & \quad \leq c_{17}k(k-1) \int _{\varOmega }u^{k+\alpha }+ c_{17}k \int _{\varOmega }u ^{k} +\frac{c^{2}_{1}\xi ^{2}k^{3}}{\delta (k-1)} \int _{\varOmega }u^{k+ \alpha } +k\mu \int _{\varOmega }u^{k} , \end{aligned} $$ where \(c_{17}>0\), like all subsequently appearing constants \(c_{18}, c _{19},\ldots\), is independent of k. Since \(\alpha <1\), \(r>2\) and \(u\geq 0\), by Young's inequality we can find $$ \frac{d}{dt} \int _{\varOmega }u^{k}+c_{18} \int _{\varOmega } \bigl\vert \nabla u^{\frac{k- \alpha }{2}} \bigr\vert ^{2} + \int _{\varOmega }u^{k} \leq c_{19}k^{2} \int _{\varOmega }u ^{k}. $$ We now recursively define $$\begin{aligned}& k:=b_{i}=\frac{2}{s}\cdot b_{i-1}+\alpha ,\quad i\geq 1, \end{aligned}$$ $$\begin{aligned}& \epsilon _{i}:=\frac{2b_{i}(1-a)}{s(b_{i}-b_{i}a-\alpha )},\quad i\geq 1, \end{aligned}$$ $$ M_{i}:=\sup_{t\in (0,T)} \int _{\varOmega }u^{b_{i}},\quad i\in \mathbb{N}.
$$ Note that \((b_{i})_{i\in \mathbb{N}}\) increases and $$ c_{23}\cdot \biggl(\frac{2}{s} \biggr)^{i}\leq b_{i} \leq c_{24} \cdot \biggl(\frac{2}{s} \biggr)^{i} \quad \text{for all } i \in \mathbb{N}, $$ where we chose \(s\in (0,2)\). Now invoking the Gagliardo–Nirenberg inequality (Lemma 2.3), we find \(c_{20}>0\) independent of k, such that $$ \int _{\varOmega }u^{b_{i}}= \bigl\Vert u^{\frac{b_{i}-\alpha }{2}} \bigr\Vert ^{\frac{2b _{i}}{b_{i}-\alpha }}_{L^{\frac{2b_{i}}{b_{i}-\alpha }}} \leq c_{20} \bigl\Vert \nabla u^{\frac{b_{i}-\alpha }{2}} \bigr\Vert ^{\frac{2b_{i}}{b_{i}-\alpha }a} _{L^{2}(\varOmega )} \cdot \bigl\Vert u^{\frac{b_{i}-\alpha }{2}} \bigr\Vert ^{\frac{2b_{i}}{b _{i}-\alpha }(1-a)}_{L^{s}(\varOmega )} +c_{20} \bigl\Vert u^{ \frac{b_{i}-\alpha }{2}} \bigr\Vert ^{\frac{2b_{i}}{b_{i}-\alpha }}_{L^{s}( \varOmega )} $$ for all \(t\in (0,T_{\max })\), with $$ a=\frac{\frac{1}{s}-\frac{1}{\frac{2b_{i}}{b_{i}-\alpha }}}{ \frac{1}{s}+\frac{1}{n}-\frac{1}{2}} =\frac{\frac{n}{s}-\frac{n}{\frac{2b _{i}}{b_{i}-\alpha }}}{\frac{n}{s}+1-\frac{n}{2}}\in (0,1). $$ Assume \(b_{i}>\max \{\frac{n\alpha }{2},\frac{\alpha }{1-a}\}\), so we have \(\frac{b_{i}a}{b_{i}-\alpha }<1\). Combining (3.18) with (3.23) and using Young's inequality, we obtain $$ \frac{d}{dt} \int _{\varOmega }u^{b_{i}}+ \int _{\varOmega }u^{b_{i}} \leq c _{21}b_{i}^{2} \biggl( \biggl( \int _{\varOmega }u^{ \frac{s(b_{i}-\alpha )}{2}} \biggr)^{\frac{2b_{i}(1-a)}{s(b_{i}- \alpha )}} \biggr)^{\frac{b_{i}-\alpha }{b_{i}-\alpha -b_{i}a}} +c _{21}b_{i}^{2} \biggl( \int _{\varOmega }u^{\frac{s(b_{i}-\alpha )}{2}} \biggr) ^{\frac{2b_{i}}{s(b_{i}-\alpha )}}. 
$$ To simplify this, we observe that \(\frac{2b_{i}(1-a)}{s(b_{i}-\alpha )}\cdot \frac{b_{i}-\alpha }{b_{i}-\alpha -b_{i}a}>\frac{2b_{i}}{s(b _{i}-\alpha )}\), and thus $$ \frac{d}{dt} \int _{\varOmega }u^{b_{i}}+ \int _{\varOmega }u^{b_{i}} \leq c _{22}b_{i}^{2} \biggl( \int _{\varOmega }u^{\frac{s(b_{i}-\alpha )}{2}} \biggr) ^{\frac{2b_{i}(1-a)}{s(b_{i}-\alpha -b_{i}a)}}. $$ Inserting (3.19)–(3.22) into (3.25) yields $$ \frac{d}{dt} \int _{\varOmega }u^{b_{i}}+ \int _{\varOmega }u^{b_{i}} \leq c _{25}\cdot \biggl(\frac{4}{s^{2}} \biggr)^{i}\cdot M_{i-1}^{\epsilon _{i}}. $$ Upon invoking an ODE comparison argument, we have $$ M_{i} \leq \max \biggl\{ \Vert u_{0} \Vert ^{b_{i}}_{L^{\infty }(\varOmega )} , c _{25}\cdot \biggl( \frac{4}{s^{2}} \biggr)^{i}\cdot M_{i-1}^{\epsilon _{i}} \biggr\} . $$ We easily deduce from (3.19), (3.20) and (3.22) that $$ \epsilon _{i}=\frac{2b_{i}(1-a)}{s(b_{i}-b_{i}a-\alpha )}= \frac{2}{s} \cdot \frac{b_{i}(1-a)}{b_{i}-b_{i}a-\alpha } =\frac{2}{s}\cdot (1+ \varepsilon _{i}),\quad i\geq 1 $$ holds with some \(\varepsilon _{i}\geq 0\) satisfying $$ \varepsilon _{i}=\frac{\alpha }{b_{i}-b_{i}a-\alpha }\leq \frac{c_{26}}{b _{i}}\leq c_{27}\cdot \biggl(\frac{s}{2} \biggr)^{i} $$ for all \(i\geq 1\) and appropriately large \(c_{26}>0\) and \(c_{27}>0\). Now if \(\|u_{0}\|^{b_{i}}_{L^{\infty }(\varOmega )} \geq c_{25}\cdot ( \frac{4}{s^{2}})^{i}\cdot M_{i-1}^{\epsilon _{i}}\) for infinitely many \(i\geq 1\), we get (3.17) with \(C=\|u_{0}\|_{L^{\infty }(\varOmega )}\) for all \(t\in (0,T_{\max })\). Otherwise, $$ M_{i}\leq c_{25}\cdot \biggl(\frac{4}{s^{2}} \biggr)^{i}\cdot M_{i-1} ^{\epsilon _{i}} \quad \text{for all } i\geq 1.
$$ By a straightforward induction, this yields $$ M_{i}\leq c_{25}^{1+\sum ^{i-2}_{j=0}\prod ^{i}_{l=i-j}\epsilon _{l}} \cdot \biggl( \frac{4}{s^{2}} \biggr)^{i+\sum ^{i-2}_{j=0}(i-j-1) \cdot \prod ^{i}_{l=i-j}\epsilon _{l}} \cdot M_{0}^{\prod ^{i}_{l=1} \epsilon _{l}} $$ for all \(i\geq 2\), and hence in view of (3.22) and (3.26) we obtain $$ \begin{aligned} M_{i}^{\frac{1}{b_{i}}}\leq{}& c_{25}^{\frac{1}{c_{23}}(\frac{s}{2})^{i}+\frac{1}{c _{23}}\cdot \sum ^{i-2}_{j=0}(\frac{s}{2})^{i-j-1}\cdot \prod ^{i}_{l=i-j}(1+ \varepsilon _{l})} \times \biggl(\frac{4}{s^{2}} \biggr)^{i\frac{1}{c _{23}}(\frac{s}{2})^{i}+\frac{1}{c_{23}}\cdot \sum ^{i-2}_{j=0}(i-j-1) (\frac{s}{2})^{i-j-1}\cdot \prod ^{i}_{l=i-j}(1+\varepsilon _{l})} \\ &{} \times M_{0}^{\frac{1}{c_{23}}\cdot \prod ^{i}_{l=1}(1+ \varepsilon _{l})} \end{aligned} $$ for all \(i\geq 2\). Since \(\ln (1+z)\leq z\) for \(z\geq 0\), from (3.27) and the fact that \(s<2\) we get $$ \ln \Biggl(\prod^{i}_{l=1}(1+ \varepsilon _{l}) \Biggr)\leq \sum^{i} _{l=1}\varepsilon _{l}\leq \frac{c_{27}}{1-\frac{s}{2}}, $$ so that, using \(\sum^{i-2}_{j=0}(i-j-1) \cdot (\frac{s}{2})^{i-j-1} \leq \sum^{\infty }_{h=1}h (\frac{s}{2})^{h}<\infty \), we conclude that also in this case \(\| u(\cdot ,t)\|_{L^{\infty }(\varOmega )}\) is bounded from above by a constant independent of \(t\in (0,T_{ \max })\). This clearly proves (3.17). □ We are now in a position to pass to the proof of Theorem 1.1. First we see that boundedness of u and v follows from Lemma 3.2 and Corollary 3.1, respectively. Therefore the assertion of Theorem 1.1 is immediately obtained from Lemma 2.1. □ Alikakos, N.D.: Lp bounds of solutions of reaction-diffusion equations. Commun. Partial Differ. Equ. 4, 827–868 (1979) Cao, X.: Boundedness in a quasilinear parabolic-parabolic Keller–Segel system with logistic source. J. Math. Anal. Appl. 412, 181–188 (2014) Cao, X.: Boundedness in a three-dimensional chemotaxis–haptotaxis system. Z. Angew. Math. Phys.
67, 1–13 (2016) Chaplain, M.A.J., Lolas, G.: Mathematical modelling of cancer invasion of tissue: the role of the urokinase plasminogen activation system. Math. Models Methods Appl. Sci. 15, 1685–1734 (2005) Chaplain, M.A.J., Lolas, G.: Mathematical modelling of tissue invasion: dynamic heterogeneity. Netw. Heterog. Media 1, 399–439 (2006) Hillen, T., Painter, K.J., Winkler, M.: Convergence of a cancer invasion model to a logistic chemotaxis model. Math. Models Methods Appl. Sci. 23, 103–165 (2013) Hu, B., Tao, Y.: Boundedness in a parabolic-elliptic chemotaxis-growth system under a critical parameter condition. Appl. Math. Lett. 64, 1–7 (2017) Ishida, S., Seki, K., Yokota, T.: Boundedness in quasilinear Keller–Segel systems of parabolic-parabolic type on non-convex bounded domains. J. Differ. Equ. 256, 2993–3010 (2014) Keller, E.F., Segel, L.A.: Initiation of slime mold aggregation viewed as an instability. J. Theor. Biol. 26, 399–415 (1970) Li, Y., Lankeit, J.: Boundedness in a chemotaxis–haptotaxis model with nonlinear diffusion. Nonlinearity 29, 1564–1595 (2016) Liu, D., Tao, Y.: Boundedness in a chemotaxis system with signal production. Appl. Math. J. Chin. Univ. Ser. 31, 379–388 (2016) Liu, Y., Tao, Y.: Asymptotic behavior in a chemotaxis-growth system with nonlinear production of signals. Discrete Contin. Dyn. Syst., Ser. B 22, 465–475 (2017) Mizoguchi, N., Souplet, P.: Nondegeneracy of blow-up points for the parabolic Keller–Segel system. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 31, 851–875 (2014) Osaki, K., Tsujikawa, T., Yagi, A., Mimura, M.: Exponential attractor for a chemotaxis-growth system of equations. Nonlinear Anal. 51, 119–144 (2002) Painter, K.J., Hillen, T.: Volume-filling and quorum-sensing in models for chemosensitive movement. Can. Appl. Math. Q. 10, 501–543 (2002) Tao, X., Zhou, S., Ding, M.: Boundedness of solutions to a quasilinear parabolic-parabolic chemotaxis model with nonlinear signal production. J. Math. Anal. Appl.
474, 733–747 (2019) Tao, Y.: Global existence of classical solutions to a combined chemotaxis–haptotaxis model with logistic source. J. Math. Anal. Appl. 354, 60–69 (2009) Tao, Y.: Boundedness in a two-dimensional chemotaxis–haptotaxis system. J. Donghua Univ. 70, 165–174 (2016) Tao, Y., Wang, M.: Global solutions for a chemotaxis–haptotaxis model of cancer invasion. Nonlinearity 21, 2221–2238 (2008) Tao, Y., Wang, M.: A combined chemotaxis–haptotaxis system: the role of logistic source. SIAM J. Math. Anal. 41, 1533–1558 (2009) Tao, Y., Winkler, M.: A chemotaxis–haptotaxis model: the role of nonlinear diffusion and logistic source. SIAM J. Math. Anal. 43, 685–704 (2011) Tao, Y., Winkler, M.: Boundedness in a quasilinear parabolic-parabolic Keller–Segel system with subcritical sensitivity. J. Differ. Equ. 252, 692–715 (2012) Tao, Y., Winkler, M.: Energy-type estimates and global solvability in a two-dimensional chemotaxis–haptotaxis model with remodeling of non-diffusible attractant. J. Differ. Equ. 257, 784–815 (2014) Tao, Y., Winkler, M.: Boundedness and stabilization in a multi-dimensional chemotaxis–haptotaxis model. Proc. R. Soc. Edinb., Sect. A 144, 1067–1084 (2014) Tao, Y., Winkler, M.: Dominance of chemotaxis in a chemotaxis–haptotaxis model. Nonlinearity 27, 1225–1239 (2014) Tao, Y., Winkler, M.: Large time behavior in a multidimensional chemotaxis–haptotaxis model with slow signal diffusion. SIAM J. Math. Anal. 47, 4229–4250 (2015) Wang, L., Li, Y., Mu, C.: Boundedness in a parabolic-parabolic quasilinear chemotaxis system with logistic source. Discrete Contin. Dyn. Syst., Ser. A 34, 789–802 (2014) Wang, Y.: Boundedness in the higher-dimensional chemotaxis–haptotaxis model with non-linear diffusion. J. Differ. Equ. 260, 1975–1989 (2016) Wang, Y.: Boundedness in the multi-dimensional chemotaxis–haptotaxis model with non-linear diffusion. Appl. Math. Lett.
59, 122–126 (2016) Wang, Y., Ke, Y.: Large time behavior of solution to a fully parabolic chemotaxis–haptotaxis model in higher dimensions. J. Differ. Equ. 260, 6960–6988 (2016) Wang, Y., Liu, J.: Boundedness in a quasilinear fully parabolic Keller–Segel system with logistic source. Nonlinear Anal., Real World Appl. 38, 113–130 (2017) Winkler, M.: Does a 'volume-filling effect' always prevent chemotactic collapse? Math. Methods Appl. Sci. 33, 12–24 (2010) Winkler, M.: Boundedness in the higher-dimensional parabolic-parabolic chemotaxis system with logistic source. Commun. Partial Differ. Equ. 35, 1516–1537 (2010) Winkler, M.: Blow-up in a higher-dimensional chemotaxis system despite logistic growth restriction. J. Math. Anal. Appl. 384, 261–272 (2011) Winkler, M.: A critical blow-up exponent in a chemotaxis system with nonlinear signal production. Nonlinearity 31, 2031–2056 (2018) Winkler, M.: Finite-time blow-up in low-dimensional Keller–Segel systems with logistic-type superlinear degradation. Z. Angew. Math. Phys. 69, 1–17 (2018) Winkler, M., Djie, K.: Boundedness and finite-time collapse in a chemotaxis system with volume-filling effect. Nonlinear Anal., Real World Appl. 72, 1044–1064 (2010) Xiang, T.: Dynamics in a parabolic-elliptic chemotaxis system with growth source and nonlinear secretion. Commun. Pure Appl. Anal. 18, 255–284 (2019) Zheng, J.: Boundedness of solutions to a quasilinear parabolic-parabolic Keller–Segel system with a logistic source. J. Math. Anal. Appl. 431, 867–888 (2015) Zheng, J.: Boundedness of solutions to a quasilinear parabolic-parabolic Keller–Segel system with logistic source. J. Differ. Equ. 259, 120–140 (2015) Zheng, J.: Boundedness of the solution of a higher-dimensional parabolic-ODE-parabolic chemotaxis–haptotaxis model with generalized logistic source. Nonlinearity 30, 1987–2009 (2017) Zheng, J.: A note on boundedness of solutions to a higher-dimensional quasi-linear chemotaxis system with logistic source. Z. Angew. Math. 
Mech. 97, 414–421 (2017) Zheng, J.: Boundedness of solution of a parabolic–ODE–parabolic chemotaxis–haptotaxis model with (generalized) logistic source. arXiv:1711.10084v1 Zheng, J., Ke, Y.: Large time behavior of solutions to a fully parabolic chemotaxis–haptotaxis model in N dimensions. J. Differ. Equ. 266, 1969–2018 (2019) Zheng, P., Mu, C., Hu, X., Tian, Y.: Boundedness of solutions in a chemotaxis system with nonlinear sensitivity and logistic source. J. Math. Anal. Appl. 424, 509–522 (2015) Zheng, P., Mu, C., Song, X.: On the boundedness and decay of solution for a chemotaxis–haptotaxis system with nonlinear diffusion. Discrete Contin. Dyn. Syst., Ser. A 36, 1737–1757 (2016) Zhuang, M., Wang, W., Zheng, S.: Boundedness in a fully parabolic chemotaxis system with logistic-type source and nonlinear production. Nonlinear Anal., Real World Appl. 47, 473–483 (2019) The paper is supported by the National Natural Science Foundation of China (11301419) and the Meritocracy Research Funds of China West Normal University [17YC382]. College of Mathematics and Information, China West Normal University, Nanchong, P.R. China: Long Lei & Zhongping Li. This paper is the result of joint work of all authors who contributed equally to the final version of this paper. All authors read and approved the final manuscript. Correspondence to Zhongping Li. Lei, L., Li, Z.: Boundedness in a quasilinear chemotaxis–haptotaxis model of parabolic–parabolic–ODE type. Bound Value Probl 2019, 138 (2019) doi:10.1186/s13661-019-1255-4. Keywords: Chemotaxis, Haptotaxis, Nonlinear diffusion, Boundedness, Logistic source, Nonlinear production
Observable universe equals its Schwarzschild radius (event horizon)?

The estimated age of the universe is 14 billion years. The estimated Schwarzschild radius (event horizon) of the observable universe is 14 billion light-years. What are the ramifications?

black-hole universe cosmology — asked by Dirk Helgemo

Comment: "Did the provided post answer your question? Otherwise feel free to ask for clarification :)" – pela Jul 1 '19 at 13:41
Comment: "Related: How can the Schwarzschild radius of the universe be 13.7 billion light years?" – John Rennie Jul 1 '19 at 14:07

Answer:

There's an error in your source, but even if there weren't, it wouldn't mean that the Universe is a black hole (see below): The Schwarzschild radius of the observable Universe is not equal to 13.7 Glyr. Wikipedia cites some random, non-refereed paper that uses the Hubble radius as the radius of the observable Universe, which is too small by a factor of $\gtrsim3$.

The Schwarzschild radius of the Universe

Although the age of the Universe is indeed ~13.8 Gyr, its radius is much larger than 13.8 Glyr, because it expands. In fact, the radius is $R \simeq 46.3\,\mathrm{Glyr}$. The mean density of the Universe is very close to the critical$^\dagger$ density $\rho_\mathrm{c} \simeq 8.6\times10^{-30}\,\mathrm{g}\,\mathrm{cm}^{-3}$. Hence, the total mass (including "normal", baryonic matter, dark matter, and dark energy) is
$$ M = \rho_\mathrm{c}V = \rho_\mathrm{c}\frac{4\pi}{3}R^3 \simeq 3.0\times10^{57}\,\mathrm{g}, $$
and the corresponding Schwarzschild radius is
$$ R_\mathrm{S} \equiv \frac{2GM}{c^2} \simeq 475\,\mathrm{Glyr}. $$

The Universe is not a black hole

Even worse! you might say. If our Universe is much smaller than its Schwarzschild radius, does that mean we live in a black hole? No, it doesn't. A black hole is a region in space where some mass is squeezed inside its Schwarzschild radius, but the Universe is not "a region in space". There's no "outside the Universe".
If anything, you might call it a white hole, the time-reversal of a black hole, in which case you could say that the singularity is not something that everything will fall into in the future, but rather something that everything came from in the past. You may call that singularity the Big Bang.

The error in the source

You will see questions like this many places on the internet. As I said above, they all assume the Hubble radius, $R_\mathrm{H} \equiv c/H_0$, for the radius. But this radius is well within our observable Universe, and doesn't really bear any physical significance. In this case, the age in Gyr works out to be exactly equal to the radius in Glyr, by definition. So, what does that tell us? Nothing, really. Except that our Universe is flat, i.e. has $\rho\simeq\rho_\mathrm{c}$, which we already knew and used in the calculation. That is, setting $R = R_\mathrm{H} \equiv c/H_0$,
$$ \begin{array}{rcl} R_\mathrm{S} & \equiv & \frac{2GM}{c^2}\\ & = & \frac{2G}{c^2} \rho V \\ & = & \frac{2G}{c^2} \rho \frac{4\pi}{3}R^3 \\ & = & \frac{2G}{c^2} \rho \frac{4\pi}{3} \left(\!\frac{c}{H_0}\!\right)^3 \\ & = & \frac{8\pi G}{3 H_0^2} \rho \frac{c}{H_0} \\ & = & \frac{8\pi G}{3 H_0^2} \rho R, \end{array} $$
so if $R=R_\mathrm{S}$, we have
$$ \rho = \frac{3H_0^2}{8\pi G}, $$
which is exactly the expression for the critical density you get from the Friedmann equation.

$^\dagger$The density that determines the global geometry of the Universe.

– pela
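The numbers in the answer are easy to check with a few lines of Python. This is just a sanity-check sketch: the constants are rounded, and the 46.3 Glyr radius and critical density are the values quoted in the answer above.

```python
import math

# Rounded physical constants (SI units)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
GLYR = 1e9 * 9.461e15  # one billion light-years in metres

# Inputs quoted in the answer
R = 46.3 * GLYR        # comoving radius of the observable Universe
rho_c = 8.6e-27        # critical density, kg/m^3 (= 8.6e-30 g/cm^3)

# Total mass inside R, assuming the mean density equals rho_c
M = rho_c * (4.0 / 3.0) * math.pi * R**3

# Schwarzschild radius for that mass
R_S = 2 * G * M / c**2

print(f"M   ~ {M:.1e} kg")          # ~3.0e54 kg, i.e. ~3.0e57 g
print(f"R_S ~ {R_S / GLYR:.0f} Glyr")  # ~475 Glyr, an order of magnitude larger than R
```

The output reproduces both quoted figures, confirming that the 475 Glyr Schwarzschild radius follows directly from the critical density and the 46.3 Glyr comoving radius.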
Optimal decision rules in multilateral aid funds
Axel Dreher, Jenny Simon & Justin Valasek
The Review of International Organizations (2021)

While existing research has suggested that delegating foreign aid allocation decisions to a multilateral aid fund may incentivize recipient countries to invest in bureaucratic quality, our analysis links the fund's decision rules to recipient-country investment by explicitly modeling the decision-making within multilateral aid funds. We find that majority rule induces stronger competition between recipients, resulting in higher investments in bureaucratic quality. Despite this advantage, unanimity can still be optimal since the increased investment under majority comes at the cost of low aid allocation to countries in the minority. The qualitative predictions of our model rationalize our novel empirical finding that, relative to organizations that use a consensus rule, organizations that use majority are more responsive to changes in recipient-country quality.

When allocating foreign aid, donor countries face a problem of incentivizing recipient countries to invest in bureaucratic quality in an effort to increase the effectiveness of aid spending. To address this problem, a new wave of aid conditionality—political conditionality—has emerged with the intention of incentivizing needed investments and reforms (see Molenaers et al. 2015). However, political conditionality suffers from the same difficulty with credible implementation that has led many observers to claim that aid conditionality has failed (see Collier 1997, Alesina and Weder 2002, Dreher 2009 and Öhler et al. 2012). In particular, conditional aid faces a problem of non-contractibility—measures of good governance are partially subjective, and donor countries often face an ex post incentive to circumvent conditionality.
The problem of non-contractibility results in a "Samaritan's problem" that occurs when the recipient country, knowing it will receive assistance in any case, has no incentive to implement costly reforms (see Mosley et al. 1995 and Pedersen 1996 for a discussion of problems of time-inconsistency in aid spending).Footnote 1 Given the difficulty of implementing conditionality in bilateral aid, several papers consider the delegation of the allocation decision to a third party as a solution to the Samaritan's problem (see Svensson 2000, Hagen 2006 and Annen and Knack 2018; see also Schneider and Slantchev 2013 for an argument that delegation can solve commitment problems in international cooperation). Delegation is a well-known solution to hold-up problems in general (see Aghion and Tirole 1997), and may facilitate commitment to aid conditionality in the case of the Samaritan's problem. However, this solution depends on the existence of an independent party with verifiable preferences that diverge from the donor country's in the precise direction that facilitates recipient-country investment. Therefore, delegation to international aid agencies may not always mitigate the Samaritan's problem—as argued by Easterly (2003) and Hagen (2006), aid agencies often focus on ease of disbursement or recipient-country need rather than aid effectiveness. In practice, donor countries most commonly retain decision rights on aid allocation, even after committing funds to a multilateral organization. A typical example is the European Development Fund, where country representatives from each of the 27 EU-member states jointly decide on the allocation of aid to recipient countries. 
As illustrated in the following quote, since the donor countries have divergent preferences over where to allocate aid, discussion and bargaining are required to reach a collective decision:

"...the European Union (EU) has to decide on the allocation of aid to individual countries.... As can be expected, this process is plagued with difficulties as each stakeholder argues in favour of its own criteria for allocation and country-specific interests." (Negre 2013, p. 1)

Our paper studies a model of bargaining over aid allocation in multilateral organizations, and illustrates how this bargaining process can lead to a more efficient allocation of aid spending relative to bilateral aid allocation, and subsequently incentivizes recipient countries to invest in measures that increase the effectiveness of aid. Additionally, we analyze how the decision-making rule in the multilateral institution impacts the incentive of recipient countries to invest. We find that multilateral institutions that take decisions via majority rule provide recipient countries with a higher incentive to invest, which is consistent with our novel empirical finding that multilateral organizations that take decisions via majority are more responsive to changes in their recipient countries' governance policies. While the first part of our paper takes a positive approach and compares investment decisions under different institutions, we also consider a normative approach and detail which decision rule is optimal from the donor countries' perspective. Here, we find that since the higher incentive to invest under a majority rule comes at a cost of limiting the total number of projects that are funded, it is not always optimal for multilateral organizations to adopt a majority rule. Instead, our analysis suggests that a majority rule should only be adopted when there are high positive spillovers of recipient countries' investment on other areas, and when investment has an intermediate level of productivity.
Our model relies on two key assumptions. First, we assume donor countries have individual biases over which country receives aid—to make the Samaritan's problem as stark as possible, each donor country only values the product of aid spending in one of the recipient countries. Footnote 2 There is a large empirical literature assessing country biases in both bilateral and multilateral aid spending (for a survey of this literature see Hoeffler and Outram 2011, and Fuchs et al. 2014), and this literature provides evidence that idiosyncratic political motivations influence the allocation of aid spending, particularly in bilateral aid. Since different donor countries exhibit different political biases, e.g., due to ties to former colonial countries, this is consistent with the notion that donor countries have individual biases. What is more, the scholarly literature that compares bilateral and multilateral aid typically argues that donor-country political interests are less prevalent for multilateral aid, and takes the relative absence of political motives as a reason why multilateral aid is more effective for promoting development (Headey 2008, Milner and Tingley 2013).Footnote 3 Our model provides a structured explanation for why multilateral aid is more effective: By bargaining over the allocation of aid spending, donor countries commit to a mechanism for allocating aid that mitigates individual donor-country biases. Second, we assume that donor countries can commit to allocate aid spending via a bargaining process. In practice, donor countries most often commit funds to a multilateral organization prior to bargaining over the precise allocation of the aid spending to individual projects. For example, the European Development Fund (EDF) is funded on a voluntary basis according to a pre-set formula. The allocation of the EDF budget to projects in individual recipient countries, however, is determined after funds have already been committed to the general budget. 
Importantly, after the donor-countries' aid budgets have been allocated to the EDF, the member countries have effectively committed to disburse this aid budget through the EDF—disagreement over where to allocate the joint aid budget does not result in an automatic reimbursement of the EDF's budget and a reversion to bilateral aid.Footnote 4 Our model considers a setting of three donor and three recipient countries. The donor countries each have an aid budget, and commit to implement aid bilaterally or via a multilateral fund with a decision rule of either unanimity or majority.Footnote 5 The recipient countries then choose an observable, but non-contractible, level of investment that increases the expected effectiveness of aid spending, after which the donor countries determine the allocation of aid. In this setting, the donor countries would like to incentivize the recipient countries to invest ex ante to increase the productivity of aid spending. However, knowing that the donor countries will allocate all spending to their preferred recipient country ex post under bilateral aid, the recipient countries have no incentive to invest ex ante. Alternatively, if donor countries commit to allocate via a multilateral aid fund, then the donor countries bargain over allocation outcomes ex post à la Nash. In contrast to bilateral aid, Nash Bargaining results in an allocation outcome that reflects aggregate efficiency (that is, efficiency according to the donor countries' preferences). 
Therefore, under multilateral aid, a recipient country with a higher effectiveness will receive a higher level of aid, which in turn induces competition over ex ante investments by recipient countries and mitigates the Samaritan's problem.Footnote 6 That is, it is precisely the process of preference aggregation in the multilateral fund that enables multilateral organizations to better implement aid conditionality—when donor countries with heterogenous preferences pool their resources, competition among recipient countries over aid intensifies since bargaining functions as a commitment to reward recipient countries for higher investments. Additionally, by explicitly modeling the bargaining process of the multilateral fund, we are able to link the investment outcomes to the decision rule used to allocate funding. That is, our analysis demonstrates that the recipient countries' incentive to invest in reform is also a function of the decision rule used within the fund. Specifically, we show that majority rule further increases the incentive for the recipient countries to invest, since higher investment increases the probability that a recipient's project will be selected by the endogenous majority coalition. However, the higher incentive to invest comes at a cost of limiting the total number of projects that are funded, which implies that majority will only outperform unanimity when the productivity of donor-country investment is at an intermediate level. Empirically, the predictions of our model are in line with the finding that aid allocated through multilateral agencies is more selective than bilateral aid (Eichenauer and Knack 2018), which suggests that multilateral allocation rewards recipient countries that invest in good governance. 
Our model also provides a clear prediction regarding the impact of decision rules on aid allocation: Majority results in a more selective budget allocation, and hence provides a greater incentive for recipient countries to invest in good governance. While there exists theoretical literature on optimal decision-rules in international organizations (see Harstad 2005, Maggi and Morelli 2006), empirical evidence regarding the relationship between decision rules and outcomes at the international level is lacking. Accordingly, we provide novel empirical evidence on the relationship between voting rules in multilateral organizations and the selectivity of aid allocation. We find that the qualitative prediction of our model is in line with the empirical evidence—multilateral organizations that take decisions via majority adjust aid allocation more in response to changes in their recipient countries' governance policies, suggesting that recipient countries have a higher incentive to invest in good governance when multilateral funds allocate aid via majority. Our paper's main contribution is to the literature on aid conditionality (see Dreher 2009 and Molenaers et al. 2015 for an overview), showing that delegation to a multilateral fund functions as a commitment to implement conditionality due to the endogenous objective that arises when donor countries bargain over aid allocation. In particular, this result shows that the Samaritan's problem can be mitigated through delegation even in the absence of an independent third party with appropriate preferences. Most closely related to our work is Annen and Knack (2018), who consider a model of delegation to an institution that exogenously maximizes the joint welfare of member nations, and focus on characterizing the decision of whether to join a fund as a function of the level of donor-country bias. 
Our model differs from theirs in the sense that we explicitly model decision-making within multilateral funds rather than assuming that the multilateral fund maximizes a certain objective, and are therefore able to characterize recipient-country investment as a function of the decision rules of the multilateral fund.Footnote 7 Additionally, our work contributes to the literature on optimal decision rules in international organizations (see Harstad 2005, Maggi and Morelli 2006, and Barbera and Jackson 2006).Footnote 8 In particular, our results regarding the optimal voting rule in multilateral funds are related to Harstad (2005), who shows how a majority rule can mitigate a holdup problem in a setting where investments by members of a club (or countries in a union) are expropriated ex post. A key difference in our findings, however, is that majority rule is not always optimal for donor countries, even though the costs of investment are fully borne by the recipient countries. Instead of simply choosing the decision rule that maximizes investments, unanimity may be optimal for multilateral aid organizations because it ensures that all recipient-country projects receive funding, whereas majority increases investment precisely by providing more funding to the most effective projects. More broadly, our paper also relates to the literature on aid allocation and selectivity. A number of recent contributions investigate what determines the allocation of aid. Schneider and Tobin (2016) show that governments allocate more resources to multilateral organizations that are similar to the donor in terms of the countries they support.Footnote 9 Annen and Knack (2019) show that aid receipts increase with the quality of recipient-country governance.Footnote 10 Dietrich (2013) shows that donors bypass national governments when institutional quality is low; Dietrich and Murdie (2017) report that shaming by International Non-Governmental Organizations has the same effect.
According to Knack (2013), donors rely on recipient-country aid management systems to a larger degree when such systems are of higher quality, pointing to an allocation of aid that builds administrative capacity in recipient countries. Our analysis contributes to this literature by demonstrating how the design of decision-making institutions within multilateral funds can impact the investment choices of the recipient countries. A number of recent papers also have investigated the conditions under which donors prefer bilateral over multilateral aid. According to the standard view, multilateral aid allows different donors to share the burden of aid-giving, at the cost of losing control over how exactly the aid is spent (Milner and Tingley 2013, Reinsberg et al. 2017). As holds true for multilateral cooperation at large, multilateral aid can realize efficiency gains, pool risks, materialize economies of scale, and encourage wide cost sharing (Abbott and Snidal 1998). However, it is important to note that bilateral aid also has access to benefits of scale in implementation since bilateral aid is often channeled through international institutions (see Eichenauer and Hug 2018 or Reinsberg et al. 2017 for analyses of donors' decisions to participate in trust funds and the implementation of earmarked aid). Additionally, by avoiding the bargaining process inherent in multilateral aid, donors have more direct control and freedom to use bilateral aid as a tool to promote their own political interests. Our paper shows that, despite the ability to achieve economies of scale with bilateral aid, donor countries benefit from delegating aid to a bargaining process as a commitment to reward recipient countries for investment in administrative capacity and bureaucratic quality. Related, Dreher et al. (2020) show that powerful governments refer to multilateral organizations when they aim to hide contentious foreign policies from domestic audiences. 
They allocate aid bilaterally when extending favors to allies, while they channel funds via multilateral organizations to non-allied recipients, where more visible bilateral aid is more contentious. Our paper proceeds as follows: Section 2 provides illustrative empirical evidence to motivate the formal analysis. Section 3 presents the theoretical framework. Section 4 contains the analysis of aid allocation (Section 4.1) and investment decisions (Section 4.2); Section 4.3 contains our normative analysis and characterizes the optimal decision rule. Section 5 concludes. We present formal proofs for all results in the Appendix.Footnote 11

Empirical analysis of decision rules and multilateral aid

Before we proceed with the model, we provide descriptive evidence regarding the relationship between voting rules in multilateral organizations and the sensitivity of aid allocation to changes in their recipient countries' governance policies. To address this question, we draw on the Creditor Reporting System (CRS) of the OECD's Development Assistance Committee (DAC), which provides recipient- and year-specific data on aid given by a broad range of international organizations. We consulted the statutory documents of these international organizations, as well as secondary sources, in order to code the organizations' decision-making rules for allocating funds across members.Footnote 12 Out of the 46 international organizations included in the CRS, only five allocate funds via a strict unanimity rule.Footnote 13 This is too small a sample to provide sufficient empirical evidence on the difference between unanimity and majority. However, an additional 15 organizations take decisions via a consensus rule.
While consensus rules have commonly been coded as unanimity in the literature on international organizations (Blake and Lockwood-Payton 2015), they differ from an explicit unanimity rule in the sense that country representatives are strongly encouraged to seek unanimity, but may fall back on a majority decision rule if unanimity cannot be attained (see Gould 2017 for a detailed discussion on the implementation of consensus rules).Footnote 14 Given the small number of organizations in our sample that use a traditional unanimity rule, we separate organizations that allocate funds according to a (weighted or unweighted) majority rule from those that explicitly encourage decisions via consensus in our empirical analysis. Therefore, our assumption is that the bargaining power of countries in the minority is higher under a consensus rule relative to a traditional majority rule—arguably, organizations with the strong norm of decision-making via consensus are closer to unanimity, in terms of our model, than organizations deciding based on majority rule alone. Table A1 in the Online Appendix lists the 46 international organizations included in our data. For each organization, we list the main decision-making rule for allocating funds, in concert with information about whether we code decision-making to conform with the norm of consensus as well as sources on which we base our coding. As can be seen, 41 organizations decide with majority and five require unanimity. 20 organizations included in our data use either consensus or unanimity. In the empirical application, Consensus Rule is thus a binary indicator that is one for these 20 organizations, and zero for the others. One way to test the impact of decision rules on aid allocation is to compare the weighted average governance quality of the aid portfolio across multilateral organizations. 
However, the multilateral organizations in our sample vary widely in their scope and regional focus, and a large degree of the variance in the allocation of aid spending may be unrelated to their ability to overcome the Samaritan's problem. A test of the sensitivity of aid allocation to recipient country investment in good governance thus has to consider how the aid allocation of a given institution changes in response to changes in the recipient countries' governance policies. Accordingly, we test the relationship between decision rules and aid allocation using the following regression equation:
$$
\begin{array}{rcl}
Aid_{ijt} & = & \beta_{1} Bureaucratic\ Quality_{jt}+\beta_{2} Consensus\ Rule_{i} \\
& & +\ \beta_{3} Quality_{jt} \times Consensus_{i}+\beta_{4} \mathbf{X_{jt}}+\eta_{i} +\eta_{j} +\tau_{t} + \epsilon_{ijt},
\end{array}
$$
where \(Aid_{ijt}\) is the amount of funds committed by international organization i to a specific member j in a year t in millions of constant 2010 US$, Consensus Rule is a measure of consensual decision-making, Bureaucratic Quality an indicator of recipient country institutional quality, and X constitutes a set of control variables. One difficulty, in terms of translating the model to the empirical setting, is identifying the set of relevant recipient countries for each international organization. For example, most countries of the world are members of the International Bank for Reconstruction and Development (IBRD), but some have never received funding and are thus unlikely to adjust their policies in reaction to the IBRD's decision rules. Our regressions therefore only include countries that have received funding from a specific organization in at least one of the years of our sample.Footnote 15 Our indicator of Bureaucratic Quality is from the International Country Risk Guide (PRS Group 2017), and measures whether bureaucracies are "somewhat autonomous from political pressure and ...
have an established mechanism for recruitment and training," on a zero to four scale. We control for the recipient country's (log) GDP per capita (in constant 2010 US$), (log) population, and its share of exports relative to GDP, which are control variables included in most aid allocation regressions.Footnote 16 We estimate the aid allocation models with Tobit, which is well-suited for modeling the allocation of aid if we assume whether or not countries receive aid and, if they do, how much they receive are determined by the same process.Footnote 17 Aid allocation models are also often estimated as two step models, where a first set of regressions focuses on the decision whether or not to provide funding to a recipient in a year and the second stage explains the amount of funding received, given a country was selected to receive funding (e.g., Fleck and Kilby 2010). We provide such analyses for completeness. As is typical in the related literature, we focus on logged values of aid (adding a value of one to keep zero observations in the sample). For comparison, we also report results using an inverse hyperbolic sine transformation and a regression that does not transform aid (i.e., that focuses on absolute dollar values).Footnote 18 We estimate the model with a set of fixed effects.Footnote 19 Our preferred specification includes dummies for recipient countries (ηj) and years (τt), given that the decision-making rule is constant over time within donor organizations and thus collinear with donor fixed effects. We however also report a specification including dummies for donor organizations (ηi), where the coefficient of Bureaucratic Quality is captured by the fixed effects. The coefficient of the interaction is then exclusively identified by changes in Bureaucratic Quality within recipient-countries over time. We cluster standard errors at the international organization level. Table 1 shows the results. 
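To make the interaction specification concrete, the key coefficient \(\beta_3\) can be recovered by ordinary least squares. The sketch below is purely illustrative, not the authors' code: the data are synthetic, the variable names are hypothetical, and the fixed effects and clustered standard errors of the paper's specification are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 500 organization-recipient-year observations
n = 500
quality = rng.integers(0, 5, n).astype(float)    # bureaucratic quality, 0-4 scale
consensus = rng.integers(0, 2, n).astype(float)  # 1 = consensus-rule organization

# Assumed true coefficients: the negative interaction means that
# consensus-rule organizations respond less to quality improvements
b0, b1, b2, b3 = 1.0, 0.5, 0.2, -0.3
log_aid = (b0 + b1 * quality + b2 * consensus
           + b3 * quality * consensus + rng.normal(0.0, 0.1, n))

# OLS: regressors are [1, quality, consensus, quality * consensus]
X = np.column_stack([np.ones(n), quality, consensus, quality * consensus])
beta_hat, *_ = np.linalg.lstsq(X, log_aid, rcond=None)

print(beta_hat)  # beta_hat[3] estimates the interaction coefficient
```

With enough observations the estimated interaction coefficient is close to the assumed value of −0.3; in the paper's setting, a negative estimate of this coefficient is what indicates that consensus-rule organizations are less selective.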
Column 1 reports the basic Tobit regression, excluding fixed effects for donors, with log aid commitments as dependent variable.Footnote 20 Focusing on the coefficient of \(Quality_{jt} \times Consensus_{i}\), our results show that a consensus rule results in a less selective budget allocation compared to majority rule.Footnote 21 This result holds when we use an inverse hyperbolic sine transformation (in column 2) or focus on the level of aid commitments (in column 3) rather than taking logs, include fixed effects for donors (in column 4) or exclude control variables (in column 5).

Table 1: Consensual Decision-making and Bureaucratic Quality, 1985-2016

Results are very similar when we focus on the second stage of the aid allocation process, using the sample with positive aid commitments only. Both in terms of statistical significance and in quantitative terms, results from this OLS regression (shown in column 6) are very similar to those of column 1. To the contrary, results from a linear probability model of the binary selection stage show no significant interaction (column 7).Footnote 22 It thus seems that the results from the Tobit analysis are driven by the second rather than the first stage. As we show in the robustness section of our formal model, this is consistent with a setting where donor countries have a partial bias so that recipient countries with high bureaucratic quality receive more, but not all, of the aid funds. Quantitatively, the OLS results of column 6 imply that organizations which decide via consensus commit around 40 percent lower amounts of aid to countries that improve their bureaucratic score by one point compared to organizations that follow a strict majority rule. The corresponding decrease according to the Tobit estimates of column 1 (for countries that receive positive values in a year) is around 30 percent.
In absolute numbers, the estimate of column 3 implies that organizations which decide consensually commit around US$ 50 million less to countries that improve their bureaucratic score by one point compared to organizations that follow a strict majority rule (with mean commitments of around US$ 30 million and a mean of US$ 106 million for positive commitments). We conclude this section by testing whether the effect of \(Quality_{jt} \times Consensus_{i}\) depends on the size of an international organization's resources in a year. To this end, we add an interaction of \(Quality_{jt} \times Consensus_{i}\) with the yearly total of funds committed by an organization, to proxy their aid budget.Footnote 23 As one would expect, the effect turns stronger (i.e., more negative) when total commitment size increases.Footnote 24 This is in line with the interpretation that budgets have to be sufficiently large in order to work as an incentive. In summary, our conditional correlations provide some evidence that organizations with majority rule are more selective compared to those adopting decisions by consensus. This novel empirical finding can be rationalized in the context of our model: As we show below in Section 4.3, majority rule induces stronger competition between recipient countries since, relative to unanimity, the bargaining solution under majority is more responsive to changes in recipient country governance quality.

Theoretical framework

In this section we introduce our model. Specifically, there are three donor and three recipient countries, denoted as i = 1, 2, 3 and j = 1, 2, 3 respectively. Each donor country has an identical budget for foreign aid, x, to allocate across a set of recipient country aid projects.Footnote 25 We use the notation \(a_{i,j}\) to indicate the amount of aid that donor country i allocates to recipient country j.
Prior to making the aid allocation decision, the donor countries commit to distribute their aid budget via a specific institution; in particular, we compare bilateral aid allocation with a multilateral institution with a unanimity rule, and a multilateral institution with majority rule.

Preferences and actions

We model a situation in which donor countries are biased towards a particular recipient country. This bias could reflect preferences due to geographical proximity, trade relations or historical ties; as we discuss in the introduction, empirical research has shown that donors disproportionately allocate aid to countries that are former colonies and countries with which they have strong trade ties, suggesting a bias in preferences over allocations. To make the Samaritan's problem as stark as possible, we make the assumption that each donor country i only values aid spending in one of the recipient countries, and denote the recipient country that donor country i values as recipient country \(j = i\). This assumption also allows us to simply and clearly illustrate the main points of the model, and we discuss the robustness of our results to this assumption at the end of our analysis. Additionally, each recipient country has a project quality, \(q_{j}\), that is either high quality, \(q_{j} = h^{2} > 1\), or low quality, \(q_{j} = 1\). One may for example think about governance structures that impact the effectiveness of aid.Footnote 26 Each recipient country can influence its own project quality by investing in \(\delta_{j} \in [0,1]\) according to the cost function \(c(\delta _{j})={\delta _{j}^{2}}\). The quality of the project is in turn a stochastic function of the level of investment chosen by country j:Footnote 27
$$ p(q_{j} = h^{2} | \delta_{j}) = \delta_{j}.
$$ Conceptually, this is consistent with our example of investment in good governance: e.g., investing in good governance by decreasing corruption increases the probability that a recipient country's project realizes as high quality. Investment, however, is observable but non-contractible, which implies only a limited scope for donor countries to condition allocations on investments. Let aj denote the total amount of aid received by country j; i.e., \(a_{j} = {\sum}_{i} a_{i,j}\). Donor countries have utility functions over aj=i (we use the notation ai instead of aj=i henceforth) that are increasing and concave: $$ u_{i}(a_{i}, q_{i}, \delta_{i}) = \sqrt{q_{i} a_{i}} + g\delta_{i}. $$ Note that we account for the possibility that donor countries directly value investment in good governance in recipient countries, independently from its impact on the quality of the project, by including the term gδi in the donor-country utility function. That is, given that good governance in recipient countries is frequently a policy goal of donor countries (as discussed in the introduction), we find it relevant to account for donor countries' preferences for, say, decreased corruption or more democracy. However, since donor countries distribute aid spending after observing recipient-country investment, the level of g does not impact the final allocation of aid spending (that is, the direct utility from δi is independent of the aid allocation, and therefore does not impact the Nash Bargaining solution). Therefore, our descriptive analysis of the impact of decision rules on aid allocation and recipient-country investment is independent of g. That is, the results of Sections 4.1 and 4.2 hold for any level of g, and therefore g will not feature in the positive analysis of the model.
However, g plays an important role when it comes to determining the optimal decision rule; in our normative analysis in Section 4.3 we show that majority will be the preferred decision rule when donor countries place a high direct value on good governance in recipient countries. Recipient countries have linear preferences over aid spending \(a_{j} = {\sum}_{i} a_{i,j}\): $$ v_{j}(a_{j}, \delta_{j}) = a_{j} - {\delta_{j}^{2}}. $$

Institutions for aid allocation: We consider three possible institutions for allocating aid: Bilateral, Multilateral-Unanimity and Multilateral-Majority. Under all institutions, donor countries make aid allocation decisions after recipient countries choose investment levels and {qj} is revealed. Under the multilateral institutions, we assume that donor countries bargain over aid allocation decisions, and that the donor countries have access to utility transfers. We discuss this assumption in detail in the following subsection. Under Bilateral aid allocation, donor countries simultaneously allocate their individual budgets, x, over the set of recipient-country projects. Country i's choices are represented by the set {ai, j}, with \({\sum}_{j} a_{i,j} \leq x\).

Multilateral-unanimity

Under multilateral aid allocation, donor countries commit to allocating the joint aid budget, 3x, centrally. Under Multilateral-Unanimity, donor countries bargain over the allocation of the centralized aid budget à la Nash, subject to the condition that \({\sum}_{j} a_{j} \leq 3x\).

Multilateral-majority

If the donor countries use a majority rule then the budget allocation is determined according to the following procedure: One donor country is randomly chosen as formateur, F. Each donor country has an equal probability of being chosen.
The formateur then selects a majority coalition, and bargaining over project funding and utility transfers occurs within the majority coalition.Footnote 28 Therefore, allocation under Majority is comparable to a two-country fund, consisting of the majority coalition, bargaining over the allocation of aid funding via Unanimity. However, since the majority coalition is chosen endogenously, Majority features the additional step of the formateur selecting the majority coalition that maximizes her expected utility. We assume that if the formateur is indifferent regarding which countries to include in the majority coalition, she chooses each country with equal probability. Also, utility transfers are restricted to the majority coalition, which implies that countries outside the majority coalition are not fully expropriated. To summarize, the timing of Multilateral-Majority is as follows: (1) Recipient countries choose δj and qj realizes. (2) A donor country is randomly chosen as formateur, F. (3) The formateur selects a majority coalition, \({\mathcal{M}}\). (4) The majority coalition bargains over the allocation of project funding à la Nash. We denote the subset of donor-country strategies that pertain to the choice of a majority coalition as \({\mathcal{M}}\).

Equilibrium: The equilibrium we utilize is analogous to sub-game perfect Nash equilibrium, with the exception that, if the donor countries allocate aid via a multilateral aid fund, the allocation decision is determined via Nash Bargaining. We restrict the analysis to symmetric equilibria, where all recipient countries choose the same level of investment and, under Multilateral-Majority, all donor countries play a symmetric strategy, \({\mathcal{M}}(\{q_{j}\})\), conditional upon being selected as the formateur, which specifies the majority coalition chosen as a function of the realized project qualities.
Lastly, we assume that the donor countries have access to utility transfers, and have a threat-point payoff of zero—in the sense that if donor countries do not agree on an allocation, then no aid is allocated and donor countries receive a payoff of ui(ai, qi, δi) = gδi. Throughout the analysis, we consider the objective of the donor countries rather than, say, an objective of maximizing aggregate utility. We argue that this is a natural objective to consider when analyzing the political economy of multilateral aid funds. However, this does not imply that investments carried out by the recipient countries should be considered as non-productive for the population of these countries: While we assume that investment in good governance is costly for the regime of the recipient country, it is possible that these reforms provide utility benefits to the recipient countries' population by increasing the effectiveness of the public sector.

Discussion of key assumptions: Our analysis relies on two key assumptions: (1) that donor countries can commit their aid budgets to a multilateral organization, and (2) that the multilateral organization bargains over the allocation of aid spending and that the donor countries are able to transfer utility to each other through some dimension other than aid allocation. We discuss the robustness of our analysis at the end of the analysis section, and show how our results are robust to relaxing assumptions such as symmetric donor countries and partial bias. Also, assumption (1) is a basic premise of our analysis, and is consistent with the observation discussed above that donor countries cannot typically unilaterally withdraw aid funding that they commit to an international organization. Our second key assumption, however, deserves a more detailed discussion. First, regarding the choice of an appropriate threat point, we follow the prescription of Binmore et al. (1986), who provide a non-cooperative foundation for the Nash Bargaining solution.
Specifically, threat-point payoffs of zero are appropriate for modeling bargaining when (1) there is no risk of an exogenous breakdown of negotiations, or (2) agents receive payoffs of zero from bargaining in case of an exogenous breakdown. Note that in our model, a threat-point of zero does not imply that donor countries receive a utility of zero; rather, if bargaining breaks down, a threat-point of zero corresponds to donor countries receiving no payoff from aid spending (i.e., there is no utility surplus from the bargaining), but they may still receive positive utility from the direct value of recipient-country investment (gδi). The direct value of recipient-country investment received by the donor countries, however, does not impact the bargaining outcome, since the donor countries receive gδi whether they agree to a bargain or not. In cases where there is both a risk of exogenous breakdown and budget contributions are returned to agents following a breakdown, the relevant threat-point is the utility countries receive from spending their budget contribution bilaterally. However, if the fund retains contributions even if negotiations were to break down, then a threat-point of zero is appropriate.Footnote 29 As we discuss in the introduction, the example of multilateral aid funds is consistent with a threat-point of zero: In most cases, disagreement over the allocation of aid funding does not result in an automatic liquidation of the multilateral fund and a reimbursement of the donor countries' contributions. While some funds, in particular funds tied to regional banks, do have stipulations to return donations in the case of liquidation, the liquidation decisions are generally made via a weighted majority rule and are independent of allocation decisions. Second, regarding the assumption that donor countries have access to utility transfers, we believe this to be an appropriate assumption when it comes to international organizations.
The assumption of utility transfers effectively implies that donor countries can jointly bargain over aid allocation and some other dimension, so that donor countries can compensate each other by making concessions in that other dimension. Therefore, utility transfers are typically considered an appropriate assumption in settings where agents interact across multiple areas, or where agents are able to directly transfer money to each other. Arguably, donor countries do interact in other dimensions than just aid (the EDF discussed in the introduction is one such example); however, in the context of multilateral aid organizations the "other dimension" has a clear interpretation as the contributions to the central budget. That is, while we have assumed that donor countries simply give their aid budgets to the multilateral organization, a more realistic (but more complicated) model would also take into account bargaining over who contributes to the aid budget. Note that if countries simultaneously bargain over how much each donor country contributes to the aid budget and how to allocate aid spending, then budget contributions will function as utility transfers, and the allocation of aid spending will be identical to the prediction of our model (which assumes utility transfers). Accordingly, our model can be considered a simplified version of a more general setting where country representatives bargain over both allocations and contributions to a fixed budget. This may be especially relevant for multilateral funds that need to raise money for their budget on a regular basis—in these cases, it is likely that bargaining over the budget allocation and budget contributions is linked. However, in some cases, multilateral funds may bargain over budget contributions and aid allocation independently, and it is unclear that donor countries have access to another method of transferring utility.
In the appendix of a previous version of this paper (Dreher et al. 2018), we analyze a model with the alternative assumption of no utility transfers and show that the Nash Bargaining solution is responsive to recipient-country investment even when utility is non-transferable. However, our comparison of Unanimity and Majority does rely on the assumption of some degree of utility transfers between donor countries since, without utility transfers, the formateur would be indifferent as to which other donor country joined the majority coalition.Footnote 30 We solve the model by backward induction, and thus begin with the allocation decisions.

Allocation of aid

In the last step, donor countries decide how to allocate the aid budget to the set of projects {ai,1, ai,2, ai,3}. Importantly, the donor countries are committed to allocate bilaterally or multilaterally, and cannot at this point reverse their decision on the allocation mechanism. If countries allocate their aid budgets bilaterally, each donor country i solves: $$ \begin{array}{@{}rcl@{}} \max_{\{a_{i,j}\}}&&\quad \sqrt{q_{i} a_{i}}, \end{array} $$ $$ \begin{array}{@{}rcl@{}} \text{s.t.}&& \quad \sum\limits_{j} a_{i,j} \leq x. \end{array} $$ This maximization problem shows that, irrespective of the project quality, each donor country i will spend all of its budget in its preferred recipient country i, i.e., ai = x. This result already alludes to the Samaritan's dilemma donor countries face when allocating their aid budget bilaterally: The preferred recipient country will receive the whole aid budget regardless of the level of reform effort it implements. Even though donor countries are free to allocate to a different country, their preferences make it impossible to commit to conditionality. If donors have decided to allocate their aid budgets through a joint fund, at this stage they bargain over the allocation of the aggregate budget to the set of recipient country projects.
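Before moving to the bargaining solution, the bilateral corner solution can be illustrated with a minimal numerical sketch (our own illustration, not code from the paper; the helper name is hypothetical):

```python
import numpy as np

# Our own numerical sketch (not from the paper): under Bilateral allocation,
# donor i's utility sqrt(q_i * a_{i,i}) is increasing in own-country aid,
# so the whole budget x goes to the preferred recipient for any quality q_i.
def bilateral_best_response(q_i, x, grid=201):
    own = np.linspace(0.0, x, grid)       # candidate spending on country j = i
    return own[np.argmax(np.sqrt(q_i * own))]

for q in (1.0, 4.0):                      # low (q = 1) and high (q = h^2 = 4)
    assert bilateral_best_response(q, x=2.0) == 2.0
```

The corner solution is independent of q, which is exactly the commitment problem the text describes.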
The Nash Bargaining (NB henceforth) outcome maximizes the sum of utilities of the bargaining parties:Footnote 31 $$ \begin{array}{@{}rcl@{}} \max_{\{a_{i}\}}&&\quad\sum\limits_{i} \sqrt{q_{i} a_{i}}, \end{array} $$ $$ \begin{array}{@{}rcl@{}} \text{s.t.}&&\quad \sum\limits_{j} a_{j} \leq 3x. \end{array} $$ Note that the indirect utility benefit of recipient-country investment is not factored into the NB outcome, since gδi is independent of the allocation decision (i.e., the indirect utility benefit of investment is part of each donor country's outside option). Solving the maximization problem above gives the NB allocation outcome: $$ a_{i} = \left( \frac{q_{i}}{q_{1} + q_{2} + q_{3}}\right)3x. $$ This equation shows that the division of funds under NB depends on the realized project qualities. Therefore, unlike with bilateral aid allocation, the expected share of the total aid budget that each recipient country receives is sensitive to its investment δj. Since utility transfers are possible, donor countries share the created surplus equally. That is, Nash Bargaining allocates the aid budget to maximize the joint surplus, and specifies utility transfers such that all donor countries receive an equal share of the surplus. Formally, each receives: $$ u_{i} = \frac{1}{3}{\sum}_{k}\sqrt{q_{k}a_{k}}. $$ The aid allocation stage of Multilateral-Majority is identical to Multilateral-Unanimity, with the exception that bargaining over aid allocation only takes place within the majority coalition. Since donor countries only value aid spending in their respective recipient country, only recipient countries whose donor country is in the majority coalition will receive a positive level of aid funding. Specifically, take a majority coalition of \({\mathcal{M}} = \{i,k\}\).
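Before turning to the majority case, the Unanimity NB shares just derived can be verified by brute force (our own sketch; the helper names are hypothetical):

```python
import numpy as np

# Our own check (not from the paper) that the closed-form NB allocation under
# Unanimity, a_i = 3x * q_i / (q_1 + q_2 + q_3), maximizes the joint surplus
# sum_i sqrt(q_i * a_i) on the budget set sum_i a_i = 3x.
def nb_allocation(q, x):
    q = np.asarray(q, dtype=float)
    return 3 * x * q / q.sum()

def grid_search(q, x, steps=150):
    budget, best, best_val = 3 * x, None, -np.inf
    for a1 in np.linspace(0, budget, steps):
        for a2 in np.linspace(0, budget - a1, steps):
            a = np.array([a1, a2, budget - a1 - a2])
            v = np.sum(np.sqrt(np.array(q) * a))
            if v > best_val:
                best, best_val = a, v
    return best

q, x = [4.0, 1.0, 1.0], 1.0            # one high-quality project (h^2 = 4)
closed = nb_allocation(q, x)           # -> [2.0, 0.5, 0.5]
assert np.allclose(closed, grid_search(q, x), atol=0.1)
```

The high-quality project absorbs two thirds of the joint budget, which is the quality-sensitivity the text emphasizes.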
NB among the majority coalition solves the following maximization problem: $$ \begin{array}{@{}rcl@{}} \max_{\{a_{j}\}}&&\quad\sum\limits_{j \in \mathcal{M}} \sqrt{q_{j} a_{j}},\\ \text{s.t.}&&\quad \sum\limits_{j \in \mathcal{M}} a_{j} \leq 3x, \end{array} $$ which results in the following aid allocation for i = j in the majority coalition: $$ a_{j} = \left( \frac{q_{j}}{q_{i} + q_{k}}\right)3x \text{ if } j \in \mathcal{M}. $$ That is, if the donor country with i = j is in the majority coalition then aj is defined by the above expression, and if the donor country with i = j is not in the majority coalition then aj = 0. Expression (8) characterizes aid allocation given a majority coalition. However, to fully characterize aid allocation as a function of {qj}, we must also detail the endogenous formation of the majority coalition. First note that donor country \(i\notin {\mathcal{M}}\) receives a utility of 0 since ai = 0. Therefore, F (the formateur) will select itself into the majority coalition (\(F \in {\mathcal{M}}\)). Next, consider F's choice of the additional country in the majority coalition. Take the majority coalition to be \({\mathcal{M}} = \{F,k\}\). Similar to Multilateral-Unanimity, NB results in an equal split of the utility surplus among the donor countries in the majority coalition. That is, F's utility is equal to: $$ u_{F} = \frac{1}{2} \left( \sqrt{q_{F}a_{F}} + \sqrt{q_{k} a_{k}}\right). $$ Given expression (9), it follows that to maximize the utility surplus of the majority coalition, the formateur will select the donor country whose recipient country's project has the highest quality. It might seem counter-intuitive that the formateur selects the donor country whose recipient country's project has the highest quality into the majority coalition, since recipient countries with high-quality projects will receive a higher amount of aid.
However, since the donor countries have access to utility transfers, and donors in the majority coalition split the utility surplus equally, it is in the best interest of the formateur to maximize the joint surplus by selecting a donor country whose recipient country's project has high quality. The following lemma summarizes the endogenous choice of the majority coalition (\({\mathcal{M}}(\{q_{j}\})\)). Lemma 1 Take donor countries {F, k1, k2} with corresponding project qualities \(\{q_{F},q_{k_{1}},q_{k_{2}}\}\). The formateur will select a majority coalition \({\mathcal{M}} = \{F,i\}\) with: $$i = \left\{ \begin{array}{lr} k_{1} \text{ if } q_{k_{1}} = h^{2}, q_{k_{2}} = l,\\ k_{2} \text{ if } q_{k_{1}} = l, q_{k_{2}} = h^{2},\\ k_{1}/k_{2} \text{ with probability 1/2 if } q_{k_{1}} = q_{k_{2}}. \end{array} \right. $$ Recipient countries move simultaneously when deciding their investment levels δj and take into account how their expected share of aid spending will change with the investment they make. Since recipient countries have linear utility, each recipient country chooses δj to maximize their expected aid minus the cost of investment: $$ \begin{array}{@{}rcl@{}} \max_{\delta_{j}} \left\{E[a_{j}|\{\delta_{k}\}]-{\delta_{j}^{2}}\right\}. \end{array} $$ That is, given the investment decisions of the other recipient countries, country j will select a level of investment that maximizes the return of investment—i.e., the expected aid spending—minus the cost of investment.

Bilateral aid

When aid is allocated bilaterally, recipient countries know that the donor who prefers to allocate aid to their project will do so irrespective of the quality of the projects {qj}. That is, recipient countries solve problem (10) given aj = x for all {qj}. When aid is allocated bilaterally, recipient countries do not invest in reforms; i.e., δj = 0 for all j.
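As a side note, the coalition choice in Lemma 1 above translates directly into code (a sketch with our own helper names; low quality is q = 1 and high quality is h² > 1):

```python
import random

# Our transcription of Lemma 1: the formateur adds the donor whose recipient
# has the highest realized project quality; ties are broken uniformly at random.
def choose_partner(q_k1, q_k2, rng=random.Random(0)):
    if q_k1 > q_k2:
        return "k1"
    if q_k2 > q_k1:
        return "k2"
    return rng.choice(["k1", "k2"])    # equal qualities: probability 1/2 each

h2 = 4.0                               # an assumed high quality h^2 = 4
assert choose_partner(h2, 1.0) == "k1"
assert choose_partner(1.0, h2) == "k2"
assert choose_partner(h2, h2) in ("k1", "k2")
```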
This is a classic Samaritan's problem in the context of aid conditionality—aid allocation does not change with investment—which illustrates that in the presence of donor country bias, donor countries face significant difficulties implementing conditionality when allocating aid bilaterally. In contrast to Bilateral aid allocation, under Multilateral allocation, aid spending is a function of the realized quality of the recipient countries' projects. Therefore, recipient countries will internalize the effect that their reform efforts have on the final allocation of the fund's budget and choose δj to maximize their expected utility, taking the allocation rule of the donor countries and the investment decisions of the other recipient countries as given. To analyze recipient-country investment under Multilateral allocation, we introduce some additional notation to facilitate the analysis. Take Qj ∈ {0, 1, 2} to be equal to the number of recipient countries other than j that realize high quality projects. Next, given that we have characterized the aid allocation that results for any given set of project qualities, we introduce a functional notation for the equilibrium level of aid spending given a set of project qualities. That is, take aj({q}) to be the NB allocation to j given project qualities \(\{q\} \equiv \{q_{j},q_{k_{1}},q_{k_{2}}\}\) under Unanimity, and {q} ≡ {qj, qk} under Majority given majority coalition of \({\mathcal{M}} = \{j,k\}\). The function aj({q}) is characterized by Expressions (6) and (8) in the previous subsection for Unanimity and Majority, respectively. Following backward induction, we then characterize the equilibrium level of investment chosen by the recipient countries, who take aj({q}) as given.
That is, given the results of the previous section we are able to specify an investment strategy under Unanimity that is a symmetric best response by characterizing the level of investment, δj, that solves: $$ \max_{\delta_{j}}\{E[a_{j}(\{q\})|\delta_{j}, \delta^{u},\delta^{u}]-{\delta_{j}^{2}}\}, $$ and that satisfies δj = δu, which gives δu as the equilibrium level of investment under Unanimity. We leave the details of solving for δu to the Appendix. However, the main insight from the analysis is that, in contrast to bilateral aid, multilateral aid results in a positive level of recipient-country investment. The reason for this finding is that under multilateral aid, the expected amount of aid given to country j is increasing in the level of investment, δj. We illustrate the intuition with the following example: assume that j knows with certainty that one of the other recipient countries' projects will be low quality and the other will be high quality (Qj = 1). In this case, when choosing their optimal level of investment, they will select δj to maximize the expected sum of aid spending on their project, aj(qj, h, l), minus the cost of investing, captured by the following expression: $$ \begin{array}{@{}rcl@{}} E[a_{j}(q_{j}, h,l)|\delta_{j}] - {\delta_{j}^{2}} &=& \delta_{j} a(h,h,l) + (1-\delta_{j})a(l,l,h) - {\delta_{j}^{2}}\\ &=& \delta_{j} (a(h,h,l)-a(l,l,h)) + a(l,l,h) - {\delta_{j}^{2}}. \end{array} $$ This shows that the first term, E[aj(qj, h, l)|δj], is increasing in δj since a(h, h, l) > a(l, l, h). This implies that, given Qj = 1, j will select a positive level of investment, δj. Moreover, the same is true for Qj = 0,2, which implies that the equilibrium level of investment, δu, will be positive, since a higher level of investment leads to a higher level of aid.
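The positive slope in this example can be checked directly from the Unanimity NB shares (our own code; `a` is a hypothetical helper for the three-country share):

```python
# Quick check of the example above: with Unanimity NB shares
# a(q_j, q_2, q_3) = 3x * q_j / (q_j + q_2 + q_3), the slope a(h,h,l) - a(l,l,h)
# of the expected-aid term is positive, so E[a_j | delta_j] rises with delta_j.
def a(qj, q2, q3, x):
    return 3 * x * qj / (qj + q2 + q3)

h2, l, x = 4.0, 1.0, 1.0          # assumed values: h^2 = 4, low quality 1, budget 1
slope = a(h2, h2, l, x) - a(l, l, h2, x)
assert slope > 0                   # here: 3*4/9 - 3*1/6 = 4/3 - 1/2 > 0
```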
Specifically, expanding on our simple example, given that the other two recipient countries select investment levels δu, recipient-country j will select δj to maximize the following expression: $$ \begin{array}{@{}rcl@{}} E[a_{j}|\delta_{j}, \delta^{u},\delta^{u}]^{u} - {\delta_{j}^{2}}&=& p(Q_{j} = 0|\delta^{u})\left[\delta_{j} \left( a(h,l,l)-a(l,l,l)\right) + a(l,l,l)\right] \\ &+& \!p(Q_{j} = 1|\delta^{u})\left[\delta_{j} \left( a(h,h,l)-a(l,l,h)\right) + a(l,l,h)\right]\\ &+& \!p(Q_{j} = 2|\delta^{u})\left[\delta_{j} \left( a(h,h,h) - a(l,h,h)\right) + a(l,h,h)\right] - {\delta_{j}^{2}}. \end{array} $$ This expression allows us to solve for the unique symmetric equilibrium level of investment under Unanimity using the first-order conditions, an exercise we leave for the Appendix but summarize in the following lemma: Lemma 3 Under Multilateral-Unanimity, the unique equilibrium level of investment is characterized by:Footnote 32 $$ \delta^{u} = \min\left\{1, \frac{(h^{2}-1)(2h^{2}+1)x}{2+5h^{2}+2h^{4} + (h^{2}-1)^{2}x}\right\}. $$ Again, Lemma 3 illustrates one of the key insights of the analysis: When aid is allocated through a multilateral fund, the bargaining process induces competition between the recipient countries. Under Multilateral-Unanimity they thus have an incentive to invest in reforms in order to secure a larger share of the budget. Next, we illustrate how a majority rule in the multilateral fund impacts the recipient countries' incentive to invest in good governance. Under Majority, recipient countries have two reasons to invest in good governance: first, analogous to Unanimity, conditional upon i = j being selected to the majority coalition, investment increases the expected allocation aj; second, as illustrated by Lemma 1, higher investment increases the probability of i = j being selected to the majority coalition. Here we detail the impact of this second channel on recipient-country investment under majority rule.
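The closed form in Lemma 3 can be sanity-checked numerically. The sketch below is our own reconstruction (not the paper's Appendix): it assumes Qj follows a Binomial(2, δ) distribution when both opponents invest δ, uses the Unanimity NB shares aj = 3x·qj/(qj + q2 + q3), and verifies that δu satisfies the symmetric first-order condition (marginal expected aid gain equals the marginal cost 2δ):

```python
# Our own sanity check of Lemma 3 (a reconstruction, not the paper's code):
# with success probability d and two symmetric opponents, Q_j ~ Binomial(2, d).
def a(qj, q2, q3, x):
    return 3 * x * qj / (qj + q2 + q3)      # Unanimity NB share

def delta_u(h, x):
    t = h ** 2
    return min(1.0, (t - 1) * (2 * t + 1) * x
               / (2 + 5 * t + 2 * t ** 2 + (t - 1) ** 2 * x))

def foc_residual(d, h, x):
    t, l = h ** 2, 1.0
    slope = ((1 - d) ** 2 * (a(t, l, l, x) - a(l, l, l, x))       # Q_j = 0
             + 2 * d * (1 - d) * (a(t, t, l, x) - a(l, l, t, x))  # Q_j = 1
             + d ** 2 * (a(t, t, t, x) - a(l, t, t, x)))          # Q_j = 2
    return slope - 2 * d                     # marginal cost of investment is 2d

for h, x in [(1.5, 1.0), (2.0, 0.5), (3.0, 0.2)]:
    d = delta_u(h, x)
    assert 0 < d < 1                         # interior solution on this grid
    assert abs(foc_residual(d, h, x)) < 1e-9
```

Under these assumptions the residual vanishes up to floating-point error, consistent with the lemma.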
As above, we illustrate the intuition with an example: again, assume that country j knows with certainty that one of the other recipient countries' projects will be low quality and the other will be high quality (Qj = 1). Additionally, assume that j knows with certainty that i = j will be chosen as the formateur. In this case, j's expected level of aj as a function of δj is analogous to Unanimity, since i = j is always in the majority coalition. That is: $$ E[a_{j}|\delta_{j}] = \delta_{j} (a(h,h)-a(l,h)) + a(l,h). $$ Note that E[aj|δj] is still increasing in δj, since a(h, h) > a(l, h). Next assume that i ≠ j with a high-quality project is selected as the formateur. In this case, j only receives positive aid funding with certainty if their project has high quality; that is, in this case: $$ E[a_{j}|\delta_{j}] = \delta_{j} (a(h,h)-\frac{1}{2}a(l,h)) + \frac{1}{2}a(l,h). $$ Comparing the two cases, we see that the marginal rate of return on investment is higher when i ≠ j with a high-quality project is selected as the formateur—\(\partial E[a_{j}|\delta _{j}]/\partial \delta _{j} = a(h,h)-\frac {1}{2} a(l,h)\)—than when i = j is chosen as the formateur—∂E[aj|δj]/∂δj = a(h, h) − a(l, h). This illustrates the additional incentive to invest provided by Majority relative to Unanimity.
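The slope comparison in this example is easy to verify numerically (our own sketch; `a2` is a hypothetical helper for the two-country NB share a(qj, qk) = 3x·qj/(qj + qk)):

```python
# Illustration of the two formateur cases above: the marginal return to
# investment is larger when a high-quality outsider is formateur, because
# investment then also raises the chance of entering the majority coalition.
def a2(qj, qk, x):
    return 3 * x * qj / (qj + qk)

h2, l, x = 4.0, 1.0, 1.0                                   # assumed h^2 = 4
slope_self_formateur = a2(h2, h2, x) - a2(l, h2, x)        # a(h,h) - a(l,h)
slope_high_formateur = a2(h2, h2, x) - 0.5 * a2(l, h2, x)  # a(h,h) - a(l,h)/2
assert slope_high_formateur > slope_self_formateur
```

The gap between the two slopes equals a(l,h)/2, the extra return created by the coalition-selection channel.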
Below we list the expression for the expected utility of the recipient country as a function of the level of investment δj, which is similar to the expression for Unanimity, with the exception that in cases where i = j is not chosen as the formateur, there is an additional incentive to invest to increase the probability that i = j is chosen to the majority coalition: $$ \begin{array}{@{}rcl@{}} E[a_{j}|\delta_{j}, \delta^{m},\delta^{m}]^{m} - {\delta_{j}^{2}}\!&=&\!p(Q_{j} = 0|\delta^{m})\left[\delta_{j} \left( a(h,l) -\frac{2}{3} a(l,l)\right) +a(l,l)\right] \\ &+&\!p(Q_{j} = 1|\delta^{m})\left[\delta_{j} \left( a(h,h) -\frac{1}{2} a(l,h)\right) +\frac{1}{2} a(l,h)\right] \\ &+&\!p(Q_{j} = 2|\delta^{m})\left[\!\delta_{j} \left( \frac{2}{3}a(h,h) - \frac{1}{3} a(l,h)\right) + \frac{1}{3} a(l,h)\!\right] - {\delta_{j}^{2}}. \end{array} $$ As above, we leave the details of solving for the unique symmetric equilibrium to the Appendix, and summarize the result in the following lemma. Lemma 4 Under Multilateral-Majority, the unique equilibrium level of investment is characterized by: $$ \delta^{m} = \min\left\{1, \frac{(2h^{2}-1)x}{2+2h^{2}+(h^{2}-1)x}\right\}. $$ In the next subsection, we utilize the characterization results in Lemmas 3 and 4 to compare investment levels under Unanimity and Majority and detail the optimal decision rule.

Optimal decision rules

Given the characterization results above, we are able to compare the level of investment under Bilateral, Multilateral-Unanimity and Multilateral-Majority and introduce our first main result in the following proposition: Proposition 1 The equilibrium level of investment under Majority, δm, is weakly greater than the level of investment under bilateral aid and the level of investment under Unanimity, δu.
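This ranking can be checked against the closed forms stated in Lemmas 3 and 4 (our own numerical sketch; `delta_u` and `delta_m` just transcribe the two lemmas):

```python
# Our own numerical check of Proposition 1: investment under Majority weakly
# exceeds investment under Unanimity (which in turn exceeds the zero investment
# under Bilateral aid), and the two forms share the large-h limit min{1, 2x/(2+x)}.
def delta_u(h, x):
    t = h ** 2
    return min(1.0, (t - 1) * (2 * t + 1) * x
               / (2 + 5 * t + 2 * t ** 2 + (t - 1) ** 2 * x))

def delta_m(h, x):
    t = h ** 2
    return min(1.0, (2 * t - 1) * x / (2 + 2 * t + (t - 1) * x))

for h in [1.01, 1.5, 2.0, 5.0, 10.0]:
    for x in [0.5, 1.0, 2.0, 5.0]:
        assert delta_m(h, x) >= delta_u(h, x) >= 0.0

# the gap vanishes as h grows: both approach 2x/(2+x) = 0.4 for x = 0.5
assert abs(delta_m(1000.0, 0.5) - 0.4) < 1e-4
assert abs(delta_u(1000.0, 0.5) - 0.4) < 1e-4
```

For instance, at h² = 2 and x = 1 the formulas give δm = 3/7 ≈ 0.43 against δu = 5/21 ≈ 0.24.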
Intuitively, recipient countries have a higher incentive to invest under Majority since majority rule incentivizes them via two channels: (1) to increase the expected allocation to aj, conditional upon i = j being selected to the majority coalition (analogous to Unanimity), and (2) to increase the probability of i = j being selected to the majority coalition. The addition of the second channel implies that recipient-country investment is always higher under Majority relative to Unanimity. The comparison of Unanimity and Majority is also illustrated visually in Fig. 1, where we vary the value of the high-quality project, h, which represents the indirect value of investment to donor countries, for a fixed x. Reading the figures right-to-left, we see that under Unanimity, as h → 1 the incentive to invest approaches zero for the recipient countries, since aj approaches x for any qj. Under Majority, however, the incentive to invest stays strictly positive, since any country with qj = 1 is more likely to be left out of the majority coalition and receive aj = 0.

Fig. 1: δm and δu (dashed line) for x = 2

In Section 2, we established the novel empirical result that multilateral funds that allocate aid spending via majority rule are more sensitive to institutional quality in recipient countries. Intuitively, this empirical result suggests that recipient countries' incentive to invest is higher under majority rule, consistent with Proposition 1. However, to show that our model can rationalize this empirical finding specifically, we also prove the following related result: Proposition 2 At the equilibrium levels of investment δu and δm, expected aid spending is more responsive to recipient-country investment under Majority. That is: $$ \partial E[a_{j}|\{\delta^{u}\}]^{u}/ \partial \delta_{j} < \partial E[a_{j}|\{\delta^{m}\}]^{m}/ \partial \delta_{j}.
$$ Lastly, note that Propositions 1 and 2 only establish that Majority results in a higher level of investment—they do not show that it is optimal for multilateral funds to adopt a majority rule: While Majority results in a higher degree of recipient-country investment, it does not necessarily result in a higher level of donor-country welfare. We explore donor-country welfare under the two different decision rules in the following section.

Comparing donor-country welfare

To determine the optimal decision rule, we compare donor-country utility under the different decision rules and show that under Majority, there exists a tradeoff between higher investment, characterized in Proposition 1, and a utility loss relative to Unanimity that stems from the fact that Majority limits funding to the two recipient countries in the majority coalition. Note that due to the higher investment levels under Majority, a crucial determinant of the optimal decision rule will be to what extent the donor countries place an independent value on reform in the recipient countries, captured by g in the utility function of the donor countries. As we discuss in the introduction, donor countries may value investment directly since an increase in good governance may have significant spillover effects to areas other than project quality. Here, we characterize the optimal decision rule as a function of g, the direct value of investment in good governance to donor countries, and h, which represents the indirect value of investment to donor countries (indirect in the sense that the probability that h is realized is a function of investment). That is, as h increases, the expected value of recipient-country investment also increases since the donor countries receive a higher benefit from the project realizing high quality. The impact of g on the optimal decision rule is straightforward: as the direct value of investment increases, majority rule becomes more beneficial to donor countries.
The impact of h, however, is less straightforward. This is because as h increases, the incentive to invest increases under both Majority and Unanimity and, as illustrated in Fig. 1, the difference in investment under Majority and Unanimity is actually decreasing in h—by L'Hôpital's Rule, both δm and δu approach \(\min \limits \{1,2x/(2+x)\}\) as \(h \to \infty \). This suggests that if Majority outperforms Unanimity, then it will do so for intermediate values of h: When the productivity of investment (h) is very low then there is little benefit from incentivizing recipient countries to invest, and when the productivity of investment is very high then investment levels are high under both Majority and Unanimity. This implies that the relative benefit of instituting Majority for donor countries will be the highest for intermediate values of productivity of recipient-country investment, when the difference in the relative levels of investment between Majority and Unanimity and the benefit of investment are both high. This intuition is formalized in the following proposition, which considers the utility difference between the two decision rules holding x fixed and varying h and g: There exists g∗ such that, if and only if g > g∗, there exists an interval (1, ϕ] for some ϕ > 1 such that the expected utility of the donor countries is higher under Majority than Unanimity for h ∈ (1, ϕ]. This result is also illustrated in Fig. 2, which shows the expected utility under the two decision rules (upper graph) and the utility difference under the two voting rules (lower graph) as the productivity of investment (h) increases, for three different values of g. As we can see from the figure, the relative value of Majority is the highest for an intermediate value of h. Figure 2 also demonstrates that a range of h where Majority outperforms Unanimity need not exist. In fact, simulations show that if g = 0, then Unanimity is preferable to Majority for all x, h.
This suggests that the higher investment achieved via Majority will only be beneficial to donor countries who place an independent value on recipient-country reform in good governance.

The upper graphs show the expected utility for the donor country under Majority and Unanimity (dashed line) for x = 2 as h increases, given different direct values of investment (g = 0, 0.5). The lower graphs illustrate the relative utility of Majority, ΔE[u] = E[u]m − E[u]u.

Robustness of the formal results

In this section we discuss the robustness of the formal results to relaxing some of the main assumptions of the model.

Asymmetric donor countries

First, consider the case where donor countries have aid budgets of different size, xi. Note that since there is full commitment, in the sense that aid budgets are given to the multilateral institution with "no strings attached," this does not impact our formal results—the threat-point of Nash Bargaining remains zero and the results are unaffected. However, asymmetry in aid budgets may impact the decision to delegate to the multilateral institution. That is, while delegation results in higher investment, it also leads to an equal split of the utility surplus. Therefore, donor countries with relatively high aid budgets may be better off under bilateral aid and either opt out of the multilateral institution, or contribute only part of their aid budget to the multilateral institution. Second, it may be the case that not all donor countries have equal weight in the bargaining process. For example, donor countries who contribute more to the budget may negotiate for a higher bargaining weight in the aid allocation process. Asymmetric bargaining weights may decrease the competition between recipient countries, since recipient countries whose donor country has a high/low bargaining weight may face an aid allocation that is less sensitive to investment.
However, even with asymmetric weights, bargaining will still provide some incentive for recipient countries to invest, implying that delegation to a multilateral institution would still mitigate the Samaritan's problem. A similar intuition holds for asymmetric probabilities of being selected as the formateur under Majority. As is illustrated in the preceding section, this will decrease the incentive to invest for recipient countries with donor countries that have higher probabilities of being selected, but increase the incentive to invest for others, leaving the aggregate impact of this asymmetry unclear.

Partial bias

In our analysis, we make the assumption that donor countries only care about aid spending in a single recipient country. This makes the Samaritan's problem as stark as possible, and highlights the ability of multilateral aid to overcome the Samaritan's problem even in this extreme case. However, the qualitative results of our analysis hold even as this assumption is relaxed. Assume donor countries have the following utility functions over aid allocation: $$ u_{i}(\{a_{i}\}) = \sqrt{q_{i} a_{i}} + \epsilon\sum\limits_{j\neq i}\sqrt{q_{j} a_{j}}, $$ for some 𝜖 ∈ (0, 1). In this case donor countries place positive value on aid spending in all recipient countries, but favor their own recipient country (j = i) by placing a higher weight on aid spending in this recipient country. First, note that for 𝜖 small enough, under Bilateral aid it is still a best response for countries to give all aid to their favored recipient country regardless of project qualities, which shows that the Samaritan's problem still exists with partial bias. Next, note that under Unanimity, the allocation of aid spending maximizes \((1+2\epsilon ){\sum }_{i} \sqrt {q_{i} a_{i}}\); i.e., it results in the exact same allocation as in the above analysis, which shows that all results for Unanimity also hold for partial bias.
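Summing the partial-bias utilities over the three donor countries shows directly why the Unanimity objective is just a rescaling of \({\sum }_{i} \sqrt {q_{i} a_{i}}\); a quick numeric check (the quality, allocation, and 𝜖 values below are arbitrary illustrations, not calibrated values):

```python
import math

def donor_utility(i, q, a, eps):
    """Partial-bias utility: own recipient's term plus eps-weighted terms for the others."""
    own = math.sqrt(q[i] * a[i])
    others = sum(math.sqrt(q[j] * a[j]) for j in range(len(q)) if j != i)
    return own + eps * others

q, a, eps = [1.0, 2.0, 0.5], [3.0, 1.0, 2.0], 0.3  # arbitrary illustrative values
total = sum(donor_utility(i, q, a, eps) for i in range(3))
scaled = (1 + 2 * eps) * sum(math.sqrt(qi * ai) for qi, ai in zip(q, a))
assert abs(total - scaled) < 1e-12  # sum of utilities = (1 + 2*eps) * sum_i sqrt(q_i a_i)
```

Since the aggregate objective is a positive multiple of the unbiased one, maximizing it yields the same allocation as in the main analysis.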
Lastly, note that under Majority, the allocation of aid will maximize \((1+\epsilon ){\sum }_{i \in M} \sqrt {q_{i} a_{i}} + 2\epsilon \sqrt {q_{k} a_{k}}\), where k is not in the majority coalition. Therefore, the formateur will still select the highest-quality project into the majority coalition, and the recipient countries will have an additional incentive to invest under Majority relative to Unanimity, just as above. With a partial bias, however, the incentive to invest may be lower, since all recipient countries receive a positive level of aid from the majority coalition, although the recipient countries whose donor countries are in the majority coalition still receive more. In this paper, we consider a formal model of the allocation of aid spending in an environment where donor countries face a bias over which recipient countries receive funding. Our analysis provides several important insights regarding the optimal design of multilateral aid organizations. As highlighted by Svensson (2000, 2003), multilateral aid organizations must focus on distributing aid in a manner that provides an incentive for developing nations to invest in reform. However, given competing national and special party interests, the question is how to enforce this objective. Here, we show that competition in the area of reform can arise endogenously when donor countries directly bargain over the allocation of aid funds, and that this competition is intensified under majority rule, as recipient countries invest in reform to increase the probability that their project will be selected by the endogenous majority coalition. These findings are consistent with our novel empirical finding that aid allocation by organizations deciding via majority is more responsive to changes in recipient-country governance quality. 
From a policy perspective, our analysis suggests that a majority rule should only be adopted when there are high positive spillovers of recipient countries' investment in reform on other areas, such as promoting good bureaucratic practices more widely, and when investment in reform has an intermediate level of productivity—if productivity is very low, then the utility loss of only funding a subset of projects is not worth the gain from additional investment in reform; and if productivity is very high, then both Unanimity and Majority result in high levels of investment in reform. We emphasize that the predictions of our model only apply to multilateral aid funds that allocate aid spending via an unstructured bargaining process. In recent years, "earmarked" donations (aka multi-bi aid) have become increasingly common as donor countries seek to take advantage of the benefits of scale of international organizations, while ensuring that aid is distributed according to national priorities (Eichenauer and Hug 2018). However, as our paper shows, earmarking diminishes the incentive of recipient countries to invest in reforms, since it circumvents multilateral bargaining. Therefore, in this case, less structure can result in greater efficiency. Gang and Epstein (2009) instead model the optimal allocation of aid when aid is contractable. They show that allocating aid in proportion to the quality of governance (rather than allocating all aid to the country with better governance) is optimal as long as recipient countries are sufficiently asymmetric. Equivalently, donors might also be biased towards their own aid projects which are located in a subset of recipient countries, rather than valuing all aid to a recipient country equally. To correct for such bias in donor-country preferences and protect recipient countries, Auriol and Miquel-Florensa (2019) propose a tax scheme on unilateral projects. 
For example, multilateral funds are typically less politicized, and are managed by technocrats and experts who have a longer-term perspective; however, the appointment of technocrats, rather than politicized agents, to multilateral funds may be a direct result of the mechanism we highlight here. As we discuss in more detail in Section 3 below, our assumption is that if no agreement is reached over the allocation of aid spending, then there is no disbursement of aid and the aid budget is not reimbursed to the member nations. Some multilateral institutions do have provisions to reimburse donor countries if a decision is made to liquidate the multilateral fund. Importantly, however, disbursement does not occur automatically following disagreement over the allocation of aid spending; rather, the decision to liquidate the fund is taken independently of allocation decisions. We focus on the contrast between bilateral aid and different decision rules within multilateral aid funds and do not consider delegation to an independent third party. The effectiveness of delegation to an independent party in overcoming the Samaritan's problem depends crucially on the objective of that institution (see Svensson 2000)—here we consider the objective that endogenously emerges from direct bargaining between country representatives. Svensson (2003) details how competition over aid spending incentivizes investment in state capacity. In our setting, donor countries cannot induce such competition under bilateral aid due to the non-contractibility of investments. An earlier version of our paper also considered different levels of donor-country bias (Dreher et al. 2018)—to simplify the exposition, however, we focus on the results with equal donor-country biases. For a broad overview of the political economy of international organizations see Dreher and Lang (2019). 
In our robustness section, we show that even delegation to a multilateral fund whose members share similar, but not identical, preferences is sufficient to incentivize higher investment among recipient countries. At the same time they show that policy-selective aid improves policies. Also see Smets and Knack (2016). The Online Appendix is available on the Review of International Organizations' web page. Existing datasets do not cover a sufficient number of decision rules on how international organizations allocate funds. Blake and Lockwood-Payton (2015) include 26 banks in their data, but only 12 of those are included in the CRS. Hooghe et al. (2017) provide detailed data on decision-rules in 76 international organizations. Most of these organizations, however, do not provide funding to their members. Gould (2017) distinguishes unanimity from consensus, but again no detailed aid data are available for a sufficiently large number of organizations. Climate Investment Funds, Green Climate Fund, Nordic Development Fund, OSCE, and United Nations Relief and Works Agency for Palestine Refugees. A typical example is the statutory document of the Adaptation Fund (2018, p.6), which states that "Decisions of the Board shall be taken by consensus whenever possible." CRS data include a substantial amount of missing values in earlier years. For example, data on commitments are reported by 70 percent of the DAC members in 1995, 90 percent in the year 2000, and 100 percent since 2003. See http://www.oecd.org/dac/stats/crsguide.htm, last accessed September 24, 2018. In the regressions we report below, we have replaced years with missing information on aid commitments with zero. However, we also report results for a regression with positive values only. Our results are unchanged when we include only those IO-recipient-year observations as zeros that are reported in CRS. As we report in an earlier version of this paper (Dreher et al.
2018), our results are also unchanged when we use the absence of corruption or the level of democracy as indicators of institutional quality, when we focus on aid disbursements rather than commitments, and when we restrict the sample to years after 1999 or 2002 (with better data coverage). See Alesina and Dollar (2000), Dreher et al. (2011), or Faye and Niehaus (2012). We take these variables from the World Bank (2018). Another set of variables typically included in bilateral aid allocation studies includes proxies for colonial history or political relations between the donor and recipient. As our focus is on multilateral donors we exclude such variables here (though geopolitics has been shown to matter in some international organizations as well, see Vreeland and Dreher 2014). Given that the choice of control variables always involves discretion, we also show results excluding them. Of course, our analysis provides conditional correlations with the intention to motivate the formal model rather than causal effects. A number of omitted variables at the level of international organizations might affect how voting rules interact with institutional quality in determining aid commitments. The most obvious one is an organization's budget. While controlling for yearly totals of aid from an organization does not affect our results, this does not rule out potential effects of variables we cannot control for, such as the structure of organizations' decision-making committee or preferences of its most powerful members. Comparable work has frequently used Tobit models (e.g., Alesina and Dollar 2000). The inverse hyperbolic sine transformation makes it possible to keep zero observations in the sample without adding an arbitrary constant. It is defined as \(\tilde{x}=\ln (x+\sqrt{x^{2}+1})\) and is frequently used in recent applied research (see Bellemare and Wichman 2020). Note that the incidental parameter problem does not affect the coefficients of Tobit models (Greene 2004).
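The transformation described in the footnote is the inverse hyperbolic sine; a minimal implementation makes its zero-preserving property explicit (the function name is ours):

```python
import math

def ihs(x):
    """Inverse hyperbolic sine: ln(x + sqrt(x^2 + 1))."""
    return math.log(x + math.sqrt(x * x + 1))

# Unlike ln(x), the transform is defined at zero (ihs(0) = 0), so zero aid
# commitments stay in the sample without an arbitrary constant; for large x
# it behaves like ln(2x), so coefficients read approximately as elasticities.
assert ihs(0) == 0.0
assert abs(ihs(5) - math.asinh(5)) < 1e-12
```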
Due to missing data our regressions include a maximum of 45 donor organizations and 100 recipient countries, over the 1985-2016 period. The maximum number of observations is 63,986, of which 45,843 are zero; around 50 percent of the observations are allocated under consensus rule. Aid commitments increase with population size and per capita GDP, while exports are not significant at conventional levels. While results of a Logit model are similar, the interpretation of interacted variables is not straightforward in such non-linear models (Greene 2010). In column 8 of Table 1, "Triple interaction" shows the coefficient of Consensual decision-making*Bureaucratic quality*IO Budget; while the level of IO Budget is absorbed by the year fixed effects, interactions of Consensual decision-making*IO Budget and Bureaucratic quality*IO Budget are included but not shown to reduce clutter. This result is confirmed when we split the sample according to median aid commitments and estimate two separate regressions. The coefficient is significantly negative only in the sample with the larger budgets. The assumption of identical budgets is without loss of generality for our model, which considers the allocation decision given that donor countries commit their aid budgets to the multilateral fund. However, asymmetric budgets may impact the decision to join the fund, and we discuss asymmetric budgets at the end of the analysis in a section that considers the robustness of our results. Empirical studies disagree on whether and to what extent good governance improves the effectiveness of aid. Burnside and Dollar's (2004) finding that aid effectiveness is higher in countries with higher levels of institutional quality has frequently been overturned (see, e.g., Doucouliagos and Paldam 2010). An earlier version of this paper (Dreher et al. 2018) utilized a model with deterministic investments. 
The qualitative insights of the models with stochastic investments and deterministic investments are the same; however, there are no pure-strategy equilibria in the investment stage under Majority (we introduce the formal definitions for the different decision rules below). Therefore, we use a stochastic investment technology to make the analysis tractable and to more clearly illustrate the difference between Majority and Unanimity. Our model of decision-making under majority closely follows the example of Harstad (2005), who models majority decisions in organizations such as the European Union. In general, the "formateur" model of decision-making is commonly used to model bargaining under majority (see Baron and Ferejohn 1989). That is, we assume that donor countries have a fixed aid budget, and cannot costlessly produce a new aid budget to fund bilateral aid if negotiations in the multilateral fund break down. Without utility transfers, Nash Bargaining results in the allocation of aid spending that maximizes the product of the payoff the donor countries receive from aid spending (\(\sqrt {q_{i} a_{i}}\))—since the max is independent of scaling, the level of recipient-country investment (qi) will not impact the allocation of aid spending, and Nash Bargaining will result in the same level of aid spending to the formateur's preferred recipient country regardless of who they select to the majority coalition. A complete description of the NB outcome would include utility transfers; however, since there is no need to refer to them directly, we simplify the notation by not explicitly introducing the utility transfers. Note that if δu = 1 then the recipient's project will be of high quality with probability one; therefore, recipient countries will never select δu to be greater than one, which is why a min-function characterizes the equilibrium level of investment. The same is true for the equilibrium level of investment under Majority, which we characterize below. 
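The scale-independence of the Nash product noted above can be checked numerically: a grid search over the split of a fixed budget between two recipients returns the same allocation for any quality vector (a sketch with hypothetical numbers, not the paper's derivation):

```python
def nash_split(q, budget=1.0, steps=100):
    """Grid-search the split (a, budget - a) maximizing sqrt(q1*a) * sqrt(q2*(budget - a))."""
    best_a, best_val = 0.0, -1.0
    for k in range(1, steps):
        a = budget * k / steps
        val = (q[0] * a) ** 0.5 * (q[1] * (budget - a)) ** 0.5
        if val > best_val:
            best_a, best_val = a, val
    return best_a

# The product is maximized at the equal split regardless of project qualities q,
# so investment does not affect the Nash Bargaining allocation of aid spending:
assert nash_split([1.0, 1.0]) == nash_split([9.0, 0.1]) == 0.5
```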
Abbott, K.W., & Snidal, D. (1998). Why states act through formal international organizations. Journal of Conflict Resolution, 42(1), 3–32. Adaptation Fund. (2018). Rules of procedure of the adaptation fund board. https://www.adaptation-fund.org/wp-content/uploads/2018/04/Rules-of-procedure-of-the-Adaptation-Fund-Board.pdf, Accessed: September 2018. Aghion, P., & Tirole, J. (1997). Formal and real authority in organizations. Journal of Political Economy, 105(1), 1–29. Alesina, A., & Dollar, D. (2000). Who gives foreign aid to whom and why? Journal of Economic Growth, 5(1), 33–63. Alesina, A., & Weder, B. (2002). Do corrupt governments receive less foreign aid? American Economic Review, 92(4), 1126–1137. Annen, K., & Knack, S. (2018). On the delegation of aid implementation to multilateral agencies. Journal of Development Economics, 133, 295–305. Annen, K., & Knack, S. (2019). Better policies from policy-selective aid? World Bank Policy Research working paper 8889. Auriol, E., & Miquel-Florensa, J. (2019). Taxing aid to improve aid efficiency. Review of International Organizations, 14(3), 453–477. Barbera, S., & Jackson, M.O. (2006). On the weights of nations: Assigning voting weights in a heterogeneous union. Journal of Political Economy, 114(2), 317–339. Baron, D., & Ferejohn, J. (1989). Bargaining in legislatures. American Political Science Review, 83(4), 1181–1206. Bellemare, M.F., & Wichman, C.J. (2020). Elasticities and the inverse hyperbolic sine transformation. Oxford Bulletin of Economics and Statistics, 82, 50–61. Binmore, K., Rubinstein, A., & Wolinsky, A. (1986). The Nash bargaining solution in economic modelling. RAND Journal of Economics, 17(2), 176–188. Blake, D., & Lockwood-Payton, A. (2015). Balancing design objectives: Analyzing new data on voting rules in intergovernmental organizations. Review of International Organizations, 10, 377–402. Burnside, C., & Dollar, D. (2004). Aid, policies, and growth: Revisiting the evidence.
World Bank Policy Research paper number O-2834. Collier, P. (1997). The failure of conditionality. In Gwin, C., & Nelson, J. (Eds.) Perspectives on aid and development. Overseas Development Institute, Washington DC. Dietrich, S. (2013). Bypass or engage: Explaining donor delivery tactics in foreign aid allocation. International Studies Quarterly, 57(4), 698–712. Dietrich, S., & Murdie, A. (2017). Human rights shaming through INGOs and foreign aid delivery. Review of International Organizations, 12(1), 95–120. Doucouliagos, H., & Paldam, M. (2010). Conditional aid effectiveness: A meta-study. Journal of International Development, 22(4), 391–410. Dreher, A. (2009). IMF conditionality: Theory and evidence. Public Choice, 141(1-2), 233–267. Dreher, A., & Lang, V.F. (2019). The political economy of international organizations. In Congleton, R., Grofman, B., & Voigt, S. (Eds.) The Oxford handbook of public choice. Oxford University Press, Oxford, UK. Dreher, A., Lang, V.F., Rosendorff, B.P., & Vreeland, J.R. (2020). Bilateral or multilateral? International financial flows and the dirty-work hypothesis. Centre for Economic Policy Research paper 13290. Dreher, A., Nunnenkamp, P., & Thiele, R. (2011). Are 'new' donors different? Comparing the allocation of bilateral aid between NonDAC and DAC donor countries. World Development, 39(11), 1950–1968. Dreher, A., Simon, J., & Valasek, J. (2018). The political economy of multilateral aid funds. Centre for Economic Policy Research, discussion paper 13297. Easterly, W. (2003). Can foreign aid buy growth? Journal of Economic Perspectives, 17(3), 23–48. Eichenauer, V., & Hug, S. (2018). The politics of special purpose trust funds. Economics & Politics, 30(2), 211–255. Eichenauer, V., & Knack, S. (2018). Poverty and policy selectivity of World Bank trust funds. Journal of International Development, 30(4), 707–714. Faye, M., & Niehaus, P. (2012). Political aid cycles. American Economic Review, 102(7), 3516–3530. Fleck, R., & Kilby, C.
(2010). Changing aid regimes? US foreign aid from the Cold War to the War on Terror. Journal of Development Economics, 91(2), 185–197. Fuchs, A., Dreher, A., & Nunnenkamp, P. (2014). Determinants of donor generosity: A survey of the aid budget literature. World Development, 56, 172–199. Gang, I., & Epstein, G.S. (2009). Good governance and good aid allocation. Journal of Development Economics, 89(1), 12–18. Gould, E. (2017). What consensus? Explaining the rise of consensus decision-making in international organizations. Mimeo. Greene, W. (2004). Fixed effects and bias due to the incidental parameters problem in the Tobit model. Econometric Reviews, 23(2), 125–147. Greene, W. (2010). Testing hypotheses about interaction terms in nonlinear models. Economics Letters, 107(2), 291–296. Hagen, R. (2006). Samaritan agents? On the strategic delegation of aid policy. Journal of Development Economics, 79(1), 249–263. Harstad, B. (2005). Majority rules and incentives. Quarterly Journal of Economics, 120(4), 1535–1568. Headey, D. (2008). Geopolitics and the effect of foreign aid on economic growth: 1970-2001. Journal of International Development, 20(2), 161–180. Hoeffler, A., & Outram, V. (2011). Need, merit, or self-interest—What determines the allocation of aid? Review of Development Economics, 15(2), 237–250. Hooghe, L., Marks, G., Lenz, T., Bezuijen, J., Ceka, B., & Derderyan, S. (2017). Measuring international authority: A postfunctionalist theory of governance. Oxford: Oxford University Press. Knack, S. (2013). Building or bypassing recipient country systems: Are donors defying the Paris Declaration? World Bank Policy Research working paper 6423. Maggi, G., & Morelli, M. (2006). Self-enforcing voting in international organizations. American Economic Review, 96(4), 1137–1157. Milner, H.V., & Tingley, D. (2013). The choice for multilateralism: Foreign aid and American foreign policy. Review of International Organizations, 8(3), 313–341.
Molenaers, N., Dellepiane, S., & Faust, J. (2015). Political conditionality and foreign aid. World Development, 75, 2–12. Mosley, P., Harrigan, J., & Toye, J. (1995). Aid and power (2nd ed), Volume 1 of The World Bank and Policy-Based Lending. London: Routledge. Negre, M. (2013). Allocating the next European development fund for ACP countries. German Development Institute. Öhler, H., Nunnenkamp, P., & Dreher, A. (2012). Does conditionality work? A test for an innovative US aid scheme. European Economic Review, 56(1), 138–153. Pedersen, K. (1996). Aid, investment and incentives. Scandinavian Journal of Economics, 98(3), 423–437. PRS Group. (2017). International country risk guide (ICRG). https://epub.prsgroup.com/products/international-country-risk-guide-icrg, Accessed: September 2018. Reinsberg, B., Michaelowa, K., & Knack, S. (2017). Which donors, which funds? Bilateral donors' choice of multilateral funds at the World Bank. International Organization, 71(4), 767–802. Schneider, C., & Tobin, J. (2016). Portfolio similarity and international development aid. International Studies Quarterly, 60(4), 647–664. Schneider, C.J., & Slantchev, B.L. (2013). Abiding by the vote: Between-groups conflict in international collective action. International Organization, 67(4), 759–796. Smets, L., & Knack, S. (2016). World Bank lending and the quality of economic policy. Journal of Development Studies, 52(1), 72–91. Svensson, J. (2000). When is foreign aid policy credible? Aid dependence and conditionality. Journal of Development Economics, 61(1), 61–84. Svensson, J. (2003). Why conditional aid does not work and what can be done about it? Journal of Development Economics, 70(2), 381–402. Vreeland, J., & Dreher, A. (2014). The political economy of the United Nations Security Council: Money and influence. Cambridge: Cambridge University Press. World Bank. (2018). World Development Indicators. World Bank.
We thank the editor of the special issue "In memoriam: Stephen Knack", Christopher Kilby, three reviewers, Vera Eichenauer, Steffen Huck, Lennart Kaplan, Anders Olofsgård, Maria Perrotta Berlin and Giancarlo Spagnolo for their valuable comments and suggestions. Open Access funding enabled and organized by Projekt DEAL. Axel Dreher, Heidelberg University, Heidelberg, Germany; Jenny Simon, German Federal Ministry of Labor and Social Affairs, Berlin, Germany; Justin Valasek, Norwegian School of Economics (NHH), Bergen, Norway. Correspondence to Justin Valasek. The views expressed throughout the paper are the authors' and do not necessarily correspond to the views of the German Federal Ministry of Labor and Social Affairs or the German Government. This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). Dreher, A., Simon, J. & Valasek, J. Optimal decision rules in multilateral aid funds. Rev Int Organ (2021). https://doi.org/10.1007/s11558-020-09406-w. Keywords: Aid allocation; Aid effectiveness; Decision rules. JEL: H87.
An evaluation of methods correcting for cell-type heterogeneity in DNA methylation studies

Kevin McGregor, Sasha Bernatsky, Ines Colmegna, Marie Hudson, Tomi Pastinen, Aurélie Labbe & Celia M.T. Greenwood

Many different methods exist to adjust for variability in cell-type mixture proportions when analyzing DNA methylation studies. Here we present the results of an extensive simulation study, built on cell-separated DNA methylation profiles from Illumina Infinium 450K methylation data, to compare the performance of eight methods including the most commonly used approaches. We designed a rich multi-layered simulation containing a set of probes with true associations with either binary or continuous phenotypes, confounding by cell type, variability in means and standard deviations for population parameters, additional variability at the level of an individual cell-type-specific sample, and variability in the mixture proportions across samples. Performance varied quite substantially across methods and simulations. In particular, the number of false positives was sometimes unrealistically high, indicating limited ability to discriminate the true signals from those appearing significant through confounding. Methods that filtered probes had consequently poor power. QQ plots of p values across all tested probes showed that adjustments did not always improve the distribution. The same methods were used to examine associations between smoking and methylation data from a case–control study of colorectal cancer, and we also explored the effect of cell-type adjustments on associations between rheumatoid arthritis cases and controls. We recommend surrogate variable analysis for cell-type mixture adjustment since performance was stable under all our simulated scenarios.

DNA methylation is an important epigenetic factor that modulates gene expression through the inhibition of transcriptional proteins binding to DNA [1].
Examining the associations between methylation and phenotypes, either at a few loci or epigenome-wide (i.e. the Epigenome-Wide Association Study (EWAS) [2]) is an increasingly popular study design, since such studies can improve understanding of how the genome influences phenotypes and diseases. However, unlike genetic association studies, where the randomness of Mendelian transmission patterns from parents to children enables some inference of causality for associated variants, results from EWAS studies can be more difficult to interpret. The choices of tissue for analysis and time of sampling are crucial, since methylation levels vary substantially across tissues and time. Methylation plays a large role in cellular differentiation, especially in regulatory regions [3, 4], and methylation patterns are largely responsible for determining cell-type-specific functioning, despite the fact that all cells contain the same genetic code [5]. Ideally, methylation would be measured in tissues and cells of most relevance to the phenotype of interest, but in practice such tissues may be impossible to obtain in human studies. Many accessible tissues for DNA methylation studies, such as saliva, whole blood, placenta, adipose, tumors, or many others, will contain mixtures of different cell types, albeit to varying degrees. Hence, the measured methylation levels represent weighted averages of cell-type-specific methylation levels, with weights corresponding to the proportion of the different cell types in a sample. However, cell-type proportions can vary across individuals, and can be associated with diseases or phenotypes [6]. For example, individuals with autoimmune disease are likely to have very different proportions of autoimmune cells in their blood than non-diseased individuals [7–11], synovial and cartilage cell proportions differ between rheumatoid arthritis patients and controls [12], and associations with age have been consistently reported [13]. 
Hence, variable cell-type-mixture proportions can confound relationships between locus-specific methylation levels and phenotypes, since these proportions are associated both with phenotype and with methylation levels [14]. In situations potentially subject to confounding, although less biased estimates of association can be obtained by incorporating the confounding variable as a covariate, this is not a perfect solution, since it may not be possible to distinguish lineage differences [14] or to estimate accurately the proportions of each cell type in a tissue sample [15, 16]. Initial studies of associations between DNA methylation and phenotypes largely ignored this potential confounding factor, which may have led to biased estimates of association and failure to replicate findings [17, 18]. However, in parallel with the increasing prevalence of high-dimensional methylation studies, a number of methods that can account for this potential confounding of methylation–phenotype associations have been developed or adapted from other contexts. Among those developed specifically for methylation data (Ref-based [19], Ref-free [20], CellCDec [21], and EWASher [22]), the first two were proposed by the same author (Houseman), but the first of these requires an external reference data set. Other methods were proposed in more general contexts where confounding does not necessarily result from cell-type mixtures yet is still of concern; many of these rely on some implementation of matrix decompositions (SVA [23], ISVA [24], Deconfounding (Deconf) [25], and RUV [26, 27]). Although there are numerous similarities between the approaches, there remain some fundamental differences in terms of limitations and performance. 
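To illustrate the flavor of the reference-based idea (a two-cell-type toy version, not Houseman's actual estimator), cell-type proportions can be recovered by least squares against reference methylation profiles measured in purified cells:

```python
def estimate_proportion(mixed, ref_a, ref_b):
    """Least-squares weight w in mixed ≈ w*ref_a + (1 - w)*ref_b, clipped to [0, 1].

    Toy version of reference-based deconvolution: project the mixed profile
    onto the line between the two reference cell-type profiles.
    """
    num = sum((m - b) * (a - b) for m, a, b in zip(mixed, ref_a, ref_b))
    den = sum((a - b) ** 2 for a, b in zip(ref_a, ref_b))
    return min(1.0, max(0.0, num / den))

# Hypothetical beta values at four CpG sites for two reference cell types:
ref_t = [0.9, 0.1, 0.8, 0.2]   # e.g. T cells
ref_m = [0.1, 0.9, 0.3, 0.7]   # e.g. monocytes
mixed = [0.7 * t + 0.3 * m for t, m in zip(ref_t, ref_m)]
assert abs(estimate_proportion(mixed, ref_t, ref_m) - 0.7) < 1e-9
```

Real reference-based methods solve a constrained regression over many cell types and hundreds of discriminating CpGs, but the projection logic is the same.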
An unbiased comparison of methods has been difficult since true cell-type mixture proportions are unknown, replications using alternative technologies such as targeted pyrosequencing do not lead to genome-wide data where cell-type proportions can be estimated, and new methods have tended to be compared with only a few other approaches. Since the problem of confounding plagues all researchers in this field, a careful comparison of existing methods correcting for cell-type heterogeneity is essential, and this is the objective of our paper. In an ongoing study of incident treatment-naive patients with one of four systemic autoimmune rheumatic diseases (SARDs), whole-blood samples were taken at presentation, and immune cell populations (purity >95 %) were sorted from the peripheral blood mononuclear cells (PBMCs) of these patients. Analysis of DNA methylation was then performed with the Illumina Infinium HumanMethylation450 BeadChip (450K) on the cell-separated data. These data provide a unique and valuable opportunity to compare the performance of methods for cell-type mixture adjustments. We present here the results of an extensive simulation study in which we remixed the cell-separated methylation profiles to incorporate variable mixture proportions and confounding of associations, and then compared the performance of eight different methods of adjustment. We also compare the ease of use of each method, and provide an R script allowing for easy implementation of several of the best-performing methods. As far as we are aware, this is the first study to compare such an extensive set of methods in a simulation based on cell-separated data.

Patients and original methylation profiles

Whole-blood samples were obtained from patients with incident treatment-naive rheumatoid arthritis (n=11), systemic lupus erythematosus (n=9), systemic sclerosis (n=14), and idiopathic inflammatory myositis (n=3). Several control samples were available as well (n=9).
CD4, CD19, and CD14 subpopulations were sorted from PBMCs by magnetic-activated cell sorting (MACS) (see "Methods"). The purity of the isolated populations was confirmed by flow cytometry. Only in samples with a purity >95 % were methylation profiles assessed, using the Illumina Infinium HumanMethylation450 BeadChip on the separated cell populations. Our simulation and results are based primarily on 46 patients for whom cell-sorted methylation profiles were available for both CD4+ T lymphocytes and CD14+ monocytes. The heat map in Fig. 1 shows some representative patterns of methylation in the SARD samples across CD14+ monocytes, CD4+ T cells, and CD19+ B cells (this latter cell type was not available for all patients), at 200 CpG sites that were selected because of their inter-cell-type differences. The figure demonstrates that there are sizeable differences in methylation levels between cell types, and it follows that small variations in the proportions of these component cell types in a mixed tissue sample can lead to great difficulties in interpreting any phenotype-associated results.

Fig. 1 Clustered heat map showing patterns of methylation in 46 SARD samples (columns) and 200 CpG sites (rows). The sites were selected to highlight the methylation differences between cell types. Consequently, the samples cluster by cell type: monocytes, B cells, and then T cells

Multi-layer simulation design

We implemented a rich simulation design, based on the SARD methylation data. This simulation contains random sources of variability at multiple levels, including variability of population mean parameters as well as variability in individual-level parameters. Starting with the observed cell-separated methylation profiles for T cells and monocytes from the SARD data, we simulated a number of probes to have associations directly with the phenotype, and we induced confounding by combining the two cell types in proportions that vary across individuals.
Although this simulation design is complex and depends on a large number of parameters, it allows substantial flexibility in specifying consistency or variability between cell types, individuals, or probes, and easily allows us to create realistic and pathological situations in the same framework. Let i=1,…,n, where n=46, denote the individuals in the SARD data. In brief, the simulation proceeds as follows (see "Methods" for more details):

1. Select a set of S CpG sites where "true" associations with a phenotype will be generated by our simulation; we refer to these CpGs as differentially methylated sites (DMSs).
2. Generate a phenotype, either binary (disease or no disease) or continuous.
3. For any probe not in the DMS set, the cell-type-specific methylation values are the observed values from the real data.
4. For a probe in the DMS set, a randomly generated quantity is added to the observed cell-type-specific level of methylation, in a way that depends on the phenotype.
5. The cell-type-specific methylation values are mixed together in proportions that vary depending on the phenotypes.

Over all DMSs, one would expect to see a range of positive and negative associations with the phenotype. In step 4, we allow these associations to differ between cell types in order to specify an association between change in methylation and cell type. After having specified each of the site- and cell-type-specific associations, we then add between-subject variability to each site. The final step, step 5, leads to methylation proportions as they would appear had the mixed tissue been analyzed directly. After simulating the data, we then test for association between the phenotype and the methylation levels in the mixed data at each probe. We compare the p values obtained from these tests of association across eight simulation scenarios and eight different methods for cell-type adjustment.
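The five steps above can be sketched in a minimal, self-contained form under simplifying assumptions: two cell types, a uniform baseline in place of the real SARD profiles, normally distributed DMS shifts, and Dirichlet mixing proportions whose mean depends on the phenotype. All parameter values here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, n_dms = 46, 1000, 50          # individuals, probes, DMSs (illustrative)

# Step 1: choose DMS sites; step 2: binary phenotype
dms = rng.choice(p, n_dms, replace=False)
pheno = rng.integers(0, 2, n)

# Steps 3-4: cell-type-specific methylation; add phenotype-dependent
# shifts at DMSs, with cell-type-specific mean effects mu_jk
beta = rng.uniform(0.05, 0.95, (2, n, p))     # 2 cell types x n x p
mu = rng.normal(0.0, 0.05, (2, n_dms))        # per-cell-type DMS means
for k in range(2):
    shift = pheno[:, None] * rng.normal(mu[k], 0.02, (n, n_dms))
    beta[k][:, dms] = np.clip(beta[k][:, dms] + shift, 0, 1)

# Step 5: mix cell types with phenotype-dependent Dirichlet proportions,
# which is what induces the confounding
alpha = np.where(pheno[:, None] == 1, [60, 40], [40, 60])
w = np.array([rng.dirichlet(a) for a in alpha])            # n x 2
mixed = w[:, 0, None] * beta[0] + w[:, 1, None] * beta[1]
print(mixed.shape)  # (46, 1000)
```

Testing each of the `p` columns of `mixed` against `pheno` then reproduces, in miniature, the epigenome-wide analysis applied to the simulated mixed tissue.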
Some of the DMSs were simulated to have very small effects, and therefore, statistical tests of association may not be significant. On the other hand, since the cell-type proportions vary with phenotype, this can lead to non-DMSs showing spurious associations with the phenotype. The notation for the key parameters is given in Table 1, and the parameter choices across different simulation scenarios are summarized in Table 2.

Table 1 Fixed parameters in the simulation design

Table 2 Parameter choices for the simulation scenarios

Scenario 1: DMSs have differences in both means and variances between cell types

In our first simulation scenario (Table 2), we chose to specify distinct differences in the strength and distribution of the methylation–phenotype associations (DMSs) for the two cell types, with a binary phenotype. Differences in the DMS distributions include both the direction of the effects and the amount of variability across sites and individuals; Additional file 1: Figure S1 displays histograms of the 500 simulated values of the DMS means μ_jk for the two cell types, showing the substantial differences between these two distributions. In this scenario, we would expect that an analysis not taking cell type into consideration should result in many p values that are smaller than expected, or equivalently a greatly inflated slope in a p value QQ plot, due to the strong confounding built into the simulation design. After analyzing all probes with unadjusted data, we repeated the epigenome-wide analysis with eight popular or newly developed adjustment methods (see "Methods"). QQ plots for these eight methods as well as the uncorrected analysis can be seen in Fig. 2. Examination of the left-hand side of this plot (x-axis smaller than about 3.5) shows that there is, indeed, a genome-wide inflation of p values in the analysis uncorrected for cell-type mixture.
Encouragingly, most methods do a fairly good job of correcting for the confounding, since the corrected QQ plots are close to the line of expectation up until the tails of each set of p values. The reference-free method, however, continues to display inflation even after correction.

Fig. 2 QQ plots showing distributions of p values in simulation scenario 1, where the true effects in the different cell types have very distinct distributions. Results are shown with no adjustment for cell-type mixture as well as with eight other methods; these are split across two panels, (a) and (b), to clarify the display

Several numeric performance metrics can be seen in Table 3 for this simulation scenario. One of these metrics is the genomic inflation factor (GIF) [28], which is the slope of the lines seen in Fig. 2 after removing the 500 DMSs. The unadjusted GIF was 1.6, indicating a substantial inflation of significance across all p values, but after adjustment most values are quite close to 1.0, as would be expected in the absence of any confounding. The Ref-based, EWASher, and Deconf methods have slopes slightly less than 1.0, implying possible over-correction.

Table 3 Performance metrics under simulation scenario 1 (distinct associations between cell types)

Since we know which 500 sites were generated to be truly DMSs, Table 3 reports both the power and the number of false positives (NFP). We declare a CpG site to be significant if the p value falls below the fixed value 10^−4. This table also shows a measure of performance based on the Kolmogorov–Smirnov (KS) test for whether the p value distribution matches the expected uniform distribution. Of course, the KS test assumes independence of all the individual tests, and therefore, we are not using this test for inference, but simply as a measure of deviation, where smaller values imply less deviation.
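The two distribution-level metrics just described can be computed roughly as follows. Reading the GIF as the slope of a regression through the origin of observed on expected −log10 p values is our interpretation of the description above (conventionally the GIF is a ratio of median chi-squared statistics); the p values are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy p values: U**1.3 is stochastically smaller than Uniform(0,1),
# so -log10 p is exactly 1.3x a null draw and the slope should be ~1.3
pvals = rng.uniform(size=10_000) ** 1.3

# QQ-plot coordinates: sorted observed vs expected -log10 p under the null
obs = np.sort(-np.log10(pvals))
exp = np.sort(-np.log10((np.arange(1, pvals.size + 1) - 0.5) / pvals.size))

# GIF as the slope of observed on expected, regression through the origin
gif = np.sum(obs * exp) / np.sum(exp * exp)

# KS statistic: distance between the p-value distribution and Uniform(0,1)
ks = stats.kstest(pvals, "uniform").statistic
print(round(gif, 2), round(ks, 3))
```

In the paper's usage, the known DMSs would be removed from `pvals` before computing either quantity, so that genuine associations do not masquerade as inflation.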
For this simulation (scenario 1, with distinct association distributions in the two cell types), all methods except the reference-free method reduced the NFP at the non-associated probes relative to the unadjusted analysis. Although the power (sensitivity) of all the methods appears very low, many of the simulated effect sizes at the chosen DMSs were very small, and nevertheless the rankings of the different methods are still informative. Additional file 1: Figure S1 shows the simulated means of the cell-type distributions for the 500 probes; subsequently, additional random errors were introduced at the level of each individual, leading to substantial variability in the realized methylation differences. The power of most methods was slightly less than that of the unadjusted analysis, except for Ref-free and EWASher. The power of EWASher is extremely poor; this method removes probes with very high or very low levels of methylation prior to constructing the components, and hence, many DMS probes are not even included in its analyses. Results for the Ref-free power must be interpreted cautiously since the type 1 error is so substantially elevated for this method. The KS statistic confirms the conclusions obtained from the other metrics, showing small values for most methods except for the unadjusted data and the Ref-free method.

Scenario 2: no confounding

It is also of interest to examine performance when there is no confounding. By simulating data with the same cell-type-specific means and variances in both cell types (scenario 2 in Table 2), the unadjusted analysis should not be subject to any bias. As expected, Table 4 shows low NFP and good power for the unadjusted method, and similar results are obtained for CellCDec, Deconf, and Ref-based. In Additional file 1: Figure S2, it can be seen that the unadjusted results lie very close to the line of expectation, apart from the tail of the distribution, where the DMSs predominate.
It is interesting to note that SVA, RUV, Ref-free, and in particular ISVA display high NFP, implying that far too many probes are being inferred to be DMSs. Although no confounding was simulated, the GIF for the unadjusted data is slightly inflated; in fact, after adjustment, the GIF increases for Ref-free and ISVA. In contrast, the GIF is less than 1 for CellCDec, Deconf, and the Ref-based methods, implying some over-correction.

Table 4 Performance metrics under simulation scenario 2 (no confounding)

Scenario 3: opposite effects in different cell types

To investigate a case of severe differential effects, in scenario 3, the cell-type-specific means μ_k were selected to have opposite signs in the two cell types. In this case, the mixed sample can have small DMS effects, since the two cell-type-specific effects may cancel each other. Confirming this expectation, there is no inflation of the test statistics in the unadjusted data (GIF = 0.97, Table 5). As in the previous scenarios, we see very poor power and over-correction with EWASher, and extremely inflated NFP with the Ref-free method (Additional file 1: Figure S3). Small power improvements over the unadjusted analysis can be seen when using any of the other methods.

Table 5 Performance metrics under simulation scenario 3 (opposite effects)

Scenarios 4 and 5: altered precision simulations

Two scenarios were generated where we changed the precision of the individuals' cell-type distributions between cases and controls. That is, a higher precision corresponds to a more pronounced separation in the cell-type distributions between cases and controls, while a lower precision makes the two distributions more difficult to distinguish. Here, both T cells and monocytes were chosen to have distinct, positive net associations with the phenotype; however, the precision parameter, ρ, of the Dirichlet distribution was varied such that ρ=200 for high precision and ρ=10 for low precision.
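The role of the precision parameter ρ can be illustrated by sampling mixture proportions from Dirichlet(ρ·m) for a fixed mean vector m: the mean proportions stay at m while larger ρ concentrates the draws around it. The mean vector below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
m = np.array([0.6, 0.4])            # mean mixture proportions (illustrative)

spread = {}
for rho in (200, 10):               # high vs low precision, as in scenarios 4 and 5
    w = rng.dirichlet(rho * m, size=5_000)   # proportions for 5000 subjects
    spread[rho] = w[:, 0].std()     # theoretical variance: m_k(1 - m_k)/(rho + 1)
    print(rho, round(spread[rho], 3))

# larger rho -> proportions tightly clustered around m; smaller rho -> diffuse
```

With case and control groups given different mean vectors, low precision therefore produces highly variable, overlapping proportion distributions, which is the difficult setting of scenario 5.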
QQ plots are shown in Additional file 1: Figures S4 (high precision) and S5 (low precision), and numeric metrics are in Tables 6 and 7.

Table 6 Performance metrics under simulation scenario 4 (high precision)

Table 7 Performance metrics under simulation scenario 5 (low precision)

In the high-precision scenario, the NFP is extremely high when no adjustment is used. Most methods, however, perform quite well in reducing the GIF and KS statistics, reducing the NFP and retaining decent power (with the exceptions of Ref-free and EWASher, as seen previously). In contrast, for the low-precision scenario, where there is much more variability from one individual to the next in the mixture proportions, as well as substantial differences between cases and controls, performance is generally poor. The QQ plots display substantial inflation, and most methods have very high NFP. Even the Ref-based method has very high NFP, and notably the QQ plot for RUV shows enormous inflation, with an NFP of 2943. In fact, the unadjusted analysis appears to be one of the better choices here, with lower NFP and good power; ISVA also seems to perform better than the others.

Scenario 6: continuous phenotype simulation results

In our simulation with continuous phenotypes, the relative performances of the methods are different again. Table 8 and Additional file 1: Figure S6 indicate that, unlike in all the other scenarios, the Ref-free method performs fairly decently in this case, leading to small reductions in the GIF and KS statistics and a small improvement in power. RUV's performance is one of the best here, with a low NFP, good power, and an excellent GIF value. In contrast, the CellCDec method, which had performed quite well in all the other scenarios, shows extensive inflation across the QQ plots (GIF = 1.44) and a very high NFP. We were unable to obtain results for the Deconf method with three components within the available computational time limits on the Mammouth Compute Canada cluster.
We note that the EWASher method does not allow continuous phenotypes and cannot be used in this scenario.

Table 8 Performance metrics under simulation scenario 6 (continuous)

Scenario 7: simulation with a small number of true DMSs

To consider cell-type mixture effects when there are only a few epistable alleles, as has been observed in several studies [29, 30], we created a simulation with only 50 DMSs, exhibiting moderate to strong positive associations with the phenotype in both cell types. Ten replications of the simulation were performed. The parameters in Table 1 were held constant over the different replications, except for σ_jk, which was generated from a uniform distribution to induce some additional variation. Box plots comparing performance over the ten replications can be seen in Fig. 3. QQ plots for all methods under one of the replications can be seen in Additional file 1: Figure S7.

Fig. 3 Comparison of performance metrics over ten replications in scenario 7 (few associated DMSs). a The number of non-differentially methylated sites with a raw p value less than 10^−4. b Power at significance level 10^−4. c Kolmogorov–Smirnov statistic. d Genomic inflation factor

Unsurprisingly, the performance metrics are quite variable when the analysis does not adjust for cell-type composition. The NFP of both the reference-free method and RUV vary substantially. In contrast, both the reference-based and SVA methods show a very good reduction in the number of non-DMSs below this p value threshold across all replications. Most methods do not achieve a power as high as the unadjusted method; however, the lowered NFP is a worthy tradeoff for the loss in power, especially for the reference-based and SVA methods. Most methods improve the KS statistic and GIF, except for the reference-free method, which has values almost as high as those in the unadjusted analysis.
Scenario 8: widespread, subtle, correlated DMS effects

In the final simulation scenario, we simulated a large number of associated DMSs to capture the possibility of a subtle genome-wide shift in methylation, such as might be seen for a large change in metabolic functioning or in immune system function. Here, we randomly selected 10,000 CpG sites to be DMSs, and unlike in the previous simulation scenarios, we created dependence between the phenotype-associated methylation shifts at these sites. The DMSs were grouped into two blocks of size 5000, and a moderate background level of correlation between the mean effect sizes μ_jk was included in each block. Effect sizes across blocks were still generated independently. Additional details of the dependence structure can be seen in step 4 of the "Methods" section, and the parameter choices can be seen in Table 2. Box plots comparing performance over the ten replications are shown in Fig. 4. QQ plots for all methods under one of the replications can be seen in Additional file 1: Figure S8.

Fig. 4 Comparison of performance metrics over ten replications in the many-associated-DMSs scenario. a Number of non-differentially methylated sites with a raw p value less than 10^−4. b Power at significance level 10^−4. c Kolmogorov–Smirnov statistic. d Genomic inflation factor

In this scenario, the NFP is less variable among the different adjustment methods, with the exception of the reference-free method. Once again, the reference-based and SVA methods do quite well, but Deconf and CellCDec also perform well in this scenario. Power is lower for all methods than in the other simulation scenarios, with the exception of the reference-free method, though the high NFP for that method undermines this result. Most methods perform fairly similarly when considering the KS statistic and GIF.

Estimated latent dimension

Our simulation is based on complex mixtures of methylation profiles from two separated cell types.
It is, therefore, interesting to note that for all the methods that provide estimates of the latent dimension (the last column of Tables 3, 4, 5, 6, 7, and 8), these estimates are consistently much larger than 2. Estimates are obtained for the Ref-free, SVA, ISVA, and RUV methods. Both SVA and ISVA assume the number of surrogate variables is less than or equal to the number of true confounders whose linear space they span, and for RUV, the authors themselves commented that the estimated values of K do not necessarily reflect the true dimension [27]. The estimates are generally greater than ten, and RUV's estimates tend to be over 30. There may be some additional sources of variation present in the original cell-separated methylation data, and these factors are likely being captured by these numerous latent variables. Indeed, analyses of the original cell-type-separated data using patient age as the predictor resulted in estimated latent dimensions that were themselves large. For example, random matrix theory [31] (which is used for dimension estimation in the reference-free method and ISVA) estimated a latent dimension of ten for both T cells and monocytes when analyzed separately. Furthermore, SVA estimated the number of surrogate variables to be seven and nine for T cells and monocytes, respectively.

Results from analysis of the ARCTIC data set

We tested the performance of these eight adjustment methods on 450K measurements from the Assessment of Risk in Colorectal Tumors in Canada (ARCTIC) study [32]; the methylation data are deposited in dbGaP under accession number phs000779.v1.p1. We analyzed only the 977 control subjects from this study, restricting to those where DNA methylation was measured on lymphocyte pellets, and examined the association between smoking (ever smoked) and methylation levels at all autosomal probes that passed quality control (473,864 probes).
We excluded the colorectal cancer patients from this analysis due to concerns that their methylation profiles may have been affected by treatment. Patient age was included as a covariate in all analyses. Figure 5 shows the QQ plots for seven adjustment methods, and Table 9 provides the KS and GIF numeric metrics of performance. The Deconf and CellCDec methods could not be used with these data since the computational time exceeded the 5-day limit allowed on the Mammouth cluster of Calcul Quebec. As was seen in our simulations, the EWASher method seems to over-correct, leaving no significant probes, and the GIF is much smaller than 1.0. However, all other methods lead to QQ plots where the slope is larger after correction than before; the GIF estimates are substantially larger than for the unadjusted analysis. For SVA and ISVA, the KS statistic increased after corrections were applied. Furthermore, among the top 1000 probes selected by each method (based on raw p value), none were shared by all methods (including the unadjusted results). When EWASher was excluded, 87 probes overlapped among the most significant 1000, and 89 probes overlapped when both EWASher and the unadjusted results were excluded. Therefore, the methods are highlighting quite different results for the most significant probes.

Fig. 5 QQ plots of −log10 p values from the ARCTIC study with different adjustment methods. These are split across two panels, (a) and (b), to clarify the display

Table 9 Performance metrics for the ARCTIC data with the most significant probes removed (top 5 %)

In Table 10, p values are shown for probes that have been linked to smoking status in a large published EWAS [33]. The authors specifically discuss seven CpGs, previously reported as associated with smoking, that were replicated in their work. Although substantial evidence of association can be seen at all probes, it is interesting to see the differences in significance across methods.
For example, at probe cg21161138, significance ranges from 10^−7 to 10^−25.

Table 10 P values for sites previously found to be associated with smoking [33]

Results from analysis of the rheumatoid arthritis data set

We also performed an analysis of data from a rheumatoid arthritis study published in 2013 [34]. The data are available from GEO (http://www.ncbi.nlm.nih.gov/geo/). Methylation was measured with the Illumina 450K array in whole-blood samples from 354 anti-citrullinated protein antibody-associated rheumatoid arthritis cases and 337 controls. The manuscript reported 51,478 CpGs as demonstrating evidence of significant association with disease status. As in the ARCTIC analysis, we compared the results of running the different cell-type adjustment methods on these data. We attempted to replicate the original analysis as closely as possible by using the Illumina control probe scaling procedure, including the same covariates in our linear model (age, sex, and smoking status), and adjusting for cell-type composition using the reference-based method. However, when extracting the significant CpGs mentioned in their Additional file 1, the distribution of raw p values in our analysis did not match those in the original paper. This suggests there may be a step in the original analysis not explicitly stated in the paper's "Methods" section. However, since for our purposes we wish to compare the results of the adjustment methods relative to each other, we do not believe this to be a significant issue. For our comparison, we used functional normalization [35]. Comparisons of the distribution of the reported p values and the ones we obtained can be seen in Additional file 1: Figure S9. The results from our analysis are summarized in Fig. 6. We performed the cell-type adjustment methods using all probes on autosomes, but restricted the analysis to the 51,478 probes reported in the paper.
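Throughout these comparisons, agreement between analyses is measured by the overlap of their most significant CpGs; this top-d concordance reduces to a set intersection of the d smallest p values. The CpG identifiers and p values below are illustrative.

```python
def top_d_concordance(pvals_method, pvals_reference, d):
    """Fraction of the method's d most significant CpGs that are also
    among the reference analysis's d most significant CpGs."""
    top_m = set(sorted(pvals_method, key=pvals_method.get)[:d])
    top_r = set(sorted(pvals_reference, key=pvals_reference.get)[:d])
    return len(top_m & top_r) / d

# Illustrative p values keyed by CpG identifier
method = {"cg01": 1e-8, "cg02": 0.03, "cg03": 1e-5, "cg04": 0.5}
reference = {"cg01": 1e-7, "cg02": 1e-6, "cg03": 0.2, "cg04": 0.4}
print(top_d_concordance(method, reference, d=2))  # only cg01 shared -> 0.5
```

Sweeping d over a range of values and plotting the resulting proportions for each adjustment method gives a curve of the kind summarized for the rheumatoid arthritis comparison.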
We examined the proportion of CpGs among the top d significant CpGs found by each method that were also present among the top d CpGs reported in the original paper. The method showing the highest concordance with the originally reported CpG list is the reference-based method, which was expected given that the reference-based method was used in the original analysis. It is evident, however, that the different adjustment methods do not replicate the findings of the original study, especially for smaller values of d. We have shown that the choice of cell-type adjustment method can drastically change the conclusions of an EWAS.

Fig. 6 Agreement of the proportion of top d CpGs declared significant by the different cell-type adjustment methods with the top d CpGs in the originally reported list of significant CpGs from the rheumatoid arthritis study

Computational performance

To compare computational time across the different adjustment methods, we selected a random sample of 10,000 CpGs from the ARCTIC methylation matrix to create a benchmark data set. As we are not making any statistical inference here, all samples were included, regardless of whether we had matching cell-type sets or quality-control status. Some of the methods calculate p values and parameter estimates internally, and others require the use of an external function to perform a linear fit. Therefore, to make the computational times comparable, we define the start time as when the adjustment method is first called, and the end time as when all estimates and p values have been obtained. Figure 7 shows the running times on the log scale as the sample size increases (N=50 to N=500) and, for methods where a value of the latent dimension K can be specified, running times as K increases with a fixed sample size (N=50). There are major differences in running times among the cell-type adjustment methods. Not surprisingly, the Ref-based method is very fast, as is RUV. The slowest methods are Deconf and CellCDec.
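The timing protocol just described (the clock starts when the adjustment method is called and stops once all estimates and p values are available) can be sketched as a small harness; the "method" timed here is a trivial unadjusted probe-wise linear fit, purely for illustration, and the data are random.

```python
import time
import numpy as np

def benchmark(adjust_fn, meth, pheno, n_rep=3):
    """Best-of-n_rep wall-clock time from the call to the method until
    all estimates are available (our start/end convention)."""
    times = []
    for _ in range(n_rep):
        t0 = time.perf_counter()
        adjust_fn(meth, pheno)
        times.append(time.perf_counter() - t0)
    return min(times)

def unadjusted_fit(meth, pheno):
    """Toy stand-in: probe-wise least-squares fit, no cell-type adjustment."""
    X = np.column_stack([np.ones(pheno.size), pheno])
    coef, *_ = np.linalg.lstsq(X, meth, rcond=None)
    return coef                      # 2 x n_probes coefficient matrix

rng = np.random.default_rng(0)
meth = rng.uniform(size=(50, 10_000))        # N=50 samples, 10,000 probes
pheno = rng.integers(0, 2, 50).astype(float)
print(f"{benchmark(unadjusted_fit, meth, pheno):.3f} s")
```

Repeating the call over a grid of sample sizes (and, where applicable, values of K) yields the kind of running-time curves compared in the benchmark.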
The computational time required for the Ref-free method also increases quickly with the sample size. In Fig. 7b, it is interesting to note that increasing K has very little effect on the speed of four of the six methods that require a specification of K. However, the computational times for both CellCDec and Deconf increase exponentially with larger values of K. As noted previously, we were not able to obtain results for these methods with the ARCTIC data set when using all autosomal probes. We note also that the complexity of preparing the input files varies from one algorithm to another.

Fig. 7 Computational time comparison. a Sample size; the latent dimension was estimated by the algorithms as needed. b Latent dimension; the sample size is fixed at 50

We have presented an extensive comparison of eight different methods for adjusting for cell-type-mixture confounding, by designing a rich simulation based on cell-type-separated methylation data from SARD patients. Our simulation contained multiple levels of variability: between cell types, at the level of the probe means, and at the level of the individual. We found that no adjustment method's performance was uniformly the best, and in fact, in some of our scenarios, the unadjusted results were quite comparable to the best adjusted results. These general conclusions were similar whether we had a few DMSs of large effect (scenario 7) or many correlated DMSs of smaller effect size (scenario 8). Ten replications were run in both simulation scenarios 7 and 8. However, for scenarios 1 through 6, the reported results were obtained from a single run of the simulation for each scenario. There are two reasons for this: firstly, the computational time for some methods was very long, and hence, multiple simulations would be extremely time-consuming.
More fundamentally, however, since our metrics of performance are calculated from the distributions of behavior across all good-quality probes on the 450K array, we obtained very similar results from repeated runs of our simulations during the phase when we were designing the simulation setup and choosing parameter values. In many of our simulated scenarios, as might be expected, the reference-based method performed well. This method is very easy to implement and, as seen in the computing performance section, runs very quickly, even on larger sample sizes. It usually achieved good statistical power and, with one exception, reduced the NFP relative to the unadjusted model. It also has the advantage of being able to estimate directly the cell-type composition of each sample. Therefore, the Ref-based method is an obvious choice when a complete set of the required cell-separated methylation profiles is available; however, this is not always the case. For some tissues, cell types that are of particular interest are very difficult or impossible to extract; one example is the syncytiotrophoblast cells in the placenta [36, 37]. In every case we examined, EWASher did a very good job of reducing p value inflation, and GIF values were substantially reduced from the unadjusted analyses. However, the fact that this method so strictly forces the GIF downwards may raise concerns about over-correction. If there were, for example, global hypermethylation associated with a disease, adjustment using EWASher would be overly conservative. Additionally, part of the algorithm involves filtering out loci that are uniformly high or low among all subjects. The assumption behind this filter is that these loci are, for all intents and purposes, completely methylated or unmethylated, and any associations between these probes and the phenotype are not interesting. This may be an overly strong assumption.
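A filter of the kind described (dropping loci that sit uniformly near complete methylation or complete unmethylation across subjects) might look like the following sketch; the 0.05/0.95 cutoffs are illustrative, not EWASher's actual thresholds, and the data are synthetic.

```python
import numpy as np

def filter_extreme_probes(beta, low=0.05, high=0.95):
    """Keep probes whose mean beta value across subjects is not
    uniformly near 0 (unmethylated) or near 1 (fully methylated)."""
    means = beta.mean(axis=0)               # per-probe mean across subjects
    keep = (means > low) & (means < high)
    return beta[:, keep], keep

rng = np.random.default_rng(3)
beta = np.column_stack([
    rng.uniform(0.00, 0.04, 20),   # near-unmethylated probe: filtered out
    rng.uniform(0.40, 0.60, 20),   # intermediate probe: kept
    rng.uniform(0.96, 1.00, 20),   # near-fully-methylated probe: filtered out
])
filtered, keep = filter_extreme_probes(beta)
print(keep)  # [False  True False]
```

Any DMS whose baseline happens to fall in the extreme ranges is silently discarded by such a filter, which is consistent with the power loss we observed.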
In our simulations, this filter results in the dramatically worse power of this method, since we did not restrict the randomly selected DMS loci to any particular mean level of methylation. Furthermore, the EWASher method is quite difficult to implement. Although most methods can be run in R (https://www.r-project.org), EWASher requires the user to create three separate input files for a standalone executable, and then to perform post-processing in R. We cannot explain the poor performance of the Ref-free method in our simulations. The NFP were almost always more inflated than in the raw data, and this inflation is clearly visible in the QQ plots. Furthermore, the implementation was somewhat more complex, since the approach involved one step to estimate the latent dimension, a second to obtain parameter estimates, and finally bootstrap calculations to obtain standard errors. The performance of the Ref-free method was good for scenario 6 with a continuous phenotype, so we hypothesize that there are some linearity assumptions in the correction that are being violated in our binary phenotype simulations. The performance of CellCDec and Deconf was generally quite good for binary phenotypes. The CellCDec method exists as a C++ program and was quite easy to implement. The number of latent cell types must be specified in advance, which is a limitation. The run time was longer for this algorithm than for the others and increased quickly with the assumed number of cell types; in fact, we were unable to obtain results with the ARCTIC data. CellCDec does not use phenotype information; it would be interesting to see how this program performs if it took the phenotype and other covariates into account. For Deconf, the most important limitation was the running time. In all cases, it took longer to run than the other adjustment methods, and we were unable to obtain results for the ARCTIC data. The run time was sensitive to increases in both sample size and number of cell types.
As with CellCDec, the fact that Deconf does not internally estimate the number of cell types is a limitation.

The results for ISVA and RUV were often among the better ones, with a couple of notable exceptions: NFP were extremely high for RUV in the low-precision scenario, and for ISVA in the no-confounding scenario. The computational time for the ISVA method also increased quite rapidly with sample size. RUV is very easy to run and is available as an R function. It contains a function to estimate the latent dimension (K), although, like the other methods that estimate K, the estimated dimension tends to be much higher than the simulated reality. We investigated how RUV performs across a range of values of K, and the best performance was observed, in most simulation scenarios, at smaller values such as K=3. Recently, Houseman also found that estimated latent dimensions obtained through random matrix theory may not be the best choices [38]. RUV is also extremely fast, slower only than the Ref-based method, and as shown in Fig. 7, the computational time is essentially invariant as the latent dimension is varied, making this an attractive option. In contrast, the SVA method, although rarely the best, did not have any notable failures across our scenarios, and was easy to implement.

There are other methods for deconvolution that we did not examine, especially in the computer science and engineering literature [39]. However, it is not clear whether these methods would be easily adapted for use on methylation data. Also, new methods for DNA methylation analysis continue to be published, such as [40]. However, the spectrum of methods that we have examined includes the most commonly used approaches. All methods that we have examined assume approximately linear relationships between the phenotype and the methylation levels or covariates; however, this should not be an important limitation, since approximate linearity should hold [38].
The latent dimension, when estimated, was rarely close to the dimension of K=2 implemented in our simulation. However, these estimates of K capture aspects of heterogeneity in the data that are only partially attributable to the mixture of data from two cell types. This heterogeneity may also be partially due to technological artefacts from batch effects or experimental conditions, and in particular to the fact that subtler cell-lineage differences will still be present even after cell sorting [38].

In summary, our simulation study comparing methods found a wide range of performance across our scenarios, with notable failures of some methods in some situations. We recommend SVA as a safe approach for adjustment for a cell-type mixture, since it performed adequately in all simulations with reasonable computation time. In all situations, EWAS results are extremely sensitive to the normalization and cell-type adjustment methods used, and hence this issue should receive more attention when interpreting findings. A set of scripts enabling implementation of all these methods can be found at https://github.com/GreenwoodLab/CellTypeAdjustment.

We have compared eight different methods for adjusting methylation data for cell-type-mixture confounding in a rich and multi-layered simulation study, and in a large set of samples where methylation was measured in whole blood. No method performs best in all simulated scenarios; nevertheless, we recommend SVA as a method that performed adequately without notable failures.

Patient data and quality checks

Ethical approval was obtained at the Jewish General Hospital and at McGill University, Montreal, QC, to obtain whole-blood samples from patients with SARDs at the time of initial diagnosis, prior to any treatment. Cell purification and phenotyping protocols for cell subset isolation, analysis, purity evaluation, fractionation, and storage were standardized and optimized.
Then 40 ml of peripheral blood were obtained from the above subjects and processed within 4 hours. PBMCs were separated with lymphocyte separation medium (Mediatech, Inc.). Isolated PBMCs were sequentially incubated with anti-CD19, anti-CD14, and anti-CD4 microbeads (Miltenyi Biotec). Automated cell separation of specific cell subpopulations was performed with auto-MACS using positive selection programs. An aliquot of the specific isolated cell subtypes was used for purity assessment with flow cytometric analysis. A minimum of 2 million cells from each subpopulation with a purity higher than 95% were frozen in liquid nitrogen for the epigenomic studies. The optimized protocols required the isolation of sufficient numbers of CD4+ lymphocytes (9.04±4.03×10^6), CD14+ monocytes (7.89±2.96×10^6), and CD19+ B lymphocytes (2.02±1.42×10^6), of sufficient purity to perform the epigenetic analyses. The required number of cells with the right purity was not always available, especially for the CD19+ B lymphocytes, so we did not have all three cell types for all patients; for this reason, the simulation used only two cell types and 46 patients. Illumina Infinium HumanMethylation450 BeadChip data were normalized with funnorm [35]. In addition, a number of probes were removed, specifically those on the sex chromosomes as well as probes close to single-nucleotide polymorphisms [41]. There were 375,639 probes remaining after filtering.

Details of the simulation method

This simulation design was initially developed in the master's thesis of the first author [42].

Selection of DMS probes: S=500 probes were randomly selected to be associated with the phenotype.

Phenotype (z_i, i=1,…,n): A random sample of size 46 was drawn from either a Bernoulli distribution (p=0.5) for a binary phenotype, or from a standard normal distribution for a continuous phenotype.
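These first two simulation steps can be sketched as follows; the variable names are ours, and the probe count is the post-filtering total quoted above:

```python
import numpy as np

rng = np.random.default_rng(1)

n_probes = 375_639      # probes remaining after filtering (from the text)
n, S = 46, 500          # sample size and number of DMS probes

# Selection of DMS probes: S probes randomly chosen to be phenotype-associated
dms_idx = rng.choice(n_probes, size=S, replace=False)

# Phenotype: Bernoulli(p = 0.5) for binary, standard normal for continuous
z_binary = rng.binomial(1, 0.5, size=n)
z_continuous = rng.standard_normal(n)
```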
Cell-type-specific methylation values for non-DMS probes: Let β_ijk represent the true methylation value for individual i, probe j, and cell type k. For a probe that is not a DMS probe, the simulated value \(\beta^{\prime}_{ijk} = \beta_{ijk}\).

Cell-type-specific methylation values for DMS probes: Cell-type-specific means are sampled from normal distributions with given parameters. That is, for chosen values μ_k and σ_k for k=1,2, the cell-type-specific means μ_jk for each DMS probe are generated from \(\mu_{jk} \sim N\left(\mu_{k}, \sigma^{2}_{k}\right)\). In scenario 8, we split the DMSs into two blocks and simulate effect sizes from a multivariable normal distribution with a fixed background correlation in each block. That is, probes in the same block are correlated, but probes across blocks are independent from one another. In this scenario, the parameters μ_k for k=1,2 differ between the blocks. The simulated cell-type-specific methylation effect, e_ijk, at a DMS, for an individual sample i and an individual probe j, is another random quantity, so that $$ e_{ijk} \sim N\left(\mu_{jk}, \sigma^{2}_{jk}\right) $$ where σ_jk is a parameter provided to the simulation. For either a binary or continuous phenotype z_i, the simulated methylation value β′_ijk is then $$ \beta^{\prime}_{ijk} = \text{logit}^{-1} \left(\text{logit} \left(\beta_{ijk} \right) + z_{i} e_{ijk} \right). $$ Although all the random effects were simulated on the logit scale, the results are converted back to the (0,1) scale, since several of the cell-type adjustment methods require this range.

Combining across cell types: Each individual is assumed to have a unique mixture of the two cell types in a way that depends on the phenotype, z.
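Before moving on to the mixing step, the DMS effect construction just described can be sketched as below; the means, standard deviations, and dimensions are illustrative choices of ours, not the paper's scenario parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def logit(b):
    return np.log(b / (1.0 - b))

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

n, K = 46, 2                                     # samples and cell types
z = rng.binomial(1, 0.5, size=n)                 # binary phenotype z_i
beta = rng.uniform(0.2, 0.8, size=(n, K))        # true methylation beta_ijk for one DMS probe j

mu_k = np.array([0.5, -0.5])                     # assumed scenario means mu_k, k = 1, 2
sigma_k = np.array([0.1, 0.1])                   # assumed scenario SDs sigma_k
mu_jk = rng.normal(mu_k, sigma_k)                # probe-level means mu_jk ~ N(mu_k, sigma_k^2)
sigma_jk = 0.05                                  # assumed SD of the individual effects
e = rng.normal(mu_jk, sigma_jk, size=(n, K))     # e_ijk ~ N(mu_jk, sigma_jk^2)

# The effect is added on the logit scale, then mapped back into (0, 1)
beta_prime = inv_logit(logit(beta) + z[:, None] * e)
```

Note that for individuals with z_i = 0 the transformation is the identity, so non-affected samples keep their original methylation values.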
Let \(\alpha^{(0)} = \left(\alpha_{1}^{(0)}, \alpha_{2}^{(0)}\right)^{\top}\) represent the average proportions of the two cell types when z_i=0, and let \(\alpha^{(Z)} = \left(\alpha_{1}^{(Z)}, \alpha_{2}^{(Z)}\right)^{\top}\) be these proportions when z_i=Z. We then set $$ \alpha^{(Z)} = \alpha^{(0)} + Z \times \left[\begin{array}{l} \text{average change in proportion in monocytes} \\ \text{average change in proportion in T cells} \end{array}\right]. $$ The cell-type proportions p_ik for individual i were then generated from \(\text{Dirichlet}\left(\rho\,\alpha^{(Z)}\right)\), where ρ>0 is a precision parameter, such that larger precision corresponds to less variation in the observed values. The final simulated beta value for person i at CpG site j becomes $$ \beta^{f}_{ij} = p_{i1} \beta^{\prime}_{ij1} + p_{i2} \beta^{\prime}_{ij2}. $$ Key notation definitions are summarized in Table 1, and parameter choices for the simulations are in Table 2.

Description of adjustment methods

The performance of eight popular methods is compared. Brief descriptions of each method are provided here, and Table 11 compares some key features of the methods, including some details of the implementations. This set of eight methods is not an exhaustive list of all methods available at this time. In fact, in other fields, particularly engineering and computer science, there exists a plethora of other methods under the guise of deconvolution providing the same kind of correction for unmeasured confounding in other high-throughput data sources [39]. However, we include and compare many of the approaches that have been in common usage in the last few years in the world of genomics and epigenomics.

Table 11 Comparison of some features of the methods for cell-type mixture adjustment

Reference-based

This method was published in 2012 by Houseman et al. [19].
It relies on the existence of a separate data set containing methylation measurements on separated cell types. The method uses methylation profiles for the individual cell types to estimate directly the cell-type composition of each sample. However, cell-separated data are not always available for all constituent cell types.

Reference-free

The second method from Houseman et al. does not depend on a reference data set, and can therefore be used in methylation studies on any tissue type [20]. Rather than directly estimating cell-type composition, the reference-free method performs a singular value decomposition on the concatenation of the estimated coefficient and residual matrices from an initial, unadjusted model. A set of latent vectors is then obtained that accounts for cell type in further analyses.

Surrogate variable analysis

Surrogate variable analysis (SVA) is a popular method that was introduced by Leek and Storey in 2007 [23]. It was not specifically intended for use in methylation studies, but is nonetheless well suited for such analyses. SVA seeks a set of surrogate variables that span the same linear space as the unmeasured confounders (i.e. cell-type proportions). It is based on a singular value decomposition of the residual matrix from a regression model not accounting for cell-type composition. The total number of surrogate variables included in the model is based on a permutation test.

Independent surrogate variable analysis

Independent surrogate variable analysis (ISVA) from Teschendorff et al. [24] is very similar in principle to SVA. The main difference is that instead of applying singular value decomposition, it uses independent component analysis (ICA), which attempts to find a set of latent variables that are as statistically independent as possible.

FaST-LMM-EWASher (EWASher)

This method from Zou et al. [22] extends the Factored Spectrally Transformed Linear Mixed Model algorithm (FaST-LMM) [43] for use in the context of EWAS.
A similarity matrix is calculated based on the methylation profiles, and principal components are subsequently included in the linear mixed model until the GIF is controlled. The maximum number of principal components allowed was fixed at ten.

Removing unwanted variation

The method called Removing Unwanted Variation (RUV) was published in 2012 [26] by Gagnon-Bartsch and Speed. It performs a factor analysis on negative control probes to separate out variation due to unmeasured confounders, while leaving the variation due to the factors of interest intact. Here we use RUV-4, an extension to the original published version, which uses elements from RUV as well as SVA [27]. Control probes were chosen from a list of 500 probes on the 450K platform known to be differentially methylated with blood cell type and age [13]. We selected probes that were not strongly correlated with the simulated phenotype.

Deconfounding

The Deconf method from Repsilber et al. [25] was developed for gene expression studies on heterogeneous tissue samples, but is applicable for use in EWAS. The algorithm performs a non-negative matrix factorization on the methylation matrix, but does not consider the phenotype in correcting for the heterogeneity and does not estimate the number of cell types present.

CellCDec

CellCDec was developed by Wagner [21], and is similar to Deconf in that it does not consider the phenotype in performing its decomposition and does not internally estimate the number of cell types present. The method assumes a specific regression parameterization, and makes random perturbations to the model parameters, which are accepted if there is a decrease in the sum of squared residuals.

Additional statistical details

For each of the simulation scenarios 1–6, the simulation was run once, whereas in scenarios 7 and 8, we performed ten replications. DMSs were chosen randomly in each simulation scenario and within each replication.
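Several of the adjustment methods described above (SVA, ISVA, and the reference-free method) are built around a decomposition of a residual matrix. The toy sketch below illustrates only the core idea (regress out the phenotype, take an SVD of the residuals, include the top singular vectors as surrogate covariates); it deliberately omits the significance weighting and the permutation-based choice of dimension used by the published SVA algorithm, and all names and parameter values are ours:

```python
import numpy as np

rng = np.random.default_rng(4)

n, p, K = 46, 1000, 2                        # samples, probes, surrogate variables kept
z = rng.binomial(1, 0.5, size=n).astype(float)
cell_prop = rng.dirichlet([5.0, 5.0], size=n)[:, 0]   # hidden cell-type proportion

# Methylation matrix: a confounding cell-type signal on every probe,
# plus a true phenotype effect on the first 50 probes
M = 0.3 * cell_prop[:, None] + rng.normal(0.0, 0.05, size=(n, p))
M[:, :50] += 0.1 * z[:, None]

# 1) Fit the unadjusted model M ~ z and form the residual matrix
Z = np.column_stack([np.ones(n), z])
coef, *_ = np.linalg.lstsq(Z, M, rcond=None)
resid = M - Z @ coef

# 2) SVD of the residuals; the top-K left singular vectors play the role
#    of surrogate variables spanning the unmeasured confounder space
U, s, Vt = np.linalg.svd(resid, full_matrices=False)
surrogates = U[:, :K]

# 3) Refit each probe with the surrogate variables as extra covariates
Z_adj = np.column_stack([Z, surrogates])
coef_adj, *_ = np.linalg.lstsq(Z_adj, M, rcond=None)
```

In this toy setup the leading surrogate variable tracks the hidden cell-type proportion closely, which is exactly what the downstream per-probe regression needs.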
After cell-type adjustment, a linear model was fitted, including the latent variables as covariates (except in EWASher, where the model was run within the function call). Standard errors were obtained after applying the empirical Bayes method eBayes from the limma package in R. Probes were declared significant if the p value fell below the fixed value 10⁻⁴. In the reference-free method, standard errors were obtained via the bootstrap procedure included in the method. To estimate standard errors, 100 bootstrap samples were generated. We performed test runs with higher numbers of bootstrap samples (500 and 1000), but did not find significant differences in the resulting p values. For SVA, we used the "iteratively re-weighted least squares" option; however, no control probes were specified. For RUV, the control probes were chosen from sites previously shown to be associated with blood cell type and age, but not significantly associated with the phenotype of interest, a necessary condition for a probe to be used as a control in this method. For each of the methods requiring a pre-specified value of K, the latent dimension, we ran the methods multiple times with different values of K. We consistently found that performance did not change with increasing values of K, and so when running the methods CellCDec and Deconf, the value of K was fixed at 3 in all scenarios to keep running times down. For the RUV method, we also noticed better performance when K was fixed at 3, even though the method consistently estimated a much higher value for that parameter. All results shown for RUV were run with K fixed at 3. None of the other adjustment methods had specific options or tuning parameters that needed to be provided by the user.

Compliance with ethical standards

Ethics committee approval for this study was obtained at McGill University and all subjects provided informed written consent to participate in the study.
The Institutional Research Board (IRB) number is [A12 M83 12A]. This study complies with the Helsinki Declaration.

Availability of supporting data

The data sets supporting the results of this article are available in the following repositories: ARCTIC data are in dbGaP under accession number [phs000779.v1.p1], http://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000779.v1.p1. The rheumatoid arthritis data are available under accession number [GSE42861], https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE42861. Data for the cell-type-mixture simulation scenarios are available at Zenodo [10.5281/zenodo.46746], https://zenodo.org/record/46746.

References

Choy MK, Movassagh M, Goh HG, Bennett MR, Down TA, Foo RS. Genome-wide conserved consensus transcription factor binding motifs are hyper-methylated. BMC Genom. 2010; 11(1):519.
Rakyan V, Down T, Balding D, Beck S. Epigenome-wide association studies for common human diseases. Nat Rev Genet. 2011; 12(8):529–41.
Khavari DA, Sen GL, Rinn JL. DNA methylation and epigenetic control of cellular differentiation. Cell Cycle. 2010; 9(19):3880–3.
Meissner A, Mikkelsen TS, Gu H, Wernig M, Hanna J, Sivachenko A, et al. Genome-scale DNA methylation maps of pluripotent and differentiated cells. Nature. 2008; 454(7205):766–70.
Bird A. DNA methylation patterns and epigenetic memory. Genes Dev. 2002; 16:6–21.
Laird P. Principles and challenges of genome-wide DNA methylation analysis. Nat Rev Genet. 2010; 11:191–203.
Farid N. The immunogenetics of autoimmune diseases. Boca Raton, FL: CRC Press; 1991.
Papp G, Horvath I, Barath S, Gyimesi E, Spika S, Szodoray P, et al. Altered T-cell and regulatory cell repertoire in patients with diffuse cutaneous systemic sclerosis. Scand J Rheumatol. 2011; 40:205–10.
Gambichler T, Tigges C, Burkert B, Hoxtermann S, Altmeyer P, Kreuter A. Absolute count of T and B lymphocyte subsets is decreased in systemic sclerosis. Eur J Med Res. 2010; 15:44–6.
Wagner D, Kaltenhauser S, Pierer M, Wilke B, Arnold S, Hantzschel H. B lymphocytopenia in rheumatoid arthritis is associated with the DRB1 shared epitope and increased acute phase response. Arthritis Res. 2002; 4(4):R1.
Manda G, Neagu M, Livescu A, Constantin C, Codreanu C, Radulescu A. Imbalance of peripheral B lymphocytes and NK cells in rheumatoid arthritis. J Cell Mol Med. 2003; 7(1):79–88.
Scott D, Wolfe F, Huizinga T. Rheumatoid arthritis. Lancet. 2010; 376(9746):1094–108.
Jaffe A, Irizarry R. Accounting for cellular heterogeneity is critical in epigenome-wide association studies. Genome Biol. 2014; 15:R31.
Reinius L, Acevedo N, Joerink M, Pershagen G, Dahlen SE, Greco D, et al. Differential DNA methylation in purified human blood cells: implications for cell lineage and studies on disease susceptibility. PLoS One. 2012; 7(7):e41361.
Gu H, Bock C, Mikkelsen T, Jager N, Smith Z, Tomazou E, et al. Genome-scale DNA methylation mapping of clinical samples at single-nucleotide resolution. Nat Methods. 2010; 7:133–6.
Liang L, Cookson W. Grasping nettles: cellular heterogeneity and other confounders in epigenome-wide association studies. Hum Mol Genet. 2014; 21(R1):83–8.
Liu Y, Aryee M, Padyukov L, Fallin M, Hesselberg E, Runarsson A, et al. Epigenome-wide association data implicate DNA methylation as an intermediary of genetic risk in rheumatoid arthritis. Nat Biotechnol. 2013; 31(2):142–8.
Michels K, Binder A, Dedeurwaerder S, Epstein C, Greally J, Gut I, et al. Recommendations for the design and analysis of epigenome-wide association studies. Nat Methods. 2013; 10(10):940–55.
Houseman EA, Accomando WP, Koestler DC, Christensen BC, Marsit CJ, Nelson HH, et al. DNA methylation arrays as surrogate measures of cell mixture distribution. BMC Bioinform. 2012; 13(1):86.
Houseman EA, Molitor J, Marsit CJ. Reference-free cell mixture adjustments in analysis of DNA methylation data. Bioinformatics. 2014; 30(10):1431–9.
Wagner J.
Computational approaches for the study of gene expression, genetic and epigenetic variation in human. Montreal, QC: McGill University School of Computer Science; 2015.
Zou J, Lippert C, Heckerman D, Aryee M, Listgarten J. Epigenome-wide association studies without the need for cell-type composition. Nat Methods. 2014; 11(3):309–11.
Leek JT, Storey JD. Capturing heterogeneity in gene expression studies by surrogate variable analysis. PLoS Genet. 2007; 3(9):e161.
Teschendorff AE, Zhuang J, Widschwendter M. Independent surrogate variable analysis to deconvolve confounding factors in large-scale microarray profiling studies. Bioinformatics. 2011; 27(11):1496–505.
Repsilber D, Kern S, Telaar A, Walzl G, Black GF, Selbig J, et al. Biomarker discovery in heterogeneous tissue samples: taking the in-silico deconfounding approach. BMC Bioinform. 2010; 11(1):27.
Gagnon-Bartsch JA, Speed TP. Using control genes to correct for unwanted variation in microarray data. Biostatistics. 2012; 13(3):539–52.
Gagnon-Bartsch JA, Jacob L, Speed TP. Removing unwanted variation from high dimensional data with negative controls. Berkeley: Department of Statistics, University of California; 2013.
Devlin B, Roeder K. Genomic control for association studies. Biometrics. 1999; 55:997–1004.
Harris RA, Nagy-Szakal D, Kellermayer R. Human metastable epiallele candidates link to common disorders. Epigenetics. 2013; 8(2):157–63.
Silver MJ, Kessler NJ, Hennig BJ, Dominguez-Salas P, Laritsky E, Baker MS, et al. Independent genomewide screens identify the tumor suppressor VTRNA2-1 as a human epiallele responsive to periconceptional environment. Genome Biol. 2015; 16(1):118.
Plerou V, Gopikrishnan P, Rosenow B, Amaral LAN, Guhr T, Stanley HE. Random matrix approach to cross correlations in financial data. Phys Rev E. 2002; 65(6):066126.
Zanke BW, Greenwood CM, Rangrej J, Kustra R, Tenesa A, Farrington SM, et al. Genome-wide association scan identifies a colorectal cancer susceptibility locus on chromosome 8q24. Nat Genet. 2007; 39(8):989–94.
Tsaprouni LG, Yang TP, Bell J, Dick KJ, Kanoni S, Nisbet J, et al. Cigarette smoking reduces DNA methylation levels at multiple genomic loci but the effect is partially reversible upon cessation. Epigenetics. 2014; 9(10):1382–96.
Liu Y, Aryee MJ, Padyukov L, Fallin MD, Hesselberg E, Runarsson A, et al. Epigenome-wide association data implicate DNA methylation as an intermediary of genetic risk in rheumatoid arthritis. Nat Biotechnol. 2013; 31(2):142–7.
Fortin JP, Labbe A, Lemire M, Zanke BW, Hudson TJ, Fertig EJ, et al. Functional normalization of 450k methylation array data improves replication in large cancer studies. Genome Biol. 2014; 15(11):503.
Le Bellego F, Vaillancourt C, Lafond J. Human Embryogenesis: Methods and Protocols, Chapter 4. Methods in Molecular Biology, Vol. 550. Springer; 2009, pp. 73–87.
Kaspi T, Nebel L. Isolation of syncytiotrophoblasts from human term placenta. Obstet Gynecol. 1974; 43:549–57.
Houseman EA, Kelsey KT, Wiencke JK, Marsit CJ. Cell-composition effects in the analysis of DNA methylation array data: a mathematical perspective. BMC Bioinform. 2015; 16:95.
Yadav V, De S. An assessment of computational methods for estimating purity and clonality using genomic data derived from heterogeneous tumor tissue samples. Brief Bioinform. 2014; 16(2):232–41.
Jones MJ, Islam SA, Edgar RD, Kobor MS. Adjusting for cell type composition in DNA methylation data using a regression-based approach. Totowa, NJ: Humana Press, pp. 1–8.
Busche S, Ge B, Vidal R, Spinella J-F, Saillour V, Richer C, Healy J, Chen S-H, Droit A, Sinnett D, Pastinen T. Integration of high-resolution methylome and transcriptome analyses to dissect epigenomic changes in childhood acute lymphoblastic leukemia. Cancer Res. 73(14):4323–36.
McGregor K.
Methods for estimating changes in DNA methylation in the presence of cell type heterogeneity. Montreal, QC: McGill University Department of Epidemiology, Biostatistics, and Occupational Health; 2015.
Lippert C, Listgarten J, Liu Y, Kadie CM, Davidson RI, Heckerman D. FaST linear mixed models for genome-wide association studies. Nat Methods. 2011; 8(10):833–5.

Computations were made on the supercomputer Mammouth parallèle 2 from Université de Sherbrooke, managed by Calcul Québec and Compute Canada. The operation of this supercomputer is funded by the Canada Foundation for Innovation (CFI), NanoQuébec, RMGA, and the Fonds de recherche du Québec, Nature et technologies (FRQ-NT).

McGill University, Department of Epidemiology, Biostatistics, and Occupational Health, 1020 Pine Ave. West, Montréal, H3A 1A2, QC, Canada: Kevin McGregor & Aurélie Labbe
Lady Davis Research Institute, Jewish General Hospital, 3755 Chemin de la Côte Sainte Catherine, Montréal, H3T 1E2, QC, Canada: Kevin McGregor, Sasha Bernatsky, Marie Hudson & Celia M.T. Greenwood
Division of Rheumatology, Jewish General Hospital, Montréal, QC, Canada: Marie Hudson
McGill University and Genome Quebec Innovation Centre, McGill University, Montréal, QC, Canada: Tomi Pastinen
Department of Human Genetics, McGill University, Montréal, QC, Canada
Department of Medicine, McGill University, Montréal, QC, Canada
The Research Institute of the McGill University Health Centre, Montréal, QC, Canada: Ines Colmegna
Department of Psychiatry, McGill University, Montréal, QC, Canada: Aurélie Labbe
The Douglas Mental Health University Institute, Verdun, QC, Canada
Kevin McGregor
Sasha Bernatsky
Celia M.T. Greenwood

Correspondence to Celia M.T. Greenwood.

Study design: KM, AL, CG. Data collection: MH, SB, IC, TP. Data analysis: KM. Writing manuscript: KM, AL, CG, MH. All authors read and approved the final manuscript.
This work was funded by the Ludmer Centre for Neuroinformatics and Mental Health, by the Canadian Institutes of Health Research operating grant MOP-300545, and by a Lady Davis Institute Clinical Research Pilot Project award.

Contains plots showing simulated effect sizes among 500 CpGs in simulation scenario 1, and QQ plots for p values of several simulation scenarios. (PDF 771 kb)

McGregor, K., Bernatsky, S., Colmegna, I. et al. An evaluation of methods correcting for cell-type heterogeneity in DNA methylation studies. Genome Biol 17, 84 (2016). https://doi.org/10.1186/s13059-016-0935-y

Keywords: Cell-type mixture; Matrix decomposition; Epigenomics of Disease
QMUL Experimental Particle Physics Programme 2009-2014
Lead Research Organisation: Queen Mary, University of London
Department Name: Physics

The Queen Mary Experimental Particle Physics Group has an exciting set of particle physics experiments at the forefront of the field. Members of the Group have been working on the design, R&D, construction and commissioning of the ATLAS detector at the CERN LHC, which is just starting to see real data in the form of cosmic-ray events and a few beam-splash events. They are being joined by colleagues from the H1 and BaBar experiments, whose analyses are coming to an end after many years of extremely productive results, including measurements of CP violation in the bottom-quark sector that were recognized in the award of the 2008 Nobel Prize in Physics. The ATLAS Group has also been joined by colleagues from the CDF experiment who are experts on the top quark. The ATLAS group will continue the study of the top quark at the LHC, and the expertise gained will allow us to probe for new physics, such as the discovery of the Higgs particle or Supersymmetry. We will also continue our study of proton structure at the highest possible energies. The Queen Mary Group is also starting to get involved in upgrades to the ATLAS detector for the higher-luminosity Super-LHC, first by participating in the ATLAS Tracker Upgrade programme and later in possible Trigger upgrades. At the other end of the mass scale, other colleagues from BaBar are currently building the T2K long-baseline neutrino experiment in Japan, which will continue the investigation of the recently discovered neutrino oscillations. In addition, the Group will look to exploit new opportunities, such as Super B Factories or Linear Colliders, when they become available.
Oct 10 - Sep 12
STFC ST/H001042/2
Stephen Lloyd
Particle physics - experiment (100%)
Beyond the Standard Model (100%)
Queen Mary, University of London, United Kingdom (Lead Research Organisation)
Stephen Lloyd (Principal Investigator) http://orcid.org/0000-0002-5073-2264
Francesca Di Lodovico (Co-Investigator)
Lucio Cerrito (Co-Investigator)
Eram Rizvi (Co-Investigator) http://orcid.org/0000-0001-9834-2671
Adrian John Bevan (Co-Investigator) http://orcid.org/0000-0002-4105-9629
Alex James Martin (Co-Investigator)
Graham Thompson (Co-Investigator)

Aad G (2016) Search for new phenomena in final states with large jet multiplicities and missing transverse momentum with ATLAS using $\sqrt{s} = 13\ \mbox{TeV}$ proton-proton collisions in Physics Letters B
Aad G (2013) Measurement of the high-mass Drell-Yan differential cross-section in pp collisions at $\sqrt{s} = 7\ \mbox{TeV}$ in Physics Letters B
ATLAS Collaboration (2012) A particle consistent with the Higgs boson observed with the ATLAS detector at the Large Hadron Collider. in Science (New York, N.Y.)
Aad G (2013) Measurements of top quark pair relative differential cross-sections with ATLAS in pp collisions at $\sqrt{s} = 7\ \mbox{TeV}$ in The European Physical Journal C
Aad G (2012) Search for a fermiophobic Higgs boson in the diphoton decay channel with the ATLAS detector in The European Physical Journal C
Aad G (2012) Measurement of the polarisation of W bosons produced with large transverse momentum in pp collisions at $\sqrt{s} = 7\ \mbox{TeV}$ with the ATLAS experiment in The European Physical Journal C
Aad G (2012) Measurement of event shapes at large momentum transfer with the ATLAS detector in pp collisions at $\sqrt{s}=7\ \mathrm{TeV}$ in The European Physical Journal C
Aad G (2013) Single hadron response measurement and calorimeter jet energy scale uncertainty with the ATLAS detector at the LHC in The European Physical Journal C
Aad G (2013) Search for pair-produced massive coloured scalars in four-jet final states with the ATLAS detector in proton-proton collisions at $\sqrt{s} = 7\ \mbox{TeV}$ in The European Physical Journal C
Aad G (2013) Measurement of the flavour composition of dijet events in pp collisions at $\sqrt{s}=7\ \mbox{TeV}$ with the ATLAS detector in The European Physical Journal C

Award: ST/H001042/1, 01/10/2009 to 31/03/2011, £1,088,256
Award: ST/H001042/2 (transfer from ST/H001042/1), 01/10/2010 to 30/09/2012, £2,686,975

Description: We have discovered the Higgs Boson, the fundamental scalar boson that is predicted to give mass to all other particles.
Exploitation Route: Further research is required to establish if this is the Higgs Boson or if it is one of many (possibly Supersymmetric) Higgs Bosons.
Sectors: Education
URL: https://twiki.cern.ch/twiki/bin/view/AtlasPublic
Description: The discovery of the Higgs Boson captured the imagination of millions of people. It will lead to an increased interest in science among the general public and lead to more students studying science at University.
Sector: Education
Impact Types: Societal
December 2016, 9(4): 715-748. doi: 10.3934/krm.2016013

Well-posedness for the Keller-Segel equation with fractional Laplacian and the theory of propagation of chaos

Hui Huang 1, and Jian-Guo Liu 2,
Department of Mathematical Sciences, Tsinghua University, Beijing, 100084, China
Department of Physics and Department of Mathematics, Duke University, Durham, NC 27708

Received July 2015 Revised February 2016 Published September 2016

This paper investigates the generalized Keller-Segel (KS) system with a nonlocal diffusion term $-\nu(-\Delta)^{\frac{\alpha}{2}}\rho~(1<\alpha<2)$. Firstly, the global existence of weak solutions is proved for the initial density $\rho_0\in L^1\cap L^{\frac{d}{\alpha}}(\mathbb{R}^d)~(d\geq2)$ with $\|\rho_0\|_{\frac{d}{\alpha}} < K$, where $K$ is a universal constant depending only on $d,\alpha,\nu$. Moreover, the conservation of mass holds true and the weak solution satisfies some hyper-contractive and decay estimates in $L^r$ for any $1< r<\infty$. Secondly, for the more general initial data $\rho_0\in L^1\cap L^2(\mathbb{R}^d)~(d=2,3)$, the local existence is obtained. Thirdly, for $\rho_0\in L^1\big(\mathbb{R}^d,(1+|x|)dx\big)\cap L^\infty(\mathbb{R}^d)~(d\geq2)$ with $\|\rho_0\|_{\frac{d}{\alpha}} < K$, we prove the uniqueness and stability of weak solutions under the Wasserstein metric through the method of associating the KS equation with a self-consistent stochastic process driven by the rotationally invariant $\alpha$-stable Lévy process $L_{\alpha}(t)$. We also prove that the weak solution is $L^\infty$ bounded uniformly in time.
Lastly, we consider the $N$-particle interacting system with the Lévy process $L_{\alpha}(t)$ and the Newtonian potential aggregation and prove that the expectation of the collision time between particles is below a universal constant if the moment $\int_{\mathbb{R}^d}|x|^\gamma\rho_0dx$ for some $1<\gamma<\alpha$ is below a universal constant $K_\gamma$ and $\nu$ is also below a universal constant. Meanwhile, we prove the propagation of chaos as $N\rightarrow\infty$ for the interacting particle system with a cut-off parameter $\varepsilon\sim(\ln N)^{-\frac{1}{d}}$, and show that the mean field limit equation is exactly the generalized KS equation.

Keywords: rotationally invariant $\alpha$-stable Lévy process, log-Lipschitz continuity, uniqueness of the weak solutions, collision between particles, interacting particle system, Newtonian potential aggregation, stability in Wasserstein metric.

Mathematics Subject Classification: Primary: 65M75, 35K55; Secondary: 60J7.

Citation: Hui Huang, Jian-Guo Liu. Well-posedness for the Keller-Segel equation with fractional Laplacian and the theory of propagation of chaos. Kinetic & Related Models, 2016, 9 (4) : 715-748. doi: 10.3934/krm.2016013
Applied Bioinformatics
For the students and learners of the world.

Chapter 1: Introduction to Biological Sequences, Biopython, and GNU/Linux
1.1 Nucleic Acid Bioinformatics
1.2 Sequences, Strings, and the Genetic Code
1.3 Sequences File Formats
1.4 Lab 1: Introduction to GNU/Linux and Fasta files
1.5 Biological Sequence Database
1.6 Lab 2: FASTQ and Quality Scores
Chapter 2: Sequence Motifs
2.1 Introduction to Motifs
2.2 String Matching
2.3 Consensus Sequences
2.4 Motif Finding
2.5 Promoters
2.6 De novo Motif Finding
2.7 Lab 3: Introduction to Motifs
Chapter 3: Sequence Alignments
3.1 Alignment Algorithms and Dynamic Programming
3.2 Alignment Software
3.3 Alignment Statistics
3.4 Short Read Mapping
3.5 Lab 4: Using BLAST on the command line
Chapter 4: Multiple Sequence Alignments, Molecular Evolution, and Phylogenetics
4.1 Multiple Sequence Alignment
4.2 Phylogenetic Trees
4.3 Models of mutations
4.4 Lab 5: Phylogenetics
Chapter 5: Genomics
5.1 The Three Fundamental "Gotchas" of Genomics
5.2 Genomic Data and File Formats
5.3 Genome Browsers
5.4 Lab 6: Genome Annotation Data
Chapter 6: Transcriptomics
6.1 High-throughput Sequencing (HTS)
6.2 RNA Deep Sequencing
6.3 Small RNA sequencing
6.4 Long RNA sequencing
6.5 Single-Cell Transcriptomics
6.6 Transcription Initiation
6.7 Transcription
6.8 Elongation
6.9 Lab 7: RNA-seq
Chapter 7: Noncoding RNAs
7.1 Small Noncoding RNAs (srcRNAs)
7.2 Long Noncoding RNAs
7.3 RNA Structure Prediction
7.4 Destabilizing energies
7.5 Lab 8: RNA Structure
Chapter 8: Proteins
8.1 Protein Alignment
8.2 Functional Annotation of Proteins
8.3 Secondary Structure prediction
8.4 Gene Ontology
8.5 Lab 9: Proteins
Chapter 9: Gene Regulation
9.1 Transcription Factors and ChIP-seq
9.2 MicroRNA regulation and Small RNA-seq
9.3 Regulatory Networks
9.4 Lab 10: ChIP-seq
Appendix A: Mathematical Preliminaries
Appendix B: Probability

4.1 Multiple Sequence Alignment

A multiple sequence alignment is an alignment of more than 2 sequences.
It turns out that this makes the problem of alignment much more complicated, and much more computationally expensive. Dynamic programming algorithms such as Smith-Waterman can be extended to higher dimensions, but at a significant computing cost. Therefore, numerous methods have been developed to make this task faster.

4.1.1 MSA Methods

Numerous methods have been developed to make computing MSAs more computationally feasible. Despite the computational cost, some programs compute multiple sequence alignments by dynamic programming directly [24,25]. The programs MSA [25] and MULTALIN [26] use dynamic programming, which takes [latex]O(L^N)[/latex] computations for aligning [latex]N[/latex] sequences of length [latex]L[/latex]. The Carrillo-Lipman Algorithm uses pairwise alignments to constrain the search space: by only considering regions of the multi-sequence alignment space that are within a score threshold for each pair of sequences, the [latex]L^N[/latex] search space can be reduced.

Progressive alignments

Progressive alignment begins by aligning each pair of sequences, and then combines the pairs, integrating them into a multiple sequence alignment. The different methods differ in their strategy for combining the pairs into an overall multiple sequence alignment. Most of these methods are "greedy", in that they combine the most similar pairs first, and proceed by fitting the less similar pairs into the MSA. Programs that could be considered progressive aligners include T-Coffee [27], ClustalW and its variants [28], and PSAlign [29].

Iterative Alignment

Iterative alignment improves upon progressive alignment: it starts with a progressive alignment and then incrementally improves the alignment with each iteration. Programs that could be considered iterative aligners include CHAOS/DIALIGN [30] and MUSCLE [31].
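To make the greedy strategy concrete, here is a minimal sketch in pure Python (not the algorithm of any particular program mentioned above). It scores each pair of sequences with a simple Needleman-Wunsch dynamic program, using hypothetical scoring parameters (match +1, mismatch -1, gap -1), and reports the order in which a greedy progressive aligner would consider joining pairs, most similar first.

```python
from itertools import combinations

def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Global (Needleman-Wunsch) alignment score of two sequences,
    computed with a dynamic-programming table kept one row at a time."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        curr = [i * gap]
        for j, cb in enumerate(b, 1):
            curr.append(max(prev[j - 1] + (match if ca == cb else mismatch),
                            prev[j] + gap,       # gap in b
                            curr[j - 1] + gap))  # gap in a
        prev = curr
    return prev[-1]

def greedy_join_order(seqs):
    """Pairs of sequences sorted by similarity, most similar first --
    the order in which a greedy progressive aligner would join them."""
    return sorted(combinations(seqs, 2), key=lambda p: nw_score(*p),
                  reverse=True)

seqs = ["GATTACA", "GATTTCA", "GCTTACA", "TTTTCCC"]
for a, b in greedy_join_order(seqs):
    print(a, b, nw_score(a, b))
```

A real progressive aligner such as ClustalW builds a guide tree from such pairwise scores and then aligns profiles of already-aligned groups; this sketch only illustrates the greedy ordering step, which avoids the [latex]O(L^N)[/latex] cost of full multi-dimensional dynamic programming.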
Full Genome Alignments

Specialized multiple sequence alignment approaches have been developed for aligning complete genomes, to overcome the challenges associated with aligning such long sequences. Some programs that align full genomes include MLAGAN (using LAGAN) [32], MULTIZ (using BLASTZ) [33], LASTZ [34], MUSCLE [31], and MUMmer [35, 36].

4.1.2 MSA File Formats

There are several file formats that are specifically designed for multiple sequence alignments. These formats differ in how human-readable they are, and in how well they are suited to storing large sequence alignments.

Multi-FASTA Format

Probably the simplest multiple sequence alignment format is the Multi-FASTA format (MFA), which is essentially like a FASTA file, such that each entry provides the alignment sequence (with gaps) for a given species. The deflines can in some cases contain only information about the species; the file name, for example, could then indicate what sequence is being described by the file. For short sequences an MFA file can be human readable, but for very long sequences it can become difficult to read. Here is an example [latex]\texttt{.mfa}[/latex] file that shows the alignment of a small (28aa) Drosophila melanogaster peptide called Sarcolamban (isoform C) with its best hits to [latex]\texttt{nr}[/latex]:

>D.melanogaster
--------------------------------MSEARNLFTTFGILAILL
FFLYLIYA---------------------VL---------------
>D.sechellia
FFLYLIYAPAAKSESIKMNEAKSLFTTFLILAFLLFLLYAFYEAAF
>D.pseudoobscura
MSEAKNLMTTFGILAFLLFCLYLIYASNNSKRWPTFCGEAEFRSENSESQ
LLRAFSYERLEQCPNKKYPPKQPTTTTTKPIKMNEARSLFTTFLILAFLL
FLLYAFYEA--------------------AF---------------
>D.busckii
--------------------------------MNEAKSLVTTFLILAFLL

Clustal

The Clustal format was developed for the program [latex]\texttt{clustal}[/latex] [37], but has been widely used by many other programs [28, 38].
This file format is intended to be fairly human readable in that it expresses only a fixed length of the alignment in each section, or block. Here is what the Clustal format looks like for the same Sarcolamban example:

CLUSTAL W (1.83) multiple sequence alignment

D.melanogaster  ------------------------------------------------------------
D.sechellia     ------------------------------------------------------------
D.pseudoobscura MSEAKNLMTTFGILAFLLFCLYLIYASNNSKRWPTFCGEAEFRSENSESQLLRAFSYERL
D.busckii       ------------------------------------------------------------

D.melanogaster  ----------------------MSEARNLFTTFGILAILLFFLYLIYA------------
D.sechellia     ----------------------MSEARNLFTTFGILAILLFFLYLIYAPAAKSESIKMNE
D.pseudoobscura EQCPNKKYPPKQPTTTTTKPIKMNEARSLFTTFLILAFLLFLLYAFYEA-----------
D.busckii       ----------------------MNEAKSLVTTFLILAFLLFLLYAFYEA-----------
                                      *.**:.*.*** ***:***:** :*

D.melanogaster  ---------VL---------------
D.sechellia     AKSLFTTFLILAFLLFLLYAFYEAAF
D.pseudoobscura ---------AF---------------
D.busckii       ---------AF---------------

The clustal format has the following requirements, which can make it difficult to create one manually. First, the first line in the file must start with the words "[latex]\texttt{CLUSTAL W}[/latex]" or "[latex]\texttt{CLUSTALW}[/latex]". Other information in the first line is ignored, but can contain information about the version of [latex]\texttt{CLUSTAL W}[/latex] that was used to create the file. Next, there must be one or more empty lines before the actual sequence data begins. The rest of the file consists of one or more blocks of sequence data. Each block consists of one line for each sequence in the alignment. Each line consists of the sequence name (defline or identifier), some amount of white space, then up to 60 sequence symbols (residue characters or gaps). Optionally, the line can be followed by white space and a cumulative count of residues for the sequence.
The amount of white space between the identifier and the sequence is usually chosen so that the sequence data is aligned within the sequence block. After the sequence lines, there can be a line showing the degree of conservation for the columns of the alignment in this block. Finally, all this can be followed by some number of empty lines.

The Multiple Alignment Format (MAF) is a useful format for storing multiple sequence alignment information. It is often used to store full-genome alignments at the UCSC Genome Bioinformatics site. The file begins with a header beginning with [latex]\texttt{##maf}[/latex] and information about the version and scoring system. The rest of the file consists of alignment blocks. Alignment blocks start with a line that begins with the letter [latex]\texttt{a}[/latex] and a score for the alignment block. Each subsequent line begins with either an [latex]\texttt{s}[/latex], an [latex]\texttt{i}[/latex], or an [latex]\texttt{e}[/latex], indicating what kind of line it is. The lines beginning with [latex]\texttt{s}[/latex] contain sequence information. Lines that begin with [latex]\texttt{i}[/latex] typically follow each [latex]\texttt{s}[/latex]-line, and contain information about what is occurring before and after the sequence in this alignment block for the species considered in the line. Lines beginning with [latex]\texttt{e}[/latex] contain information about empty parts of the alignment block, for species that do not have sequences aligning to this block. For example, the following is a portion of the alignment of the Human Genome (GRCh38/hg38) [latex]\texttt{chr22}[/latex] with 99 vertebrates.
##maf version=1 scoring=roast.v3.3
a score=49441.000000
s hg38.chr22 10514742 28 + 50818468 acagaatggattattggaacagaataga
s panTro4.chrUn_GL393523 96163 28 + 405060 agacaatggattagtggaacagaagaga
i panTro4.chrUn_GL393523 C 0 C 0
s ponAbe2.chrUn 66608224 28 - 72422247 aaagaatggattagtggaacagaataga
i ponAbe2.chrUn C 0 C 0
s nomLeu3.chr6 67506008 28 - 121039945 acagaatagattagtggaacagaataga
i nomLeu3.chr6 C 0 C 0
s rheMac3.chr7 24251349 14 + 170124641 --------------tggaacagaataga
i rheMac3.chr7 C 0 C 0
s macFas5.chr7 24018429 14 + 171882078 --------------tggaacagaataga
i macFas5.chr7 C 0 C 0
s chlSab2.chr26 21952261 14 - 58131712 --------------tggaacagaataga
i chlSab2.chr26 C 0 C 0
s calJac3.chr10 24187336 28 + 132174527 acagaatagaccagtggatcagaataga
i calJac3.chr10 C 0 C 0
s saiBol1.JH378136 10582894 28 - 21366645 acataatagactagtggatcagaataga
i saiBol1.JH378136 C 0 C 0
s eptFus1.JH977629 13032669 12 + 23049436 ----------------gaacaaagcaga
i eptFus1.JH977629 C 0 C 0
e odoRosDiv1.KB229735 169922 2861 + 556676 I
e felCat8.chrB3 91175386 3552 - 148068395 I
e otoGar3.GL873530 132194 0 + 36342412 C
e speTri2.JH393281 9424515 97 + 41493964 I
e myoLuc2.GL429790 1333875 0 - 11218282 C
e myoDav1.KB110799 133834 0 + 1195772 C
e pteAle1.KB031042 11269154 1770 - 35143243 I
e musFur1.GL896926 13230044 2877 + 15480060 I
e canFam3.chr30 13413941 3281 + 40214260 I
e cerSim1.JH767728 28819459 183 + 61284144 I
e equCab2.chr1 43185635 316 - 185838109 I
e orcOrc1.KB316861 20719851 245 - 22150888 I
e camFer1.KB017752 865624 507 + 1978457 I

The [latex]\texttt{s}[/latex] lines contain 5 fields after the [latex]\texttt{s}[/latex] at the beginning of the line. First, the source of the sequence usually consists of a genome assembly version and chromosome name, separated by a dot ".". Next is the start position of the sequence in that assembly/chromosome. This is followed by the size of the sequence from the species, which may of course vary from species to species.
The next field is a strand, "+" or "-", indicating which strand of the species' chromosome the sequence was taken from. The next field is the size of the source, which is typically the length of the chromosome in basepairs from which the sequence was extracted. Lastly, the sequence itself is included in the alignment block.

4.2 Phylogenetic Trees

A phylogenetic tree is a representation of the evolutionary history of a character or sequence. Branching points on the tree typically represent gene duplication events or speciation events. We try to infer the evolutionary history of a sequence by computing an optimal phylogenetic tree that is consistent with the extant sequences or species that we observe.

4.2.1 Representing a Phylogenetic Tree

A phylogenetic tree is often a "binary tree", where each branch point goes from one to two branches. The junction points where the branching takes place are called "internal nodes". One way of representing a tree is with nested parentheses corresponding to branching. Consider the following example

((A,B),(C,D));

where the two characters [latex]\texttt{A}[/latex] and [latex]\texttt{B}[/latex] are grouped together, and the characters [latex]\texttt{C}[/latex] and [latex]\texttt{D}[/latex] are grouped. The semi-colon at the end is needed to make this a tree in proper "newick" tree format. One of the fastest ways to draw a tree on the command line is the "ASCII tree", which we can draw by using the function [latex]\texttt{Phylo.draw_ascii()}[/latex]. To use this, we'll need to save our tree to a text file that we can read in. We could read the tree as just text into the python terminal (creating a string), but that would require loading an additional module [latex]\texttt{cStringIO}[/latex] to use the function [latex]\texttt{StringIO}[/latex]. Therefore, it might be just as easy to save it to a text file called [latex]\texttt{tree.txt}[/latex] that we can read in.
Putting this together, we can draw the tree with the following commands:

>>> from Bio import Phylo
>>> tree = Phylo.read("tree.txt","newick")
>>> Phylo.draw_ascii(tree)
                                    ___________________________________ A
 ___________________________________|
|                                   |___________________________________ B
|
|                                    ___________________________________ C
|___________________________________|
                                     |___________________________________ D

This particular tree has all the characters at the same level, and does not include any distance or "branch length" information. Using real biological sequences, we can compute the distances along each branch to get a more informative tree. For example, we can download 18S rRNA sequences from NCBI Nucleotide. Using clustalw, we can compute a multiple sequence alignment, and produce a phylogenetic tree. In this case, the command

$ clustalw -infile=18S_rRNA.fa -type=DNA -outfile=18S_rRNA.aln

will produce the output file [latex]\texttt{18S_rRNA.dnd}[/latex], which is a tree in newick tree format. The file contains the following information

(fly:0.16718,(mouse:0.00452,human:0.00351):0.05186,chicken:0.24654);

You'll note that this is the same format as the simple example above, but rather than a simple label for each character/sequence, there is a label and a numerical value, corresponding to the branch length, separated by a colon. In addition, each closing parenthesis can be followed by a colon and a numerical value. In each case, the value of the branch length corresponds to the substitutions per site required to change one sequence into another, a common unit of distance used in phylogenetic trees. This value also corresponds to the length of the branch when drawing the tree. The command to draw a tree image is simply [latex]\texttt{Phylo.draw}[/latex], which will allow the user to save the image.
>>> tree = Phylo.read('18S_rRNA.dnd','newick')
>>> Phylo.draw(tree)

The resulting image can be seen in Figure 4.1, and visually demonstrates the branch lengths corresponding to the distance between individual sequences. The x-axis in the representation corresponds to this distance, but the y-axis only separates taxa, and distance along the y-axis does not add to the evolutionary distance between sequences.

Figure 4.1: A phylogenetic tree computed with Clustal for 18S rRNA sequences for Drosophila melanogaster, Homo sapiens, Mus musculus, and Gallus gallus.

4.2.2 Pairwise Distances

Phylogenetic trees are often computed as binary trees with branch lengths that optimally match the pairwise distances between species. In order to compute a phylogenetic tree, we need a way of defining this distance. One strategy, developed by Feng and Doolittle, is to compute a distance from the alignment scores computed from pairwise alignments. The distance is defined in terms of an "effective", or normalized, alignment score for each pair of species:

[latex]D_{ij} = - \ln S_{eff}(i,j)[/latex]

so that pairs of sequences [latex](i,j)[/latex] that have high scores will have a small distance between them. The effective score [latex]S_{eff}(i,j)[/latex] is defined as

[latex]S_{eff}(i,j) = \frac{S_{real}(i,j) - S_{rand}(i,j)}{S_{iden}(i,j) - S_{rand}(i,j)} \times 100[/latex]

where [latex]S_{real}(i,j)[/latex] is the observed pairwise similarity between sequences from species [latex]i[/latex] and [latex]j[/latex]. The value [latex]S_{iden}(i,j)[/latex] is the average of the two scores obtained when aligning the sequences from species [latex]i[/latex] and [latex]j[/latex] to themselves, which represents the score corresponding to aligning "identical" sequences, the maximum possible score one could get.
[latex]S_{rand}(i,j)[/latex] is the average pairwise similarity between randomized, or shuffled, versions of the sequences from species [latex]i[/latex] and [latex]j[/latex]. After this normalization, the score [latex]S_{eff}(i,j)[/latex] ranges from 0 to 100.

4.3 Models of mutations

Evolution is a multi-faceted process, and many forces are involved in molecular evolution. Mutation is a major force in evolution. Mutation can happen when mistakes are made in DNA replication, simply because the DNA replication machinery isn't 100% perfect. Other sources of mutation include exposure to radiation (such as UV radiation), certain chemicals that induce mutations, and viruses that mutate the DNA (as well as insert genetic material). Recombination is a genetic exchange between chromosomes or regions within a chromosome; during meiosis, genes and genomic DNA are shuffled between parent chromosomes. Genetic drift is a stochastic process of changing allele frequencies over time due to random sampling of organisms. Finally, natural selection is the process whereby differences in phenotype can affect the survival and reproduction rates of different individuals.

4.3.1 Genetic Drift

Because genetic drift is a stochastic process, it can be modeled as a "rate". The rate of nucleotide substitutions [latex]r[/latex] can be expressed as

[latex]r = \frac{\mu}{2T}[/latex]

where [latex]\mu[/latex] is the substitutions per site across the genome, and [latex]T[/latex] is the time of divergence of the two (extant) species from their common ancestor. The factor of two can be understood by the fact that it takes a total time of [latex]2T[/latex] to go from [latex]x_1[/latex] to [latex]x_2[/latex], stopping at [latex]x_a[/latex] along the way. The mutations that occur over the time separating [latex]x_1[/latex] and [latex]x_2[/latex] can be viewed as distributed over a time [latex]2T[/latex].
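As a toy illustration of [latex]r = \frac{\mu}{2T}[/latex], the sketch below counts the observed substitutions per site between two aligned sequences from extant species and divides by [latex]2T[/latex]. The sequences and the divergence time here are invented for the example.

```python
def substitution_rate(seq1, seq2, T):
    """Estimate r = mu / (2T) from two aligned, equal-length sequences
    of extant species whose common ancestor lived T generations ago.
    mu is the observed substitutions per site separating the sequences."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    mu = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return mu / (2 * T)

# hypothetical 20-site alignment with 2 substitutions, T = 5 million generations
x1 = "ACGTACGTACGTACGTACGT"
x2 = "ACGTACGAACGTACGTACGA"
r = substitution_rate(x1, x2, 5e6)  # mu = 0.1, spread over 2T = 1e7 generations
```

Note that a raw count of differences underestimates [latex]\mu[/latex] when the same site has been hit more than once; the substitution models discussed below handle such multiple substitutions explicitly.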
Figure 4.2: For two extant species [latex]x_1[/latex] and [latex]x_2[/latex] diverged for a time [latex]T[/latex] from a common ancestor [latex]x_a[/latex], the mutation rate can be expressed in terms of the time [latex]2T[/latex] separating [latex]x_1[/latex] and [latex]x_2[/latex].

In most organisms, the rate is observed to be about [latex]10^{-9}[/latex] to [latex]10^{-8}[/latex] mutations per generation. Some viruses have higher mutation rates, around [latex]10^{-6}[/latex] mutations per generation. The generation times of different species can also affect the nucleotide substitution rate [latex]r[/latex]: organisms with shorter generation times have more opportunities for meiosis per unit time. When mutation rates are evaluated within a gene, positional dependence in nucleotide evolution is observed. Because of the degeneracy of the genetic code, the third position of most codons has a higher substitution rate. Some regions of proteins are conserved domains, hence the corresponding regions of the gene have lower mutation rates compared to other parts of the gene. Other genes, such as immunoglobulins, have very high mutation rates and are considered to be "hypervariable". Noncoding RNAs have functional constraints to preserve hairpins, and may have sequence evolution that preserves base pairing through compensatory changes on the paired nucleotide. The result is that many noncoding RNAs, such as tRNAs, have very conserved structure but vary at the sequence level.

4.3.2 Substitution Models

The branches of phylogenetic trees often represent the expected number of substitutions per site. That is, the distance along the branches of the phylogenetic tree from the ancestor to the extant species corresponds to the expected number of substitutions per site in the time it takes to evolve from the ancestor to the extant species. A substitution model describes the process of substitution from one set of characters to another through mutation.
These models are often neutral, in the sense that selection is not considered, and the characters mutate in an unconstrained way. Furthermore, these models are typically considered to be independent from position to position. Substitution models are typically described by a rate matrix [latex]Q[/latex] with terms [latex]Q_{ab}[/latex] that describe mutating from character [latex]a[/latex] to [latex]b[/latex] for terms where [latex]a \ne b[/latex]. The diagonal terms of the matrix are defined so that each row sums to zero, so that [latex]Q_{aa} = -\sum_{b \ne a} Q_{ab}[/latex] In general, the matrix [latex]Q[/latex] can be defined as: [latex]Q = \begin{pmatrix} * & Q_{AC} & Q_{AG} & Q_{AT} \\ Q_{CA} & * & Q_{CG} & Q_{CT} \\ Q_{GA} & Q_{GC} & * & Q_{GT} \\ Q_{TA} & Q_{TC} & Q_{TG} & * \end{pmatrix}[/latex] where the diagonal terms are defined such that they are consistent with Equation 4.1. The rate matrix is associated with a probability matrix [latex]P(t)[/latex], which describes the probability of observing the mutation from [latex]a[/latex] to [latex]b[/latex] in a time [latex]t[/latex] by the terms [latex]P_{ab}(t)[/latex]. We want these probabilities to be multiplicative, meaning that [latex]P(t_1)P(t_2) = P(t_1+t_2)[/latex]: the mutations accumulated over amounts of time [latex]t_1[/latex] and [latex]t_2[/latex] applied successively are the same as the mutations accumulated over [latex]t_1 + t_2[/latex]. Furthermore, the derivative of [latex]P(t)[/latex] can be expressed as [latex]P'(t) = P(t)Q[/latex] The solution to this equation is the exponential function. The rate matrix itself can be exponentiated to compute the probability of a particular mutation in an amount of time [latex]t[/latex], which can be computed using the Taylor series for the exponential function.
[latex]P(t) = e^{Qt} = \sum_{n=0}^{\infty} Q^n \frac{t^n}{n!}[/latex] Each such model also assumes a set of equilibrium frequencies [latex]\pi[/latex], which describe the probability of each nucleotide after the system has reached equilibrium. 4.3.3 Jukes-Cantor 1969 (JC69) The simplest substitution model was proposed by Jukes and Cantor (JC69), and describes equal rates of evolution between all nucleotides [39]. The JC69 model defines a constant mutation rate [latex]\mu[/latex], and equilibrium frequencies such that [latex]\pi_A = \pi_C = \pi_G = \pi_T = \frac{1}{4}[/latex]. The equilibrium frequencies describe the frequencies of each nucleotide that result after the system has evolved under this model for a "very long time". The rate matrix for the Jukes-Cantor model is then given by: [latex]{Q = \begin{pmatrix} -\frac{3}{4}\mu & \frac{\mu}{4} & \frac{\mu}{4} & \frac{\mu}{4} \\ \frac{\mu}{4} & -\frac{3}{4}\mu & \frac{\mu}{4} & \frac{\mu}{4} \\ \frac{\mu}{4} & \frac{\mu}{4} & -\frac{3}{4}\mu & \frac{\mu}{4} \\ \frac{\mu}{4} & \frac{\mu}{4} & \frac{\mu}{4} & -\frac{3}{4}\mu \end{pmatrix}}[/latex] It can be shown that the full expression for computing [latex]P(t) = e^{Qt}[/latex] is: [latex]P(t) = \begin{pmatrix} \frac{1}{4}(1+3e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) \\ \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1+3e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) \\ \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1+3e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) \\ \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1 - e^{- \mu t}) & \frac{1}{4}(1+3e^{- \mu t}) \end{pmatrix}[/latex] Therefore, we can solve this system to get the probabilities [latex]P_{ab}(t)[/latex], which can be expressed as [latex]\label{JKSimple} \begin{aligned} P_{ab}(t) = \begin{cases} \frac{1}{4} (1 + 3 e^{-\mu t}) & \mbox{if } a = b\\ \frac{1}{4} (1 - e^{-\mu t}) & \mbox{if } a \ne b\\ \end{cases} \end{aligned}[/latex] Finally, we note that the sum of the off-diagonal terms of a row of the matrix [latex]Q[/latex], which correspond to mutation changes, multiplied by the time [latex]t[/latex], gives the expected value of the distance [latex]\hat{d}[/latex] in substitutions per site. For the Jukes-Cantor model, this corresponds to [latex]\hat{d} = \frac{3}{4} \mu t[/latex]. Substituting this into Equation 4.7, each of the three possible changes ([latex]a \ne b[/latex]) has probability [latex]\frac{1}{4} (1 - e^{-\frac{4}{3}\hat{d}})[/latex], so the total proportion of sites that differ is [latex]p = \frac{3}{4} (1 - e^{-\frac{4}{3}\hat{d}})[/latex], which can be solved for [latex]\hat{d}[/latex] to give: [latex]\hat{d} = -\frac{3}{4} \ln (1 - \frac{4}{3} p)[/latex] This formula is often called the Jukes-Cantor distance formula, and it gives a way to relate the proportion of sites that differ, [latex]p[/latex], to the evolutionary distance [latex]\hat{d}[/latex], the expected number of substitutions in the time [latex]t[/latex] for a mutation rate [latex]\mu[/latex]. This formula corrects for the fact that the observed proportion of differing sites, [latex]p[/latex], does not account for sites that mutate and then mutate back to the original character. 4.3.4 Kimura 1980 model (K80) The Jukes-Cantor model considers all mutations to be equally likely. The Kimura model (K80) accounts for the fact that transversions, mutations from a purine to a pyrimidine or vice versa, are less likely than transitions, which are from purine to purine or pyrimidine to pyrimidine [40]. Therefore, this model has two parameters: a rate [latex]\alpha[/latex] for transitions, and a rate [latex]\beta[/latex] for transversions.
[latex]Q = \begin{pmatrix} -(2\beta+\alpha) & \beta & \alpha & \beta \\ \beta & -(2\beta+\alpha) & \beta & \alpha \\ \alpha & \beta & -(2\beta+\alpha) & \beta \\ \beta & \alpha & \beta & -(2\beta+\alpha) \end{pmatrix}[/latex] Applying a similar derivation as was done for the JC69 model, we get the following distance formula for K80: [latex]{d = -\frac{1}{2} \ln(1 - 2p - q) - \frac{1}{4} \ln(1 - 2q)}[/latex] where [latex]p[/latex] is the proportion of sites that show a transition, and [latex]q[/latex] is the proportion of sites that show a transversion. 4.3.5 Felsenstein 1981 model (F81) The Felsenstein model essentially makes the assumption that the rate of mutation to a given nucleotide has a specific value equal to its equilibrium frequency [latex]\pi_b[/latex], but these values vary from nucleotide to nucleotide [41]. The rate matrix is then defined as: [latex]Q = \begin{pmatrix} * & \pi_{C} & \pi_{G} & \pi_{T} \\ \pi_{A} & * & \pi_{G} & \pi_{T} \\ \pi_{A} & \pi_{C} & * & \pi_{T} \\ \pi_{A} & \pi_{C} & \pi_{G} & * \end{pmatrix}[/latex] 4.3.6 The Hasegawa, Kishino and Yano model (HKY85) The Hasegawa, Kishino and Yano model takes the K80 and F81 models a step further, combining unequal equilibrium frequencies with the distinction between transitions and transversions [42]. In this expression, the transitions are weighted by an additional term [latex]\kappa[/latex]. [latex]Q = \begin{pmatrix} * & \pi_{C} & \kappa \pi_{G} & \pi_{T} \\ \pi_{A} & * & \pi_{G} & \kappa \pi_{T} \\ \kappa \pi_{A} & \pi_{C} & * & \pi_{T} \\ \pi_{A} & \kappa \pi_{C} & \pi_{G} & * \end{pmatrix}[/latex] 4.3.7 Generalized Time-Reversible Model This model has 6 rate parameters and 4 equilibrium frequencies; since the frequencies sum to 1, it has 9 free parameters [43]. This model is the most detailed, but also requires the largest number of parameters to describe. 4.3.8 Building Phylogenetic Trees Now that we have a probabilistic framework with which to describe phylogenetic distances, we need some methods to build a tree from a set of pair-wise distances.
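Before turning to tree building, the model machinery above is easy to check numerically. The sketch below (assuming NumPy is available; the variable values and function names are ours) exponentiates the JC69 rate matrix with the Taylor series, compares the result with the closed-form entries of P(t), and computes the JC69 and K80 distance corrections:

```python
import math
import numpy as np

mu, t = 1.0, 0.5

# JC69 rate matrix: off-diagonal rates mu/4, diagonals set so each row sums to zero
Q = np.full((4, 4), mu / 4.0)
np.fill_diagonal(Q, -3.0 * mu / 4.0)

def taylor_expm(M, terms=40):
    """Matrix exponential e^M computed from its Taylor series: sum over n of M^n / n!."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        result = result + term
    return result

# P(t) from exponentiation matches the closed-form JC69 probabilities
P = taylor_expm(Q * t)
assert np.allclose(np.diag(P), 0.25 * (1 + 3 * math.exp(-mu * t)))  # a == b entries
assert np.allclose(P[0, 1], 0.25 * (1 - math.exp(-mu * t)))         # a != b entries
assert np.allclose(P.sum(axis=1), 1.0)  # each row of P(t) is a probability distribution

def jc69_distance(p):
    """Jukes-Cantor distance from the proportion p of sites that differ."""
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

def k80_distance(p, q):
    """K80 distance from the proportions of transitions p and transversions q."""
    return -0.5 * math.log(1.0 - 2.0 * p - q) - 0.25 * math.log(1.0 - 2.0 * q)

d_jc = jc69_distance(0.3)       # roughly 0.38 substitutions per site
d_k80 = k80_distance(0.2, 0.1)  # roughly 0.40 substitutions per site
```

Note that the distance corrections exceed the raw proportion of differing sites, reflecting unobserved back-mutations.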
Here are two basic approaches to building phylogenetic trees. Unweighted Pair Group Method with Arithmetic Mean (UPGMA) Algorithm UPGMA is a phylogenetic tree building algorithm that uses a type of hierarchical clustering [44]. This algorithm builds a rooted tree by creating internal nodes for each pair of taxa (or internal nodes), starting with the most similar and proceeding to the least similar. This approach starts with a distance matrix [latex]d_{ij}[/latex] for taxa [latex]i[/latex] and [latex]j[/latex]. When branches are built connecting [latex]i[/latex] and [latex]j[/latex], an internal node [latex]k[/latex] is created, which corresponds to a cluster [latex]C_k[/latex] containing [latex]i[/latex] and [latex]j[/latex]. Distances are updated such that the distance between a cluster (internal node) and a leaf node is the average distance between all members of the cluster and the leaf node. Similarly, the distance between two clusters is the average distance between their members. Neighbor Joining Algorithm One of the issues with UPGMA is the fact that it is a greedy algorithm, and joins the closest taxa first; there are tree structures where this fails. To get around this, the Neighbor Joining algorithm normalizes the distances and computes a set of new distances that avoid this issue [45]. Beginning with a star tree and the original matrix of pairwise distances [latex]d_{i,j}[/latex] between each pair of sequences/taxa, a new distance matrix [latex]D_{i,j}[/latex], normalized against all the other distances, is computed using the following formula: [latex]D_{i,j} = d_{i,j} - \frac{1}{n-2} \left( \sum_{k=1}^n d_{i,k} + \sum_{k=1}^n d_{j,k} \right)[/latex]
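This distance update can be sketched with NumPy (the function name is ours; `d` is assumed to be a symmetric matrix with zero diagonal):

```python
import numpy as np

def nj_adjusted_distances(d):
    """Neighbor-joining normalization: D_ij = d_ij - (sum_k d_ik + sum_k d_jk) / (n - 2)."""
    n = d.shape[0]
    row_sums = d.sum(axis=1)
    # Broadcasting adds row i's total to row j's total for every pair (i, j)
    D = d - (row_sums[:, None] + row_sums[None, :]) / (n - 2)
    np.fill_diagonal(D, 0.0)  # only off-diagonal entries are compared when picking neighbors
    return D

# A small 4-taxon example (illustrative distances)
d = np.array([[0, 3, 8, 8],
              [3, 0, 9, 9],
              [8, 9, 0, 4],
              [8, 9, 4, 0]], dtype=float)
D = nj_adjusted_distances(d)
```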
The closest taxa under this adjusted distance are identified, and an internal node is created such that the distance along the branch connecting these two nearest taxa is the distance [latex]D_{i,j}[/latex]. This process is repeated using the new (internal) node as a taxon, and the distances are updated. 4.3.9 Evaluating the Quality of a Phylogenetic Tree Maximum Parsimony Maximum Parsimony makes the assumption that the best phylogenetic tree is the one with the shortest branch lengths possible, which corresponds to the fewest mutations needed to explain the observed characters [46, 47]. This method begins by identifying the phylogenetically informative sites. These sites must have a character present (no gaps) for all taxa under consideration, and must not have the same character for all taxa. Then trees are constructed, and characters (or sets of characters) are inferred for each internal node all the way up to the root. Each tree is then assigned a cost, corresponding to the total number of mutations that need to be assumed to explain that tree. The tree with the shortest total branch length is typically chosen. The length of the tree is defined as the sum of the lengths of each individual character (or column of the alignment) [latex]L_j[/latex], possibly using a weight [latex]w_j[/latex] for different characters (often just [latex]1[/latex] for all columns, but certain positions could be weighted more). [latex]{L = \sum_{j=1}^C w_j L_j}[/latex] In this expression, the length of a character [latex]L_j[/latex] can be computed as the total number of mutations needed to explain the distribution of characters given the topology of the tree. Maximum Likelihood Maximum likelihood is an approach that computes a likelihood for a tree, using a probabilistic model for each tree [48, 49]. The probabilistic model, such as the Jukes-Cantor model, is applied to each branch of the tree, and the likelihood of a tree is the product of the probabilities of each branch.
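For the simplest possible tree, two observed leaves joined at an unknown ancestor, this likelihood can be sketched under the Jukes-Cantor model by summing over the possible ancestral characters (a minimal illustration with uniform ancestral priors; function names are ours):

```python
import math

BASES = "ACGT"

def jc69_p(a, b, mu, t):
    """Jukes-Cantor transition probability P_ab(t)."""
    if a == b:
        return 0.25 * (1.0 + 3.0 * math.exp(-mu * t))
    return 0.25 * (1.0 - math.exp(-mu * t))

def two_leaf_likelihood(x1, x2, t1, t2, mu=1.0):
    """P(x1, x2 | tree) = sum over ancestors a of pi_a * P_{a,x1}(t1) * P_{a,x2}(t2)."""
    return sum(0.25 * jc69_p(a, x1, mu, t1) * jc69_p(a, x2, mu, t2) for a in BASES)

# Sanity check: the likelihoods over all 16 possible leaf pairs sum to 1
total = sum(two_leaf_likelihood(x1, x2, 0.3, 0.7) for x1 in BASES for x2 in BASES)
```

As expected, matching leaf characters are more likely than mismatched ones for finite branch lengths.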
Generally speaking, we seek to find the tree [latex]\mathcal{T}[/latex] that has the greatest likelihood given the data [latex]D[/latex]. By Bayes' rule, [latex]P(\mathcal{T}|D) = \frac{P(D|\mathcal{T})P(\mathcal{T})}{P(D)}[/latex] These probabilities could be computed from an evolutionary model, such as the Jukes-Cantor model. For example, if we have observed characters [latex]x_1[/latex] and [latex]x_2[/latex], an unknown ancestral character [latex]x_a[/latex], and branch lengths from [latex]x_a[/latex] of [latex]t_1[/latex] and [latex]t_2[/latex] respectively, we could compute the likelihood of this simple tree as [latex]P(x_1,x_2|\mathcal{T},t_1,t_2) = \sum_a p_a P_{a,x_1}(t_1) P_{a,x_2}(t_2)[/latex] where we are summing over all possible ancestral characters [latex]a[/latex], and computing the probability of mutating along the branches using a probabilistic model. This probabilistic model can use the same probability matrices discussed in Equation 4.6. 4.3.10 Tree Searching In addition to computing the quality of a specific tree, we also want ways of searching the space of trees. Because the space of all possible trees is so large, we can't exhaustively enumerate them all in a practical amount of time. Therefore, we need to sample different trees stochastically. Two such methods are Nearest Neighbor Interchange (NNI) and Subtree Pruning and Re-grafting (SPR). Nearest Neighbor Interchange Figure 4.3: Nearest Neighbor Interchange operates by swapping out the locations of subtrees within a tree. For a tree with four sub-trees, there are only 2 possible interchanges. For a tree topology that contains four or more taxa, the Nearest Neighbor Interchange (NNI) exchanges subtrees within the larger tree. It can be shown that for a tree with four subtrees there are only 3 distinct ways to exchange subtrees to create a new tree, including the original tree. Therefore, each such application produces two new trees that are different from the input tree [50, 51].
Subtree Pruning and Regrafting Figure 4.4: A depiction of the SPR tree searching method. A. One subtree of the larger tree structure is selected. B. An attachment point is selected. C. The subtree is then "grafted", or attached, to the attachment point. The Subtree Pruning and Regrafting (SPR) method takes a subtree from a larger tree, removes it and reattaches it to another part of the tree [52, 53]. In this lab, we will learn some basic commands for computing phylogenetic trees, and some python commands that will draw the tree. Let's create a new directory called [latex]\texttt{Lab5}[/latex] to work in. 4.4.1 Download Sequences from NCBI Download sequences in FASTA format for a gene of interest from NCBI nucleotide (https://www.ncbi.nlm.nih.gov/nuccore). Build a FASTA file containing each sequence and a defline. Shorten the defline for each species to make it easier to read later. As an example, I downloaded 18S rRNA for Human (Homo sapiens), Mouse (Mus musculus), Rat (Rattus norvegicus), Frog (Xenopus laevis), Chicken (Gallus gallus), Fly (Drosophila melanogaster) and Arabidopsis (Arabidopsis thaliana). You can download these 18S rRNA sequences with the following command: $ wget http://hendrixlab.cgrb.oregonstate.edu/teaching/18S_rRNAs.fasta 4.4.2 Create a Multiple Sequence Alignment and Phylogenetic Tree with Clustal First, use [latex]\texttt{clustalw2}[/latex] to align the sequences, and output a multiple sequence alignment and dendrogram file.
$ clustalw2 -infile=18S_rRNAs.fasta -type=DNA -outfile=18S_rRNAs.aln The dendrogram file, indicated by the "[latex]\texttt{.dnd}[/latex]" suffix, can be used to create an image of a phylogenetic tree using Biopython. First, enter the python terminal by typing "[latex]\texttt{python}[/latex]" and then the following: >>> import matplotlib as mpl >>> mpl.use('Agg') >>> import matplotlib.pyplot as pyplot >>> from Bio import Phylo >>> tree = Phylo.read('18S_rRNAs.dnd','newick') >>> Phylo.draw(tree, do_show=False) >>> pyplot.savefig("myTree.png") >>> quit() How does the resulting tree compare to what you expect given these species? 4.4.3 Create a Multiple Sequence Alignment and Phylogenetic Tree with phyML The program phyML provides much more flexibility in what sort of trees it can compute. To use it, we'll need to convert our alignment file to phylip format. We can do this with the Biopython module AlignIO: >>> from Bio import AlignIO >>> alignment = AlignIO.parse("18S_rRNAs.aln","clustal") >>> AlignIO.write(alignment,open("18S_rRNAs.phy","w"),"phylip") You can run phyml in the simplest way by simply typing "phyml" and then entering your alignment file name: $ phyml Enter the sequence file name > 18S_rRNAs.phy The program will give you a set of options, and you can optionally change them. To change the model, type "+" and then "M" to toggle through the models. Finally, hit "enter" to run the program. The resulting tree can be drawn as before: >>> tree = Phylo.read('18S_rRNAs.phy_phyml_tree.txt','newick') >>> Phylo.draw(tree, do_show=False) >>> pyplot.savefig("myTreeML.png") How does the tree created from [latex]\texttt{clustalw2}[/latex] compare to the tree created using [latex]\texttt{phyml}[/latex]? Applied Bioinformatics by David A. Hendrix is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
Applied Water Science June 2017, Volume 7, Issue 3, pp 1429–1438 Hydrochemical characteristics and quality assessment of groundwater along the Manavalakurichi coast, Tamil Nadu, India Y. Srinivas, T. B. Aghil, D. Hudson Oliver, C. Nithya Nair, N. Chandrasekar The present study was carried out to find the groundwater quality of the coastal aquifer along the Manavalakurichi coast. For this study, a total of 30 groundwater samples were collected randomly from open wells and borewells. The concentrations of major ions and other geochemical parameters in the groundwater were analyzed in the laboratory by adopting standard procedures suggested by the American Public Health Association. The order of the dominant cations in the study area was found to be Na+ > Ca2+ > Mg2+ > K+, whereas the sequence of dominant anions was \({\text{Cl}}^{ - } > {\text{HCO}}_{3}^{ - } > {\text{SO}}_{4}^{2 - }\). The hydrogeochemical facies of the groundwater samples were studied by constructing a Piper trilinear diagram, which revealed evidence of saltwater intrusion into the study area. The obtained geochemical parameters were compared with the standard permissible limits suggested by the World Health Organization and the Indian Standards Institution to determine the drinking water quality in the study area. The analysis suggests that the groundwater from the wells W25 and W26 is unsuitable for drinking. The suitability of groundwater for irrigation was studied by calculating percent sodium, sodium absorption ratio and residual sodium carbonate values. The Wilcox and USSL plots were also prepared. It was found that the groundwater from the stations W1, W25 and W26 is unfit for irrigation. The Gibbs plots were also sketched to study the mechanisms controlling the geochemical composition of groundwater in the study area. Keywords: Groundwater, Quality Assessment, Tamil Nadu, India. Groundwater is an essential source of drinking water for numerous people around the world.
Saline water intrusion in coastal aquifers due to overexploitation of groundwater and other anthropogenic activities is a serious environmental issue nowadays. To protect the aquifers from quality degradation, geochemical assessment and monitoring are necessary. It is estimated that approximately one-third of the world's population uses groundwater for drinking purposes (Nickson et al. 2005). In India, more than 90 % of the rural and nearly 30 % of the urban population depends on groundwater for their drinking and domestic requirements (Jaiswal et al. 2003). According to Babiker et al. (2007), the chemistry of groundwater is not only related to the lithology of the area and the residence time the water is in contact with rock material, but also reflects inputs from the atmosphere, soil and weathering, as well as pollutant sources such as saline intrusion, mining, and industrial and domestic wastes. Excessive irrigation activities have also resulted in groundwater pollution in India (Pawar and Shaikh 1995; Sujatha and Reddy 2003). The overexploitation of groundwater along coastal areas has become a major environmental issue throughout the world. In many coastal areas, human settlements together with the day-by-day development of industrial, agricultural and tourist activities have led to overexploitation of aquifers. Because of this exploitation, the chances of intrusion of seawater into the inland aquifer are very high, which also affects the quality of groundwater. In the present study, the geochemical characteristics of groundwater are determined to assess the groundwater quality along the coastal aquifer of Manavalakurichi, located in Kanyakumari District of Tamil Nadu, India. The study area Manavalakurichi is a town panchayat located at the southern end of Tamil Nadu, India. The study area extends from 8°7′ to 8°10′N latitude and 77°19′ to 77°19′E longitude. It is connected by road network to Trivandrum, Tirunelveli and Kanyakumari.
The study area is situated in Kanyakumari District; it is bounded on the south by the Indian Ocean and on the west by the Arabian Sea. Manavalakurichi and its surroundings are well known for their heavy mineral deposits. Several mining areas under the control of Indian Rare Earths Limited (IREL) are situated here. The average temperature in the study area varies from 22.8 to 33.6 °C, with an average rainfall of about 846–1456 mm. The present study area is shown in Fig. 1. A major part of the study area is geologically covered by Quaternary fluvio-marine deposits. The north and northwestern regions of the district are completely occupied by the Western Ghats mountains, with a maximum elevation of 1658 m (CGWB 2008). The coastal tract in the south of the study area is a thin strip of plain region that has a width of 1–2.5 km (Srinivas et al. 2013) and is mostly covered by marshy swamps and a number of sand dunes (Teri sands). The soil types of the present area are classified into red soil, red lateritic soil, brown soil and coastal sand. The soils of this district are in situ in nature, and pale reddish, lateritic and earthy in color (CGWB 2008). Location map of the study area The study area base map was scanned and digitized from the Survey of India (SOI) toposheet No. 58 H/8 (1:50,000) (GSI 1995). ArcGIS 9.3 software was used for digitization and to analyze the data for groundwater quality evaluation. Groundwater samples were randomly collected from 30 open wells and borewells during May 2013, representing the premonsoon season (Fig. 1). For sample collection, high-density polyethylene bottles were used. The bottles were immediately sealed after collecting the sample to avoid reaction with the atmosphere. The sample bottles were labeled systematically. The collected samples were analyzed in the laboratory for various physicochemical parameters.
During sample collection, handling, preservation and analysis, standard procedures recommended by the American Public Health Association (APHA 1995) were followed to ensure data quality and consistency. The pH and electrical conductivity (EC) were measured in situ using a Hanna (HI9828, USA) multi-parameter probe, and the major ions (Ca2+, Mg2+, Na+, K+, \({\text{HCO}}_{3}^{ - }\), \({\text{SO}}_{4}^{2 - }\), Cl−) were analyzed in the laboratory using the standard methods suggested by the American Public Health Association (APHA 1995). Among the analyzed ions, sodium (Na+) and potassium (K+) were determined using a flame photometer. Calcium (Ca2+), magnesium (Mg2+), bicarbonate \(( {\text{HCO}}_{3}^{ - } )\) and chloride (Cl−) were analyzed by volumetric methods, and sulfate \(( {\text{SO}}_{4}^{2 - } )\) was estimated using a spectrophotometer. The Piper diagram, Gibbs plot, Wilcox diagram and the USSL plot were used for studying the quality of the groundwater in detail. The spatial analysis of various physico-chemical parameters was carried out using ArcGIS 9.3 software. An inverse distance weighted (IDW) algorithm was used to interpolate the data spatially and to estimate the values between measurements. The IDW technique calculates a value for each grid node by examining surrounding data points that lie within a user-defined search radius (Burroughs and McDonnell 1998). All of the data points are used in the interpolation process, and the node value is calculated by averaging the weighted sum of all the points. Results and discussions The analytical results have been evaluated to find out the suitability of groundwater in the study area for drinking and agricultural uses. By comparing the obtained values of different water quality parameters with the guidelines of the World Health Organization (WHO 2011) and the Indian Standards Institute (ISI 1983), the suitability of groundwater for drinking and agricultural purposes can be assessed (Table 1).
Table 1: Ranges of chemical parameters and their comparison with the WHO (2011) and Indian (ISI 1983) standards for drinking water. Samples exceeding the permissible limits: pH: W2–W11, W16–W19, W21, W25, W29, W30; TDS (mg/l): W1, W12, W16, W26, W27; Na+ (mg/l): W1, W25, W26; K+ (mg/l): W1, W12–W17, W20, W22, W24, W26; Ca2+ (mg/l); Mg2+ (mg/l); \({\text{HCO}}_{3}^{ - }\) (mg/l); \({\text{SO}}_{4}^{2 - }\) (mg/l); Cl− (mg/l). The concentration and behavior of major ions such as Ca2+, Mg2+, Na+, K+, \({\text{HCO}}_{3}^{ - }\), \({\text{SO}}_{4}^{2 - }\) and Cl−, some important physico-chemical parameters such as pH, EC and total dissolved solids (TDS), and the suitability of groundwater in the study area are discussed below. The balance between the concentration of hydrogen ions and hydroxyl ions in water is termed the pH. The limit of pH value for drinking water is specified as 6.5–8.5 (WHO 2011; ISI 1983). The pH of water provides vital information in many types of geochemical equilibrium or solubility calculations (Hem 1985). The pH value of most of the groundwater samples in the study area varies from 5.3 to 7.4, which clearly shows that the groundwater in the study area is slightly acidic in nature. This may be attributed to anthropogenic activities such as sewage disposal and the use of fertilizers in agricultural lands (paddy fields and coconut plantations) of the study area. It may also be due to natural phenomena like the intrusion of brackish water into the sandy aquifer, which initiates the weathering process of the underlying geology (Sarat Prasanth et al. 2012). Electrical conductivity (EC) EC is a measure of the capacity of water to convey electric current. The most desirable limit of EC in drinking water is prescribed as 1500 µS/cm (WHO 2011). The EC of the groundwater in the study area varies from 67.2 to 2771.8 µS/cm with an average value of 818 µS/cm. The higher EC in a few groundwater samples indicates the enrichment of salts in the groundwater.
The value of EC may be an approximate index of the total content of dissolved substances in water. It depends upon temperature, concentration and the types of ions present (Hem 1985). The effect of saline intrusion may be the reason for the moderate enrichment of EC in the study area. The effect of pH may also increase the dissolution process, which eventually increases the EC value (Sarat Prasanth et al. 2012). Calcium and magnesium The concentration of calcium ions (Ca2+) in the groundwater samples ranges from 6.4 to 124.2 mg/l with an average value of 34.3 mg/l, and that of magnesium ions (Mg2+) ranges from 0.5 to 118.2 mg/l (average value of 39.1 mg/l) (Table 1). The principal sources of calcium and magnesium in most of the groundwater samples are detrital minerals such as plagioclase feldspar, pyroxene, amphibole and garnet (Hounslow 1995). The limestone, dolomite, gypsum, anhydrite and clay minerals among the sedimentary rocks of the coastal region also enhance the calcium and magnesium content in the groundwater (Chandrasekar et al. 2013). Reverse cationic exchange, i.e., the replacement of calcium and magnesium ions by sodium ions in the groundwater, may be the reason for the lower concentrations of Ca2+ and Mg2+ in some areas (Jacob et al. 1999). Figure 2a, b shows the spatial distribution of calcium and magnesium in the study area. a Spatial distribution of calcium (Ca2+) in the study area. b Spatial distribution of magnesium (Mg2+) in the study area. c Spatial distribution of sodium (Na+) in the study area. d Spatial distribution of potassium (K+) in the study area Sodium and potassium The concentration of sodium ions (Na+) in the collected groundwater samples varies from 18 to 560 mg/l (Table 1). The maximum permissible limit of sodium is 200 mg/l, and the present study reveals that a few samples exceed the permissible limits of the WHO and ISI. Groundwater with high Na+ content is not suitable for agricultural use, as it tends to deteriorate the soil.
Potassium is a naturally occurring element; however, its concentration remains quite low compared with Ca2+, Mg2+ and Na+. Its concentration in drinking water seldom reaches 20 mg/l (Hounslow 1995). The concentration of K+ is between 0.5 and 83 mg/l in the groundwater of the study area. The maximum permissible limit of potassium in drinking water is 12 mg/l, and it was found that a few samples exceeded the permissible limit of the WHO. The higher concentration of potassium in groundwater is due to anthropogenic sources and saline water intrusion (Krishnakumar et al. 2009). Figure 2c, d shows the spatial variation of sodium and potassium in the study area. Chloride ion (Cl−) concentrations in the collected groundwater samples vary from 21.3 to 756.2 mg/l with a mean value of 169.7 mg/l (Table 1). According to the WHO (2011) standards, the maximum permissible limit of chloride in groundwater is 600 mg/l. An increased chloride concentration in a freshwater aquifer is one of the indicators of seawater intrusion. Appelo and Postma (1993) and Raju et al. (2011) have suggested that a high concentration of chloride may also result from pollution by domestic sewage waste, sand leaching and saline residues in soil. Based on the ionic concentrations, sodium and chloride are found to be the dominant cation and anion, respectively. Mostly, sodium and chloride dominate the seawater ionic content, while calcium and bicarbonate are generally the major ions of freshwater (Hem 1985). Therefore, the higher concentration of sodium and chloride ions in the groundwater from the study area indicates a significant effect of saltwater intrusion in a few regions. The spatial distribution of chloride in the study area is shown in Fig. 3a. a Spatial distribution of chloride (Cl−) in the study area. b Spatial distribution of bicarbonate (\({\text{HCO}}_{3}^{ - }\)) in the study area. c Spatial distribution of sulfate (\({\text{SO}}_{4}^{2 - }\)) in the study area.
d Spatial distribution of total dissolved solids (TDS) in the study area Bicarbonate The observed \({\text{HCO}}_{3}^{ - }\) values in the groundwater samples range from 110 to 549 mg/l (Table 1). Bicarbonate is the dominant anion in the majority of the samples, except in a few groundwater samples collected near the coast. The higher concentration of \({\text{HCO}}_{3}^{ - }\) in the water shows the dominance of mineral dissolution (Stumm and Morgan 1996). The spatial distribution of the bicarbonate ion in the study area is shown in Fig. 3b. The concentration of sulfate (\({\text{SO}}_{4}^{2 - }\)) ranges from 2 to 78.5 mg/l with an average value of 34.3 mg/l (Table 1). A high value of sulfate content in groundwater is due to reduction, precipitation, solution and concentration as the water traverses sedimentary rocks such as gypsum and anhydrite (Hounslow 1995). In the study area, the sulfate concentration is within the limits of the WHO and ISI standards. The spatial distribution of sulfate ions in the study area is given in Fig. 3c. Total dissolved solids (TDS) According to the WHO specifications, TDS in groundwater up to 500 mg/l is the highest desirable limit and up to 1000 mg/l is the maximum permissible limit. In the study area, the TDS values vary between a minimum of 41.6 mg/l and a maximum of 1775 mg/l (Table 1). According to the Davis and De Wiest (1966) classification of groundwater samples based on TDS, 60 % of the total groundwater samples in the study area are desirable for drinking (TDS < 500 mg/l), 23.3 % of the samples are permissible for drinking (500–1000 mg/l) and 16.6 % are suitable only for irrigation purposes (TDS > 1000 mg/l). The spatial distribution of TDS in the study area is given in Fig. 3d. Hydrogeochemical facies The geochemical evolution of groundwater can be understood by plotting the concentrations of major cations and anions in the Piper trilinear diagram (Piper 1944).
The nature and distribution of hydrochemical facies can be determined from such plots, which provide insights into how groundwater quality changes within and between aquifers. Trilinear diagrams can be used to delineate the hydrogeochemical facies, because they graphically demonstrate the relationships between the most important dissolved constituents in a set of groundwater samples. The hydrogeochemistry of groundwater in the study area was evaluated using the concentrations of major cations (Ca2+, Mg2+, Na+ and K+) and anions (\({\text{HCO}}_{3}^{ - }\), \({\text{SO}}_{4}^{2 - }\) and Cl−) in meq/l. The hydrogeochemical facies for groundwater in the study area are shown in a Piper diagram (Fig. 4). The Piper diagram indicates that sodium is the major cation and chloride is the major anion. Piper trilinear diagram representing the geochemical evolution of groundwater Pulido-Leboeuf (2004) pointed out that groundwater of Na–Cl type generally indicates strong seawater intrusion; anthropogenic contamination is caused by the excessive use of fertilizers on the coastal land. In some regions, a calcium- and magnesium-dominated water type is seen. The sources of calcium and magnesium in these regions are probably the shell deposits of fluvio-marine origin (Hounslow 1995). Sodium percentage The sodium content of irrigation waters is usually expressed as percent sodium or sodium percentage. According to Wilcox (1955), Na % is a common parameter for assessing the suitability of natural waters for irrigation purposes. The sodium percent (Na %) values are obtained using the following equation: $${\text{Na}}\,\% = \frac{{\text{Na}}^{ + } \times 100}{{\text{Ca}}^{ 2+ } + {\text{Mg}}^{ 2+ } + {\text{Na}}^{ + } + {\text{K}}^{ + }},$$ where all ionic concentrations are expressed in meq/l.
The classification of the groundwater samples according to Wilcox (Table 2) on the basis of sodium percent shows that 53.33 % of the samples are of doubtful quality for irrigation, 16 % are within the permissible limit, 30 % are within the good quality limit and no samples fall in the unsuitable category. Table 2: Classification of groundwater for irrigation purposes on the basis of Na % (Wilcox 1955) and SAR (Bouwer 1978). Sodium adsorption ratio The sodium adsorption ratio (SAR) is a measure of the suitability of groundwater for irrigation use, because high sodium concentrations can reduce soil permeability and degrade soil structure (Todd 2007). SAR is a measure of the alkali/sodium hazard to crops and is estimated by the following formula: $${\text{SAR}} = \frac{{\text{Na}}^{ + }}{\sqrt{({\text{Ca}}^{ 2+ } + {\text{Mg}}^{ 2+ })/2}},$$ where the concentrations of sodium, calcium and magnesium are in meq/l. The SAR value of irrigation water has a significant relationship with the extent to which sodium is adsorbed by the soil. Irrigation using water with high SAR values may require soil amendments to prevent long-term damage to the soil, because the sodium in the water can displace the calcium and magnesium in the soil. This causes a decrease in the ability of the soil to form stable aggregates and a loss of soil structure, which in turn decreases the infiltration and permeability of the soil to water, leading to problems with crop production (Chandrasekar et al. 2013). The calculated SAR values in the study area vary between 1.55 and 11.18 meq/l. The classification of groundwater samples based on SAR values is given in Table 2.
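The two indices above follow directly from the measured ion concentrations. The following is a small illustrative sketch of the Na % and SAR formulas; the sample values used are hypothetical and are not taken from Table 1.

```python
# Illustrative sketch: computing Na % and SAR from major-ion concentrations
# expressed in meq/l, following the two equations above. Sample values below
# are hypothetical, not from the paper's data set.
import math

def sodium_percent(na, ca, mg, k):
    """Na % = Na+ * 100 / (Ca2+ + Mg2+ + Na+ + K+), all in meq/l."""
    return na * 100.0 / (ca + mg + na + k)

def sar(na, ca, mg):
    """SAR = Na+ / sqrt((Ca2+ + Mg2+) / 2), all in meq/l."""
    return na / math.sqrt((ca + mg) / 2.0)

# Hypothetical sample (meq/l), purely for demonstration.
na, ca, mg, k = 6.0, 3.0, 2.0, 0.1
print(round(sodium_percent(na, ca, mg, k), 1))  # 54.1
print(round(sar(na, ca, mg), 2))                # 3.79
```

With these hypothetical values the sample would fall in the doubtful Na % class (> 40 %) but in the excellent-to-good SAR range, illustrating why both indices are reported side by side in Table 2.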
The SAR values of the majority of the samples fall within the excellent to good categories, except for sample nos. 1, 25 and 26, which were found to be unsuitable for irrigation purposes. USSL plot The United States Salinity Laboratory (USSL) has constructed a diagram for the classification of irrigation waters (Wilcox 1955), describing 16 classes with reference to SAR as an index of sodium hazard and EC as an index of salinity hazard. The USSL diagram highlights that 33 % of the samples fall in the C3S1 field, which indicates water of high salinity and low sodium (alkali) hazard (Fig. 5). Three samples fall in the C4S3, C3S3 and C3S2 categories, respectively, indicating high to very high salinity hazard and medium to high sodium hazard, while the remaining samples fall in the C1S1 and C2S1 regions, indicating low to medium salinity and low sodium hazard. USSL diagram representing the salinity and sodium hazard Wilcox diagram Wilcox (1955) classified groundwater for irrigation purposes by correlating percent sodium and EC; this classification suggests that 60 % of the total of 30 samples fall within the excellent to good limit and 30 % within the good to permissible limit (Fig. 6). Only 10 % of the samples (W1, W25, W26) fall within the doubtful to unsuitable limit, which illustrates that the groundwater from these wells is not fit for agricultural use. Suitability of groundwater for irrigation in the Wilcox diagram Mechanisms controlling the groundwater chemistry Reactions between aquifer minerals and groundwater play a significant role in water quality and are also useful for understanding the genesis of groundwater (Cederstorm 1946). Generally, the groundwater chemistry in the study region is controlled by several processes and mechanisms. The Gibbs plot is used here to understand and differentiate the influences of rock–water interaction, evaporation and precipitation on water chemistry (Gibbs 1970). The Gibbs diagram (Fig.
7a, b) illustrates that most of the groundwater samples from the study area fall in the rock-dominant region. The cation versus TDS curves denote that the cations in some of the wells may be derived from evaporation or crystallization processes. The Gibbs diagrams suggest that the weathering of rocks primarily controls the major ion chemistry of groundwater in this region. Gibbs diagrams representing the mechanism controlling the chemistry of groundwater. a Major anions versus TDS. b Major cations versus TDS Contamination of groundwater generally results in poor drinking water quality, loss of water supply, high cleanup costs, high-cost alternative water supplies and potential health problems. In the present study, the interpretation of the hydrochemical analysis reveals that the groundwater in the study area is slightly acidic in nature. The sequence of abundance of the major cations is Na+ > Ca2+ > Mg2+ > K+ and that of the anions is \({\text{Cl}}^{ - } > {\text{HCO}}_{3}^{ - } > {\text{SO}}_{4}^{2 - }\). The dominant cation and anion are sodium and chloride, respectively, which shows the saline nature of the groundwater in some regions. In the study area, rock weathering and evaporation are the major hydrogeochemical processes responsible for the concentration of major ions in the groundwater. The drinking water quality analysis shows that the groundwater from wells W25 and W26 is not fit for drinking, as it has higher concentrations of ions than the standard permissible limits. The classification of groundwater based on its suitability for irrigation also reveals that wells W1, W25 and W26 are unsuitable for irrigation. Thus, among the 30 well samples analyzed, the groundwater from wells W1, W25 and W26 was found to be the most hazardous in the study area. The harmful nature of the groundwater may be due to natural saline water intrusion and also to anthropogenic activities. The present study was carried out under a UGC major research project.
The first author is grateful to the UGC for providing the financial support under this major research project No. 41-940/2012 (SR). The authors are very thankful to the anonymous reviewers for their critical review and comments, which helped immensely in improving the quality of the manuscript.
References
APHA (1995) Standard methods for the examination of water and waste water, 19th edn. American Public Health Association, Washington, DC
Appelo CAJ, Postma D (1993) Geochemistry, groundwater and pollution. AA Balkema, Rotterdam
Babiker IS, Mohamed MAA, Hiyama T (2007) Assessing groundwater quality using GIS. Water Resour Manag 21:699–715
Bouwer H (1978) Ground water hydrology. McGraw-Hill, New York
Burroughs PA, McDonnell RA (1998) Principles of geographical information systems. Oxford University Press, Oxford, p 333
Cederstorm DJ (1946) Genesis of groundwater in the coastal plain of Virginia. Environ Geol 41:218–245
CGWB (2008) District groundwater brochure, Kanyakumari district, Tamilnadu. Government of India, Ministry of Resources
Chandrasekar N, Selvakumar S, Srinivas Y, John Wilson JS, Simon Peter T, Magesh NS (2013) Hydrogeochemical assessment of groundwater quality along the coastal aquifers of southern Tamil Nadu, India. J Environ Earth Sci 71(11):4739–4750. doi:10.1007/s12665-013-2864-3
Davis SN, De Wiest RJM (1966) Hydrogeology, vol 463. Wiley, New York
Gibbs RJ (1970) Mechanisms controlling world water chemistry. Sci J 170:795–840
GSI (1995) Geological Survey of India, geological and mineral map of Tamil Nadu and Pondicherry (scale 1:500000)
Hem JD (1985) Study and interpretation of the chemical characteristics of natural waters, 3rd edn. USGS Water Supply Paper 2254:117–120
Hounslow AW (1995) Water quality data—analysis and interpretation. Lewis Publishers, Boca Raton
Indian Standard Institution (ISI) (1983) Drinking water standard substances or characteristic affecting the acceptability of water for domestic use. ISI0500, pp 1–22
Jacob CT, Azariah J, Viji Roy AG (1999) Impact of textile industries on river Noyyal and riverine groundwater quality of Tirupur, India. Pollut Res 18(4):359–368
Jaiswal RK, Mukherjee S, Krishnamurthy J, Saxena R (2003) Role of remote sensing and GIS techniques for generation of groundwater prospect zones towards rural development—an approach. Int J Remote Sens 24(5):993–1008
Krishnakumar S, Ram Mohan V, Dajkumar SJ, Jeevanandam M (2009) Assessment of groundwater quality and hydrogeochemistry of Manimuktha River basin, Tamil Nadu, India. Environ Monit Assess 159(1–4):341–351
Nickson R, McArthur JM, Shrestha B, Kyaw-Myint TO, Lowry D (2005) Arsenic and other drinking water quality issues, Muzaffargarh district, Pakistan. Appl Geochem 20:55–68
Pawar NJ, Shaikh IJ (1995) Nitrate pollution of ground waters from shallow basaltic aquifers, Deccan trap Hydrologic Province, India. Environ Geol 25:197–204
Piper AM (1944) A graphic procedure in geochemical interpretation of water analysis. Trans Am Geophys Union 25(6):914–928
Pulido-Leboeuf P (2004) Seawater intrusion and associated processes in a small coastal complex aquifer (Castell de Ferro, Spain). Appl Geochem 19:1517–1527
Raju NJ, Shukla UK, Ram P (2011) Hydrogeochemistry for the assessment of groundwater quality in Varanasi: a fast-urbanizing center in Uttar Pradesh, India. Environ Monit Assess 173:279–300
Sarath Prasanth SV, Magesh NS, Jitheshlal KV, Chandrasekar N, Gangadhar K (2012) Evaluation of groundwater quality and its suitability for drinking and agricultural use in the coastal stretch of Alappuzha district, Kerala, India. Appl Water Sci. doi:10.1007/s13201-012-0042-5
Srinivas Y, Hudson Oliver D, Stanley Raj A, Chandrasekar N (2013) Evaluation of groundwater quality in and around Nagercoil town, Tamilnadu, India: an integrated geochemical and GIS approach. J Appl Water Sci 3:631–651
Stumm W, Morgan JJ (1996) Aquatic chemistry. Wiley, New York, p 1022
Sujatha D, Reddy RB (2003) Quality characterization of groundwater in the south eastern parts of the Ranga Reddy district, Andhra Pradesh, India. Environ Geol 44(5):570–576
Todd DK (2007) Groundwater hydrology, 3rd edn. Wiley, New York, p 347
Wilcox LV (1955) Classification and use of irrigation waters. USDA Circular No. 969, p 19
World Health Organization (WHO) (2011) Guidelines for drinking water quality, 4th edn. World Health Organization, Geneva
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
1. Centre for GeoTechnology, Manonmaniam Sundaranar University, Tirunelveli, India
2. Senthamarai College of Arts and Science, Madurai, India
Srinivas, Y., Aghil, T.B., Hudson Oliver, D. et al. Appl Water Sci (2017) 7: 1429. https://doi.org/10.1007/s13201-015-0325-8. Received 12 January 2015; Accepted 24 August 2015.
Journal of Statistical Physics, pp 1–26
Rare Event Simulation for Stochastic Dynamics in Continuous Time
Letizia Angeli, Stefan Grosskinsky, Adam M. Johansen, Andrea Pizzoferrato
Abstract: Large deviations for additive path functionals of stochastic dynamics and related numerical approaches have attracted significant recent research interest. We focus on the question of convergence properties for cloning algorithms in continuous time, and establish connections to the literature on particle filters and sequential Monte Carlo methods. This enables us to derive rigorous convergence bounds for cloning algorithms, which we report in this paper, with details of proofs given in a further publication. The tilted generator characterizing the large deviation rate function can be associated to non-linear processes which give rise to several representations of the dynamics and additional freedom for associated numerical approximations. We discuss these choices in detail, and combine insights from the filtering literature and cloning algorithms to compare different approaches and improve efficiency.
Keywords: Dynamic large deviations, Interacting particle systems, Cloning algorithm, Sequential Monte Carlo
Communicated by Abhishek Dhar.
1 Introduction
Large deviation simulation techniques based on classical ideas of evolutionary algorithms [1, 2] have been proposed under the name of 'cloning algorithms' in [3] for discrete and in [4] for continuous time processes, in order to study rare events of dynamic observables of interacting lattice gases. This approach has subsequently been applied in a wide variety of contexts (see e.g. [5, 6, 7, 8] and references therein), and more recently the convergence properties of the algorithm have become a subject of interest. Analytical approaches so far are based on a branching process interpretation of the algorithm in discrete time [9], with limited and mostly numerical results in continuous time [10].
Systematic errors in these methods arise from the correlation structure of the cloning ensemble and can be large in practice; several variants of the approach have been proposed to address them, including e.g. a multicanonical feedback control [7], adaptive sampling methods [11] or systematic resampling [12]. A recent survey of these issues and of different variants of cloning algorithms in discrete and continuous time can be found in [13, Sect. 3]. In this paper we provide a novel perspective on the underlying structure of the cloning algorithm, which is in fact well established in the statistics and applied probability literature on Feynman–Kac models and particle filters [14, 15, 16]. The framework we develop here can be used to generalize rigorous convergence results in [17] to the setting of continuous-time cloning algorithms as introduced in [4]. Full mathematical details of this work are published in [18]; here we focus on describing the underlying approach and report the main convergence results. A second motivation is to use different McKean interpretations of Feynman–Kac semigroups (see Sect. 2.2) to highlight several degrees of freedom in the design of cloning-type algorithms that can be used to improve performance. We illustrate this with the example of current large deviations for the inclusion process (originally introduced in [19]), aspects of which have previously been studied in [20]. Current fluctuations in stochastic lattice gases have attracted significant recent research interest (see e.g. [21, 22, 23] and references therein), and are one of the main and particularly challenging application areas of cloning algorithms. In contrast to previous work in the context of cloning algorithms [9, 10], our mathematical approach does not require a time discretization and works in a very general setting of a jump Markov process on a compact state space. This covers in particular any finite state Markov chain or stochastic lattice gas on a finite lattice.
The paper is organized as follows. In Sect. 2 we introduce notation, the Feynman–Kac semigroup and several representations of the associated non-linear process. In Sect. 3 we describe different particle approximations including the cloning algorithm, and summarize results published in [18] on convergence properties of estimators based on the latter. In Sect. 4 we describe a modification of the cloning algorithm for a particular class of stochastic lattice gases and apply it to the inclusion process as an example.
2 Mathematical Setting
2.1 Large Deviations and the Tilted Generator
We consider a continuous-time Markov jump process \(\big ( X(t) :t\ge 0\big )\) on a compact state space E. To fix ideas we can think of a finite state Markov chain, such as a stochastic lattice gas on a finite lattice \(\Lambda \) with a fixed number of particles M. Here E is of the form \(S^\Lambda \) with a finite set S of local states (e.g. \(S=\{ 0,1\}\) or \(\{ 0,\ldots ,M\}\)), but continuous settings with compact \(E\subset \mathbb {R}^d\) for any \(d\ge 1\) are also included. One can in principle also generalize to separable and locally compact state spaces, including countable Markov chains and lattice gases on finite lattices with open boundaries. But this would require more effort and complicate not only the proof but also the presentation of the main results for technical reasons which we want to avoid here (see [18] for a more detailed discussion). Jump rates are given by the kernel W(x, dy), such that for all \(x\in E\) and measurable subsets \(A\subset E\) $$\begin{aligned} \mathbb {P}\big [ X(t+\Delta t) \in A\big | X(t) =x\big ] =\Delta t\int _A W(x,dy) +o(\Delta t)\quad \text{ as } \Delta t\rightarrow 0,\ \end{aligned}$$ where \(o(\Delta t)/\Delta t\rightarrow 0\).
We use the standard notation \(\mathbb {P}\) and \(\mathbb {E}\) for the distribution and the corresponding expectation on the usual path space for jump processes $$\begin{aligned} \Omega =\big \{ \omega :[0,\infty )\rightarrow E \text{ right } \text{ continuous } \text{ with } \text{ left } \text{ limits }\big \}.\ \end{aligned}$$ If we want to stress a particular initial condition \(x\in E\) of the process we write \(\mathbb {P}_x\) and \(\mathbb {E}_x\). The process can be characterized by the infinitesimal generator $$\begin{aligned} \mathcal {L}f(x)=\int _E W(x,dy)[f(y)-f(x)], \quad \forall f\in \mathcal {C}_b (E), \, x\in E, \end{aligned}$$ acting on all continuous bounded functions \(f\in \mathcal {C}_b (E)\) on the state space. The adjoint \(\mathcal {L}^\dagger \) of this operator acts on probability distributions \(\mu \) on E, and determines their time evolution via $$\begin{aligned} \frac{d}{dt} \mu _t (dy)=\int _{x\in E} \mu _t (dx) W(x,dy)-\int _{x\in E} \mu _t (dy) W(y,dx),\ \end{aligned}$$ where \(\mathbb {P}[X_t \in A]=\int _A \mu _t (dy)\) for any regular \(A\subset E\) characterizes the distribution of the process at time \(t\ge 0\). In case of countable E, (2.3) is simply the usual master equation of the process for \(\mu _t (y)=\mathbb {P}[X_t =y]\), but we focus our presentation on the equivalent description via the generator (2.2), which leads to a more compact notation and applies in the general setting. 
As a technical assumption we require that the total exit rate of the process is uniformly bounded $$\begin{aligned} w(x):=\int _E W(x,dy) \le \bar{w} <\infty \quad \text{ for } \text{ all } x\in E.\ \end{aligned}$$ We are interested in the large deviations of additive path space observables \(A_T:\Omega \rightarrow \mathbb {R}\) of the general form $$\begin{aligned} A_T(\omega ):=\sum _{\begin{array}{c} t\le T\\ \omega (t-)\ne \omega (t) \end{array}}g\big (\omega (t-),\,\omega (t)\big )\,+\,\int _0^T h\big (\omega (t)\big )dt, \end{aligned}$$ where \(g\in \mathcal {C}_b (E^2)\) and \(h\in \mathcal {C}_b (E)\). Note that \(A_T\) is well defined since the bound on w(x) implies that the sum in the first term almost surely contains only finitely many terms for any \(T>0\). The above functional, which recently appeared in this form in [24], assigns a weight via the function g to jumps of the process, as well as to the local time via the function h. Dynamics conditioned on such a functional have been studied in many contexts [5], including driven diffusions on periodic continuous spaces E [25]. As mentioned before, the simplest examples covered by our setting are Markov chains with finite state space E. This includes stochastic particle systems on a finite lattice with periodic or closed boundary conditions such as zero-range or inclusion processes [20, 23, 26], and also processes with open boundaries and bounded local state space such as the exclusion process [3]. Choosing g appropriately and \(h\equiv 0\) the functional \(A_T\) can, for example, measure the empirical particle current across a bond of the lattice or within the whole system up to time T. 
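To make the setting concrete, a jump process with bounded exit rates together with the additive functional \(A_T\) of (2.5) can be simulated with the standard Gillespie (random clock) scheme. The following is a hedged sketch for a toy three-state kernel W with illustrative weight functions g and h; none of these specific choices come from the paper.

```python
# Sketch: simulate X(t) on a finite state space by drawing an exponential
# holding time with the total exit rate w(x), then a target state y with
# probability W(x,y)/w(x), while accumulating
#   A_T = sum over jumps of g(x-, x)  +  integral_0^T h(X(t)) dt.
# The 3-state rates W and the weights g, h below are toy assumptions.
import random

W = {0: {1: 1.0, 2: 0.5},        # W[x][y]: jump rate from x to y
     1: {0: 0.7, 2: 1.2},
     2: {0: 0.3, 1: 0.9}}
g = lambda x, y: 1.0 if y == (x + 1) % 3 else -1.0   # weight per jump
h = lambda x: 0.1 * x                                 # weight per unit time

def simulate(T, x0=0, rng=random.Random(1)):
    t, x, A = 0.0, x0, 0.0
    while True:
        w = sum(W[x].values())          # total exit rate w(x)
        dt = rng.expovariate(w)         # exponential holding time
        if t + dt > T:                  # no further jump before time T
            A += h(x) * (T - t)
            return A
        A += h(x) * dt
        t += dt
        r = rng.uniform(0.0, w)         # choose y with probability W(x,y)/w(x)
        for y, wy in W[x].items():
            r -= wy
            if r <= 0.0:
                A += g(x, y)
                x = y
                break
```

Choosing g to be the indicator of jumps across a given bond and \(h\equiv 0\) would turn \(A_T\) into an empirical current, as described above.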
We assume that \(A_T\) admits a large deviation rate function, which is a lower semi-continuous function \(I:\mathbb {R}\rightarrow [0,\infty ]\) such that $$\begin{aligned} \lim _{T\rightarrow \infty } -\frac{1}{T}\log \mathbb {P}[A_T /T\in U] =\inf _{a\in U} I(a) \end{aligned}$$ for all regular intervals \(U\subset \mathbb {R}\) (see e.g. [27, 28] for a more general discussion). Based on the graphical construction of the jump process and the contraction principle, existence and convexity of I can be established in a very general setting for countable state space (see e.g. [29] and references therein). If I is convex, it is characterized by the scaled cumulant generating function (SCGF) $$\begin{aligned} \lambda _k :=\lim _{T\rightarrow \infty } \frac{1}{T}\log \mathbb {E}\big [ e^{kA_T}\big ] \end{aligned}$$ via the Legendre transform $$\begin{aligned} I(a)=\sup _{k\in \mathbb {R}} \big ( ka -\lambda _k \big ).\ \end{aligned}$$ It is well known (see e.g. [24, 26]) that \(\lambda _k\) can be characterized as the principal eigenvalue of a tilted version of the generator (2.2) $$\begin{aligned} \mathcal {L}_k f (x)&:=\int _E W(x,dy)\big [ e^{kg(x,y)} f(y)-f(x)\big ] +kh(x)f(x)\nonumber \\&=\underbrace{\int _E W_k (x,dy)\big [ f(y)-f(x)\big ]}_{=:\widehat{\mathcal {L}}_k f(x)} +\mathcal {V}_k (x)f(x) \end{aligned}$$ with modified rates for the jump part \(\hat{\mathcal {L}}_k\) $$\begin{aligned} W_k (x,dy):=W(x,dy) e^{kg(x,y)},\ \end{aligned}$$ and potential for the diagonal part $$\begin{aligned} \mathcal {V}_k (x):=\int _E W(x,dy)\big [ e^{kg(x,y)} -1\big ] +kh(x).\ \end{aligned}$$ By (2.4) and the boundedness of g and h, for each \(k\in \mathbb {R}\) there exist constants such that we have uniform bounds $$\begin{aligned} \int _E W_k (x,dy)\le \bar{w}_k\quad \text{ and }\quad \mathcal {V}_k (x)\le \bar{v}_k\quad \text{ for } \text{ all } x\in E.\ \end{aligned}$$ Note also that \(\mathcal {L}_0 =\mathcal {L}\), but for any \(k\ne 0\) the diagonal part of the 
operator does not vanish and $$\begin{aligned} \mathcal {L}_k 1(x)=\mathcal {V}_k (x)\quad \text{ for } \text{ all } x\in E.\ \end{aligned}$$ Still, it generates a Feynman–Kac semigroup (see e.g. [17, 24] for details), defined as $$\begin{aligned} P_t^k f(x)=\big ( e^{t\mathcal {L}_k}\big )f(x):=\mathbb {E}_x\big [ f(X(t))\, e^{kA_t}\big ],\ \end{aligned}$$ which is the unique solution to the backward equation $$\begin{aligned} \frac{d}{dt}P_t^k f=P_t^k (\mathcal {L}_k f)\quad \text{ with }\quad P_0^k f=f.\ \end{aligned}$$ Due to the diagonal part of \(\mathcal {L}_k\) this does not conserve probability, i.e. for the constant function \(f\equiv 1\) we get $$\begin{aligned} P_t^k 1(x)=\mathbb {E}_x \big [ e^{kA_t}\big ]\ne 1\quad \text{ for } \text{ all } k\ne 0,\ t>0.\ \end{aligned}$$ The associated logarithmic growth rate $$\begin{aligned} \lambda _k (t):=\frac{1}{t}\log P_t^k 1(x) \end{aligned}$$ provides a finite-time approximation of the SCGF \(\lambda _k\), which depends on the initial condition \(x\in E\). We require convergence of this approximation as \(t\rightarrow \infty \) as an asymptotic stability property of the process, discussed in detail in [18] and references therein. Under exponential mixing assumptions, which are mild in our contexts of interest, it can be shown that for some constant \(C>0\) $$\begin{aligned} \big |\lambda _k (t)-\lambda _k\big |\le C/t \quad \text{ as } t\rightarrow \infty \ . \end{aligned}$$ This is for example the case if E is finite, since the process then necessarily has a spectral gap; this applies to all finite state lattice gases mentioned earlier.
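For a finite state space the SCGF can also be computed directly as the principal eigenvalue of the tilted generator (2.9), which provides a useful exact benchmark for simulation methods. The following numpy sketch uses a toy two-state chain with g counting every jump and \(h\equiv 0\); the rates and observable are illustrative assumptions, not an example from the paper.

```python
# Sketch: for finite E the SCGF lambda_k is the principal eigenvalue of the
# tilted generator, with matrix elements
#   (L_k)_{xy} = W(x,y) e^{k g(x,y)} for x != y,  (L_k)_{xx} = -w(x) + k h(x).
# Toy two-state example with g = 1 (counting all jumps) and h = 0.
import numpy as np

def scgf(W, g, h, k):
    """Principal eigenvalue of L_k. W: (n,n) rates with zero diagonal,
    g: (n,n) jump weights, h: (n,) local-time weights."""
    n = W.shape[0]
    Lk = W * np.exp(k * g)                         # tilted off-diagonal rates
    Lk[np.diag_indices(n)] = -W.sum(axis=1) + k * h
    return np.max(np.linalg.eigvals(Lk).real)

a, b, k = 1.0, 2.0, 0.3
W = np.array([[0.0, a], [b, 0.0]])
g = np.ones((2, 2))
h = np.zeros(2)
lam = scgf(W, g, h, k)   # here lam solves (a+lam)(b+lam) = a b e^{2k}
```

At \(k=0\) the tilted generator reduces to the original generator and the principal eigenvalue is 0, a convenient sanity check.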
Furthermore, exponential mixing implies that the modified finite-time approximation $$\begin{aligned} \lambda _k (at,t):=\frac{1}{(1-a)t}\log \frac{P_t^k 1(x)}{P_{at}^k 1(x)} \quad \text{ with } a\in (0,1),\ \end{aligned}$$ with a 'burn-in' time period of length at, significantly improves the convergence in (2.18) to $$\begin{aligned} \big |\lambda _k (at,t)-\lambda _k\big |\le C\rho ^{at}/t \quad \text{ as } t\rightarrow \infty \end{aligned}$$ for some \(\rho \in (0,1)\). This is of course routinely used in Monte Carlo sampling, where systems are allowed to relax towards stationarity before measuring. These intrinsic properties of the process and the related finite-time errors in the estimation of \(\lambda _k\) are not the main subject of this paper. In the following we simply assume asymptotic stability and (2.18), and focus on the efficient numerical estimation of \(\lambda _k (t)\) for any given \(t\ge 0\). \(\lambda _k (at,t)\) can be treated completely analogously, which is discussed in more detail in [18].
2.2 McKean Interpretation of the Feynman–Kac Semigroup
As usual, for a given initial distribution \(\nu _0^k\) on E the semigroup (2.14) determines a measure \(\nu _t^k\) at times \(t\ge 0\) on E, which can be characterized weakly through integrals of bounded test functions \(f\in C_b (E)\) as $$\begin{aligned} \nu _t^k (f):=\int _E f(x)\nu _t^k (dx)=\int _E P_t^k f(x)\nu _0^k (dx).\ \end{aligned}$$ Here and in the following we use the common short notation \(\nu _t^k (f)\) for the integral of the function f under the measure \(\nu _t^k\) to simplify notation. Note that we can write (2.17) as $$\begin{aligned} \lambda _k (t)=\frac{1}{t}\log \nu _t^k (1),\ \end{aligned}$$ with a more general initial condition \(\nu _0^k\). But since \(P_t^k\) does not conserve probability, \(\nu _t^k\) is not a normalized probability measure, so it cannot be sampled from directly.
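Although \(\nu _t^k\) itself cannot be sampled, the defining expectation \(P_t^k 1(x)=\mathbb {E}_x [e^{kA_t}]\) can in principle be estimated by brute-force Monte Carlo over independent trajectories of the original process. The sketch below does this for a toy two-state chain with \(A_t\) counting all jumps (rates and seed are illustrative assumptions); the variance of this naive estimator grows quickly with t, which is precisely what motivates the particle approximations of Sect. 3.

```python
# Naive Monte Carlo estimate of lambda_k(t) = (1/t) log E[e^{k A_t}] for a
# two-state chain with rates a, b, where A_t counts all jumps.
# Toy parameters, not from the paper; for large t this estimator degrades,
# motivating cloning/particle methods.
import math, random

def sample_A(t_final, a, b, rng):
    """One trajectory: return the number of jumps up to t_final."""
    t, x, jumps = 0.0, 0, 0
    while True:
        rate = a if x == 0 else b          # exit rate in current state
        t += rng.expovariate(rate)
        if t > t_final:
            return jumps
        x, jumps = 1 - x, jumps + 1

def lambda_hat(k, t_final, n, a=1.0, b=2.0, seed=7):
    rng = random.Random(seed)
    mean = sum(math.exp(k * sample_A(t_final, a, b, rng))
               for _ in range(n)) / n
    return math.log(mean) / t_final
```

For small t and moderate k the estimate can be checked against the principal-eigenvalue value of the SCGF; for \(k=0\) it returns exactly 0, since \(e^{kA_t}\equiv 1\).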
With (2.16) \(\nu _t^k (1)>0\) for all \(t\ge 0\), and we can define normalized versions of the measures via $$\begin{aligned} \mu _t^k (f):=\nu _t^k (f)/\nu _t^k (1).\ \end{aligned}$$ Using (2.9), (2.13) and (2.15) one can derive the evolution equation $$\begin{aligned} \frac{d}{dt}\mu _t^k (f)&=\frac{d}{dt}\frac{\nu _t^k (f)}{\nu _t^k (1)} = \frac{1}{\nu _t^k (1)}\, \nu _t^k (\mathcal {L}_k f)-\frac{\nu _t^k (f)}{\nu _t^k (1)^2}\,\nu _t^k (\mathcal {L}_k 1)\nonumber \\&= \mu _t^k (\mathcal {L}_k f)-\mu _t^k (f)\, \mu _t^k (\mathcal {L}_k 1) \nonumber \\&= \mu _t^k (\widehat{\mathcal {L}}_k f)+\mu _t^k (\mathcal {V}_k f)-\mu _{t}^k (f)\,\mu _t^k (\mathcal {V}_k) \end{aligned}$$ with initial condition \(\mu _0^k =\nu _0^k\). It can be shown by a similar direct computation of \(\frac{d}{dt}\log \nu _t^k (1)\), using (2.13) and (2.22), that $$\begin{aligned} \lambda _k (t)=\frac{1}{t}\int _0^t \mu _s^k (\mathcal {V}_k )ds.\ \end{aligned}$$ So the finite-time approximation of \(\lambda _k\) is given by a time average of the expectations \(\mu _s^k (\mathcal {V}_k )\), depending on the initial distribution \(\mu _0^k\), with an obvious modification for (2.19).
The asymptotic stability of the original process implies that \(\mu _t^k\) converges to a unique stationary distribution \(\mu _\infty ^k\) on E, so that the SCGF (2.7) can be written as the stationary expectation of the potential $$\begin{aligned} \lambda _k =\mu _\infty ^k (\mathcal {V}_k ).\ \end{aligned}$$ Due to the non-linear nature of (2.24), \(\mu _\infty ^k\) is characterized by stationarity as the solution of the non-linear equation $$\begin{aligned} \mu _\infty ^k (\widehat{\mathcal {L}}_k f)=\mu _\infty ^k (f)\,\mu _\infty ^k (\mathcal {V}_k)-\mu _\infty ^k (\mathcal {V}_k f)\quad \text{ for } \text{ all } f\in C_b (E).\ \end{aligned}$$ Usually \(\mu _\infty ^k\) cannot be evaluated explicitly, but from (2.24) it is possible to define a generic process \(\big ( X_k (t):t\ge 0\big )\) with time-marginals \(\mu _t^k\), and then use Monte Carlo sampling techniques. The first term of (2.24) already corresponds to a jump process with generator \(\widehat{\mathcal {L}}_k\), and we have to rewrite the second non-linear part to be of the form of a generator. There is some freedom at this stage, and we report three common choices from the applied probability literature on particle approximations [17, 30]; one of these corresponds to the approach in [3, 4], and to the best of the authors' knowledge the other two have not been considered in the computational physics literature so far.
For every probability distribution \(\mu \) on E we can write $$\begin{aligned} \mu (\mathcal {V}_k f)-\mu (f)\,\mu (\mathcal {V}_k)\,=\, \mu \big (\mathcal {L}^-_{k,\mu ,c} f+\mathcal {L}^+_{k,\mu ,c} f\big ), \end{aligned}$$ where $$\begin{aligned} \mathcal {L}^-_{k,\mu ,c}f(x):=\big (\mathcal {V}_k(x)-c\big )^-\,\int _E \big ( f(y)-f(x) \big ) \, \mu (dy) \end{aligned}$$ and $$\begin{aligned} \mathcal {L}_{k,\mu ,c}^+ f(x):=\int _E \big (\mathcal {V}_k(y)-c\big )^+\big ( f(y)-f(x) \big ) \, \mu (dy), \end{aligned}$$ using the standard notation \(a^+ =\max \{ 0,a\} \) and \(a^-=\max \{0,-a\}\) for the positive and negative part of \(a\in \mathbb {R}\). We have the freedom to introduce an arbitrary constant \(c\in \mathbb {R}\), possibly depending also on the measure \(\mu \) (but not the state \(x\in E\)), since the left-hand side of (2.26) is invariant under renormalization of the potential \(\mathcal {V}_k (x)\rightarrow \mathcal {V}_k (x)-c\). The generators \(\mathcal {L}^-_{k,\mu ,c}\) and \(\mathcal {L}^+_{k,\mu ,c}\) describe jump processes on E with rates depending on the probability measure \(\mu \). \(\mathcal {V}_k (x)\) can be interpreted as a fitness potential for the process, and plays exactly that role in the particle approximation of this process based on population dynamics, which is presented in Sect. 3. Generic choices are:
\(c=0\) is the default and simplest choice, but is usually not optimal as discussed in Sect. 4.
\(c=\mu _t^k (\mathcal {V}_k )\), corresponding to the average potential: if the system in state x is less fit than c it jumps to a state y chosen from the distribution \(\mu _t^k (dy)\) according to (2.27), and independently, the system jumps to states fitter than c irrespective of its current state according to (2.28).
\(c=\sup _{x\in E} \mathcal {V}_k (x)\) or \(\inf _{x\in E} \mathcal {V}_k (x)\), so that \(\mathcal {L}_{k,\mu ,c}^+(f)(x)\equiv 0\) or \(\mathcal {L}_{k,\mu ,c}^- (f)(x)\equiv 0\), respectively, and only one of the two processes has to be implemented in a simulation. Another representation of the non-linear part in (2.24) is (see e.g. [31, Sect. 5.3.1]) $$\begin{aligned} \mathcal {L}_{k,\mu }^\mathcal {V}f(x):=\int _E \big (\mathcal {V}_k(y)-\mathcal {V}_k (x)\big )^+\big ( f(y)-f(x) \big )\mu (dy),\ \end{aligned}$$ which is particularly interesting for implementing efficient selection dynamics as discussed in Sect. 4. Here every jump from this part of the generator strictly increases the fitness of the process, which is a stronger version of the previous idea where the process on average increased its fitness above level c. The rate depends on departure state x and target state y, which is in general computationally more expensive to implement than rates in (2.27) and (2.28), but can still be feasible due to simplifications in many concrete examples as demonstrated in Sect. 4. A further improvement of that idea is given by $$\begin{aligned} \mathcal {L}_{k,\mu }^\mathcal {V}f(x):=&\big (\mathcal {V}_k(x)-\mu (\mathcal {V}_k )\big )^- \nonumber \\&\int _E \frac{\big (\mathcal {V}_k(y)-\mu (\mathcal {V}_k )\big )^+}{\mu \big ( (\mathcal {V}_k -\mu (\mathcal {V}_k ))^+\big )}\big ( f(y)-f(x) \big )\mu (dy), \end{aligned}$$ which resembles a continuous-time version of selection processes which are known under the names of stochastic remainder sampling [32] or residual sampling [33] in discrete time. Here selection events change the process from states x of less than average fitness \(\mu (\mathcal {V}_k )\) to states y fitter than average, but we will see in Sect. 4 that this variant is harder to implement than (2.29) in our area of interest, and offers only limited extra gain in selection efficiency. 
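To make the role of the constant c concrete, the following minimal Python sketch (with purely illustrative potential values, not taken from any example in this paper) computes the killing rates of (2.27) and the cloning rates of (2.28) for the generic choices listed above.

```python
# Sketch: killing rates (V_k(x)-c)^- and cloning rates (V_k(x)-c)^+ on a
# finite state space, for different choices of the constant c. The potential
# values V below are illustrative assumptions.

def selection_rates(V, c):
    """Return per-state killing rates (V_k(x)-c)^- and cloning rates (V_k(x)-c)^+."""
    kill = [max(0.0, c - v) for v in V]
    clone = [max(0.0, v - c) for v in V]
    return kill, clone

V = [0.5, -1.0, 2.0]                                 # hypothetical potential values
kill0, clone0 = selection_rates(V, 0.0)              # default choice c = 0
killm, clonem = selection_rates(V, sum(V) / len(V))  # c = average potential
kills, clones = selection_rates(V, max(V))           # c = sup V: cloning part vanishes
```

With \(c=\sup _{x\in E}\mathcal {V}_k (x)\) all cloning rates vanish and only killing remains, and with \(c\le \inf _{x\in E}\mathcal {V}_k (x)\) only cloning remains, so a single selection mechanism suffices in a simulation.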
In summary, the evolution equation (2.24) for \(\mu _t^k\) can be written as $$\begin{aligned} \frac{d}{dt} \mu _t^k (f)=\mu _t^k (\widehat{\mathcal {L}}_k f)+\mu _t^k (\mathcal {L}_{k,\mu _t^k}^\mathcal {V}f) \end{aligned}$$ where the first choice with (2.27) and (2.28) is included by defining \(\mathcal {L}_{k,\mu }^\mathcal {V}=\mathcal {L}_{k,\mu ,c}^- +\mathcal {L}_{k,\mu ,c}^+\) in that case. This defines a Markov process \(\big ( X_k (t) :t\ge 0\big )\) on the state space E with generator $$\begin{aligned} \mathcal {L}_{k,\mu _t^k} f(x):=\widehat{\mathcal {L}}_k f(x) +\mathcal {L}_{k,\mu _t^k }^\mathcal {V}f(x).\ \end{aligned}$$ The process is non-linear since the transition rates in the generator \(\mathcal {L}_{k,\mu _t^k}\) depend on the distribution \(\mu _t^k\) of the process at time t, and in particular the process is also time-inhomogeneous. While the generator is still a linear operator acting on test functions f, the adjoint \(\mathcal {L}_{k,\mu _t^k}^\dagger \) is a non-linear operator acting on measures \(\mu _t^k\), generating their time evolution via $$\begin{aligned} \frac{d}{dt}\mu _t^k (dy) =\mathcal {L}_{k,\mu _t^k}^\dagger \mu _t^k (dy),\ \end{aligned}$$ which is equivalent to (2.31). This microscopic mass-transport description, consistent with the macroscopic description provided by the Feynman–Kac semigroup \(P_t^k\), is also called a McKean representation [16, 31]. It is well known that particle approximations of different McKean representations can have very different properties. The first part is similar to the original dynamics with modified rates \(W_k\) (2.10), and the second, non-linear part depending on the distribution \(\mu _t^k\) arises from normalizing the measures \(\nu _t^k\).
Note that \(\mu _t^k\) and therefore the finite-time approximation \(\lambda _k (t)\) in (2.25) are uniquely determined by (2.24), and thus independent of the different McKean representations, as are of course the limiting quantities \(\mu _\infty ^k\) and \(\lambda _k\). Also, these interpretations do not make use of concepts from population dynamics such as branching, which will only come into play when using particle approximations of the measures \(\mu ^k_t\) as explained in the next section. 3 Particle Approximations and the Cloning Algorithm The rates of the non-linear process \(\big ( X_k (t):t\ge 0\big )\) (2.32) depend on the distribution \(\mu _t^k\), which is not known a priori in the cases in which we are interested. The natural framework to sample such non-linear processes approximately is a particle approximation, see e.g. [30]. Here an ensemble \(\big ( \underline{X}_k (t) :t\ge 0\big )\) of N processes (also called particles or clones) \(X^i_k (t)\), \(i=1,\ldots ,N\) is run in parallel on the state space \(E^N\), and \(\mu _t^k\) is approximated by the empirical distribution \(\mu ^N (\underline{X}_k (t))\) of the realizations, where for any \(\underline{x}\in E^N\) we define $$\begin{aligned} \mu ^N (\underline{x}) (dy):=\frac{1}{N} \sum _{i=1}^N \delta _{x_i} (dy)\quad \text{ as } \text{ a } \text{ distribution } \text{ on } E.\ \end{aligned}$$ Since \(\mu ^N (\underline{X}_k (t))\) is fully determined by the state of the ensemble at time t, the particle approximation is a standard (linear) Markov process on \(E^N\). Using (2.25), this leads to an estimator for the SCGF given by $$\begin{aligned} \Lambda ^N_{k} (t):=\frac{1}{t}\int _0^t \mu ^N (\underline{X}_k (s)) (\mathcal {V}_k )ds=\frac{1}{t}\int _0^t \frac{1}{N}\sum _{i=1}^N \mathcal {V}_k (X_k^i (s))ds,\ \end{aligned}$$ which is a random object depending on the realization of the particle approximation.
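The empirical measure (3.1) and the estimator (3.2) can be sketched in a few lines of Python; the stored ensemble trajectory, the potential and the time step are hypothetical, and the time integral is approximated by a left Riemann sum.

```python
# Sketch: empirical measure (3.1) and finite-time SCGF estimator (3.2),
# evaluated from a stored ensemble trajectory. Trajectory, potential and time
# step are illustrative assumptions.

def empirical_mean(states, f):
    """mu^N(x)(f) = (1/N) sum_i f(x_i)."""
    return sum(f(x) for x in states) / len(states)

def scgf_estimator(ensemble_path, V, dt):
    """Left-Riemann approximation of (1/t) int_0^t mu^N(X(s))(V) ds."""
    t = dt * len(ensemble_path)
    return (dt / t) * sum(empirical_mean(states, V) for states in ensemble_path)

path = [[0, 1, 1, 0], [1, 1, 1, 0], [1, 1, 1, 1]]   # N = 4 particles, 3 time steps
est = scgf_estimator(path, lambda x: float(x), dt=0.1)
```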
The full dynamics can be set up in various different ways such that \(\mu ^N (\underline{X}_k (t)) \rightarrow \mu _t^k\) as \(N\rightarrow \infty \) for any \(t\ge 0\). A generic version, directly related to the above McKean representations, has been studied in the applied probability literature in great detail [17, 30], providing quantitative control on error bounds for convergence. After describing this approach, we present a different approach known in the theoretical physics literature under the name of cloning algorithms [5, 13], which provides some computational advantages but lacks general rigorous error control so far [9, 10]. We will then set up a framework to identify common aspects of both approaches, which can be used to generalize existing convergence results to obtain rigorous error bounds for cloning algorithms as described in detail in [18], and to compare the computational efficiency of both approaches. 3.1 Basic Particle Approximations The most basic particle approximation is simply to run the McKean dynamics (2.32) in parallel on each of the particles, replacing the dependence on \(\mu _t^k\) by \(\mu ^N (\underline{X}_k (t))\) in the jump rates. Mathematically, denoting by \(\mathcal {L}^N_k\) the generator of the full N-particle process \(\big (\underline{X}_k (t):t\ge 0\big )\) acting on functions \(F:E^N \rightarrow \mathbb {R}\), this corresponds to $$\begin{aligned} \mathcal {L}^N_k F(\underline{x}) :=\sum _{i=1}^N \mathcal {L}_{k,\mu ^N (\underline{x}) }^i F(\underline{x}).\ \end{aligned}$$ Here \(\mathcal {L}_{k,\mu ^N (\underline{x}) }^i\) is equivalent to (2.32) acting on particle i only, i.e. on the function \(x_i \mapsto F(\underline{x})\) while \(x_j\), \(j\ne i\), remain fixed. The linear part \(\widehat{\mathcal {L}}_k\) of (2.32) does not depend on \(\mu _t^k\) and follows the original dynamics for each particle, referred to as 'mutation' events in the standard population dynamics interpretation.
In this context, the non-linear parts (2.27) and (2.28) can be interpreted as 'selection' events leading to mean-field interactions between the particles. Using the definition (3.1) of the empirical measures, we can write for the part (2.27) $$\begin{aligned} \mathcal {L}^{-,i}_{k,\mu ^N (\underline{x}),c}F(\underline{x})&=\big (\mathcal {V}_k(x_i )-c\big )^-\,\int _E \big ( F(\underline{x}^{i,y} )-F(\underline{x}) \big ) \, \mu ^N (\underline{x})(dy )\nonumber \\&=\big (\mathcal {V}_k(x_i )-c\big )^-\frac{1}{N}\sum _{j=1}^N \big ( F(\underline{x}^{i,x_j} )-F(\underline{x}) \big ) \end{aligned}$$ with notation \(\underline{x}^{i,y} =(x_1 ,\ldots ,x_{i-1} ,y,x_{i+1} ,\ldots ,x_N )\). So, with a rate depending on the fitness of particle i, particle i is 'killed' and replaced by a copy of a particle j uniformly chosen from all particles. Analogously, we have for (2.28) $$\begin{aligned} \mathcal {L}^{+,i}_{k,\mu ^N (\underline{x}),c}F(\underline{x})=\frac{1}{N}\sum _{j=1}^N \big (\mathcal {V}_k(x_j )-c\big )^+ \big ( F(\underline{x}^{i,x_j} )-F(\underline{x})\big ),\ \end{aligned}$$ which leads to the same transition \(\underline{x}\rightarrow \underline{x}^{i,x_j }\), but with a different interpretation. Each particle j in the system reproduces independently with a rate depending on its fitness (a cloning event), and its offspring replaces a uniformly chosen particle, which is equal to i with probability \(1/N\).
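The two elementary selection moves \(\underline{x}\rightarrow \underline{x}^{i,x_j}\) can be sketched as follows; the function names and the seeded random generator are our own illustrative choices, not part of the paper.

```python
import random

# Sketch: elementary selection moves x -> x^{i,x_j} behind (3.4) and (3.5).
# A killing event replaces the state of particle i by that of a uniformly
# chosen particle; a cloning event copies the state of particle i onto a
# uniformly chosen particle.

def kill_event(states, i, rng):
    j = rng.randrange(len(states))   # uniform replacement source (may equal i)
    states[i] = states[j]

def clone_event(states, i, rng):
    j = rng.randrange(len(states))   # uniform target (equals i with prob. 1/N)
    states[j] = states[i]

rng = random.Random(1)
ens = [10, 20, 30, 40]
kill_event(ens, 0, rng)    # particle 0 adopts a uniformly chosen state
clone_event(ens, 1, rng)   # particle 1's state overwrites a uniform target
```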
The different nature of killing and cloning events becomes clearer when we write out the full generator (3.3) and switch summation indices for the cloning part (3.5) in the second line, $$\begin{aligned} \mathcal {L}^N_{k} F(\underline{x})&= \sum _{i=1}^N \int _E W_k (x_i ,dy)\big ( F(\underline{x}^{i,y})-F(\underline{x})\big )\nonumber \\&\quad +\sum _{i=1}^N \big (\mathcal {V}_k(x_i )-c\big )^+ \frac{1}{N}\sum _{j=1}^N\big ( F(\underline{x}^{j,x_i} )-F(\underline{x})\big )\nonumber \\&\quad +\sum _{i=1}^N\big (\mathcal {V}_k(x_i )-c\big )^-\frac{1}{N}\sum _{j=1}^N \big ( F(\underline{x}^{i,x_j})-F(\underline{x} )\big ).\ \end{aligned}$$ Analogously, the McKean representations (2.29) and (2.30) lead to basic N-particle systems with generators $$\begin{aligned} \mathcal {L}^N_{k} F(\underline{x})&= \sum _{i=1}^N \int _E W_k (x_i ,dy)\big ( F(\underline{x}^{i,y})-F(\underline{x})\big )\nonumber \\&\quad +\frac{1}{N}\sum _{i,j=1}^N\big (\mathcal {V}_k(x_j )-\mathcal {V}_k(x_i )\big )^+ \big ( F(\underline{x}^{i,x_j})-F(\underline{x} )\big ) \end{aligned}$$ $$\begin{aligned} \mathcal {L}^N_{k} F(\underline{x})&=\sum _{i=1}^N \int _E W_k (x_i ,dy)\big ( F(\underline{x}^{i,y})-F(\underline{x})\big )\nonumber \\&\quad +\frac{1}{N}\sum _{i,j=1}^N\big (\mathcal {V}_k(x_i ){-}\mu (\mathcal {V}_k )\big )^- \frac{\big (\mathcal {V}_k(x_j )-\mu (\mathcal {V}_k )\big )^+}{\mu \big ( (\mathcal {V}_k -\mu (\mathcal {V}_k ))^+\big )}\big ( F(\underline{x}^{i,x_j})-F(\underline{x} )\big ).\ \end{aligned}$$ Here a particle i is replaced by a particle j with higher fitness, combining killing and cloning into a single event. In the case of (3.8), particle i is furthermore less fit and j is fitter than average. Note that these approximating systems can be seen as particle systems with mean-field or averaged pairwise interaction given by the selection dynamics. 
Following established results in [17, 30, 31], the (random) quantity \(\Lambda _k^N (t)\) is an asymptotically unbiased estimator of \(\lambda _k (t)\) with a systematic error bounded by $$\begin{aligned} \sup _{t\ge 0} \Big |\mathbb {E}^N \big [ \Lambda _{k}^N (t) \big ] -\lambda _k (t)\Big | \le \frac{C}{N}\quad \text{ for } \text{ all } N\ge 1,\ \end{aligned}$$ along with several rigorous convergence results. These include an estimate on the random error in \(L^p\) norm for any \(p>1\), $$\begin{aligned} \sup _{t\ge 0} \mathbb {E}^N \big [ |\Lambda _{k}^N (t) -\lambda _k (t)|^p \big ]^{1/p} \le \frac{C_p}{\sqrt{N}}\quad \text{ for } \text{ all } N\ge 1,\ \end{aligned}$$ as well as other formulations including almost sure convergence. Note that these estimates are uniform in \(t\ge 0\), and so are not affected by the choice of simulation time. The use of a finite simulation time t leads to an additional systematic error in the estimate of the SCGF \(\lambda _k\), of order \(1/t\) as in (2.18) or \(\rho ^{at}/t\) as in (2.20). The bound (3.10) for \(p=2\) implies for the variance $$\begin{aligned} \sup _{t\ge 0} \mathbb {E}^N \Big [ \Big |\Lambda _{k}^N (t) -\mathbb {E}^N \big [ \Lambda _{k}^N (t) \big ]\Big |^2 \Big ] \le \frac{C_2^2}{N}\quad \text{ for } \text{ all } N\ge 1,\ \end{aligned}$$ since we have \(\mathrm {Var}(Y)= \inf _{a\in \mathbb {R}} \mathbb {E}\big [ (Y-a)^2 \big ]\) for any real-valued random variable Y. Therefore, error bars based on standard deviations are of the usual Monte Carlo order \(1/\sqrt{N}\), and the random error dominates the systematic bias (3.9) for N large enough. Further remarks on possible unbiased estimators can be found at the end of the next subsection. 3.2 Essential Properties of Particle Approximations Following the standard martingale characterization of Feller-type Markov processes (see e.g. [34], Chap.
3), we know that for every bounded, continuous \(F\in C_b (E^N )\) $$\begin{aligned} \mathcal {M}_F^N (t):=F\big (\underline{X}_k (t)\big ) -F\big (\underline{X}_k (0)\big ) -\int _0^t \mathcal {L}_k^N F\big (\underline{X}_k (s)\big ) ds \end{aligned}$$ is a martingale on \(\mathbb {R}\) with (predictable) quadratic variation $$\begin{aligned} \langle \mathcal {M}_F^N \rangle (t)=\int _0^t \Gamma _k^N F\big (\underline{X}_k (s)\big ) ds,\ \end{aligned}$$ where the associated carré du champ operator \(\Gamma _k^N\) is given by $$\begin{aligned} \Gamma _k^N F(\underline{x}) := \mathcal {L}_k^N F^2 (\underline{x}) -2F (\underline{x})\mathcal {L}_k^N F (\underline{x}).\ \end{aligned}$$ In analogy to the decomposition of a random variable into its mean and a centred fluctuating part, the martingale (3.12) describes the fluctuations of the process \(t\mapsto F\big (\underline{X}_k (t)\big )\). The strength of the noise depends on time and is given by the increasing process (3.13), whose time evolution is generated by the carré du champ operator (3.14). In contrast to the generator \(\mathcal {L}_k^N\), this is a quadratic (non-linear) operator and it is the main tool for studying the fluctuations of a process. Elementary computations for approximations (3.6), (3.7) and (3.8) show that for marginal test functions \(F (\underline{x})=f(x_i )\) depending only on a single particle, we have $$\begin{aligned} \mathcal {L}_k^N F (\underline{x}) =\mathcal {L}_{k,\mu ^N (\underline{x})} f(x_i )\quad \text{ and }\quad \Gamma _k^N F (\underline{x}) =\Gamma _{k,\mu ^N (\underline{x})} f(x_i ).\ \end{aligned}$$ So generator and carré du champ both coincide with the corresponding operators \(\mathcal {L}_{k,\mu ^N (\underline{x})}\) and \(\Gamma _{k,\mu ^N (\underline{x})}\) for the McKean dynamics (2.32). 
This means that for large enough N and \(\mu ^N \big (\underline{X}_k (t)\big )\) close to \(\mu _t^k\), each marginal process \(t\mapsto X_k^i (t)\) has essentially the same distribution as the corresponding McKean process \(t\mapsto X_k (t)\). Note that due to selection events these (auxiliary) dynamics do not coincide with the original process conditioned on a large deviation event, and they are also not unique since there are various choices for McKean representations of Feynman–Kac semigroups, as discussed earlier. Trajectories in a particle approximation always correspond to the trajectories of the particular McKean interpretation they are based on, which is usually (2.26) in the context of cloning algorithms. Due to asymptotic stability the particle approximation converges as \(t\rightarrow \infty \) for fixed N to a unique stationary distribution \(\mu _\infty ^{N,k}\), and the single-particle marginals of this distribution converge to \(\mu _\infty ^k\) as \(N\rightarrow \infty \). While the marginal processes for a given particle approximation are identically distributed they are not independent, and \(\mu _\infty ^{N,k}\) exhibits non-trivial correlations between particles resulting from selection events, which we discuss again in Sect. 4. Now consider averaged observables of the form $$\begin{aligned} F(\underline{x}) =\frac{1}{N}\sum _{i=1}^N f(x_i )=\mu ^N (\underline{x})(f) \end{aligned}$$ as they appear in the eigenvalue estimator (3.2). Since the generator \(\mathcal {L}_k^N\) is a linear operator in F, we have the same identity as above for the generator, $$\begin{aligned} \mathcal {L}_k^N \mu ^N (\underline{x})(f)=\mu ^N (\underline{x})\big ( \mathcal {L}_{k,\mu ^N (\underline{x})} f\big ).\ \end{aligned}$$ The carré du champ, on the other hand, is non-linear in F and the dependence between particles is captured by this operator. 
Since for all approximations (3.6), (3.7) and (3.8) in every selection event only a single particle is affected, another elementary, slightly more cumbersome computation shows (see [18] for details) $$\begin{aligned} \Gamma _k^N \mu ^N (\underline{x})(f) =\frac{1}{N} \mu ^N (\underline{x})\big ( \Gamma _{k,\mu ^N (\underline{x})} f\big ).\ \end{aligned}$$ The factor \(1/N\) results from a self-averaging property of the mean-field interaction through selection dynamics, which is expected from results on other mean-field particle systems (see e.g. [35, 36] and references therein), and is fully analogous to the central-limit type scaling of the empirical variance for the sum of N independent random variables. While this scaling remains the same for more general particle approximations with more than one particle being affected by selection events, the simple identity (3.16) does not hold exactly for any \(N\ge 1\), as we see in the next subsection. Recall that the estimator (3.2) for the principal eigenvalue (2.7) is given by an ergodic integral of the average observable \(F(\underline{x})=\mu ^N (\underline{x})(\mathcal {V}_k )\). By (2.12), \(\mathcal {V}_k \in C_b (E)\) and the rates are bounded, so \(\mu ^N (\underline{x})\big ( \Gamma _{k,\mu ^N (\underline{x})} \mathcal {V}_k \big )\) is also bounded and the carré du champ (3.16) vanishes as \(N\rightarrow \infty \). Therefore the martingale \(\mathcal {M}_F^N (t)\) also vanishes for all \(t\ge 0\), leading to convergence of the measures \(\mu ^N (\underline{X}_k (t))\rightarrow \mu _t^k\) and also of the finite-time approximations \(\Lambda _k^N (t)\rightarrow \lambda _k (t)\) as reported in the previous subsection. Due to the time-normalization in (3.2) and the assumed ergodicity, the corresponding error bounds hold uniformly in \(t\ge 0\). In summary, bounds on the carré du champ are the main ingredient for the proof of convergence results, as explained in detail in [18] and references therein.
All of the above properties up to and including (3.15) are generic requirements for any particle approximation. Particle approximations can differ in their correlation structures, and this freedom can be used to construct numerically more efficient particle approximations as discussed in the next subsection. To optimize sampling, particles should ideally evolve in as uncorrelated a fashion as possible; completely independent evolution is not possible due to the non-linearity of the underlying McKean process and the resulting selection events and mean-field interactions. Remarks on Unbiased Estimators. Estimators based on expectations w.r.t. the empirical measures \(\mu _t^N =\mu ^N (\underline{X}_k (t))\) usually have a bias, i.e. \(\mathbb {E}\big [ \mu _t^N (f)\big ] \ne \mu _t^k (f)\) for \(f\in \mathcal {C}_b (E)\), which vanishes only asymptotically (3.9). This originates from the non-linear time evolution of \(\mu _t^k\) (2.24) and the associated McKean processes. It is straightforward to derive estimators based on the unnormalized measures \(\nu _t^k\) (2.21) that are unbiased.
Based on (2.25) and (3.2), we obtain an estimate of the normalization \(\nu _t^k (1)\): $$\begin{aligned} \nu _t^N (1) :=\exp \Big ( \int _0^t \mu _s^N (\mathcal {V}_k )ds \Big ),\ \end{aligned}$$ and then introduce unnormalized empirical measures on E at time t based on the particle approximation $$\begin{aligned} \nu _t^N (f):=\nu _t^N (1)\mu _t^N (f)\quad \text{ for } \text{ all } f\in \mathcal {C}_b (E).\ \end{aligned}$$ The expected time evolution of observables f is then given by $$\begin{aligned} \frac{d}{dt}\mathbb {E}\big [ \nu _t^N (f)\big ] =\mathbb {E}\Big [ \nu _t^N (f)\mu _t^N (\mathcal {V}_k )+\nu _t^N (1)\mathcal {L}_k^N \mu _t^N (f)\Big ].\ \end{aligned}$$ Now with (3.15) and the decomposition (2.32) of \(\mathcal {L}_{k,\mu _t^N}\) into mutation and selection part, we have $$\begin{aligned} \mathcal {L}_k^N \mu _t^N (f)=\mu _t^N \big ( \widehat{\mathcal {L}}_k f+\mathcal {L}_{k,\mu _t^N}^\mathcal {V}f\big ),\ \end{aligned}$$ and with the general construction of McKean representations (2.26) $$\begin{aligned} \mu _t^N (\mathcal {L}_{k,\mu _t^N}^\mathcal {V}f)=\mu _t^N (\mathcal {V}_k f)-\mu _t^N (f)\mu _t^N (\mathcal {V}_k ).\ \end{aligned}$$ Inserting into (3.18), this simplifies to $$\begin{aligned} \frac{d}{dt}\mathbb {E}\big [ \nu _t^N (f)\big ] =\mathbb {E}\big [ \nu _t^N (\widehat{\mathcal {L}}_k f)+\nu _t^N (\mathcal {V}_k f)\big ].\ \end{aligned}$$ Since with (2.15) \(\mathcal {L}_k =\widehat{\mathcal {L}}_k +\mathcal {V}_k\) also generates the time evolution of \(\nu _t (f)\), a simple Gronwall argument with \(\mathbb {E}\big [ \nu _0^N (f)\big ] =\nu _0 (f)\) gives $$\begin{aligned} \mathbb {E}\big [ \nu _t^N (f)\big ] =\nu _t (f)\quad \text{ for } \text{ all } t\ge 0 \text{ and } N\ge 1.\ \end{aligned}$$ Note that choosing \(f\equiv 1\) implies that the normalization (3.17) is an unbiased estimator of \(e^{t\lambda _k (t)}\), which we will see again in Sect. 3.4 in the context of cloning algorithms. 
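A minimal sketch of the normalization estimate (3.17), assuming the ensemble averages \(\mu _s^N (\mathcal {V}_k )\) have been recorded at equally spaced times; the sampled values are hypothetical and the integral is approximated by a left Riemann sum.

```python
import math

# Sketch: the normalization estimate (3.17),
# nu_t^N(1) = exp( int_0^t mu_s^N(V_k) ds ), computed from stored values of
# the ensemble average mu_s^N(V_k). The sampled values are illustrative.

def unnormalized_mass(muV_path, dt):
    """exp of a left-Riemann approximation of the time integral."""
    return math.exp(dt * sum(muV_path))

# A constant ensemble average v over [0, t] should give exp(t * v):
est = unnormalized_mass([0.5] * 10, dt=0.1)   # t = 1.0, v = 0.5
```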
However, in practice the random error dominates the accuracy of estimates of \(\lambda _k (t)\), so N has to be chosen large and the bias of the estimator \(\Lambda _k^N (t)\) (3.2) is negligible. 3.3 Cloning Algorithms Cloning algorithms proposed in the theoretical physics literature [3, 4] are similar to the particle approximation (3.6), using the same tilted rates \(W_k\) for mutations, but combining the cloning and mutation parts of the generator. We focus the following exposition on the algorithm proposed in [4], but other continuous-time versions can be analysed analogously. The idea is simply to sample the cloning process for each particle i together with the mutation process at the same rate $$\begin{aligned} w_k (x_i ):=\int _E W_k (x_i ,dy) =\int _E w(x_i ,dy)e^{kg(x_i ,y)}.\ \end{aligned}$$ In each combined event, a random number of clones is generated with a distribution \(p_{k,x_i }^N (n)\) such that its expectation is $$\begin{aligned} m_k^N (x_i ):=\sum _{n=0}^N np_{k,x_i }^N (n)=\big (\mathcal {V}_k (x_i )-c\big )^+ /w_k (x_i ).\ \end{aligned}$$ These clones then replace n particles chosen uniformly at random (in the sense that all subsets of size n are equally probable) from the ensemble. In this way, the rate at which a clone of particle i replaces any given particle j is $$\begin{aligned} w_k (x_i )\frac{m_k^N (x_i )}{N}=\frac{1}{N} \big (\mathcal {V}_k (x_i )-c\big )^+,\ \end{aligned}$$ as required for \(\mathcal {L}_k^N\) in (3.6). The only additional assumption on \(p_{k,x_i }^N (n)\) is that the range of possible values for n has to be bounded by N for the cloning event to be well defined. Since its mean is bounded by \(\max _{x\in E}\big (\mathcal {V}_k (x)-c\big )^+ /w_k (x)<\infty \) independently of N, any distribution with the correct mean and finite range will lead to a valid algorithm for sufficiently large N.
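As an illustration of the quantities entering the algorithm, the following sketch computes the total tilted exit rate \(w_k (x)\) and the mean clone number \(m_k^N (x)\) of (3.20) for a hypothetical two-state chain; the rates, observable increments and potential values are assumptions for the example only.

```python
import math

# Sketch: total tilted exit rate w_k(x) = sum_y w(x,y) e^{k g(x,y)} and the
# mean clone number m_k^N(x) = (V_k(x)-c)^+ / w_k(x) of (3.20), for a
# hypothetical two-state chain. Rates w, observable g and potential V are
# illustrative assumptions.

w = {(0, 1): 2.0, (1, 0): 1.0}    # original jump rates w(x, dy)
g = {(0, 1): 1.0, (1, 0): -1.0}   # increments of the observable g(x, y)
k = 0.5

def w_k(x):
    return sum(r * math.exp(k * g[x, y]) for (xx, y), r in w.items() if xx == x)

def mean_clones(x, V, c):
    return max(0.0, V[x] - c) / w_k(x)

m0 = mean_clones(0, V={0: 1.0, 1: -0.5}, c=0.0)  # positive potential: m > 0
m1 = mean_clones(1, V={0: 1.0, 1: -0.5}, c=0.0)  # (V-c)^+ = 0: no clones
```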
The above cloning process is described by the generator $$\begin{aligned} \mathcal {L}_k^{N,mc} F(\underline{x})&:=\sum _{i=1}^N \sum _{n=0}^N\frac{1}{{N\atopwithdelims ()n}} {\mathop {\sum }_{\begin{array}{c} A\subseteq \{ 1,..,N\}\\ |A|=n \end{array}}} \int _E W_k (x_i ,dy) p_{k,x_i }^N (n)\nonumber \\&\quad \big ( F(\underline{x}^{A,x_i ;i,y})-F(\underline{x})\big )\ . \end{aligned}$$ Here we have used the notation \(\underline{x}^{A,w;\,i,y}\) for the vector \(\underline{z}\in E^N\) with $$\begin{aligned} z_j:={\left\{ \begin{array}{ll} x_j &{} j\not \in A\cup \{i\}\\ w &{} j\in A\setminus \{i\}\\ y &{}j=i, \end{array}\right. } \end{aligned}$$ for \(j\in \{1,\dots ,N\}\) and \(w,y\in E\). Equation (3.21) combines cloning of \(x_i\) into a uniformly chosen subset A of size n with a subsequent mutation event, in which the state of particle i changes to y. If we simply write \(\mathcal {L}_k^{N,-}\) for the killing part (third line in (3.6)), which remains unchanged, the full generator of this cloning algorithm is given by $$\begin{aligned} \mathcal {L}_k^{N,clone}:= \mathcal {L}_k^{N,mc} +\mathcal {L}_k^{N,-}.\ \end{aligned}$$ It can be shown by direct computation that for marginal test functions of the form \(F(\underline{x}) =f (x_i )\) this is equivalent to the generator (3.6), $$\begin{aligned} \mathcal {L}_k^N F(\underline{x})=\mathcal {L}_k^{N,clone} F(\underline{x}),\ \end{aligned}$$ and by linearity of generators also for all averaged functions of the form \(F(\underline{x}) =\mu ^N (\underline{x})(f)\). One can also show that for marginal test functions the carré du champ operators coincide, so the cloning algorithm produces marginal processes or particle trajectories with the same distribution as the simple particle approximation (3.6). For averaged test functions the change in the correlation structure between particles is picked up by the carré du champ operator.
Instead of (3.16) one can derive the following estimate for the mutation and cloning part $$\begin{aligned} \Gamma _k^{N,mc} F(\underline{x})\le \frac{2}{N} \mu ^N (\underline{x})\Big ( \widehat{\Gamma }_k f+\mathbb {1}(\mathcal {V}_k >c)\frac{q_k^N }{m_k^N}\Gamma _{k,\mu ^N (\underline{x}),c}^+ f\Big ), \end{aligned}$$ see the proof of Theorem 3.3 in [18]. Here \(q_k^N(x_i):=\sum _{n=0}^N n^2 p^N_{k,x_i}(n)\) denotes the second moment of the number of clones for particle i, and we use the decomposition (2.32), where \(\widehat{\Gamma }_k f\) is the carré du champ corresponding to the mutation dynamics \(\widehat{\mathcal {L}}_k\), and \(\Gamma _{k,\mu ^N (\underline{x}),c}^+\) the one corresponding to the cloning part (2.28). This estimate holds, of course, only for N large enough that the cloning event is well defined (see Sect. 5). Note also that \(\Gamma _{k,\mu ^N (\underline{x}),c}^+ f(x)\) is proportional to \((\mathcal {V}_k (x)-c)^+\), and by (3.20) \((\mathcal {V}_k (x)-c)^+ =0\) implies \(m_k^N (x)=0\) for the expectation of the distribution \(p_{k,x}^N\), leading to the indicator function \(\mathbb {1}(\mathcal {V}_k >c) \in \{0,1\}\). This is sufficient to carry out the full proof of the convergence results mentioned in Sect. 3 based on results in [17]; this is done in full detail in [18], and here we only report the main result of that work. Recall the bounds (2.12) on \(\mathcal {V}_k\) and the total modified exit rate \(w_k\). Denote by \(\bar{\Lambda }_{k}^N (t)\) the eigenvalue estimator (3.2) corresponding to the cloning algorithm (3.22).
Then there exist constants \(\alpha ,\,\gamma >0\) and \(\alpha _p,\,\gamma _p >0\) such that for all N large enough $$\begin{aligned} \sup _{t\ge 0} \Big |\mathbb {E}^N \big [ \bar{\Lambda }_{k}^N (t) \big ] -\lambda _k (t)\Big | \le \frac{\alpha }{N}\bar{v}_k \bar{w}_k \big ( \gamma +\Vert q_k^N \Vert _\infty \big ),\ \end{aligned}$$ and for all \(p>1\) $$\begin{aligned} \sup _{t\ge 0} \mathbb {E}^N \big [ |\bar{\Lambda }_{k}^N (t) -\lambda _k (t)|^p \big ]^{1/p} \le \frac{\alpha _p}{\sqrt{N}}\bar{v}_k \bar{w}_k \big ( \gamma _p+\Vert q_k^N \Vert _\infty \big )^{1/2}.\ \end{aligned}$$ Choosing the normalization of the potential such that \(c<\inf _{x\in E} \mathcal {V}_k (x)\), the killing rate in (3.6) vanishes and (3.21) describes the full generator \(\mathcal {L}_k^{N,clone}\) of the cloning algorithm. This is computationally cheaper and simpler to implement, since only the mutation process has to be sampled independently for all particles, and cloning events happen simultaneously. However, as is discussed in Sect. 4, this choice in general reduces the accuracy of the estimator. A common choice in the physics literature for the distribution \(p_{k,x_i}^N\) of the clone-size event (see e.g. [3, 13]) is $$\begin{aligned} p_{k,x_i}^N (n)=\left\{ \begin{array}{cl} m_k^N (x_i ) -\lfloor m_k^N (x_i )\rfloor , &{} \text{ for } n=\lfloor m_k^N (x_i )\rfloor + 1\\ \lfloor m_k^N (x_i )\rfloor + 1 -m_k^N (x_i ), &{} \text{ for } n=\lfloor m_k^N (x_i )\rfloor \\ 0,&{} \text{ otherwise }\end{array}\right. , \end{aligned}$$ so the two integers adjacent to the mean are chosen with appropriate probabilities, which minimizes the variance of the distribution for a given mean. This choice therefore minimizes the contribution of the second moment \(q_k^N\) to the error bounds in (3.24) and (3.25), and is also simple to implement in practice.
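The distribution (3.26) is straightforward to implement; the following sketch constructs it for a given mean and checks its first two moments.

```python
import math

# Sketch: the minimal-variance clone-number distribution (3.26), supported on
# the two integers adjacent to the mean m, with p(floor(m)+1) = m - floor(m).

def clone_number_dist(m):
    lo = math.floor(m)
    return {lo: lo + 1 - m, lo + 1: m - lo}

dist = clone_number_dist(2.3)
mean = sum(n * p for n, p in dist.items())
var = sum((n - mean) ** 2 * p for n, p in dist.items())  # = (m-lo)*(lo+1-m)
```

The mean m is reproduced exactly, while the variance \((m-\lfloor m\rfloor )(\lceil m\rceil -m)\le 1/4\) is the smallest possible for an integer-valued random variable with this mean.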
Due to (3.23), trajectories of individual particles follow the same law as in the simple particle approximation (3.6), and therefore the same McKean process as explained in Sect. 2.2. The cloning approach can introduce additional correlations between particles due to large cloning events, which is quantified by the second moment \(q_k^N\) entering the error bounds in (3.24) and (3.25). 3.4 The Cloning Factor In the physics literature an alternative estimator to \(\Lambda _k^N (t)\) (3.2) is often used, based on a concept called the 'cloning factor' (see e.g. [3, 5, 13]). This is essentially a continuous-time jump process \((C^N_k (t) :t\ge 0)\) on \(\mathbb {R}^+\) with \(C^N_k (0)=1\), where at each selection event of size \(n\ge -1\) at a given time t the value is updated as $$\begin{aligned} C^N_k (t)=C^N_k (t-) (1+n/N).\ \end{aligned}$$ Here \(n=-1\) indicates a killing event, and \(n\ge 0\) a cloning event according to the two parts of the generator (3.22). This idea comes from a branching-process interpretation of the cloning ensemble related to the unnormalized measure \(\nu _t^k\), since with (2.17) we have that $$\begin{aligned} \nu _t^k (1)\approx e^{\lambda _k t}\quad \text{ as } t\rightarrow \infty \ , \end{aligned}$$ so \(\lambda _k\) corresponds to the volume expansion factor of the clone ensemble due to selection dynamics. In our setting, the dynamics of \(C^N_k (t)\) can be defined jointly with the cloning process via an extension of the generator (3.22) $$\begin{aligned} \bar{\mathcal {L}}_k^{N,clone} F(\underline{x},\zeta )=\bar{\mathcal {L}}_k^{N,mc} F(\underline{x},\zeta )+\bar{\mathcal {L}}_k^{N,-} F(\underline{x},\zeta )\ , \end{aligned}$$ acting on functions that depend on the state \(\underline{x}\in E^N\) and the cloning factor \(\zeta \in \mathbb {R}^+\).
With (3.21) we have for cloning events $$\begin{aligned} \bar{\mathcal {L}}_k^{N,mc} F(\underline{x},\zeta )&:=\sum _{i=1}^N \sum _{n=0}^N\frac{1}{{N\atopwithdelims ()n}} {\mathop {\sum }_{\begin{array}{c} A\subseteq \{ 1,\ldots ,N\}\\ |A|=n \end{array}}} \int _E W_k (x_i ,dy) p_{k,x_i }^N (n) \\&\qquad \Big ( F\big (\underline{x}^{A,x_i ;i,y},\zeta (1+n/N)\big ) -F\big (\underline{x},\zeta \big )\Big ) \end{aligned}$$ and with the third line of (3.6) for killing events $$\begin{aligned} \bar{\mathcal {L}}_k^{N,-} F(\underline{x},\zeta ) =\sum _{i=1}^N\big (\mathcal {V}_k(x_i )-c\big )^-\frac{1}{N}\sum _{j=1}^N \big ( F(\underline{x}^{i,x_j},\zeta (1-1/N))-F(\underline{x} ,\zeta )\big ).\ \end{aligned}$$ So the joint process \(\big ( (\underline{X}_k^N (t),C_k^N (t)) :t\ge 0\big )\) is Markov, and observing only the cloning factor with the simple test function \(G(\underline{x},\zeta )= \zeta \) we get $$\begin{aligned} \bar{\mathcal {L}}_k^{N,mc} G(\underline{x},\zeta )=&\sum _{i=1}^N \sum _{n=0}^N\frac{1}{{N\atopwithdelims ()n}} \mathop {\sum }_{\begin{array}{c} A\subseteq \{ 1,\ldots ,N\}\\ |A|=n \end{array}}\int _E W_k (x_i ,dy) p_{k,x_i }^N (n) \zeta \cdot \frac{n}{N}\nonumber \\ =&\frac{\zeta }{N}\sum _{i=1}^N \big (\mathcal {V}_k (x_i )-c\big )^+. \end{aligned}$$ In the last line we have used (3.20), and in a similar fashion we get for killing events $$\begin{aligned} \bar{\mathcal {L}}_k^{N,-} G(\underline{x},\zeta ) =-\frac{\zeta }{N}\sum _{i=1}^N\big (\mathcal {V}_k(x_i )-c\big )^-. \end{aligned}$$ Adding both contributions, $$\begin{aligned} \bar{\mathcal {L}}_k^{N,clone} G(\underline{x},\zeta )=\frac{\zeta }{N}\sum _{i=1}^N \big (\mathcal {V}_k(x_i ){-}c\big ) = \zeta \mu ^N (\underline{x}) (\mathcal {V}_k)-\zeta c,\ \end{aligned}$$ and analogously to (3.18), the expected time evolution of \(C^N_k (t)\) is then given by $$\begin{aligned} \frac{d}{dt}\mathbb {E}[C^N_k(t)]=\mathbb {E}[C^N_k(t)\cdot \mu ^N_t(\mathcal {V}_k-c)].
\end{aligned}$$ This is also the evolution of \(\nu _t^N(e^{-tc})=e^{-tc}\nu _t^N(1)\), since $$\begin{aligned} \frac{d}{dt}\mathbb {E}[\nu _t^N(e^{-tc})]&=\mathbb {E}[\mu _t^N(\mathcal {V}_k) e^{-tc}\nu _t^N(1)-c\ e^{-tc}\nu _t^N(1)]\\&=\mathbb {E}[\nu _t^N(e^{-tc})\cdot \mu ^N_t(\mathcal {V}_k-c)]. \end{aligned}$$ With initial conditions \(C_k^N (0)=1=\nu _0^N (1)\), a Gronwall argument analogous to (3.19) gives $$\begin{aligned} \mathbb {E}[e^{tc}C^N_k(t)]=\mathbb {E}[\nu _t^N(1)]=\nu _t(1)\quad \text{ for } \text{ all } t\ge 0 \text{ and } N\ge 1.\ \end{aligned}$$ So \(e^{tc}C^N_k(t)\) is an unbiased estimator for \(\nu _t(1)\), which leads also to an alternative estimator for \(\lambda _k\) (2.17) given by $$\begin{aligned} \overline{\Lambda }_k^N(t):=\frac{1}{t}\log C^N_k(t)+c. \end{aligned}$$ Note that this estimator is not itself unbiased, as a consequence of the nonlinear transformation involving the logarithm. Since \(C_k^N (t)\) is defined as a product (3.27), we can use another simple test function \(G(\underline{x},\zeta )=\log \zeta \) to analyze the convergence behaviour of \(\overline{\Lambda }_k^N(t)\). Analogously to (3.28) and (3.29) we get $$\begin{aligned} \bar{\mathcal {L}}_k^{N,clone} G(\underline{x},\zeta )=\frac{1}{N}\sum _{i=1}^N \big (\mathcal {V}_k(x_i ){-}c\big ) = m^N (\underline{x}) (\mathcal {V}_k)-c +O\Big (\frac{1}{N}\Big ),\ \end{aligned}$$ where we have also used (3.20) and assumed that the support of \(p^N_{k,x_i}\) is bounded independently of N (which is the case for common choices in the literature such as (3.26)). This allows us to approximate \(\log (1+n/N)=n/N +O(1/N^2 )\) as \(N\rightarrow \infty \), leading to error terms of order \(1/N\).
Then, analogously to (3.12) we get with \(\log C_k^N (0)=0\) that $$\begin{aligned} \mathcal {M}_C^N (t):=&\log C_k^N (t) -\int _0^t \bar{\mathcal {L}}_k^{N,clone} G(\underline{X}_k (s),C^N_k(s))\, ds\\ =&\log C_k^N (t) -t(\Lambda _k^N (t)-c)+t\, O\Big (\frac{1}{N}\Big ) \end{aligned}$$ is a martingale. For the carré du champ we obtain from a straightforward computation that $$\begin{aligned} \Gamma _k^{N,clone} G(\underline{x},\zeta )=\frac{1}{N^2}\sum _{i=1}^N\big |\mathcal {V}_k(x_i )-c\big | +O\Big (\frac{1}{N^2}\Big ),\ \end{aligned}$$ and since the potential \(\mathcal {V}_k\) is bounded (2.12), the quadratic variation of the martingale is bounded by $$\begin{aligned} \langle \mathcal {M}_C^N \rangle (t)\le \frac{t}{N}\big (\bar{v}_k +|c|\big )+O\Big (\frac{1}{N^2}\Big ).\ \end{aligned}$$ Therefore the estimator (3.30) based on the cloning factor $$\begin{aligned} \bar{\Lambda }_k^N (t)=\frac{1}{t}\log C_k^N (t) +c=\Lambda _k^N (t)+O\Big (\frac{1}{N}\Big )+\frac{1}{t}\mathcal {M}_C^N (t) \end{aligned}$$ is asymptotically equal to the basic estimator \(\Lambda _k^N (t)\) (3.2), with corrections that vanish as \(1/N\) in the \(L^p\)-norm as \(N\rightarrow \infty \) uniformly in \(t\ge 0\), analogously to the discussion in Sect. 3.2. Hence the same convergence results as stated in the Theorem apply for \(\bar{\Lambda }_k^N (t)\). Similar convergence results can be shown to hold for \(e^{tc}C_k^N (t)\) as an estimator of \(\nu _t (1)\) for fixed \(t>0\), but naturally cannot hold uniformly in time. Since the object of interest is usually the long-time limit \(\lambda _k\) (2.17), the practical relevance of this is limited, in addition to the general point that random errors dominate the convergence as mentioned in Sect. 3.2. In practice, the basic ergodic average \(\Lambda _k^N (t)\) (3.2) is more useful than the cloning factor in the application areas we have in mind.
In particular, for alternative particle approximations such as (3.7) or (3.8) where cloning and killing events are effectively combined, it is not clear how to define a cloning factor, whereas \(\Lambda _k^N (t)\) is always easily accessible. 4 Efficiency and Application of Particle Approximations 4.1 Efficiency of Algorithms Selection events (cloning or killing) in a particle approximation increase the correlations among the particles in the ensemble, and thereby decrease the resolution of the empirical distribution \(\mu _t^N =\mu ^N (\underline{X}_k (t))\), and ultimately the quality of the sample average in the estimator (3.2). Therefore it is desirable to minimize the total rate \(S_k (\underline{x})\) of selection events for a particle approximation. For algorithm (3.6) this is given by $$\begin{aligned} S_k^1 (\underline{x})=\sum _{i=1}^N \big | \mathcal {V}_k (x_i )-c\big |,\ \end{aligned}$$ and the same holds for the cloning algorithm (3.22), since the change in cloning rate is compensated exactly by the average number of clones created to obtain the same overall rate. It is easy to see that for a given state \(\underline{x}\) of the clone ensemble, there is an optimal choice of c to minimize this expression, given by the median of the empirical fitness distribution \(\mathcal {V}_k (\underline{x}):=\big \{\mathcal {V}_k (x_i ):i=1,\ldots ,N\big \}\). If the distribution of \(\underline{X}_k (t)\) is unimodal with light enough tails, the median can be well approximated by the mean \(\mu _t^N (\mathcal {V}_k)\). Since both quantities can be computed with similar computational effort (or well approximated at reduced cost using only a subset of the ensemble), choosing $$\begin{aligned} c=c(t)=\text{ median }\big (\mathcal {V}_k (\underline{X}_k (t))\big ) \end{aligned}$$ should be computationally optimal.
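The optimality of the median is the standard fact that \(c\mapsto\sum_i|\mathcal V_k(x_i)-c|\) is minimized at a median of the values. A quick numerical illustration (with hypothetical fitness values, not data from the paper):

```python
import statistics

V = [0.2, 0.5, 0.9, 1.4, 3.0]   # hypothetical fitness values V_k(x_i)

def S1(c, V):
    # total selection rate (4.1) for a given reference value c
    return sum(abs(v - c) for v in V)

c_med = statistics.median(V)
# the median beats the mean and the common default c = 0 ...
assert S1(c_med, V) <= S1(statistics.mean(V), V)
assert S1(c_med, V) <= S1(0.0, V)
# ... and no value of c on a fine grid does better
assert all(S1(c_med, V) <= S1(c / 100, V) + 1e-12 for c in range(0, 400))
```

For a heavily skewed ensemble the mean can sit far from the median, which is why the unimodality caveat above matters in practice.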
In particular, the simplest choice \(c=0\) in the cloning algorithm is in general far from optimal, and so is choosing \(c=\inf _{x\in E} \mathcal {V}_k (x)\) to get rid of the killing part of the dynamics (see first remark in Sect. 3.3). Intuitively, algorithms (3.7) and (3.8) should lead to even lower total selection rates since every selection event increases the fitness potential, while in algorithms based on (3.6) it increases only on average and may also decrease as a result of selection events. Indeed for (3.7) we have $$\begin{aligned} S_k^2 (\underline{x})&=\frac{1}{N}\sum _{i,j=1}^N \big ( \mathcal {V}_k (x_i )-\mathcal {V}_k (x_j )\big )^+ =\frac{1}{2N} \sum _{i,j=1}^N \big | \mathcal {V}_k (x_i )-\mathcal {V}_k (x_j )\big | \nonumber \\&\le \frac{1}{2} \sum _{i=1}^N \Big (\big | \mathcal {V}_k (x_i )-c\big | +\big |c-\mathcal {V}_k (x_i )\big |\Big )= S_k^1 (\underline{x}),\ \end{aligned}$$ by symmetry of summations and the triangle inequality. The inequality is strict except for degenerate cases, e.g. if \(\mathcal {V}_k (x_i )\) takes only two values, and c lies in between the two. In practice, in the scenarios we have investigated, it turns out that unless the distribution of \(\underline{X}_k (t)\) is seriously skewed, \(S_k^2\) is strictly smaller than \(S_k^1\) by a sizeable amount, as is illustrated later in Fig. 2 for the inclusion process. Algorithm (3.8) provides further improvement with $$\begin{aligned} S_k^3 (\underline{x})&=\frac{1}{N}\sum _{i,j=1}^N \big ( \mathcal {V}_k (x_i )-\mu ^N (\underline{x})(\mathcal {V}_k )\big )^- \frac{\big ( \mathcal {V}_k (x_j )-\mu ^N (\underline{x})(\mathcal {V}_k )\big )^+}{\mu ^N (\underline{x})\big ( (\mathcal {V}_k -\mu ^N (\underline{x})(\mathcal {V}_k ))^+ \big )}\nonumber \\&=\frac{1}{2}\sum _{i=1}^N \big | \mathcal {V}_k (x_i )-\mu ^N (\underline{x})(\mathcal {V}_k )\big | \le S_k^2 (\underline{x})\ .
\end{aligned}$$ Here we have used \(\sum _{i=1}^N \big ( \mathcal {V}_k (x_i )-\mu ^N (\underline{x})(\mathcal {V}_k )\big ) =0\) and Jensen's inequality to compare with \(S_k^2 (\underline{x})\), since \(v\mapsto |a-v|\) is convex for all \(a\in \mathbb {R}\). Note that the rate of change of the mean fitness \(\mu _t^N (\mathcal {V}_k )\) is given by the same expression in all the above particle approximations, $$\begin{aligned} \mu _t^N (\widehat{\mathcal {L}}_k \mathcal {V}_k) +\mu _t^N (\mathcal {V}_k^2) -\mu _t^N (\mathcal {V}_k)^2.\ \end{aligned}$$ The first term due to mutation dynamics \(\widehat{\mathcal {L}}_k\) can have either sign and is identical in all algorithms, while the second, due to selection, is non-negative and given by the empirical variance of \(\mathcal {V}_k\). This follows from direct computations using the averaged test function \(F(\underline{x})=\mu ^N (\underline{x}) (\mathcal {V}_k )\) in (3.6), (3.7), (3.8) and (3.22), and is consistent with the evolution equation (2.24). So the mean fitness evolves until a mutation-selection balance is reached and the rate of change (4.4) vanishes, characterizing the stationary state of the particle approximation process. Note that this basic mechanism is identical in all particle approximations discussed here, so we expect the mean fitness to show a very similar behaviour. While finite size effects can lead to deviations also in the mean, the main difference between the algorithms is found on the level of variances and time correlations, which can be significantly reduced using (3.7) or (3.8) as illustrated in the next subsections. Since our main observable of interest \(\Lambda _k^N (t)\) is an ergodic time average of \(\mu _t^N (\mathcal {V}_k)\), this can lead to significant improvements in the accuracy of the estimator (3.2). The correlations introduced by selection are counteracted by mutation dynamics, which occur independently for each particle and decorrelate the ensemble.
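The chain of inequalities \(S_k^3\le S_k^2\le S_k^1\) derived above is easy to confirm numerically from the definitions (4.1)-(4.3); a short sketch with made-up fitness values:

```python
# Numerical sanity check of the ordering S^3 <= S^2 <= S^1 for the three
# selection rates, using hypothetical fitness values (not from the paper).
V = [0.1, 0.4, 0.4, 1.0, 2.1]   # hypothetical V_k(x_i)
N = len(V)
mean = sum(V) / N
c = 0.5                          # an arbitrary reference value for S^1

S1 = sum(abs(v - c) for v in V)                             # (4.1)
S2 = sum(abs(vi - vj) for vi in V for vj in V) / (2 * N)    # (4.2)
S3 = 0.5 * sum(abs(v - mean) for v in V)                    # (4.3)

assert S3 <= S2 <= S1            # the ordering derived above
```

The gap between the three rates depends on the spread of the fitness values and on how far c sits from the median, matching the discussion of Fig. 2 below.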
The dynamics of correlation structures in cloning algorithms have been discussed in some detail recently in [7, 8, 13, 38], and can be understood in terms of ancestry in the generic population dynamics interpretation. Those results also discuss important non-ergodicity effects in the measurement of path properties and the interpretation of particle trajectories, which were already pointed out in [3] and are also a subject of recent research [39]. This poses interesting questions for rigorous mathematical investigations which are left to future work. Here we simply conclude with a numerical test in the next subsections, which supports the intuition that approximation (3.7) with minimal selection rates leads to variance reduction in the relevant estimators compared to the cloning algorithm. Since the selection rate in (3.7) depends on potential differences between pairs, implementation is in general more involved than for algorithms based on (3.6). While the scaling \(t N\log N\) of computational complexity with the size N of the clone ensemble is the same, the prefactor and computational cost in practice may be higher and this has to be traded off against gains in accuracy on a case-by-case basis. For the examples studied below we find a computationally efficient implementation of (3.7) providing a clear improvement over the standard cloning algorithm, which is the main contribution of this paper in this context. Algorithm (3.8), on the other hand, provides only marginal improvement over (3.7), but cannot be implemented as efficiently in our area of interest. 4.2 Current Large Deviations for Lattice Gases In the following we consider one-dimensional stochastic lattice gases with periodic boundary conditions on the discrete torus \(\mathbb {T}_L\) with L sites and a fixed number of particles M.
Within our general framework, they are simply Markov chains on the finite state space E of all particle configurations, which have been of recent research interest in the context of current fluctuations. We denote configurations by \(\eta =(\eta _x :x\in \mathbb {T}_L )\) where \(\eta _x \in \mathbb {N}_0\) is interpreted as the mass (or number of monomers) at site x, and the process is denoted as \((\eta (t):t\ge 0)\). In order to use standard notation for lattice gases, in this and the following subsection we change notation, and in particular the use of \(x,y\in \mathbb {T}_L\) is different to the use of those symbols in previous sections where they denoted states in E. Monomers jump to nearest neighbour sites with rates \(u(\eta _x ,\eta _y )\ge 0\) for \(y=x\pm 1\) depending on the occupation numbers of departure and target site, multiplied with a spatial bias \(p=1-q\in [0,1]\). The generator is of the form $$\begin{aligned} \mathcal {L}f(\eta )&=\sum _{x\in \mathbb {T}_L} \Big [ p\, u(\eta _x ,\eta _{x+1}) \big ( f(\sigma _{x,x+1}\eta )-f(\eta )\big )\nonumber \\&\quad +q\, u(\eta _x ,\eta _{x-1}) \big ( f(\sigma _{x,x-1}\eta )-f(\eta )\big )\Big ], \end{aligned}$$ where \(\sigma _{x,y}\eta \) results from the configuration \(\eta \) after moving one particle from x to y. The number of particles \(M=\sum _{x\in \mathbb {T}_L} \eta _x\) is a conserved quantity, but otherwise we assume the process to be irreducible for any fixed M, which is ensured for example by positivity of the rates, i.e. for all \(k,l\ge 0\) $$\begin{aligned} u(k,l)=0\quad \Leftrightarrow \quad k=0.\ \end{aligned}$$ This class includes various models that have been studied in the literature, for example the inclusion process introduced in [19], where $$\begin{aligned} u(k ,l )=k (d+l)\quad \text{ for } \text{ all } k,l\ge 0,\ \end{aligned}$$ with a positive parameter \(d>0\). 
Particles perform independent jumps with rate d and in addition are attracted by each particle on the target site with rate 1, giving rise to the 'inclusion' interaction. This model has attracted recent attention due to the presence of condensation phenomena [40, 41] and in the context of large deviations of the particle current [20], and we will use this as an example in Sect. 4.3. Other well-studied models covered by our set-up are the exclusion process with state space \(E\subset \{ 0,1\}^{\mathbb {T}_L}\) and \(u(\eta _x ,\eta _y )=\eta _x (1-\eta _y )\), or zero-range processes with \(E\subset \mathbb {N}_0^{\mathbb {T}_L}\) and rates \(u(\eta _x ,\eta _y )=u(\eta _x )\) depending only on the occupation number on the departure site. In terms of previous notation, the jump rates for a lattice gas of type (4.5) between any two configurations \(\eta \) and \(\zeta \) are given as $$\begin{aligned} W(\eta ,\zeta )=\sum _{x\in \mathbb {T}_L} \Big ( p\, u(\eta _x ,\eta _{x+1} )\delta _{\zeta ,\sigma _{x,x+1} \eta }+q\, u(\eta _x ,\eta _{x-1} )\delta _{\zeta ,\sigma _{x,x-1} \eta }\Big ).\ \end{aligned}$$ In the following we focus on lattice gases where \(\sum _{x} u(\eta _x ,\eta _{x+1}) =\sum _{x} u(\eta _x ,\eta _{x-1})\) for all configurations \(\eta \). While this is not true in general for models of type (4.5), it holds for many examples including the inclusion, exclusion and zero-range processes mentioned above.
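The symmetry assumption \(\sum_x u(\eta_x,\eta_{x+1})=\sum_x u(\eta_x,\eta_{x-1})\) is easy to verify for the three example models; a quick sketch (helper names are ours, not from the paper) on a periodic configuration:

```python
# Illustrative check that the inclusion, zero-range, and exclusion rates
# satisfy sum_x u(eta_x, eta_{x+1}) = sum_x u(eta_x, eta_{x-1}) on a ring.
def total_rate(eta, u, direction):
    # sum of rates toward the neighbour in the given direction (+1 or -1)
    L = len(eta)
    return sum(u(eta[x], eta[(x + direction) % L]) for x in range(L))

d = 1.0
u_incl = lambda k, l: k * (d + l)   # inclusion process rates (4.6)
u_zr = lambda k, l: min(k, 1)       # a zero-range style rate u(k)
u_excl = lambda k, l: k * (1 - l)   # exclusion (for 0/1 configurations)

eta = [2, 0, 3, 1, 0, 2]            # hypothetical configuration, L = 6
for u in (u_incl, u_zr):
    assert abs(total_rate(eta, u, +1) - total_rate(eta, u, -1)) < 1e-12

eta01 = [1, 0, 1, 1, 0, 0]          # 0/1 configuration for exclusion
assert total_rate(eta01, u_excl, +1) == total_rate(eta01, u_excl, -1)
```

For the inclusion process the identity reduces to \(\sum_x \eta_x\eta_{x+1}=\sum_x \eta_x\eta_{x-1}\), which holds on any ring by relabelling the sum.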
With \(p+q=1\), the total exit rate out of configuration \(\eta \) is then simply given by $$\begin{aligned} w(\eta )=\sum _{x\in \mathbb {T}_L} \Big ( p\,u(\eta _x ,\eta _{x+1}) +q\,u(\eta _x ,\eta _{x-1})\Big ) =\sum _{x\in \mathbb {T}_L} u(\eta _x ,\eta _{x+1}).\ \end{aligned}$$ We are interested in an observable \(A_T\) measuring the total particle current up to time T, which is achieved by choosing \(h(\eta )\equiv 0\) in (2.5) and $$\begin{aligned} g(\eta ,\zeta )=\pm 1\quad \text{ if } \zeta =\sigma _{x,x\pm 1} \eta \quad \text{ and }\quad g(\eta ,\zeta )=0\quad \text{ otherwise }\ . \end{aligned}$$ Using (4.8) we see by direct computation that the potential (2.11) takes the simple form $$\begin{aligned} \mathcal {V}_k (\eta )=(Q_k -1)w(\eta )\quad \text{ where }\quad Q_k :=pe^k +qe^{-k}.\ \end{aligned}$$ Modified mutation rates \(W_k (\eta ,\zeta )\) are given by (4.7) replacing (p, q) by \((pe^k ,qe^{-k} )\), leading to modified total exit rates $$\begin{aligned} w_k (\eta ) =Q_k \sum _{x\in \mathbb {T}_L} u(\eta _x ,\eta _{x+1})=Q_k w(\eta ).\ \end{aligned}$$ The similarity of \(\mathcal {V}_k\) and \(w_k\) for lattice gases (4.5) that obey (4.8) provides a direct relation between mutation and selection rates, and allows us to set up an efficient rejection based implementation of a particle approximation \((\underline{\eta }_k (t):t\ge 0)\) based on the efficient algorithm (3.7). In the following we omit the subscript k for configurations and write \(\underline{\eta }(t)=(\eta ^i (t),i=1,\ldots ,N)\) to simplify notation. For given parameters \(p,q=1-p\) and fixed \(k\in \mathbb {R}\) we distinguish two cases. \(\mathbf {Q}_\mathbf {k} <\mathbf {1}\). We sample the ensemble of N clones at a total rate of \(\mathcal {W}(\underline{\eta }):=\sum _{i=1}^N w(\eta ^i )\), and pick a clone i with probability \(w (\eta ^i )/\mathcal {W}(\underline{\eta })\) for the next event. 
With probability \(Q_k \in (0,1)\) this is a simple mutation within clone i, and then we replace \(\eta ^i\) by \(\zeta ^i\) with probability \(W_k (\eta ^i ,\zeta ^i )/w_k (\eta ^i )\). Otherwise, with probability \(1-Q_k\) we perform a selection event following the second line in (3.7): Pick a clone j uniformly at random (including i). If $$\begin{aligned} \mathcal {V}_k (\eta ^j )>\mathcal {V}_k (\eta ^i )\quad \text{ or } \text{ equivalently }\quad w(\eta ^j )<w(\eta ^i ) \end{aligned}$$ (with (4.9) and since \(Q_k <1\)), replace \(\eta ^i\) by \(\eta ^j\) with probability \(\big ( w(\eta ^i )-w(\eta ^j )\big ) /w(\eta ^i )\). This procedure ensures that mutation and selection events are sampled with the correct rates as required in (3.7). \(\mathbf {Q}_\mathbf {k} >\mathbf {1}\). We sample the ensemble of N clones at a total rate of \(Q_k \mathcal {W}(\underline{\eta })\), and pick a clone i with probability \(w (\eta ^i )/\mathcal {W}(\underline{\eta })\) and a clone j uniformly at random. If \(w(\eta ^j )<w(\eta ^i )\) we replace \(\eta ^j\) by \(\eta ^i\) with probability \(\frac{Q_k -1}{Q_k}\frac{w(\eta ^i )-w(\eta ^j )}{w(\eta ^i )}\). Then we mutate clone i as above, combining the mutation and selection event as in the cloning algorithm. Note that \(Q_k =1\) is equivalent to \(k=0\), which corresponds to the original process with \(\lambda _0 =0\) and does not require any estimation. For \(Q_k >1\) we perform mutation and selection events simultaneously, in analogy to the cloning procedure explained in Sect. 3.3, but can use the efficient algorithm (3.7). For \(Q_k <1\) no mutation or selection event occurs with probability \((1-Q_k )\frac{w(\eta ^j )}{w(\eta ^i )} \mathbb {1}(w(\eta ^j )<w(\eta ^i ))\), and a high rate of such rejections is not desirable for computational efficiency. 
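The \(Q_k<1\) branch of the procedure above can be sketched as follows (function and variable names are illustrative placeholders, not the authors' code; a full implementation would also advance time and accumulate the estimator, as in the pseudocode described in Sect. 4.3):

```python
import random

def one_event(clones, w, Q_k, mutate, rng=random):
    """One event of the rejection-based scheme for Q_k < 1 (sketch).

    clones: list of configurations; w: exit-rate function w(eta);
    mutate: a stand-in for sampling the mutation kernel W_k(eta, .)/w_k(eta).
    """
    weights = [w(c) for c in clones]
    i = rng.choices(range(len(clones)), weights=weights)[0]
    if rng.random() < Q_k:
        clones[i] = mutate(clones[i])          # mutation within clone i
    else:
        j = rng.randrange(len(clones))         # selection attempt, j uniform
        if w(clones[j]) < w(clones[i]):
            accept = (w(clones[i]) - w(clones[j])) / w(clones[i])
            if rng.random() < accept:
                clones[i] = clones[j]          # replace i by the fitter j
    return clones

# toy run: population size stays fixed, selection only copies existing states
rng = random.Random(1)
ensemble = [0, 1, 2, 3]
for _ in range(50):
    one_event(ensemble, lambda c: 1.0 + c, 0.5, lambda c: min(c + 1, 5), rng)
assert len(ensemble) == 4 and all(0 <= c <= 5 for c in ensemble)
```

Picking clone i proportionally to \(w(\eta^i)\) and thinning with the factors \(Q_k\) and \((w(\eta^i)-w(\eta^j))/w(\eta^i)\) reproduces the mutation and selection rates required in (3.7) without ever enumerating pairwise rate differences.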
But even for very small values of \(Q_k\) the second factor is usually significantly smaller than 1 (or simply 0), since clone i was picked with probability proportional to \(w(\eta ^i )\) and j uniformly at random. Note also that if the cloning algorithm (3.22) is implemented with the common choice \(c=0\) for a lattice gas of the type discussed here, due to (4.9) and (4.10) the average number of clones per event (3.20) is $$\begin{aligned} m_k^N (\eta ^i )=\big (\mathcal {V}_k (\eta ^i )\big )^+ /w_k (\eta ^i )=\frac{Q_k -1}{Q_k}\in (0,1)\quad \text{ if } Q_k >1\ , \end{aligned}$$ and 0 for \(Q_k <1\), where only killing occurs. In particular, this is independent of the state \(\underline{\eta }\) of the clone ensemble, and the standard distribution of the form (3.26) is a simple Bernoulli random variable. While with (4.10) the total mutation rate is \(Q_k \mathcal {W}(\underline{\eta })\), selection rates (4.1), (4.2) and (4.3) can be written as $$\begin{aligned} S_k^1 (\underline{\eta })&=\sum _{i=1}^N \big | (Q_k -1)w(\eta ^i )-c\big |{\mathop {=}\limits ^{c=0}} |Q_k-1|\mathcal {W}(\underline{\eta })\nonumber \\ S_k^2 (\underline{\eta })&=|Q_k-1|\frac{1}{2N} \sum _{i,j=1}^N \big | w(\eta ^i )-w(\eta ^j)\big |\nonumber \\ S_k^3 (\underline{\eta })&=|Q_k-1| \frac{1}{2}\sum _{i=1}^N \big | w(\eta ^i )-\mu ^N (\underline{\eta })(w)\big |\ . \end{aligned}$$ So for very small values of \(Q_k\) close to 0 the mutation rate can become very small in comparison to selection, which means that significant computation time is devoted to re-weighting by selection, rather than advancing the dynamics via mutation events. This effect is typically much stronger for the standard cloning algorithm with \(c=0\), and occurs for example for totally asymmetric lattice gases with \(p=1\) and negative k conditioning on low currents. In Fig. 
1 we include a sketch of \(Q_k\) for different values of asymmetry, including also the drift of the modified dynamics, which can be reversed in partially asymmetric systems.

Fig. 1 Illustration of \(Q_k\) (left) as given in (4.9) and the drift \(2pe^k /Q_k-1\) for the modified dynamics (right) as a function of k for different values of the asymmetry \(p=1-q\). The minimum of \(Q_k\) is \(2\sqrt{pq}\), attained at \(k=\frac{1}{2}\log \frac{q}{p}\in [-\infty ,\infty ]\), which is also where the modified drift vanishes.

Fig. 2 Inclusion process (4.6) with \(d=1\), system size \(L=64\), \(M=128\) particles, asymmetry \(p=0.7\) and \(N=2^{11}\) clones at time \(t=42000\). (Top) The rescaled estimator \(\Lambda _k^N (t)/L\) as a function of k in the convergent regime, comparing the cloning algorithm (3.22) with \(c=0\) (orange) and algorithm (3.7) (blue). Error bars indicate 5 standard deviations, which are bounded by the size of the symbols for (3.7). (Bottom) Illustration of the relationship between \(S_k^1\) depending on c, and \(S_k^2\) and \(S_k^3\) (4.11) for \(k=-0.79\) (left) and \(k=0.1\) (right) based on the state \(\underline{\eta }(t)\) of the clone ensemble.

In Fig. 2 we compare the cloning algorithm to algorithms (3.7) and (3.8) for an inclusion process with \(d=1\), \(L=64\), \(M=128\) and asymmetry \(p=0.7\). It is known [20] that the SCGF \(\lambda _k\) scales linearly with the system size L, and outside the convergent regime \(k\in [\ln (\frac{1-p}{p}),0] \approx [-0.85,0]\) the rescaled SCGF \(\lambda _k /L\) diverges as \(L\rightarrow \infty \) (divergent regime). We compare estimates \(\Lambda _k^N (t)\) for the cloning algorithm (3.22) with \(c=0\) and algorithm (3.7) in the convergent regime. We use initial conditions where M particles are distributed on L lattice sites uniformly at random, and a burn-in time of \(10\cdot L=640\) as discussed in (2.19) and (2.20).
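The closed-form properties of \(Q_k\) quoted in the caption of Fig. 1 follow from elementary calculus and can be checked directly (here for the asymmetry \(p=0.7\) used in the numerical examples):

```python
import math

# Q_k = p e^k + q e^{-k} has minimum 2*sqrt(p*q) at k* = 0.5*log(q/p),
# where the modified drift 2 p e^k / Q_k - 1 vanishes.
p = 0.7
q = 1 - p
Q = lambda k: p * math.exp(k) + q * math.exp(-k)
drift = lambda k: 2 * p * math.exp(k) / Q(k) - 1

k_star = 0.5 * math.log(q / p)
assert abs(Q(k_star) - 2 * math.sqrt(p * q)) < 1e-12
assert abs(drift(k_star)) < 1e-12
# k* is indeed a minimum: nearby values of Q_k are larger
assert Q(k_star) < Q(k_star + 0.1) and Q(k_star) < Q(k_star - 0.1)
```

For \(p=0.7\) one gets \(k^*=\frac12\log(3/7)\approx -0.42\), consistent with the location of the minimum sketched in Fig. 1.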
This leads to an obvious adaptation of the integration interval in the estimator \(\Lambda _k^N (t)\) (3.2), but we do not alter the notation here to keep it simple. Both algorithms perform very well and agree with a simple theoretical estimate based on bias reversal; since this is not the main concern of this paper we refer the reader to [20]. Enlarged error bars indicating 5 standard deviations reveal that (3.7) is significantly more accurate than (3.22). This is due to the lower total selection rates \(S_k\) illustrated at the bottom of Fig. 2 in the convergent and divergent regimes. While \(S_k^2\) for (3.7) is much lower than \(S_k^1\) with \(c=0\), \(S_k^3\) does not offer significant further improvement. Since the efficient rejection based implementation of (3.7) explained above does not work for (3.8), we focus on (3.7) in our context. The much higher selection rate for the cloning algorithm with \(c=0\) leads to a significantly higher time variation of the average potential in the convergent regime compared to algorithm (3.7), as is illustrated in Fig. 3. So in comparison to standard cloning, algorithm (3.7) leads to reduced finite size effects and/or a significant variance reduction in this example, and a significant improvement of convergence of the estimator (3.2). We have checked that this also holds for zero-range processes with bounded rates. These promising first numerical results pose interesting questions for a systematic study of practical properties of the algorithms and associated time correlations for future work, also in comparison with various recent results on improvements of cloning algorithms [7, 11, 12].

Fig. 3 Inclusion process (4.6) with \(d=1\), system size \(L=64\), \(M=128\) particles, asymmetry \(p=0.7\) and \(N=2^{11}\) clones. Time series of the mean fitness \(m^N (\underline{\eta }(t))(\mathcal {V}_k)/L\) for the cloning algorithm (red dots) and algorithm (3.7) (blue crosses), with time averages indicated by full lines.
(Left) In the convergent regime for \(k=-0.79\) we see a clear variance reduction using (3.7) with similar time average. (Right) In the divergent regime for \(k=0.1\) we have similar variance but (3.7) improves on the time average.

4.3 Details for the Inclusion Process We summarize the procedure outlined in the previous subsection for the inclusion process with rates (4.6) on the torus \(\mathbb {T}_L\) with M particles in the pseudo-code given below. Besides fixing the model parameter \(d>0\) and the tilt \(k\in \mathbb {R}\), the specific parameters for the estimator are the ensemble size N and the total simulation time t, which lead to the estimator \(\Lambda _k^N (t)\) as given in (3.2). For simplicity we do not include any burn-in times in this description, which would obviously be used in practice. In this implementation we make a further simplification which is very common for continuous-time jump dynamics of large systems: we replace exponentially distributed random time increments by their expectation, given by \(\Delta t=1/\mathcal {W}(\underline{\eta })\) for \(Q_k <1\) and \(\Delta t=1/(Q_k \mathcal {W}(\underline{\eta }))\) for \(Q_k >1\). Since with (4.9) $$\begin{aligned} \frac{1}{N}\sum _{i=1}^N \mathcal {V}_k (\eta ^i (s)) =\frac{Q_k -1}{N} \mathcal {W}(\underline{\eta })\ , \end{aligned}$$ we get for the increments in the evaluation of the ergodic time integral in (3.2) $$\begin{aligned} \Delta t\,\frac{1}{N}\sum _{i=1}^N \mathcal {V}_k (\eta ^i (s))=\left\{ \begin{array}{ll} \frac{Q_k -1}{N}, &{} Q_k <1\\ \frac{Q_k -1}{Q_k N}, &{} Q_k >1\end{array}\right. . \end{aligned}$$ These increments are independent of the actual state \(\underline{\eta }\) of the clones, so the evaluation of \(\Lambda _k^N (t)\) in (3.2) can be achieved by a simple integer counter \(\hat{\Lambda }_k^N\) as explained in the pseudocode Algorithms 1 and 2. While this counter may appear similar to the cloning factor explained in Sect.
3.4 at first glance, we want to stress that here finer increments of \(+1\) are added after every event (not only selections).

We have presented an analytical approach to cloning algorithms based on McKean interpretations of Feynman–Kac semigroups that have been introduced in the applied probability literature. This allows us to establish rigorous error bounds for the cloning algorithm in continuous time, and to suggest a more efficient variant of the algorithm which can be implemented effectively for current large deviations in stochastic lattice gases. The latter is based on minimizing the selection rate in a standard population dynamics interpretation of particle approximations of non-linear processes. We include a first application of this idea in the context of inclusion processes, but its full potential will be explored in future more systematic studies of optimization of cloning-type algorithms. The rigorous results fully reported in [18] apply under very general conditions, demanding bounded jump rates and existence of a spectral gap for the underlying jump process. These impose no restriction for lattice gases with a fixed number of particles, which are essentially finite state Markov chains. We anticipate that these techniques can also be applied to more general processes including diffusive, piecewise deterministic, or possibly non-Markovian dynamics (see [42] for first heuristic results in this direction). Another interesting direction would be a rigorous analysis of the detailed ergodic properties of trajectories in the clone ensemble based on recent results in [7, 8, 39].

(Footnote: in \(L^p\)-sense for any \(p>1\), following from the Burkholder–Davis–Gundy inequality; see e.g. [37, Sect. 11].)

Acknowledgements This work was supported by The Alan Turing Institute under the EPSRC Grant EP/N510129/1 and The Alan Turing Institute–Lloyds Register Foundation Programme on Data-centric Engineering.
AP acknowledges support by the National Group of Mathematical Physics (GNFM-INdAM), and by Imperial College together with the Data Science Institute and Thomson-Reuters Grant No. 4500902397-3408.

References

[1] Anderson, J.B.: A random-walk simulation of the Schrödinger equation: \({H}^+_3\). J. Chem. Phys. 63, 1499 (1975)
[2] Grassberger, P.: Go with the winners: a general Monte Carlo strategy. Comput. Phys. Commun. 147(1), 64–70 (2002)
[3] Giardina, C., Kurchan, J., Peliti, L.: Direct evaluation of large-deviation functions. Phys. Rev. Lett. 96(12), 120603 (2006)
[4] Lecomte, V., Tailleur, J.: A numerical approach to large deviations in continuous time. J. Stat. Mech.: Theory Exp. 2007(03), P03004 (2007)
[5] Giardina, C., Kurchan, J., Lecomte, V., Tailleur, J.: Simulating rare events in dynamical processes. J. Stat. Phys. 145(4), 787–811 (2011)
[6] Jack, R.L., Sollich, P.: Large deviations and ensembles of trajectories in stochastic models. Prog. Theor. Phys. Suppl. 184, 304–317 (2010)
[7] Nemoto, T., Bouchet, F., Jack, R.L., Lecomte, V.: Population-dynamics method with a multicanonical feedback control. Phys. Rev. E 93, 062123 (2016)
[8] Guevara Hidalgo, E.: Cloning algorithms: from large deviations to population dynamics. Ph.D. thesis, Université Sorbonne Paris Cité — Université Paris Diderot 7 (2018)
[9] Nemoto, T., Guevara Hidalgo, E., Lecomte, V.: Finite-time and finite-size scalings in the evaluation of large-deviation functions: analytical study using a birth-death process. Phys. Rev. E 95, 012102 (2017)
[10] Guevara Hidalgo, E., Nemoto, T., Lecomte, V.: Finite-time and finite-size scalings in the evaluation of large-deviation functions: numerical approach in continuous time. Phys. Rev. E 95, 062134 (2017)
[11] Ferré, G., Touchette, H.: Adaptive sampling of large deviations. J. Stat. Phys.
172(6), 1525–1544 (2018)
[12] Brewer, T., Clark, S.R., Bradford, R., Jack, R.L.: Efficient characterisation of large deviations using population dynamics. J. Stat. Mech. 2018(5), 053204 (2018)
[13] Pérez-Espigares, C., Hurtado, P.I.: Sampling rare events across dynamical phase transitions. arXiv:1902.01276
[14] Del Moral, P., Miclo, L.: A Moran particle system approximation of Feynman-Kac formulae. Stoch. Process. Appl. 86(2), 193–216 (2000)
[15] Del Moral, P., Miclo, L.: Branching and interacting particle systems approximations of Feynman-Kac formulae with applications to non-linear filtering. In: Azema, J., Emery, M., Ledoux, M., Yor, M. (eds.) Séminaire de Probabilités XXXIV, pp. 1–145. Springer, Berlin (2000)
[16] Del Moral, P.: Feynman-Kac Formulae. Springer, New York (2004)
[17] Rousset, M.: On the control of an interacting particle estimation of Schrödinger ground states. SIAM J. Math. Anal. 38(3), 824–844 (2006)
[18] Angeli, L., Grosskinsky, S., Johansen, A.M.: Limit theorems for cloning algorithms. Under review (arXiv:1902.00509)
[19] Giardinà, C., Kurchan, J., Redig, F., Vafayi, K.: Duality and hidden symmetries in interacting particle systems. J. Stat. Phys. 135(1), 25–55 (2009)
[20] Chleboun, P., Grosskinsky, S., Pizzoferrato, A.: Current large deviations for partially asymmetric particle systems on a ring. J. Phys. A 51(40), 405001 (2018)
[21] Lazarescu, A.: The physicist's companion to current fluctuations: one-dimensional bulk-driven lattice gases. J. Phys. A 48(50), 503001 (2015)
[22] Hurtado, P.I., Espigares, C.P., del Pozo, J.J., Garrido, P.L.: Thermodynamics of currents in nonequilibrium diffusive systems: theory and simulation. J. Stat. Phys.
154(1), 214–264 (2014)
[23] Chleboun, P., Grosskinsky, S., Pizzoferrato, A.: Lower current large deviations for zero-range processes on a ring. J. Stat. Phys. 167(1), 64–89 (2017)
[24] Chetrite, R., Touchette, H.: Nonequilibrium Markov processes conditioned on large deviations. Annales de l'Institut Henri Poincaré 16, 2005–2057 (2015)
[25] Nyawo, P.T., Touchette, H.: Large deviations of the current for driven periodic diffusions. Phys. Rev. E 94, 032101 (2016)
[26] Harris, R.J., Schütz, G.M.: Fluctuation theorems for stochastic dynamics. J. Stat. Mech. 2007(07), P07020 (2007)
[27] Den Hollander, F.: Large Deviations, Fields Institute Monographs, vol. 14. American Mathematical Society, Providence (2008)
[28] Dembo, A., Zeitouni, O.: Large Deviations Techniques and Applications, vol. 38. Springer, New York (2009)
[29] Bertini, L., Faggionato, A., Gabrielli, D.: Large deviations of the empirical flow for continuous time Markov chains. Annales de l'Institut Henri Poincaré 51(3), 867–900 (2015)
[30] Del Moral, P., Miclo, L.: Particle approximations of Lyapunov exponents connected to Schrödinger operators and Feynman-Kac semigroups. ESAIM: Probab. Stat. 7, 171–208 (2003)
[31] Del Moral, P.: Mean Field Simulation for Monte Carlo Integration. CRC Press, Boca Raton (2013)
[32] Baker, J.E.: Adaptive selection methods for genetic algorithms. In: Proceedings of an International Conference on Genetic Algorithms and Their Applications, pp. 101–111 (1985)
[33] Kong, A., Liu, J.S., Wong, W.H.: Sequential imputations and Bayesian missing data problems. J. Am. Stat. Assoc. 89(425), 278–288 (1994)
[34] Liggett, T.M.: Continuous Time Markov Processes: An Introduction, Graduate Studies in Mathematics, vol. 113.
American Mathematical Society, Providence (2010)Google Scholar Grosskinsky, S., Jatuviriyapornchai, W.: Derivation of mean-field equations for stochastic particle systems. Stoch. Process. Appl. 129(4), 1455–1475 (2019)MathSciNetzbMATHGoogle Scholar Pra, P. D.: Stochastic mean-field dynamics and applications to life sciences (2017). http://www.cirm-math.fr/ProgWeebly/Renc1555/CoursDaiPra.pdf Chow, Y.S., Teicher, H.: Probability Theory—Independence, Interchangeability, Martingales, 3rd edn. Springer, New Yok (1998)zbMATHGoogle Scholar Garrahan, J.P., Jack, R.L., Lecomte, V., Pitard, E., van Duijvendijk, Kristina, van Wijland, F.: First-order dynamical phase transition in models of glasses: an approach based on ensembles of histories. J. Phys. A 42(7), 075007 (2009)ADSMathSciNetzbMATHGoogle Scholar Ray, U., Chan, G.K.-L., Limmer, D.T.: Importance sampling large deviations in nonequilibrium steady states. i. J Chem Phys 148(12), 124120 (2018)ADSGoogle Scholar Grosskinsky, S., Redig, F., Vafayi, K.: Dynamics of condensation in the symmetric inclusion process. Electron. J. Prob. 18(66), 1–23 (2013)MathSciNetzbMATHGoogle Scholar Bianchi, A., Dommers, S., Giardinà, C.: Metastability in the reversible inclusion process. Electron. J. Prob. 22(70), 1–34 (2017)MathSciNetzbMATHGoogle Scholar Cavallaro, M., Harris, R.J.: A framework for the direct evaluation of large deviations in non-markovian processes. J. Phys. A 49(47), 47LT02 (2016)MathSciNetzbMATHGoogle Scholar View author's OrcID profile Email authorView author's OrcID profile 1.University of WarwickCoventryUK 2.Imperial College LondonLondonUK Angeli, L., Grosskinsky, S., Johansen, A.M. et al. J Stat Phys (2019). https://doi.org/10.1007/s10955-019-02340-1 Accepted 11 June 2019 Publisher Name Springer US
The spatial and metabolic basis of colony size variation

Jeremy M. Chacón, Wolfram Möbius & William R. Harcombe

The ISME Journal, volume 12, pages 669–680 (2018)

Abstract

Spatial structure impacts microbial growth and interactions, with ecological and evolutionary consequences. It is therefore important to quantitatively understand how spatial proximity affects interactions in different environments. We tested how proximity influences colony size when either Escherichia coli or Salmonella enterica are grown on various carbon sources. The importance of colony location changed with species and carbon source. Spatially explicit, genome-scale metabolic modeling recapitulated observed colony size variation. Competitors that determine territory size, according to Voronoi diagrams, were the most important drivers of variation in colony size.
However, the relative importance of different competitors changed through time. Further, the effect of location increased when colonies took up resources quickly relative to the diffusion of limiting resources. These analyses made it apparent that the importance of location was smaller than expected for experiments with S. enterica growing on glucose. The accumulation of toxic byproducts appeared to limit the growth of large colonies and reduced variation in colony size. Our work provides an experimentally and theoretically grounded understanding of how location interacts with metabolism and diffusion to influence microbial interactions.

Introduction

Microbial interactions help determine functions from nutrient cycling to human health [1, 2]. Spatial structure mediates microbial interactions [3]; however, the relationship between proximity and interaction strength remains unclear [4]. Quantifying, and being able to predict, the effect of location on microbial interactions is critical for understanding the function of microbial systems as well as their ecological and evolutionary dynamics.

Spatial structure modulates the resource competition that shapes microbial communities [5,6,7]. Competition influences community assembly [8] and stability [9], and influences selection on microbial traits [10]. Spatial structure alters the scope of competition [6]. In agitated liquid environments, all cells tend to have equal access to resources and interactions are global. In contrast, in structured environments, cells interact more strongly with neighbors than with distant individuals. This localizing effect of spatial structure has been repeatedly shown to influence the outcomes of microbial evo-ecological experiments [10,11,12,13,14,15,16,17,18,19,20,21,22,23].

The specific location of bacteria in spatially structured environments matters.
Within a biofilm or colony, bacteria at the edge have lower local density and grow faster than those in the center [4, 24,25,26,27], which can segregate competing genotypes [21, 28,29,30,31]. Between-colony interactions are also influenced by colonies' locations. A competing colony's effect is magnified if it is located between a focal colony and a nutrient source [17]. Furthermore, the coexistence of competing genotypes can be highly sensitive to inter-colony distances [10, 20]. Finally, hints of the importance of location are also being detected in complex natural ecosystems. For example, changes in the spacing between Aggregatibacter actinomycetemcomitans and Streptococcus gordonii determine virulence in oral abscesses [32].

While it is known that location matters, we lack a rigorous framework for understanding and predicting the impact of location on interactions. Interaction strength is likely a function of distance, but by what distance-based measure: the distance to the closest competitor, a function of all competitor distances, or a measurement of how competitors divide the available territory? Ecologists often use distance metrics to explain variance in plant growth [33], and a linearly weighted distance model captured a decline in bacterial colony size due to crowding [34]. In contrast, Voronoi diagrams, which measure the territory that is closer to a focal colony than to any other colony [35], have been used to investigate pattern formation as bacteria cover a surface [36]. To date, there has not been a rigorous test of the ability of different geometric models to explain variance in colony size.

Beyond this geometric description, the question arises of what minimal biophysical model can predict location-based effects on colony growth in variable environments. Microbes typically interact through chemicals that they consume and excrete [37, 38]. Does accounting for metabolism and diffusion suffice to predict the variation in colony growth?
Genome-scale metabolic models and flux-balance analysis can quantitatively predict the metabolites that microbes consume and excrete, and therefore can predict the ecological interactions that emerge from intracellular mechanisms [17, 39, 40]. Diffusion can be incorporated to predict system dynamics in structured environments [17]. We therefore can test to what extent colony variation is purely a function of metabolism and diffusion by comparing computational predictions against experimental observations. If factors such as toxicity, signals, or stochastic differences in lag time drive colony variation, then the model, which does not take these effects into account, will do a poor job. Determining the extent to which metabolic mechanisms drive spatial effects will be critical for predicting growth in complex natural settings.

Here, we investigated how location influences interactions in arguably the simplest scenario: monocultures grown on homogeneous surfaces. We plated monocultures of either Escherichia coli or Salmonella enterica on various media and used high-resolution scanners to investigate the size of colonies and the associated variance within each plate. We then used simulations and geometric descriptions to determine how much colony variation is explained by metabolic mechanisms, what aspect of location best explained variation in growth, and how variation was influenced by nutrient uptake, diffusion, and duration of growth. Finally, we investigated one case in which variation differed from expectation and suggest that this deviation was caused by byproduct toxicity. Our work provides a quantitative framework for understanding and predicting the effect of location on microbial competition.

Materials and methods

Strains and media

We used cells of either Salmonella enterica serovar Typhimurium LT2 or Escherichia coli K12-MG1655. In the genome-scale metabolic modeling, these strains were represented by the iRR_1083 model [41] and the iJO_1366 model [42], respectively.
The Petri dish experiments used either Luria–Bertani (LB) media (10 g/L tryptone, 10 g/L NaCl, 5 g/L yeast extract) or a modified Hypho minimal media (7.26 mM K2HPO4, 0.88 mM NaH2PO4, 3.78 mM [NH4]2SO4, 0.41 mM MgSO4, 1 mL of a metal mix [43]). The minimal media contained glucose (16.6 mM), citrate (10.2 mM), lactose (8.33 mM), or acetate (12.5 mM) as the limiting resource. The glucose concentrations in the low and medium glucose treatments were 4.15 and 8.33 mM. Experiments where acetate was added to glucose plates had concentrations of 16.6 mM glucose and 12.5 mM acetate. All Petri dishes contained 25 mL of media with 1% agar. Experiments to visualize acidification of the media used bromothymol blue at a concentration of 0.08 g/L. To reduce condensation, dishes were left open for 30 min as the agar solidified. UV lights were used to maintain sterility as plates cooled.

For Petri dish experiments, after spreading approximately 60 cells onto a Petri dish, a piece of matte black Kydex plastic sheet (0.08 in. thick) was placed within the upper lid of the dish to improve contrast and reduce reflections. Petri dishes were placed agar side down onto a Canon Perfection V600 scanner, and a 600 dpi image was scanned every 20 min for almost 150 h. We housed the scanners in a 30 °C incubator. Each treatment (a unique combination of a species and a media type) was repeated in separate Petri dishes 3–8 times; the replicates per treatment are listed in Supplementary Table 2. We tracked colony areas over time, and quantified acidity, using custom image analysis software (Supplementary Material).

Simulations were run in COMETS, a platform developed to model growth and interactions in structured environments using genome-scale metabolic networks [17]. Biomass and resources are distributed on a lattice. Then dynamic flux-balance analysis determines optimal metabolic activity and growth for all biomass based on the local environment at each time step.
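The dynamic loop just described can be sketched for a single lattice box. The toy below is only an illustration with hypothetical parameters: where COMETS solves a genome-scale linear program (for iJO_1366 or iRR_1083) at each step, the "optimum" of this one-reaction toy network reduces to yield times uptake.

```python
# Toy dynamic-FBA update loop for one well-mixed lattice box (hypothetical
# parameters; an illustration only, not the COMETS implementation).
B, R = 0.001, 1.0                  # biomass (g) and resource (mmol) in the box
v_max, k_m, y = 10.0, 0.01, 0.1    # max uptake, half-saturation, g biomass per mmol
dt = 0.1                           # time step (h)
for _ in range(200):
    v = v_max * R / (k_m + R)      # Michaelis-Menten bound on the uptake flux
    v = min(v, R / (B * dt))       # cannot take up more resource than remains
    # "FBA optimum" of the toy network: growth rate = yield * uptake flux
    B, R = B + y * v * B * dt, max(R - v * B * dt, 0.0)
```

Run to depletion, the box ends with B = B0 + y·R0; in COMETS, biomass and metabolites additionally diffuse between boxes after each such update.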
Biomass and resources each diffuse with specific diffusion coefficients. Michaelis–Menten kinetics constrain resource uptake. In addition, we used simplified models in which the genome-scale metabolic model was replaced with a set of differential equations. The simplified model describes a reaction-diffusion system in which bacteria grow under Monod kinetics. Bacteria and resource spread via diffusion, with the diffusion coefficient for the bacteria, DB, being much smaller than that for the resource, DR (Eqs. 1):

$$\frac{\partial B}{\partial t} = D_B \nabla^2 B + f(B,R), \qquad (1a)$$

$$\frac{\partial R}{\partial t} = D_R \nabla^2 R - \gamma f(B,R), \qquad (1b)$$

$$f(B,R) = B\,\frac{\mu_{\max} R}{k_{\rm m} + R}. \qquad (1c)$$

The first term in the differential equations describes diffusion (\(\nabla^2\) is the two-dimensional Laplace operator), the second term describes conversion of the resource into biomass. γ sets the resources used per biomass produced, and was equal to 1 unless otherwise stated. The maximum growth rate, μmax, is approached as the resource concentration R increases. The saturation concentration is set by km.

For the simple Monod simulations the "world" was a 5 cm × 5 cm square, into which 60 colonies were seeded at random locations. Resources were distributed uniformly at a concentration of 1e−6 mmol per box. These simulations were run until resources were fully consumed unless otherwise stated. The genome-scale metabolic model simulations were conducted in circular environments that were 90 mm in diameter and seeded with biomass and resources to mimic the experimental conditions. Genome-scale simulations were run for equal lengths of time as the laboratory experiments unless otherwise stated. Other simulation parameters are provided in Supplementary Table 1. Simulations were carried out using the University of Minnesota Supercomputing Institute's Mesabi cluster.
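A minimal explicit finite-difference sketch of Eqs. 1 is shown below. The grid size, parameter values, and periodic boundaries are hypothetical choices for illustration, not the paper's solver or parameter set. With γ = 1, the sum of biomass and resource is conserved, which provides a quick correctness check.

```python
# Explicit finite-difference sketch of the reaction-diffusion Monod model (Eqs. 1).
# All parameter values are hypothetical; periodic boundaries keep the bookkeeping simple.
N = 20                       # N x N lattice
dx, dt = 0.25, 0.01          # box width (cm), time step (h)
D_B, D_R = 1e-4, 5e-3        # diffusion coefficients (cm^2/h), D_B << D_R
mu_max, k_m, gamma = 0.5, 1e-3, 1.0

B = [[0.0] * N for _ in range(N)]
R = [[1.0] * N for _ in range(N)]
B[5][5] = B[14][9] = 0.01    # two seeded colonies

def laplacian(F, i, j):
    # five-point stencil with periodic wrap-around
    return (F[(i - 1) % N][j] + F[(i + 1) % N][j]
            + F[i][(j - 1) % N] + F[i][(j + 1) % N] - 4.0 * F[i][j]) / dx ** 2

def step(B, R):
    newB = [[0.0] * N for _ in range(N)]
    newR = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            f = B[i][j] * mu_max * R[i][j] / (k_m + R[i][j])                    # Eq. 1c
            newB[i][j] = B[i][j] + dt * (D_B * laplacian(B, i, j) + f)          # Eq. 1a
            newR[i][j] = R[i][j] + dt * (D_R * laplacian(R, i, j) - gamma * f)  # Eq. 1b
    return newB, newR

for _ in range(500):
    B, R = step(B, R)
```

Because the explicit scheme is only stable for small D·dt/dx², production-scale runs would use finer control of the time step; the sketch keeps the parameters safely inside that limit.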
To simulate toxicity and resource competition, we modified our simplified model to include a toxin (A), which diffuses like the resource and which is produced as biomass grows with a conversion factor λ (Eqs. 2). The toxicity of A on growth changes with A*, where higher values are less toxic:

$$\frac{\partial B}{\partial t} = D_B \nabla^2 B + f(B,R)\,g(B,A), \qquad (2a)$$

$$\frac{\partial R}{\partial t} = D_R \nabla^2 R - \gamma f(B,R)\,g(B,A), \qquad (2b)$$

$$\frac{\partial A}{\partial t} = D_R \nabla^2 A + \lambda f(B,R)\,g(B,A), \qquad (2c)$$

$$f(B,R) = B\,\frac{\mu_{\max} R}{k_{\rm m} + R}, \qquad (2d)$$

$$g(B,A) = e^{-A/A^*}. \qquad (2e)$$

To test whether toxicity improved the fit to experimental data for S. enterica grown on glucose, we parameterized the simplified model to match the genome-scale model in the absence of toxins (Supplementary Fig. 4B–E). To test the effect of toxicity on the Voronoi response, we ran simulations of model 2 for 150 h in an environment mimicking experiments with S. enterica grown on a glucose Petri dish, using λ = 0.2 mmol/g, which was conservatively set at approximately 10× less than the amount produced by E. coli during growth on glucose [44]. The model used to generate the data in Fig. 5d used A* = 0.01 mM.

In the programming language R, we used the spatstat package to find Voronoi areas with the dirichletArea function [45]. After calculation of any distance metric, colonies <5 mm from a Petri dish edge were excluded. We also used R to carry out analysis of variance (ANOVA), analysis of covariance (ANCOVA), t tests, and linear regressions as described throughout the results.

Results

Variance in colony size is context-dependent

We tested whether species and resource identity influenced the variance in colony size within monoculture plates. Approximately 60 S. enterica or E.
coli cells were grown on Petri dishes with different carbon sources, and colony areas were measured using flatbed scanners and custom software (Fig. 1a, see Methods for more detail). Within every plate/replicate and treatment we found a range of colony sizes, as seen in the example density plots of the final colony areas in Fig. 1b. Because the average colony size differed substantially across treatments (see, for example, Fig. 1b), we used the coefficient of variation of colony area at the end of the experiment (standard deviation over mean) within a plate to compare variation in colony size between treatments. Differences in media and species caused large differences in the coefficient of variation across treatments (ANOVA, F(7,32) = 18.9, p = 1.07e−9, Fig. 1c), suggesting that spatial effects were highly context-dependent.

Fig. 1 The variance in colony yields depends on the species and environment. (a) A snapshot of S. enterica colonies on LB media (left) and the yields (areas) of those colonies determined by automated image analysis (right). (b) Kernel density distributions of S. enterica colony yields grown on acetate or LB. (c) The coefficient of variation of colony yields for each species on each carbon source. Error bars are the standard error of the mean. Four to eight Petri dishes are included in each point.

Variance in colony size can be predicted with models that pair metabolism and diffusion

We tested whether the observed variations in colony size could be predicted from the interplay of intracellular metabolic mechanisms, diffusion, and colony location, by running simulations that combine genome-scale metabolic modeling with diffusion calculations. Our computational platform, COMETS, uses dynamic flux-balance analysis to predict the growth and metabolic activity of bacteria by identifying the metabolic strategy that maximizes biomass production at each time step [17, 39].
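As an aside on the per-plate statistic used above: the coefficient of variation is simply the standard deviation of final colony areas divided by their mean. A minimal sketch, with made-up colony areas rather than the paper's data:

```python
import math

# Coefficient of variation of final colony areas within one plate
# (sample standard deviation over mean); the areas below are made up.
def coefficient_of_variation(areas):
    mean = sum(areas) / len(areas)
    var = sum((a - mean) ** 2 for a in areas) / (len(areas) - 1)  # sample variance
    return math.sqrt(var) / mean

plate_acetate = [4.1, 3.9, 4.3, 4.0, 4.2]   # hypothetical areas (mm^2), low spread
plate_lb = [2.0, 6.5, 1.2, 8.3, 4.0]        # hypothetical areas (mm^2), high spread
```

Because it is dimensionless, the statistic lets plates with very different mean colony sizes be compared on one axis, as in Fig. 1c.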
Biomass and metabolites diffuse to simulate growing colonies and the resource gradients that arise as a result of microbial metabolism. Note that colony expansion is the result of both increasing biomass and diffusion [46]. Simulations were initiated with resources and colony locations that matched each experimental plate (Fig. 2a). We plotted the relative yields (yield of a colony/total yield on a Petri dish) for simulations against those for experiments (Fig. 2b). We used relative yields because the measurements of interest were the relative differences between colonies on a plate, which can be compared with relative numbers even if the specific yield measurement (area vs. biomass) differs. The relative colony sizes in simulations were well correlated with the relative colony sizes in experiments, although the predictive ability of the simulations depended on the treatment (mixed-effects linear regression with subsequent F tests, main effect of simulated yields: F(1,1479) = 2027, p < 2.2e−16, main effect of treatment: F(5,24) = 11, p = 1.3e−5, interaction: F(5,1470) = 85, p < 2.2e−16). Deviations from simulated predictions had a slope <1, meaning there was more variability in colony size in simulations than in experiments. These over-predictions were most pronounced when the carbon resource was a sugar (i.e., glucose or lactose). Below, we further explore the deviations caused by S. enterica growth on glucose. Genome-scale metabolic modeling recapitulates the variance in colony yields. a We used genome-scale metabolic modeling in the COMETS platform to test the mechanisms generating the observed variance in colony yields. The relevant genome-scale metabolic model was seeded into an environment at the sites from which colonies initiated in experiments. Dynamic flux-balance analysis calculations and subsequent metabolite and biomass diffusion were carried out in discrete time steps for a duration mimicking the laboratory experiments. 
b A comparison of the relative colony yields measured in experiments (y-axis) to the relative colony yields predicted by the COMETS simulations (x-axis) for the treatments accessible with COMETS (defined media, i.e., not LB). Each facet contains data from four to eight Petri dish experiments/simulations. The black line has slope = 1 and intercept = 0, while the blue lines and surrounding gray are linear regression lines with standard error. A high R2 suggests that the relative spatial effects are captured by the model, while a slope close to 1 suggests an accurate prediction of the amount of variance in colony yield.

Relative colony size is driven by the location of adjacent competitors

To understand how location determines colony size, we tested the explanatory power of different metrics that varied in the influence assigned to potentially competing colonies. We focused on metrics that had previously been used in the forestry or microbiology literature [33, 34, 36]. These metrics tested whether colony size could be best predicted by (i) the distance to the nearest neighbor (Fig. 3a), (ii) the sum of the inverse distances to all neighbors (Fig. 3b), (iii) the log of the sum of the squared inverse distances (Fig. 3c), or (iv) a colony's Voronoi area, which is the area on a Petri dish that is closer to a focal colony than to any other colony (Fig. 3d, see also Fig. 3e) [35].

Voronoi diagrams capture the effect of location on yield better than other distance metrics. Using the simplified differential equations model, four metrics were tested to determine which colonies interact to generate variation in colony size and to what extent. a–d show a cartoon of the measurement and the metric plotted against simulated colony yield (biomass). a The distance to the closest colony, such that the yield of the focal colony (indicated by the arrow) would be predicted from the distance to colony 1, which is closest, but no other colony would be considered.
b The sum of the inverse linear distances to every colony, such that the yield of the focal colony would be predicted by the distance to every colony, with each colony's influence inversely proportional to its distance. c Like b, but colonies become quadratically less important as distance increases. d The territory closest to a colony, described by a Voronoi diagram. Here, the focal colony's Voronoi area is shown (solid line polygon). A Voronoi diagram divides a plane into areas around colony initiation sites such that all the space in a territory is closer to its enclosed colony than to any other colony, which is accomplished by drawing perpendicular lines half-way through lines connecting a focal colony to Voronoi neighbors. e A Voronoi diagram drawn for all colony initiation sites on a Petri dish. For a focal colony (blue), its Voronoi neighbors are the green colonies. f The change in R2 over time for the different metrics vs. simulated colony biomass. The black line shows the proportion of resources left. g The hour when the R2 for Voronoi areas vs. simulated colony biomass surpasses the R2 for all other metrics, as a function of the max growth rate (μmax) used in the simulation. h The magnitude of the spatial effect (the "Voronoi response") as a function of time. Different lines are from simulations with different average Voronoi areas, which is inversely proportional to initial cell density. i The Voronoi response as a function of the maximum potential per-mass uptake. 
Changes from the default (baseline) values (Supplementary Table 1) in maximum growth rate, km, starting resource concentration, or any combination of these parameters (multiple) all have similar effects. j The Voronoi response is determined by the balance between the maximum uptake rate of a colony (x-axis) and the rate of resource diffusion (y-axis).

To abstract away species/environment-specific intracellular metabolism, we ran these tests with a simplified model that simulated biomass growth on an explicit limiting resource using Monod kinetics paired with diffusion (see Methods). We simulated conditions with a high maximum growth rate (μmax = 1/h) until all resources were used and asked which metric correlated best with colony size. While all metrics were somewhat predictive, Voronoi area was an almost perfect predictor. The high R2 of Voronoi areas suggested that under these parameters removing non-Voronoi neighbors (Fig. 3e) would have negligible effects on the size of a focal colony; this prediction was confirmed with "colony dropout" simulations (Supplementary Figure 1A). We next tested how the predictive power of Voronoi diagrams (the R2 of Voronoi area vs. colony biomass) changed over the course of growth. At the start of simulations all colonies were the same size, and then variance between colonies increased as colonies grew and drew down the resources available to them. The correlation between Voronoi area and colony size increased through time, as did the performance of Voronoi relative to other metrics (Fig. 3f, see Supplementary Figure 1B for analysis with non-parametric Spearman's ρ). Voronoi area became the best predictor of colony size once the majority of resources were consumed (Fig. 3f) and was the best predictor across multiple resource diffusion coefficients (Supplementary Figure 1C). Consistently, as maximum growth rate increased, resources were consumed faster, and the time required for Voronoi to be the best predictor decreased (Fig. 3g).
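As a concrete illustration, the simplified Monod-plus-diffusion model (Eqs. 2) can be integrated with an explicit forward-Euler scheme. The sketch below is not the paper's implementation: the grid, time step, and all parameter values are illustrative stand-ins for the Supplementary Table 1 values, boundaries are periodic rather than dish-shaped, and per-step consumption is capped at the resource present as a numerical safeguard.

```python
import numpy as np

def laplacian(u, dx):
    # Five-point stencil with periodic boundaries.
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2

def step(B, R, A, p, dt, dx):
    """One forward-Euler step of Eqs. (2a)-(2e)."""
    Rpos = np.clip(R, 0.0, None)
    f = B * p["mu_max"] * Rpos / (p["km"] + Rpos)      # Monod term, Eq. (2d)
    g = np.exp(-A / p["A_star"])                       # toxin inhibition, Eq. (2e)
    consumed = np.minimum(dt * p["gamma"] * f * g, Rpos)  # cannot take more R than present
    B = B + dt * p["D_B"] * laplacian(B, dx) + consumed / p["gamma"]
    R = R + dt * p["D_R"] * laplacian(R, dx) - consumed
    A = A + dt * p["D_R"] * laplacian(A, dx) + (p["lam"] / p["gamma"]) * consumed
    return B, R, A

# Illustrative parameters -- stand-ins, NOT the paper's Supplementary Table 1 values.
p = dict(mu_max=1.0, km=0.1, gamma=1.0, lam=0.2, A_star=0.5, D_B=5e-5, D_R=5e-3)
n, dx, dt = 64, 0.1, 0.01
rng = np.random.default_rng(0)
B, R, A = np.zeros((n, n)), np.full((n, n), 1.0), np.zeros((n, n))
for i, j in rng.integers(0, n, size=(10, 2)):          # seed ten point "colonies"
    B[i, j] = 1e-3
B0, R0 = B.sum(), R.sum()
for _ in range(2000):                                  # integrate to t = 20 h
    B, R, A = step(B, R, A, p, dt, dx)
```

Because diffusion conserves the grid totals, the scheme preserves the stoichiometry of Eqs. 2 exactly: γ times the biomass produced equals the resource consumed, and the toxin produced equals λ times the biomass produced. Setting `A_star` very large effectively removes toxicity, mimicking the toxin-free comparison.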
In addition to understanding which competitors caused differential growth (i.e., the relative performance of different metrics), we were also interested in the strength of spatial effects (i.e., the magnitude of the best metric). We defined the "Voronoi response" to measure the extent to which colonies have monopolized the resources that started in their respective territories. The Voronoi response is the slope of the line that is generated when plotting \(\frac{\mathrm{yield}_i}{\sum \mathrm{yield}}\) against \(\frac{\mathrm{Voronoi\,area}_i}{\sum \mathrm{Voronoi\,area}}\) (Supplementary Figure 1D). Note that the Voronoi response, the R2 of Voronoi area vs. colony biomass, and the yield coefficient of variation are correlated (Supplementary Figure 1E). The Voronoi response declines from a maximum of 1 towards 0 as colony size becomes less dependent on Voronoi area, which can occur because some colonies are accessing resources that originated in other territories, or because resources have not yet been fully consumed. As with the R2, the Voronoi response increased through time (Fig. 3h). When the average Voronoi area is increased, it takes longer to reach the maximum Voronoi response, but the final Voronoi response is larger, meaning colonies are able to monopolize more of the resources that start in their territory. In the other direction, extremely dense environments have small final Voronoi responses, which will approach zero as areas become extremely small (Fig. 3h). Finally, we tested how the maximum Voronoi response (when no nutrients remain) is influenced by the balance between resource uptake and diffusion. Increasing the maximum resource uptake rate through a variety of parameters increased the Voronoi response until it saturated (Fig. 3i). Increasing resource diffusion (DR) reduced the Voronoi response (Supplementary Figure 1G). The final magnitude of the Voronoi response was determined by the balance of nutrient uptake to diffusion (Fig. 3j).
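Computationally, the Voronoi response is just an ordinary least-squares slope of relative yield on relative Voronoi area. A minimal sketch, using made-up yields and territory areas for illustration (the real inputs come from the simulations and experiments):

```python
import numpy as np

def voronoi_response(yields, voronoi_areas):
    """Slope of relative yield vs. relative Voronoi area (the 'Voronoi response')."""
    y = np.asarray(yields, dtype=float)
    a = np.asarray(voronoi_areas, dtype=float)
    rel_yield = y / y.sum()          # yield_i / sum(yield)
    rel_area = a / a.sum()           # Voronoi area_i / sum(Voronoi area)
    slope, _intercept = np.polyfit(rel_area, rel_yield, 1)
    return slope

areas = np.array([4.0, 2.0, 1.0, 3.0])          # hypothetical territories
full = voronoi_response(areas * 5.0, areas)     # yields proportional to territory
none = voronoi_response(np.ones(4), areas)      # identical yields everywhere
```

A slope near 1 (`full` above) means each colony monopolized the resources that started in its territory; a slope near 0 (`none`) means colony size was decoupled from territory, as when diffusion redistributes resources or consumption is incomplete.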
While the ratio of uptake to diffusion changed the maximum Voronoi response, it did not change the relative performance: Voronoi outperformed other metrics if all resources were consumed (Supplementary Fig. 1C).

The Voronoi response varied in laboratory experiments

Voronoi area was a good predictor of colony size variation across many laboratory treatments (Fig. 4a, Supplementary Figure 2A). Voronoi area was significantly better than other metrics on rich LB media (ANOVA followed by Tukey's multiple comparisons: LB, p < 5e−4), and as good as other metrics in most other treatments. Voronoi area fared significantly worse than the inverse distance metrics for both species on acetate, and for S. enterica on glucose (ANOVA followed by Tukey's multiple comparisons: acetate, p < 1e−7; glucose p < 1e−7).

The Voronoi response changes between experimental treatments, but generally increases with the maximum colony growth rate. a The performance of the different metrics on the experimental data (R2 of the metric vs. colony areas at 150 h). b The growth curves of individual colonies (left) and change in R2 of the metrics over time (right) for representative S. enterica Petri dishes on three different media. c The superiority of the Voronoi metric (the R2 of the Voronoi metric minus the R2 of the log inverse-squared distance metric) plotted over the maximum growth rates measured in experiments, with a significant least-squares linear fit. d The Voronoi response plotted over the maximum growth rate, with a significant least-squares linear fit. The bars in a, c, d are standard error of the means calculated over Petri dishes, with four to eight Petri dishes per treatment.

The relative performance of spatial metrics in different treatments is consistent with our simulation-based findings on the effects of resource depletion (Fig. 4b and Supplementary Figure 2B, C). For example, on LB plates S.
enterica colonies reached the carrying capacity and Voronoi areas rose to outperform other metrics (Fig. 4b, compare to Fig. 3f). Conversely, on acetate S. enterica grew slowly and colonies appeared to still be growing at the end of the experiment. S. enterica on glucose violated our expectations: colony growth had plateaued, but Voronoi did not outperform other metrics (Fig. 4b). We propose an explanation for this below. The connection between resource depletion and relative metric performance is further supported by an analysis of growth rate. Consistent with simulations (Fig. 3f), in laboratory experiments faster growth rate correlated strongly with the relative superiority of Voronoi (the R2 of Voronoi minus the R2 of the log sum of inverse-squared distances metric) (Fig. 4c, linear regression, slope = 1.15, p = 5.4e−4). This correlation suggests that the variable performance of Voronoi in different treatments was at least partially explained by differences in the proportion of resource consumed by the end of the experiment. Additionally, genome-scale simulations showed incomplete resource utilization after 150 h for the substrates that caused slow growth (Supplementary Figure 3A, B, C). Experimental data also support the computational prediction that the Voronoi response (i.e., the magnitude of spatial effects) is influenced by the growth rate. The Voronoi response in laboratory experiments increased as maximum growth rate increased (Fig. 4d, linear regression, slope = 1.7, F(1,43) = 31.3, p = 1.4e−6). This is in agreement with simulations, although at least part of the small Voronoi response for low growth rate treatments might be due to residual nutrients in these treatments (see above).

Deviations from expected spatial patterns suggest production of toxic waste

S. enterica grown on glucose deviated from expected spatial patterns.
In this treatment (i) colony size variation was poorly explained by Voronoi areas (relative to other metrics) despite rapid initial growth and cessation of growth before the end of the experiment (Fig. 4a–c, Supplementary Figure 2B, C), and (ii) the treatment was the most poorly predicted by the genome-scale metabolic modeling (Fig. 2b). This led us to hypothesize that a mechanism in addition to competition for diffusing resources was at work in this treatment. S. enterica can generate potentially toxic acetate during growth on glucose, so we hypothesized that acetate accumulation arrested growth of large colonies [47,48,49]. This seemed particularly likely because our glucose concentration was in a range in which bacteria are prone to the Crabtree effect. The Crabtree effect causes fermentation to be preferred over respiration above glucose concentrations of ~8 mM, resulting in secretion of high levels of acetate even in the presence of oxygen [44]. Several lines of experimental evidence are consistent with toxic byproducts reducing colony size variation when S. enterica is grown on glucose. First, the pH indicator bromothymol blue was used to demonstrate that glucose plates became acidified by growth of S. enterica (Fig. 5a). Second, if the glucose concentration was reduced below the Crabtree threshold (~8 mM, [44]), then less acidification was detected (Fig. 5a) despite more biomass being produced (Supplementary Figure 4F). Reducing the amount of glucose, and thus acidification, also increased the response of colony size to Voronoi area (Fig. 5b, linear regression, slope = −0.012, p = 4.9e−4). Conversely, if acetate was added to glucose plates, the maximum colony size, and the Voronoi response, decreased (Supplementary Figure 4G, H).

Generation of toxic waste reduces the Voronoi response. a S.
enterica colonies grown on Petri dishes with glucose concentration below (low) or above (high) the Crabtree threshold, the glucose concentration above which acetic acid is produced during growth even in the presence of oxygen. The Petri dishes contained the pH dye bromothymol blue, which is dark green at neutral pH and becomes yellow as the medium acidifies. High glucose dishes had higher acidity despite less total growth (see Supplementary Methods for image analysis of yellow intensity, Supplementary Figure 3F for yield data). b The Voronoi response as a function of glucose concentration. Both the 8 and 16.6 mM concentrations are at or above the Crabtree threshold. c The Voronoi response after 150 h in the simplified models modified so that toxic metabolites are generated as biomass grows. The model parameter A* is a measure of the effect of toxicity on growth (see Methods/main text). The dotted line indicates the Voronoi response when toxicity is removed from the model. d Scatter plots of a simulation mimicking S. enterica grown on a glucose Petri dish, without (gray) or with (black) toxicity in the model. The x-axis is the relative biomass in simulation, and the y-axis is the relative biomass measured in the experimental Petri dish. Adding toxicity significantly improved the fit by bringing the relationship between the x and y data closer to the line with slope 1 passing through the origin (shown with the dashed line).

Finally, incorporating production of toxic waste into the differential equation model decreased the Voronoi response observed in simulations and improved the fit between experiment and simulation. Incorporating toxins into our simulations reduced the Voronoi response (Fig. 5c). We additionally tested whether toxins improved the ability of simulations to predict colony size. Incorporating toxin production into a parameterized version of the simplified model significantly improved the ability of simulations to predict observed colony sizes for S.
enterica on glucose (Fig. 5d, ANCOVA, interaction term between toxicity and simulated biomass in predicting experimental biomass, p = 7.2e−4).

Understanding quantitatively how spatial proximity affects interactions between bacterial colonies will allow us to better understand and manage microbial ecosystems. We found that the impact of location on bacterial colony size was context-dependent and strongly influenced by both species and resource identity. Encouragingly, spatially explicit, genome-scale metabolic models were able to predict much of the context-dependent variation in colony size by modeling the interaction between diffusion and intracellular metabolism. A simplified differential-equation model demonstrated that variation in colony size is driven by the size of Voronoi areas, though the relative performance of metrics changes over the course of growth. Furthermore, differences in Voronoi response are primarily driven by differences in the rate at which colonies grow and consume resources. Faster consumption causes size variation between colonies to be larger and mitigates diffusion's tendency to reduce colony size variation. These general ecological relationships serve as a useful null model from which to predict spatial effects caused by resource competition. We demonstrated the utility of this null model by identifying a toxin-mediated interaction that we did not anticipate. In summary, we provide an experimentally and theoretically grounded understanding of how location interacts with metabolism and diffusion to influence microbial interactions. Voronoi diagrams identified the competitors that drove colony variance once resources were depleted. The relative performance of Voronoi suggests that the arrangement of competing colonies (i.e., colony geometry) is an important determinant of relative growth.
This is an intuitive result, as diffusing resources are most likely to be consumed by the closest colony and Voronoi diagrams demarcate the resource-containing area closest to each colony. The superiority of this metric is time-dependent, however: Voronoi diagrams only outperform other metrics once most resources have been consumed. Much of the variation in relative performance across experimental treatments is consistent with growth rate-specific differences in the extent of resource depletion by the end of the experiment. Simulations demonstrated that after resource depletion Voronoi diagrams outperform other metrics at explaining how the location of competitors determines relative microbial growth across treatments. The lack of resource depletion in experiments can explain situations where genome-scale simulations are good predictors of relative colony size, but Voronoi diagrams are not, for example, with E. coli grown on acetate. Metabolism influenced the importance of location by altering the balance of nutrient uptake to diffusion. Nutrients that generated rapid growth increased the variance in final colony size, and led to a larger response to location (i.e., Voronoi response). The Voronoi response also became larger as colonies were further apart on average. It is important to note that decreasing the magnitude of spatial effects is not equivalent to decreasing competition. The average colony size and total biomass on a plate are equivalent whether competition is local or global (assuming all resources are consumed). However, if the balance of uptake and diffusion causes interactions to be local, spatial location matters, and some colonies will grow much larger than others. The success of genome-scale models in predicting the effects of competition suggests that it will be possible to quantitatively predict microbial metabolic interactions in complex, spatially structured environments.
Genome-scale metabolic models can be generated directly from sequence data using known gene–protein reaction associations [40, 42]. High-throughput methods to generate models from sequence data are improving [50,51,52,53,54,55], and therefore spatially explicit tools such as COMETS may be increasingly useful to generate quantitative predictions of the effect of location on growth and microbial interactions. However, it should be noted that COMETS did not perfectly predict all observed variation even in our simplified system, and incorporating non-metabolic interactions such as toxicity into genome-scale modeling frameworks will likely be important. Voronoi areas and genome-scale simulations provide null models for size variance between competing colonies, and departures from the null suggest additional interactions are occurring. S. enterica deviated from model expectations when growing on glucose, leading us to suspect that toxins were altering interactions. The production of, and response to, organic acids by S. enterica are well established [48]; however, we did not anticipate that they would be sufficient to halt growth on our plates. Indeed, E. coli also produces acidic waste, but did not deviate from expectations on glucose. This lack of deviation is likely because our E. coli is less sensitive to acetate than S. enterica (Supplementary Figure 4A). More broadly, the detection of toxicity in our system serves as an example of how quantitative analysis can aid in the identification of species interactions. Different biological phenomena likely cause specific departures from the null expectations, akin to the reduced variance caused by waste accumulation. Further research will be aimed at finding spatial signatures of biological phenomena in microbial systems. A quantitative understanding of how location mediates microbial interactions has important consequences for understanding and harnessing microbial evolutionary ecology.
It is well established that spatial structure can alter the interactions between microbes [4] and plays a critical role in determining health outcomes [7]. Quantifying how space mediates interactions will allow for more rigorous understanding of community composition, and improve prediction of dynamics such as competitive exclusion. Additionally, understanding organisms' interaction strengths is critical for understanding the evolution of microbial traits. For example, it was recently demonstrated that the level of antibiotic secretion can be explained by the relative strength of interaction with sensitive and resistant competitors [10]. As technology which allows for fine-scale placement of cells matures [56,57,58], we can create spatial arrangements that maximize selection of competitive phenotypes of interest. As we strive to move beyond descriptions of microbial diversity to explanations and management of diversity, it will be critical to develop quantitative understanding of microbial interactions.

References

1. Arrigo KR. Marine microorganisms and global nutrient cycles. Nature. 2005;437:343–8.
2. Cho I, Blaser MJ. The human microbiome: at the interface of health and disease. Nat Rev Genet. 2012;13:260–70.
3. Connell JL, Kim J, Shear JB, Bard AJ, Whiteley M. Real-time monitoring of quorum sensing in 3D-printed bacterial aggregates using scanning electrochemical microscopy. Proc Natl Acad Sci USA. 2014;111:18255–60.
4. Nadell CD, Drescher K, Foster KR. Spatial structure, cooperation, and competition in biofilms. Nat Rev Microbiol. 2016;14:589–600.
5. Foster KR, Bell T. Competition, not cooperation, dominates interactions among culturable microbial species. Curr Biol. 2012;22:1845–50.
6. Mitri S, Foster KR. The genotypic view of social interactions in microbial communities. Annu Rev Genet. 2013;47:247–73.
7. Stacy A, McNally L, Darch SE, Brown SP, Whiteley M. The biogeography of polymicrobial infection. Nat Rev Microbiol. 2016;14:93–105.
8. David LA, Weil A, Ryan ET, Calderwood SB, Harris JB, Chowdhury F, et al. Gut microbial succession follows acute secretory diarrhea in humans. MBio. 2015;6:1–14.
9. Shade A, Peter H, Allison SD, Baho DL, Berga M, Bürgmann H, et al. Fundamentals of microbial community resistance and resilience. Front Microbiol. 2012;3:1–19.
10. Gerardin Y, Springer M, Kishony R. A competitive trade-off limits the selective advantage of increased antibiotic production. Nat Microbiol. 2016;1:16175.
11. Dechesne A, Or D, Smets BF. Limited diffusive fluxes of substrate facilitate coexistence of two competing bacterial strains. FEMS Microbiol Ecol. 2008;64:1–8.
12. Allen B, Gore J, Nowak MA. Spatial dilemmas of diffusible public goods. Elife. 2013;2013:1–11.
13. Gralka M, Stiewe F, Farrell F, Möbius W, Waclaw B, Hallatschek O, et al. Allele surfing promotes microbial adaptation from standing variation. Ecol Lett. 2016;19:889–98.
14. Greig D, Travisano M. Density-dependent effects on allelopathic interactions in yeast. Evolution. 2008;62:521–7.
15. Hansen SK, Rainey PB, Haagensen JAJ, Molin S. Evolution of species interactions in a biofilm community. Nature. 2007;445:533–6.
16. Harcombe H, Betts A, Shapiro JW, Marx CJ. Adding biotic complexity alters the metabolic benefits of mutualism. Evolution. 2016;70:1871–81.
17. Harcombe WR, Riehl WJ, Dukovski I, Granger BR, Betts A, Lang AH, et al. Metabolic resource allocation in individual microbes determines ecosystem interactions and spatial dynamics. Cell Rep. 2014;7:1104–15.
18. Allison SD. Cheaters, diffusion and nutrients constrain decomposition by microbial enzymes in spatially structured environments. Ecol Lett. 2005;8:626–35.
19. Kerr B, Riley MA, Feldman MW, Bohannan BJM. Local dispersal promotes biodiversity in a real-life game of rock-paper-scissors. Nature. 2002;418:171–4.
20. Kim HJ, Boedicker JQ, Choi JW, Ismagilov RF. Defined spatial structure stabilizes a synthetic multispecies bacterial community. Proc Natl Acad Sci USA. 2008;105:18188–93.
21. Mitri S, Clarke E, Foster KR. Resource limitation drives spatial organization in microbial groups. ISME J. 2015;10:1–12.
22. Penn AS, Conibear TCR, Watson RA, Kraaijeveld AR, Webb JS. Can Simpson's paradox explain cooperation in Pseudomonas aeruginosa biofilms? FEMS Immunol Med Microbiol. 2012;65:226–35.
23. Chao L, Levin BR. Structured habitats and the evolution of anticompetitor toxins in bacteria. Proc Natl Acad Sci USA. 1981;78:6324–8.
24. Gandhi SR, Yurtsev EA, Korolev KS, Gore J. Range expansions transition from pulled to pushed waves as growth becomes more cooperative in an experimental microbial population. Proc Natl Acad Sci USA. 2016;113:6922–7.
25. Persat A, Nadell CD, Kim MK, Ingremeau F, Siryaporn A, Drescher K, et al. The mechanical world of bacteria. Cell. 2015;161:988–97.
26. Pirt SJ. A kinetic study of the mode of growth of surface colonies of bacteria and fungi. J Gen Microbiol. 1967;47:181–97.
27. Cole JA, Kohler L, Hedhli J, Luthey-Schulten Z. Spatially-resolved metabolic cooperativity within dense bacterial colonies. BMC Syst Biol. 2015;9:15.
28. Hallatschek O, Hersen P, Ramanathan S, Nelson DR. Genetic drift at expanding frontiers promotes gene segregation. Proc Natl Acad Sci USA. 2007;104:19926–30.
29. Hallatschek O, Nelson DR. Life at the front of an expanding population. Evolution. 2010;64:193–206.
30. Korolev KS, Muller MJ, Karahan N, Murray AW, Hallatschek O, Nelson DR. Selective sweeps in growing microbial colonies. Phys Biol. 2012;9:026008.
31. Momeni B, Brileya KA, Fields MW, Shou W. Strong inter-population cooperation leads to partner intermixing in microbial communities. Elife. 2014;3:e02945.
32. Stacy A, Everett J, Jorth P, Trivedi U, Rumbaugh KP, Whiteley M. Bacterial fight-and-flight responses enhance virulence in a polymicrobial infection. Proc Natl Acad Sci USA. 2014;111:7819–24.
33. Tome M, Burkhart HE. Distance-dependent competition measures for predicting growth of individual trees. Forest Sci. 1989;35:816–31.
34. Guillier L, Pardon P, Augustin JC. Automated image analysis of bacterial colony growth as a tool to study individual lag time distributions of immobilized cells. J Microbiol Methods. 2006;65:324–34.
35. Okabe A, Boots B, Sugihara K, Nok Chiu S. Spatial tessellations: concepts and applications of Voronoi diagrams. 2nd ed. New York, NY: Wiley; 2000.
36. Lloyd DP, Allen RJ. Competition for space during bacterial colonization of a surface. J R Soc Interface. 2015;12:20150608.
37. Germerodt S, Bohl K, Lück A, Pande S, Schröter A, Kaleta C, et al. Pervasive selection for cooperative cross-feeding in bacterial communities. PLoS Comput Biol. 2016;12:1–21.
38. Hibbing ME, Fuqua C, Parsek MR, Peterson SB. Bacterial competition: surviving and thriving in the microbial jungle. Nat Rev Microbiol. 2010;8:15–25.
39. Mahadevan R, Edwards JS, Doyle FJ III. Dynamic flux balance analysis of diauxic growth in Escherichia coli. Biophys J. 2002;83:1331–40.
40. Orth JD, Thiele I, Palsson BØ. What is flux balance analysis? Nat Biotechnol. 2010;28:245–8.
41. Raghunathan A, Reed J, Shin S, Palsson B, Daefler S. Constraint-based analysis of metabolic capacity of Salmonella typhimurium during host-pathogen interaction. BMC Syst Biol. 2009;3:38.
42. Orth JD, Conrad TM, Na J, Lerman JA, Nam H, Feist AM, et al. A comprehensive genome-scale reconstruction of Escherichia coli metabolism. Mol Syst Biol. 2011;7:535.
43. Delaney NF, Kaczmarek ME, Ward LM, Swanson PK, Lee MC, Marx CJ. Development of an optimized medium, strain and high-throughput culturing methods for Methylobacterium extorquens. PLoS ONE. 2013;8:e62957.
44. Luli GW, Strohl WR. Comparison of growth, acetate production, and acetate inhibition of Escherichia coli strains in batch and fed-batch fermentations. Appl Environ Microbiol. 1990;56:1004–11.
45. Baddeley A, Rubak E, Turner R. Spatial point patterns: methodology and applications with R. London: Chapman & Hall/CRC Press; 2015.
46. Murray JD. Mathematical biology, I: An introduction. 3rd ed. New York: Springer-Verlag; 2002.
47. Rhee M, Lee S, Dougherty RH, Kang D. Antimicrobial effects of mustard flour and acetic acid against Escherichia coli O157:H7, Listeria monocytogenes, and Salmonella ent. Appl Environ Microbiol. 2003;69:2959–63.
48. Wolfe AJ. The acetate switch. Microbiol Mol Biol Rev. 2005;69:12–50.
49. Cappuyns AM, Bernaerts K, Vanderleyden J, Van Impe JF. A dynamic model for diauxic growth, overflow metabolism, and AI-2-mediated cell-cell communication of Salmonella typhimurium based on systems biology concepts. Biotechnol Bioeng. 2009;102:280–93.
50. Agren R, Liu L, Shoaie S, Vongsangnak W, Nookaew I, Nielsen J. The RAVEN Toolbox and its use for generating a genome-scale metabolic model for Penicillium chrysogenum. PLoS Comput Biol. 2013;9:e1002980.
51. Feist AM, Herrgård MJ, Thiele I, Reed JL, Palsson BO. Reconstruction of biochemical networks in microbial organisms. Nat Rev Microbiol. 2009;7:129–43.
52. Henry CS, Dejongh M, Best AA, Frybarger PM, Linsay B, Stevens RL. High-throughput generation, optimization and analysis of genome-scale metabolic models. Nat Biotechnol. 2010;28:969–74.
53. Krumholz EW, Libourel IGL. Sequence-based network completion reveals the integrality of missing reactions in metabolic networks. J Biol Chem. 2015;290:19197–207.
54. Overbeek R, Olson R, Pusch GD, Olsen GJ, Davis JJ, Disz T, et al. The SEED and the Rapid Annotation of microbial genomes using Subsystems Technology (RAST). Nucleic Acids Res. 2014;42:206–14.
55. Thiele I, Palsson BØ. A protocol for generating a high-quality genome-scale metabolic reconstruction. Nat Protoc. 2010;5:93–121.
56. Connell JL, Ritschdorff ET, Whiteley M, Shear JB. 3D printing of microscopic bacterial communities. Proc Natl Acad Sci USA. 2013;110:18380–5.
57. Ferris CJ, Gilmore KG, Wallace GG, In Het Panhuis M. Biofabrication: an overview of the approaches used for printing of living cells. Appl Microbiol Biotechnol. 2013;97:4243–58.
58. Xu T, Petridou S, Lee EH, Roth EA, Vyavahare NR, Hickman JJ, et al. Construction of high-density bacterial colony arrays and patterns by the ink-jet method.
Biotechnol Bioeng. 2004;85:29–33. The authors thank S Zhuang for advice on the scanning technique, B Adamowicz for help with experiments, and the UMN theory group for useful discussions. J Chacón was funded by the Biocatalysis Initiative through UMN. Department of Ecology, Evolution and Behavior, University of Minnesota, St. Paul, MN, USA Jeremy M. Chacón & William R. Harcombe BioTechnology Institute, University of Minnesota, St. Paul, MN, USA Living Systems Institute, University of Exeter, Exeter, UK Wolfram Möbius Physics and Astronomy, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, UK Jeremy M. Chacón William R. Harcombe Correspondence to William R. Harcombe. Supplementary Fig. 1 Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, which permits any non-commercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. If you remix, transform, or build upon this article or a part thereof, you must distribute your contributions under the same license as the original. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/. Chacón, J.M., Möbius, W. & Harcombe, W.R. The spatial and metabolic basis of colony size variation. ISME J 12, 669–680 (2018). 
https://doi.org/10.1038/s41396-017-0038-0 Revised: 11 August 2017. The ISME Journal (ISME J) ISSN 1751-7370 (online) ISSN 1751-7362 (print)
TWI514767B - A data-driven charge-pump transmitter for differential signaling - Google Patents

A data-driven charge-pump transmitter for differential signaling

Application TW101136887A. Inventors: John W Poulton, Thomas Hastings Greer, William J Dally. 2012-10-05: Application filed by Nvidia Corp. 2013-08-01: Publication of TW201332290A. 2015-12-21: Publication of TWI514767B.

SUMMARY OF THE INVENTION

The present invention relates generally to transmitting signals between discrete integrated-circuit devices, and more particularly to a differential signaling technique that uses a data-driven switched-capacitor transmitter or a bridged charge-pump transmitter. A single-ended signaling system uses one signal conductor per bit stream to carry data from one integrated-circuit component (chip) to another. Differential signaling systems, in contrast, require two signal conductors per bit stream, so single-ended signaling is usually assumed to have an advantage when the number of external pins and signal conductors is limited by the package. In practice, however, a single-ended signaling system requires more than one signal conductor per channel, and more circuitry. The current that flows from the transmitter to the receiver must return to the transmitter to complete the circuit, and in a single-ended signaling system the return current flows over a shared set of conductors, typically the power-supply terminals. To keep the return current close to the signal conductor, the shared return terminals are usually implemented as physical planes in the package, chip carrier, or printed-circuit board, so that the signal conductors form striplines or microstrips. As a result, a single-ended signaling system requires more than N pins and conductors to carry N bit streams between chips; the overhead is typically about 10-50%.
A single-ended signaling system requires a reference voltage at the receiver so that the receiver can distinguish the two signal levels representing "0" and "1". A differential signaling system needs no reference voltage: the receiver simply compares the voltages on the two symmetric conductors of the differential pair to recover the data. There are many ways to establish a reference voltage for a single-ended signaling system, but it is generally difficult to guarantee that the transmitter and receiver agree on its value, and such agreement is required for the receiver to interpret the transmitted signals consistently. A single-ended signaling system also consumes more power than an equivalent differential system for a given signal-to-noise ratio. With a resistively terminated transmission line, a single-ended transmitter must drive a current of +V/R0 to establish a voltage V above the reference at the receiver for a "1", and sink a current of -V/R0 to establish a voltage V below the reference for a "0", where R0 is the termination resistance. The system therefore consumes 2V/R0 of current to establish the desired signal at the receiver. With differential signaling, by contrast, the transmitter needs only ±V/2R0 of drive current to establish the same voltage V across the receiver's termination, because of the symmetric pair of signal conductors; a differential signaling system draws only V/R0 from the power supply. Even with a receiver reference voltage perfectly matched to the transmitter, then, a single-ended signaling system has only half the power efficiency of a differential one.
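The power comparison above can be made concrete with a back-of-the-envelope calculation. The values (V = 200 mV, R0 = 50 Ω) are illustrative assumptions, not figures from the patent:

```python
# Single-ended vs. differential supply current, per the relations above.
V = 0.2    # desired signal amplitude at the receiver (volts), assumed
R0 = 50.0  # termination resistance (ohms), assumed

# Single-ended: +V/R0 for a "1", -V/R0 for a "0" -> 2V/R0 drawn overall
i_single = 2 * V / R0

# Differential: +/- V/(2*R0) on each conductor -> V/R0 drawn from the supply
i_diff = V / R0

print(f"single-ended supply current: {i_single * 1e3:.1f} mA")
print(f"differential supply current: {i_diff * 1e3:.1f} mA")
print(f"ratio: {i_single / i_diff:.1f}x")
```

The factor-of-two ratio is exactly the halved power efficiency claimed in the text.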
Finally, single-ended systems are more susceptible to externally coupled noise than differential systems. If noise couples electromagnetically into a signal conductor of a single-ended system, the resulting voltage arrives at the receiver as noise that cannot be removed, so the noise budget of the signaling system must account for all such sources. This kind of coupling usually comes from an adjacent line carrying another single-ended signal, and is called crosstalk. Because crosstalk is proportional to the signal voltage level, it cannot be overcome simply by raising the signal level. In differential signaling, the two symmetric signal conductors can be routed close to each other between a transmitter and a receiver, so that noise couples into both conductors nearly symmetrically. Many external noise sources therefore affect the two lines approximately equally, and this common-mode noise can be rejected at a receiver whose differential gain is higher than its common-mode gain. There is therefore a need in the art for a signaling technique that mitigates the reference-voltage problem of single-ended signaling, reduces the common impedance of the signal return path and the crosstalk it causes, and reduces the power consumption of a single-ended signaling system. FIG. 1A shows an exemplary prior-art single-ended signaling system 100, sometimes referred to as a "pseudo-open-drain" (PODL) system, which illustrates the reference-voltage problem. The single-ended signaling system 100 includes a transmitting device 101 and a receiving device 102. The transmitting device 101 operates by drawing a current Is from the power supply when transmitting a "0", and drawing no current when transmitting a "1" (allowing the termination resistors R0 and R1 to pull the signal high to Vdd).
To develop a signal of magnitude |V| at the receiving device 102, the signal swing must be 2V, so the drive current is 2V/(R0/2) when driving a "0" and zero otherwise. Averaged over an equal number of "1"s and "0"s, the system draws 2V/R0 from the power supply. The signal at the input of the receiving device 102 swings from Vdd ("1") down to Vdd - 2V ("0"), so to discriminate the received data the receiving device 102 requires a reference voltage Vref = Vdd - V. There are three ways to generate this reference voltage, shown in FIGS. 1A, 1B, and 1C. As shown in FIG. 1A, an external reference voltage Vref is generated by a resistor network located adjacent to the receiving device. The external reference voltage is brought into the receiving device via a dedicated pin 103 and distributed to the set of receivers that share it. The first problem with the external-reference technique of FIG. 1A is that the external reference voltage is developed across the power-supply terminals Vdd and GND by the external resistors R2a and R2b, so it cannot track the voltage developed by the current sources in the transmitting device 101, which are entirely independent of R2a and R2b. A second problem is that the power-supply voltage at the receiving device 102 can differ from that at the transmitting device 101, because the supply networks feeding the two communicating chips have different impedances and the two chips draw different, varying currents. A third problem is that noise coupled into any of the transmit lines 105 connecting the transmitting device 101 to the receiving device 102 does not couple into the reference voltage, so the signaling system must budget for the worst-case noise voltage on the transmit lines 105.
A fourth problem is that the voltage between the external supply terminals Vdd and GND differs from that of the internal power-supply network within the receiving device 102, again because of supply impedance. In addition, the single-ended signaling system 100 is configured such that the currents in the shared supply terminals are data-dependent. Any data-dependent noise introduced at the inputs of the internal receiver amplifiers in the receiving device 102 therefore differs from the noise on the externally supplied, shared reference voltage that is also input to those amplifiers. FIG. 1B shows an exemplary prior-art single-ended signaling system 120 that uses an internal reference voltage. An internal reference voltage Vref can reduce these noise problems relative to the single-ended signaling system 100, which uses an external reference: the signaling system 120 tracks the reference voltage level of the transmitting device 121 more closely. A scaled replica of the transmitter is included in the receiver circuit of the receiving device 122, generating an internal Vref related to the termination resistance and the transmitter current Is. Because the internal reference voltage is generated relative to the internal power-supply network, it does not suffer the supply-noise problems of the external reference of FIG. 1A. However, the noise-coupling problem described for the external-reference method of FIG. 1A remains. Moreover, because the current source (Is/2) used to generate the internal reference voltage is on a different chip than the current sources in the transmitting device 121, the current sources in the receiving device 122 may not track those in the transmitting device 121.
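The level arithmetic shared by all three reference schemes (a swing of 2V below Vdd and a threshold Vref = Vdd - V centered between the levels) can be checked in a few lines. The numeric values here are illustrative assumptions, not from the patent:

```python
# Signal levels in the PODL scheme described above.
Vdd = 1.0   # supply voltage (volts), assumed
V = 0.2     # half the signal swing at the receiver (volts), assumed

level_one = Vdd            # "1": terminators pull the line up to Vdd
level_zero = Vdd - 2 * V   # "0": transmitter sinks current, line drops by 2V
vref = Vdd - V             # receiver threshold

# the threshold sits exactly midway between the two received levels
print(f'"1": {level_one:.2f} V  "0": {level_zero:.2f} V  Vref: {vref:.2f} V')
```

Any mismatch between the Vref actually generated and this midpoint eats directly into the noise margin, which is why the tracking problems above matter.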
FIG. 1C shows an exemplary prior-art single-ended signaling system 130 that uses an accompanying reference voltage Vref. Because the accompanying reference voltage is generated in the transmitting device 131 by a scaled replica transmitter, and is referenced to the same internal supply network as the data transmitters in the transmitting device 131, tracking between the accompanying reference voltage and the signal voltages is improved. The accompanying reference voltage therefore tracks process-voltage-temperature variation of the transmitting device 131 reasonably well. It is sent from the transmitting device 131 to the receiving device 132 on a line routed parallel to, and as nearly identical as possible to, the transmit lines 135 that carry the data. External noise coupled into the system, including some components of power-supply noise, can then be rejected, because it appears as common-mode noise between the accompanying reference voltage and any given signal on the transmit lines 135. The rejection is not perfect, however, because the accompanying reference sees a different termination impedance than a data signal at the receiving device 132: since the accompanying reference must fan out to a large number of receivers, the capacitance on the Vref pin is always greater than that on a typical signal pin, so the reference is low-pass filtered relative to a data signal. FIG. 2A shows current flow in a prior-art single-ended signaling system 200 in which the ground plane serves as the common signal return conductor, illustrating the return-impedance problem mentioned above. As shown in FIG. 2A, the single-ended signaling system 200 transmits a "0" by sinking current at the transmitting device 201. Half of this current is drawn from the signal conductor (signal current flow 204), while the other half, transmitter current flow 203, flows through the terminator of the transmitting device 201. In this example the return current flows on the ground (GND) plane: if the signal conductors are referenced to the ground plane and to no other supply, electromagnetic coupling between the signal and the ground plane causes an image current to flow on the ground plane directly beneath the signal conductor. To achieve a 50/50 current split at the transmitter, a path for the transmitter's local current is provided by an internal bypass capacitor 205 in the transmitting device 201; that is, transmitter current flow 203 returns to ground through the terminating resistor 206. Signal current 204 returns to the receiving device 202 via the ground plane and flows into the receiving device 202 through its bypass capacitor 207 and internal termination resistor 208. FIG. 2B shows current flow in the prior-art single-ended signaling system 200, with the ground plane as the common signal return conductor, when a "1" is transmitted by the transmitting device 201 to the receiving device 202. The current source in the transmitting device 201 is turned off (and not shown), and the termination resistor 206 in the transmitting device 201 pulls the line high. Again the return current flows on the ground plane, so bypass capacitor 215 is needed in the transmitting device 201 to carry signal current flow 214. The current path in the receiving device 202 is the same as when transmitting a "0", as shown in FIG. 2A, except that the direction of current flow is reversed. There are several problems with the schemes shown in FIGS. 2A and 2B.
First, if the impedance of bypass capacitors 205 and 215 is not low enough, some of the signal current flows in the Vdd network. Any such diverted current must somehow recombine with the image current in the ground network and reach the ground plane, flowing through external bypass capacitors and the shunt impedance of the power supply itself. Second, the return current flows through the impedance 211 of the shared ground network at the transmitting device 201 and the shared ground impedance 212 at the receiving device 202. Because ground is a common return path, the signal current develops a voltage across ground network impedances 211 and 212; at the receiving device 202, this voltage appears as noise in adjacent signal paths, a direct source of crosstalk. Third, if the shared ground return pin is located some distance from the signal pin whose return path it provides, an inductance is associated with the resulting current loop, raising the effective ground impedance and increasing the crosstalk between signals that share the ground pin. This inductance is also in series with the terminator and causes reflections on the transmit channels, yet another noise source. Finally, the current flows shown in FIGS. 2A and 2B are the transient flows that occur when a data edge is transmitted. The steady-state flows are quite different, because in both cases the current must flow through the power supply over both the Vdd and ground networks. Since the steady-state current path differs from the transient path, there is a transition between the two conditions during which transient current flows in both the Vdd and ground networks, dropping voltage across the supply impedances and generating still more noise.
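The second problem above, return current through a shared ground impedance, has a simple worst-case magnitude. A minimal sketch, with all component values as illustrative assumptions rather than figures from the patent:

```python
# Ground-bounce crosstalk: return current through a shared ground
# impedance develops a voltage seen by every neighboring signal.
Z_gnd = 0.5    # shared ground-network impedance incl. pin inductance (ohms), assumed
I_sig = 0.004  # per-line signal return current (amps), assumed
n_lines = 8    # aggressor lines sharing the same ground return, assumed

# worst case: all aggressors switch in the same direction at once
v_xtalk = n_lines * I_sig * Z_gnd
print(f"worst-case ground-bounce crosstalk: {v_xtalk * 1e3:.0f} mV")
```

Even these modest assumed values produce tens of millivolts, a large bite out of a signal swing of a few hundred millivolts, which is why the patent treats the shared return impedance as a first-order problem.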
One might suppose it preferable to use the Vdd network to carry the return currents, since the termination resistors connect to that network. This choice does not solve the underlying problem, however. Bypass capacitors 205 and 215 are still needed to steer the transient signal current, and the transient condition still differs from the steady state, so there is still a shared supply impedance producing crosstalk, as well as voltage noise from data transitions. The underlying problem has two aspects. First, the shared supply impedance is a source of crosstalk and supply noise. Second, splitting the signal current between the two supplies makes it difficult to keep the return current close to the signal current through the channel, which degrades termination and causes reflections. In the single-ended signaling system 200, the current drawn from the power supply is data-dependent. When a "0" is transmitted, the transmitter draws a current Is. Half of this current flows from the power supply through the terminator of the receiving device 202, back along the signal line, to ground through the current source in the transmitting device 201, and from there back to the supply. The other half flows from the power supply through the Vdd network and terminator of the transmitting device 201, then through the current source, and back via the ground network. When a "1" is transmitted, no steady-state current flows through the power-supply network.
Therefore, as the data toggles between "1" and "0", the peak-to-peak current in the Vdd and ground networks of the transmitting device 201 is twice the transmit current, and it doubles in the receiving device 202 as well; this varying current develops voltage noise on the internal power supplies of both the transmitting device 201 and the receiving device 202 by dropping across the supply impedances. When all of the data pins sharing a common set of Vdd/ground terminals switch at once, the noise across the common impedances adds, and its magnitude comes directly out of the signaling noise budget. Combating this noise is difficult and expensive: reducing the supply impedances typically requires providing more power and ground pins and/or adding more metal resources to the chip, and improving on-chip bypass costs area, for example in large thin-oxide capacitors. One way to solve all three problems, the reference voltage, the return impedance, and the power-supply noise, is to use differential signaling. With differential signaling the reference problem does not exist; the symmetric second transmit line carries all of the return current, so there is no return-impedance problem; and the power-supply current is nearly constant, independent of the data being transmitted. However, differential signaling requires twice as many signal pins as single-ended signaling, offset only partly by savings in power/ground pins. There is therefore a need in the art for a technique that reduces the reference-voltage problem, the signal-return-path impedance, and the power-supply noise of single-ended signaling, while also providing differential signaling's rejection of common-mode noise. One embodiment of the present invention provides a technique for transmitting signals over differential transmit-line pairs.
A differential transmitter combines a DC-to-DC converter (which includes a capacitor) with a 2:1 multiplexer to drive a pair of single-ended transmit lines. One transmit line of each differential pair is driven high while the other transmit line of the pair is pulled low through the ground plane, minimizing the noise generated between different differential transmit pairs. Various embodiments of the present invention include a transmitter circuit comprising a capacitor pre-charge sub-circuit and a discharge-and-multiplexer sub-circuit. The capacitor pre-charge sub-circuit includes a first capacitor configured to be precharged to a supply voltage during a positive phase of a clock and a second capacitor configured to be precharged to the supply voltage during a negative phase of the clock. The discharge-and-multiplexer sub-circuit is configured to couple the first capacitor to a first one of the first and second transmit lines during the negative phase of the clock to drive the first transmit line, and to couple the second capacitor to the first transmit line of the differential pair during the positive phase of the clock to drive it. A benefit of the mechanism disclosed herein is that differential signaling rejects common-mode noise sources. In the following description, numerous specific details are set forth to provide a more complete understanding of the invention. It will be apparent to those skilled in the art, however, that the invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention. A single-ended signaling system can also be constructed so that one of the power-supply networks serves simultaneously as the common signal return conductor and the common reference voltage.
Although any supply network, or even a newly introduced network at a voltage level not used as a power supply, could be configured as the common signal return conductor and common reference voltage, the ground terminal is preferred, as explained further herein. Thus, although the following paragraphs describe ground-referenced signaling, the same techniques can be applied to a signaling system referenced to the positive supply network (Vdd) or to some newly introduced common terminal. FIG. 3A illustrates a ground-referenced single-ended signaling system 300 according to the prior art. For the ground-referenced single-ended signaling system 300 to work properly, the transmitting device 301 must drive a pair of voltages ±Vline that are symmetric about ground, shown as symmetric voltage pair 305. It uses a switched-capacitor DC-DC converter to generate two signal supply voltages ±Vs, and a conventional transmitter then drives ±Vline onto the transmit line 307. ±Vs are generated from the "real" power supply of the device, whose voltage is assumed to be much higher; for example, the supply voltage to the device may be approximately 1 V while ±Vs is approximately ±200 mV. In the ground-referenced signaling system 300, the ground (GND) plane 304 is the only reference plane to which the transmit conductors refer. At the receiving device 302, the terminating resistor Rt returns to a common connection to the GND plane, so the transmit current cannot flow back to the transmitting device 301 on any other conductor, avoiding the current-splitting problem illustrated in FIGS. 2A and 2B. Referring back to FIG. 3A, the line voltage magnitude |Vline| is usually smaller than the transmit supply voltage |Vs|.
For example, if the transmitting device 301 uses a self-source-terminated voltage-mode transmitter, the transmitter (conceptually) comprises a pair of data-driven switches in series with a termination resistor. When the transmitter is impedance-matched to the transmit line 307, |Vline| = 0.5|Vs|. The signal reference voltage is likewise defined by the GND plane 304 and network (not shown). The GND network is typically the lowest-impedance and most robust network in a system, particularly one with multiple power supplies, so the voltage differences between points in the GND network are as small as cost constraints allow, and the reference noise is reduced to the smallest practical amplitude. Since the reference voltage is a supply terminal (GND) and is not generated internally or externally, there is no matching problem between signal and reference to be solved. In short, selecting GND 304 as the reference voltage avoids most of the problems outlined in FIGS. 1A, 1B, and 1C. The GND network is also a good choice for the common signal return conductor (or network) and common reference voltage because it avoids the power-supply series problems that arise when the two communicating devices are powered from different positive supplies. Conceptually, the reference-voltage and signal-return-path problems can both be solved by introducing two new power supplies that generate the ±Vs voltages required for symmetric signaling on the transmit line 307 (at the transmitting device 301). The main engineering challenge is therefore how to generate these ±Vs voltages efficiently from the power-supply voltage.
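The |Vline| = 0.5|Vs| relation above is just the resistive divider formed by the source termination and the line impedance. A short sketch, with Vs and Z0 as illustrative assumptions:

```python
# Source-terminated voltage-mode driver as a resistive divider: the swing
# launched onto the line is Vs * Z0 / (Rs + Z0).
def line_voltage(vs, rs, z0):
    """Voltage launched onto a line of impedance z0 by a driver with
    open-circuit swing vs and source resistance rs."""
    return vs * z0 / (rs + z0)

Vs = 0.4   # transmitter supply span (volts), assumed
Z0 = 50.0  # line characteristic impedance (ohms), assumed

# matched source (Rs = Z0) halves the swing, per the text
print(f"matched: {line_voltage(Vs, 50.0, Z0):.3f} V")
```

Mismatching the source resistance trades launch amplitude against reflections, which is why the matched case is the one the patent quotes.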
Assuming that the input offset voltage of the receiving device 302 can be cancelled and that the input-referred thermal noise is on the order of 1 mV rms, the input of the receiving device 302 needs to develop a signal of about 50 mV to overcome uncompensated offset, crosstalk, finite gain, other bounded noise sources, and thermal noise (an unbounded noise source). Assuming the transmit line 307 is equalized at the transmitter (as explained further herein) and self-source termination, levels of about ±Vs = ±200 mV must be developed between the two transmit power supplies (+Vs and -Vs). CMOS power-supply voltages scale down relatively slowly, so for next-generation technologies the core-device Vdd supply is expected to range from 0.75 to 1 V; the symmetric Vs voltages are thus a small fraction of the power-supply voltage. The most efficient regulator for converting a relatively high voltage to a low one is a switched-capacitor DC-DC converter. FIG. 3B illustrates a switched-capacitor DC-DC converter 310 arranged to generate +Vs, according to the prior art. The converter 310, which generates +Vs from a much higher Vdd supply voltage, operates in two phases. During φ1, the "flying" capacitor Cf is discharged, with both terminals of Cf driven to GND. (A flying capacitor has two terminals, neither of which is coupled directly to a power supply such as Vdd or GND.) During φ2, Cf is charged to Vdd - Vs; the charging current flows through Cf and into the load Rload. A bypass/filter capacitor Cb, with a capacitance much larger than Cf, holds the supply voltage +Vs and supplies current to Rload during the φ1 interval.
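The charge moved per cycle by the flying capacitor is Q = Cf(Vdd - Vs), so the average current delivered is f·Cf·(Vdd - Vs). A minimal sketch, with component values as illustrative assumptions:

```python
# Average output current of the two-phase switched-capacitor converter:
# charge Cf*(Vdd - Vs) is delivered once per clock cycle.
def scc_current(cf, freq, vdd, vs):
    """Average load current of the converter described above."""
    return freq * cf * (vdd - vs)

Cf = 10e-12    # flying capacitor (farads), assumed
f = 1e9        # phase-clock frequency (hertz), assumed
Vdd = 1.0      # input supply (volts), assumed
Vs = 0.2       # regulated output (volts), assumed

print(f"load current: {scc_current(Cf, f, Vdd, Vs) * 1e3:.1f} mA")
```

The proportionality to Cf, clock frequency, and (Vdd - Vs) is what makes the clock frequency the natural control variable later in the text.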
FIG. 3C illustrates a switched capacitor DC-DC converter 315 configured to generate -Vs in accordance with an embodiment of the present invention. Like the switched capacitor DC-DC converter 310, the switched capacitor DC-DC converter 315 is arranged to generate -Vs from a Vdd well above it. The switched capacitor DC-DC converter 315 that produces the negative supply -Vs is identical in topology to the switched capacitor DC-DC converter 310, but the charge switches are reconfigured. During φ1, Cf is charged to Vdd. During φ2, Cf is discharged into the load Rload. Since the left-hand terminal of Cf is more positive than the right-hand terminal, the voltage on Rload is driven more negative on each operating cycle. The current supplied to the load by the switched capacitor DC-DC converter 310 or 315 is proportional to the capacitance value Cf, the frequency of the φ1/φ2 clock, and the difference between Vdd and Vs. When the two supplies +Vs and -Vs are generated for the single-ended ground-referenced signaling system 300 using the switched capacitor DC-DC converters 310 and 315, the current drawn from each of the converters depends on the data being transmitted. When the data = 1, current is supplied from +Vs to the transmit line 307 via the transmitting device 301, and the -Vs supply is unloaded. When the data = 0, the current is supplied by the -Vs supply, and the +Vs supply is unloaded. The single-ended ground-referenced signaling system 300 has at least two significant features. The first is that if the two converters have the same efficiency, the current drawn from the Vdd supply is independent of the data value, and this feature avoids the simultaneous switching problem that is implicit in most single-ended signaling methods. In particular, the simultaneous switching problem described in conjunction with FIGS. 2A and 2B can be avoided.
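The stated proportionality can be written as an idealized average-current estimate, ignoring switching losses and parasitics. The function and the numeric values below are illustrative assumptions, not figures from the specification:

```python
def converter_current(c_f: float, f_clk: float, v_dd: float, v_s: float) -> float:
    """Idealized switched-capacitor output current: each cycle the flying
    capacitor transfers charge q = Cf*(Vdd - Vs), so Iavg = q * f_clk."""
    return c_f * f_clk * (v_dd - v_s)

i_avg = converter_current(c_f=250e-15, f_clk=10e9, v_dd=0.9, v_s=0.2)
print(i_avg)  # about 1.75 mA with these illustrative numbers
```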
Second, the two switched capacitor DC-DC converters 310 and 315 require a control system to individually hold the output voltages +Vs and -Vs fixed in the face of varying loads. It is not practical to vary the values of the flying capacitors Cf or the voltages Vdd and Vs, so the values of Cf, Vdd and Vs are usually fixed. However, the frequency of the switching clock can be used as a control variable. FIG. 3D illustrates a pair of schematic control loops for the two switched capacitor DC-DC converters 310 and 315 of FIGS. 3B and 3C in accordance with an embodiment of the present invention. Control loop 322 compares +Vs with a power supply reference voltage Vr, and if V(Vs) < V(Vr), the frequencies of φ1 and φ2 are increased to source more current into the load. Control loop 324 operates by comparing an intermediate voltage between +Vs and -Vs with GND, thereby attempting to keep the two supply voltages +Vs and -Vs symmetric about GND. While a control loop and converter system 320 can be constructed as shown in FIG. 3D, the two control loops 322 and 324 will likely be quite complex, since the control loops 322 and 324 may need to handle a 100% change in the load Rload. Controlling the ripple on +Vs and -Vs requires operating the switched capacitor Cf at a high frequency and using a large storage capacitor Cb. In fact, the clocks φ1, φ2, ψ0 and ψ1 will likely need to be generated in multiple phases, each phase driving one of a plurality of switching capacitors operating on different phases. All in all, this solution requires significant complexity and consumes a large area in a circuit on a die. The voltage levels of the single-ended signal can instead be generated by combining a transmitter and the switched capacitor DC-DC power supply into a single entity, which also includes a 2:1 clocked data multiplexer, avoiding the complexity and large area of the regulated switched capacitor converter.
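The use of clock frequency as the control variable can be sketched behaviorally. This is only a bang-bang caricature of control loop 322 under invented names and step sizes; a real loop would need compensation to handle the large load swings noted above:

```python
def update_clock_frequency(f_clk, v_out, v_ref, f_step=10e6,
                           f_min=100e6, f_max=10e9):
    """Bang-bang control: if the converter output sags below the reference,
    raise the switching frequency to deliver more charge per second;
    otherwise lower it. Clamped to a plausible operating range."""
    f_clk = f_clk + f_step if v_out < v_ref else f_clk - f_step
    return min(max(f_clk, f_min), f_max)

f = 1e9
f = update_clock_frequency(f, v_out=0.19, v_ref=0.20)  # output sagging -> speed up
print(f)  # 1010000000.0
```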
Rather than operating a switched capacitor power supply at a frequency controlled by a control loop, the switched capacitor converter is driven at the data clock rate. The data is driven onto the line by controlling the charging/discharging of the flying capacitors based on the data value to be transmitted. FIG. 4A illustrates a data-driven charge pump transmitter 400 in accordance with an embodiment of the present invention. The structure of the data-driven charge pump transmitter 400 combines a clocked 2:1 data multiplexer with a charge pump DC-DC converter. The data-driven charge pump transmitter 400 multiplexes the half-rate dual bit stream dat{1,0} into a single full-rate bit stream by transmitting dat1 when clk=HI and dat0 when clk=LO. The relationship between clkP and the data signals is shown in the clock and data signals 407. The upper half of the structure, including sub-circuits 401 and 402, is the dat1 half of the multiplexer, where dat1P = HI and dat1N = LO when dat1 = HI. When clk=LO (clkP=LO and clkN=HI), Cf1p is discharged, and both terminals of Cf1p are restored to GND. While Cf1p is discharged, Cf1n is charged to the power supply voltage. In other words, during the negative phase of the clock (when clkN = HI), the capacitors Cf1p and Cf1n are preconditioned by the flying-capacitor precharge circuits in sub-circuits 401 and 402: Cf1p is discharged and Cf1n is charged to the power supply voltage. During the positive phase of clk, when clk becomes HI (clkP = HI and clkN = LO), one of the two capacitors Cf1p or Cf1n dumps its charge into the transmit line 405 according to the value of dat1. For example, when dat1 = HI, Cf1p is charged to Vdd-Vline, and the charging current drives the transmit line 405.
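The multiplexing behavior described above — dat1 transmitted on the clk=HI half of each cycle, dat0 on the clk=LO half — can be modeled at the bit level. This little model is only a sketch of the bit ordering, not of the circuit:

```python
def mux_2to1(dat1_bits, dat0_bits):
    """Interleave two half-rate streams into one full-rate stream:
    dat1 is transmitted while clk=HI, dat0 while clk=LO."""
    full_rate = []
    for b1, b0 in zip(dat1_bits, dat0_bits):
        full_rate.append(b1)  # positive clock phase
        full_rate.append(b0)  # negative clock phase
    return full_rate

print(mux_2to1([1, 0, 1], [0, 0, 1]))  # [1, 0, 0, 0, 1, 1]
```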
The voltage level to which the transmit line 405 is driven depends on at least the values of Cf1p, Cf0p, Cf1n, and Cf0n, the Vdd value, the impedance R0, and the frequency of the clock. In one embodiment of the invention, the values of Cf1p, Cf0p, Cf1n, Cf0n, Vdd, R0 and the clock frequency are fixed at design time to drive the transmit line 405 to a voltage level of 100 mV. In the case where dat1 = HI, Cf1n does not change state and remains charged; thus, Cf1n consumes no current on the next clk=LO phase. On the other hand, if dat1 = LO, Cf1p remains discharged, and Cf1n discharges into the transmit line 405, driving the transmit line 405 to a voltage as low as -Vline. During the positive phase of clk, a 2:1 multiplexer operation is performed by the multiplexer and discharge circuitry in sub-circuits 401 and 402, selecting one of the two capacitors to drive transmit line 405 to transmit dat1. The lower half of the data-driven charge pump transmitter 400, including sub-circuits 403 and 404, performs the same action, but on the opposite phase of clk, controlled by dat0. Since there is no charge storage capacitor (apart from the parasitic capacitance associated with the output, which may include an electrostatic discharge protection device), there may be significant ripple in the voltage level of the transmit line 405. Importantly, the ripple in the voltage level will be at the bit rate at which the data is driven onto the transmit line 405. If there is significant symbol-rate attenuation in the channel between the transmitting device and the receiving device, where the channel primarily includes the transmit line 405 and the ground plane associated with the transmit line 405, substantially including the package and printed circuit board conductors, the ripple in the voltage level will be strongly attenuated.
However, even if the ripple in the voltage level is not attenuated, the ripple mainly affects the amplitude of the signal while the data value is changing on the transmit line 405, i.e., at points in time away from the point at which the data value is best detected. The data-dependent attenuation of the transmit line can be corrected using an equalizer, as described in conjunction with FIG. 4C. There are some redundant components in the data-driven charge pump transmitter 400. FIG. 4B illustrates a data-driven charge pump transmitter 410 implemented using CMOS transistors in accordance with an embodiment of the present invention, which is simplified compared to the data-driven charge pump transmitter 400 illustrated in FIG. 4A. The series transistors implementing dat·clk can each be replaced by a single transistor driven by a CMOS gate that precomputes the logical AND of dat1N and clkN, dat1N and clkP, dat0N and clkP, and dat0N and clkN. The flying-capacitor precharge sub-circuits 413 and 414 precharge the capacitors Cf1p and Cf1n during the negative phase of the clock, while the capacitors Cf0p and Cf0n are precharged during the positive phase of the clock. The multiplexer and discharge circuitry are formed by the transistors in sub-circuits 413 and 414 that are not part of the flying-capacitor precharge circuits, and they drive the transmit line 415 based on dat1 and dat0. Because the output signal is driven to a voltage below the minimum supply voltage (ground), some of the devices in the data-driven charge pump transmitter 410 operate under abnormal conditions. For example, when the transmit line 415 is driven to -Vs, the output source/drain terminals of the multiplexing transistors 416 and 417 are driven below ground, so their associated N+/P junctions are forward biased. If the signal swing is limited to a few hundred mV, this situation does not cause much difficulty.
If the forward-biased junctions become a problem, the negatively driven NMOS transistors required by the data-driven charge pump transmitter 410 can be implemented in an isolated p-type substrate in a deep N-well, provided that this structure is available in the target fabrication process technology. The isolated P-type substrate can be biased to a voltage below ground using another charge pump, which does not need to supply a large current. A negatively biased P-type substrate avoids forward conduction in the device source/drain junctions. An additional problem arises with negative-going signals. Assume that the transmit line 415 is being driven to -Vs during clkP = 1, when the multiplexing transistor 416 is enabled. The gates of both precharge transistor 418 and multiplexing transistor 417 are driven to 0V (ground). However, one of the source/drain terminals of each of precharge transistor 418 and multiplexing transistor 417 is now at a negative voltage and thus becomes the source terminal of the respective device. Because their gate-to-source voltages are now positive, the precharge transistor 418 and the multiplexing transistor 417 turn on and will clamp the negative output signal by conducting current to ground, limiting the negative swing that would otherwise be present. However, this conduction does not become significant until the negative-going voltage approaches the threshold voltage of the precharge transistor 418 and the multiplexing transistor 417; in fact, as long as -Vs is only about 100 mV or so below ground, this clamping current will be small. In a process that provides multiple threshold voltages, it is preferable to use high-threshold-voltage transistors to implement any components of the data-driven charge pump transmitter 410 that can be driven to a voltage below ground.
FIG. 4C illustrates a data-driven charge pump transmitter 420 implemented using CMOS gates and transistors in accordance with an embodiment of the present invention. The data-driven charge pump transmitter 420 avoids the series arrangement in the data multiplexing section of the charge pump transmitter. The NOR gates that precompute !clkN·!dat1P and !clkP·!dat0P are actually implemented as a NAND gate followed by an inverter, because series PFETs are slower in most current processes. The change in transistor sizing can be somewhat balanced against the additional delay of the inverter. The capacitors Cf1p and Cf1n are precharged during the negative phase of the clock, while the capacitors Cf0p and Cf0n are precharged during the positive phase of the clock. A 2:1 multiplexer and discharge circuit drives the transmit line 425 based on dat1 during the positive phase of the clock and based on dat0 during the negative phase of the clock. If there is strong frequency-dependent attenuation in the channel, the data-driven charge pump transmitters 410 and 420 shown in FIGS. 4B and 4C, respectively, will require an equalizer. FIG. 4D illustrates a data-driven charge pump transmitter 430 including an equalizer in accordance with an embodiment of the present invention. Flying-capacitor precharge sub-circuits 433 and 434 precharge the capacitors Cf1p and Cf1n during the negative phase of the clock, and precharge the capacitors Cf0p and Cf0n during the positive phase of the clock. The transistors in the sub-circuits 433 and 434 and the equalizer 435 that are not part of the flying-capacitor precharge circuits form a multiplexer and discharge circuit that drives the transmit line 432 based on dat1 during the positive phase of the clock and based on dat0 during the negative phase of the clock.
The circuits for precharging the flight capacitors and the multiplexed dual bit data onto the transmit line actually operate the same as the transmitter 410 of the fourth B diagram. A capacitively coupled pulse mode transmitter equalizer 435 is connected in parallel to the data driven charge pump transmitter 430. When the output data changes value, the equalizer 435 pushes additional current to or from the transmit line 432 to increase the voltage of the transmit line 432 during the transition. The equalization constant can be changed by changing the ratio of Ceq to Cf. The equalizer 435 can be divided into a group of segments, each of which A section will have an "enable". By turning on certain portions of the segments, Ceq can be effectively changed to change the equalization constant. The data driven charge pump transmitter 430 circuit can be arranged in an array of identical segments and added to enable each segment to allow the voltage of the transmit line 432 to be adjusted according to operational requirements. Please note that there is a problem with the equalization shown in the fourth D diagram in that the equalizer 435 has an additional gate delay (the inverter) after the multiplex function, so the voltage generated by the equalizer 435 The boost will be slightly delayed relative to the transition driven by the data driven charge pump transmitter 430. If desired, the clocks that drive the data-driven charge pump transmitter 430 can be delayed by an inverter delay, or the equalizer 435 can be implemented in the same manner as the data-driven charge pump transmitter 430. A fourth E diagram illustrates an equalizer 440 for applying pressure to the signal line voltage without additional gate delay in accordance with an embodiment of the present invention. 
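The segmented-equalizer idea can be sketched numerically: enabled segments add their capacitance in parallel, and the equalization strength scales with the resulting Ceq/Cf ratio. The function names and segment values below are illustrative assumptions, not from the specification:

```python
def effective_ceq(segment_caps, enables):
    """Enabled equalizer segments sit in parallel, so their capacitances sum."""
    return sum(c for c, on in zip(segment_caps, enables) if on)

def eq_constant(c_eq: float, c_f: float) -> float:
    """Idealized equalization constant, proportional to the Ceq/Cf ratio."""
    return c_eq / c_f

segments = [20e-15, 40e-15, 80e-15]       # illustrative segment sizes (farads)
ceq = effective_ceq(segments, [1, 0, 1])  # enable first and third segments
print(eq_constant(ceq, 250e-15))          # ~0.4 (100 fF / 250 fF)
```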
FIG. 4F illustrates yet another equalizer, which uses flying capacitors steered by the data bits dat{1,0} instead of the clock to drive additional current onto the signaling line during data transitions. The data-driven charge pump transmitters 410, 420 and 430 shown in FIGS. 4B, 4C, and 4D do not address the problem of return current flow in the transmitter. The return current separation that causes problems in conventional single-ended signaling systems must therefore still be avoided. When the transmit line is driven positive by the data-driven charge pump transmitters 410, 420 and 430, current flows from the power supply to the transmit line, so the return current must flow through the transmitter's power supply. This effect can be substantially reduced by providing a bypass capacitor between the supply and the signal ground, so that most of the transmit current is drawn from the bypass capacitor, allowing the return current to flow locally to the transmitter. Adding a small series resistance between the power supply of the transmitter and the positive supply terminal further forces the return current to flow locally to the transmitter. The small series resistor, in conjunction with the bypass capacitor, isolates the supply from the ripple current required to charge the flying capacitors and also isolates the high-frequency portion of the signal return current. FIG. 5A illustrates a switched capacitor transmitter 500 that directs return current into the grounded network in accordance with an embodiment of the present invention. The positive-current converter sections of the switched capacitor transmitter 500, i.e., the positive converters 512 and 514, are modified relative to the data-driven charge pump transmitters 410, 420, and 430, so that the current flowing from the positive converters 512 and 514 to the transmit line 502 flows only in the grounded network. Cf1p and Cf0p are first precharged to the power supply voltage and then discharged into the transmit line 502. The negative converter portions of the switched capacitor transmitter 500 remain the same as in the data-driven charge pump transmitters 410, 420 and 430, because the output currents of the negative converters flow only in the GND network. Switched capacitor transmitter 500 includes an equalizer 510 that can be implemented using the circuit of equalizer 435 or 440. By adding a few more switching transistors to the switched capacitor transmitter 500, it is possible to divide the grounding network into two parts: an internal network for precharging the flying capacitors, and an external network that is part of the signaling system. FIG. 5B illustrates a switched capacitor transmitter with separate ground return paths for the precharge and signal currents. The circuit components for the negatively driven flying capacitors Cf0n and Cf1n remain the same as in FIG. 5A. Positive converters 522 and 523 have been modified with the addition of two transistors each. In positive converter 522, an NMOS FET driven by clkN carries the precharge current to the internal power supply grounding network, indicated by symbol 524. When data is driven from the converter with dat1P=1, a second NMOS FET driven by dat1P carries the signal return current to the signal current return network, indicated by symbol 525. This configuration prevents the precharge currents in converters 522 and 523 from injecting noise into the signal current return network. A practical problem that must be addressed is that capacitors are typically fabricated on a die using thin oxide. In other words, the capacitors are MOS transistors, typically varactors (NMOS capacitors).
These capacitor structures have parasitic capacitances from their terminals to the substrate and to the surrounding conductors, often with some degree of asymmetry. For example, the gate of an NMOS varactor has mostly the favorable overlap capacitance to the source and drain terminals of the varactor, but the NWell body, ohmically coupled to the source and drain terminals, has a capacitance to the P-type substrate. In the case of a flying capacitor, this parasitic capacitance (preferably placed on the transmit line side of the capacitor) will have to be charged and discharged on each cycle. The current that charges the parasitic capacitance cannot be used to drive the transmit line. This parasitic capacitance, along with switching losses, reduces the efficiency of the switched capacitor transmitter 500. The Ceqa capacitors in the equalizer 510 of FIG. 5B are all grounded, so the converter circuit using the Ceqa capacitors is likely to be more efficient than the converters using the flying capacitor Ceqb. The better efficiency of the Ceqa capacitors relative to the Ceqb capacitors can compensate for the different sizes of Ceqa and Ceqb. FIG. 5C illustrates a "bridge" charge pump transmitter 540 in accordance with an embodiment of the present invention, in which the output current flows only in the signal current return network 547, and the precharge current flows only in the internal power supply grounding network. While the internal power supply grounding network and the signal current return network 547 are separate networks for the purpose of noise isolation, they are typically maintained at the same potential, because the external portion of the signal current return network, in the channel, is shared with the power supply ground. In the "bridge" charge pump transmitter 540, a single flying capacitor, Cf1 or Cf0, is used for each phase of the clock (clk, where clkN is the inverse of clkP).
In the switched capacitor converter 542, the flying capacitor Cf1 in the flying-capacitor precharge sub-circuit 541 is precharged to the power supply voltage when clk=LO. When clk=HI, the flying capacitor Cf1 dumps its charge into the transmit line 545; if dat1=HI, the transmit line 545 is pulled HI, and if dat1=LO, the transmit line 545 is pulled LO. The switched capacitor converter 544 performs the same operation on the opposite phase of the clock and is controlled by dat0. In particular, the flying capacitor Cf0 within the flying-capacitor precharge sub-circuit 543 is precharged to the power supply voltage when clk=HI. When clk=LO, the flying capacitor Cf0 dumps its charge into the transmit line 545; if dat0=HI, the transmit line 545 is pulled HI, and if dat0=LO, the transmit line 545 is pulled LO. The transistors within the switched capacitor converters 542 and 544 that are not in the flying-capacitor precharge sub-circuits 541 and 543 form a 2:1 multiplexer and discharge circuit. In the "bridge" charge pump transmitter 540, the precharge current can be separated from the current in the signal current return network 547. For example, when the signal current return network 547 is isolated from other ground supplies, the precharge current does not flow into the signal current return network 547 and therefore does not produce noise in any network coupled to the signal current return network 547. Note that the four clocked NFETs in each of the "bridge" connections could seemingly be logically decomposed into two devices. However, when the flying capacitor Cf1 (or Cf0) is being precharged, the associated data bit is toggling, and there is a good chance that the datP- and datN-driven NFETs would be turned on simultaneously during the transition, drawing current away from the precharge. Therefore, in practice, the four clocked NFETs must not be decomposed.
To avoid placing four NFETs in series in each of the signal current paths, the devices in the "bridge" charge pump transmitter 540 can be driven by precomputed gates. FIG. 5D illustrates a bridge transmitter having precomputed gates 550 in accordance with an embodiment of the present invention. The delay in the clock signals is balanced by gating the clock signals clkN and clkP with the data signals dat1N, dat1P, dat0P and dat0N. The gating allows the signal delay within the transmitter 550 to closely match the delay within the equalizer 553, since both involve the same number of logic stages. The delay through the first inverter in the multiplexer and equalizer 553 can be approximately matched to the delay of the NAND gates, and their associated inverters, in the precomputed gates that generate d1NclkP, and so on. Similarly, the delay of the inverter directly driving the equalization capacitor Ceq can be matched to the delay of the two sets of four "bridge" transistors that drive the output current onto the transmit line 557. Depending on process details, the bridge transmitter with precomputed gates 550 provides lower overall power than the "bridge" charge pump transmitter 540 and the switched capacitor transmitter 500, albeit at the expense of some extra power supply noise. The circuit of the bridge transmitter having the precomputed gates 550 can be laid out in a significantly smaller area than the switched capacitor transmitter 500, because the flying capacitors in the flying-capacitor precharge sub-circuit 555 are used on every cycle; compared to the switched capacitor transmitter 500 of FIG. 5A and the data-driven charge pump transmitters 400, 410, 420 and 430 of FIGS. 4A, 4B, 4C and 4D, respectively, only half the number of flying capacitors is required.
Each of the bridge transmitter with precomputed gates 550, the bridge charge pump transmitter 540 and the switched capacitor transmitter 500 includes a terminating resistor R0 on the transmit line. If a transmitter is return terminated, the terminating resistor must be larger than the characteristic impedance of the transmit line, because the charge pumps are not ideal current sources. In some cases return termination is not necessary, and when it is omitted the charge pump only needs to supply half the current compared to a line that is terminated at both ends. Omitting the return termination is thus an opportunity to save significant power. A check can be made as to whether the required flying capacitors can actually be implemented in a CMOS fabrication technology. Assume that ±100 mV must be delivered to a doubly terminated 50Ω transmission line. Each of the charge pumps must then supply 100 mV / 25Ω = 4 mA. Using I = C·dV/dt, where dV = V(Vdd) - V(line) and dt = 1 UI, the required capacitance can be calculated once the supply voltage and the bit rate of the data are known. Assuming V(Vdd) = 0.9V and 1 UI = 50 psec, then C = 250 fF. A 250 fF capacitor can easily be implemented in a CMOS process. In a typical 28 nm CMOS process, the capacitance of an NMOS varactor is approximately 50 fF/μm², so these flying capacitors will occupy an area of several μm². In practice, the flying capacitors will have to be somewhat larger than the calculated values due to switching losses and parasitic capacitance. The transmitters, that is, the bridge transmitter with precomputed gates 550, the "bridge" charge pump transmitter 540, and the switched capacitor transmitter 500, drive the voltage of the signal line to a fixed fraction of the power supply voltage, where the fraction depends on the operating frequency (the bit rate of the data) and the size of the flying capacitors.
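The capacitor-sizing arithmetic above can be reproduced as a short worked example. The numbers come from the text; the function name is ours:

```python
def flying_cap_size(i_line: float, v_dd: float, v_line: float, ui: float) -> float:
    """From I = C*dV/dt with dV = Vdd - Vline and dt = 1 UI."""
    return i_line * ui / (v_dd - v_line)

# +/-100 mV into a doubly terminated 50-ohm line looks like 25 ohms,
# so each charge pump supplies 0.1 V / 25 ohm = 4 mA.
i_line = 0.1 / 25.0
c = flying_cap_size(i_line, v_dd=0.9, v_line=0.1, ui=50e-12)
print(c)  # ~2.5e-13 F, i.e. 250 fF, matching the text
```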
The power supply voltage is typically specified to vary by ±10%, so the bridge transmitter with precomputed gates 550, the "bridge" charge pump transmitter 540, and the switched capacitor transmitter 500 will cause the signal voltage to vary in a similar manner. If the voltage swing of the transmit line must be held to a tighter tolerance than the power supply variation, the transmitter (and the equalizer) can be enclosed in a control loop. FIG. 5E illustrates a regulator loop 560 for controlling the voltage on the transmit line 565 in accordance with an embodiment of the present invention. In regulator loop 560, switched capacitor transmitter 570 and an equalizer (not shown) are operated not from Vdd but from a regulated voltage Vreg, which is typically set lower than the lowest expected voltage of the chip power supply Vdd. The primary regulating element is a pass transistor Ppass that charges a large filter capacitor Cfilt. The pass transistor is driven by a comparator that compares a reference voltage Vref, set to the desired line voltage, with the output Vrep of a replica switched capacitor converter 572. The replica switched capacitor converter 572 is a replica, possibly scaled, of one of the data-driven switched capacitor converters in the transmitter 570. The replica switched capacitor converter 572 cycles on each clk (either polarity of clk can be used), driving a resistor 574 (possibly scaled) whose resistance is equal to the impedance of the transmit line 565 (when the transmitter 570 does not have a return termination) or one half of the impedance of the transmit line 565 (when the transmitter 570 has a return termination). The load of the replica switched capacitor converter 572 includes a large capacitor Crep to remove ripple from the Vrep output. A regulator such as regulator loop 560 is typically designed such that the output filter (Cfilt) establishes the dominant pole in the closed-loop transfer function.
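As a loose behavioral sketch of the replica-based loop — the gain, values, and names are invented, and no pole/zero dynamics are modeled, so this is only an illustration of the comparator-and-pass-transistor idea:

```python
def regulator_step(v_reg, v_rep, v_ref, gain=0.05, v_max=0.9):
    """One idealized iteration: when the replica output Vrep sags below
    Vref, the pass device charges Cfilt and Vreg rises; otherwise Vreg
    droops. Clamped between ground and the supply."""
    v_reg = v_reg + gain * (v_ref - v_rep)
    return min(max(v_reg, 0.0), v_max)

v = 0.60
for _ in range(3):
    v = regulator_step(v, v_rep=0.095, v_ref=0.100)
print(v)  # Vreg creeps upward while the replica output sags below Vref
```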
Additional components (not shown) may be included in the circuit to stabilize the loop. FIG. 5F illustrates a method for precharging a flying capacitor sub-circuit and driving the transmit line on different phases of the clock in accordance with an exemplary embodiment of the present invention. Although the method steps are described in conjunction with the data-driven charge pump transmitters 400, 410, 420 and 430 of FIGS. 4A, 4B, 4C and 4D, the switched capacitor transmitter 500 of FIG. 5A, the bridged charge pump transmitter 540 of FIG. 5C, and the bridge transmitter with precomputed gates 550 of FIG. 5D, those skilled in the art will appreciate that any system configured to perform the method steps, in any order, is within the scope of the invention. The two clock phases are a positive phase, when clkP is HI, and a negative phase, when clkN is HI. The data is divided into two signals dat0 and dat1, where dat0 is valid when clkN is HI, and dat1 is valid when clkP is HI. At step 585, a first flying capacitor Cf is precharged by a flying-capacitor precharge sub-circuit during the positive phase of the clock. At step 587, during the positive phase of the clock, a second flying capacitor Cf is discharged and the transmit line is driven (HI or LO) by a multiplexer circuit. At step 590, the second flying capacitor Cf is precharged by the flying-capacitor precharge sub-circuit during the negative phase of the clock. At step 592, during the negative phase of the clock, the first flying capacitor Cf is discharged and the transmit line is driven (HI or LO) by the multiplexer circuit.
Single-Ended Signaling Receiver Circuit

Returning to the ground-referenced single-ended signaling system 300, the data-driven charge pump transmitters 400, 410, 420 and 430, the switched capacitor transmitter 500, the "bridge" charge pump transmitter 540 and the bridge transmitter with precomputed gates 550 are each paired with receiving devices that can efficiently receive signals swinging symmetrically about GND (0V). The receiver amplifies and level-shifts the signals to CMOS levels (logic levels that swing approximately between Vdd and GND). A conventional PMOS differential amplifier can be used as a receiver, but a lower-power and simpler alternative is a grounded-gate (or common-gate) amplifier. FIG. 6A illustrates a grounded-gate amplifier 600 in accordance with an embodiment of the present invention. If p0/p1 and n0/n1 are the same size, then p0/n0, a shorted inverter forming bias generator 602, generates the bias voltage Vbias at the inverter switching threshold. The input amplifier 605, comprising p1/n1, sits at the same operating point, specifically the inverter switching threshold, when there is no signal on the transmit line 607, so the output of the input amplifier 605 also stays at V(Vbias). Input amplifier 605 sources a current Iamp into the transmit line 607. Therefore, the voltage on the transmit line 607 does not swing around 0V (GND), but around an offset voltage Voff = Iamp·R0 in the case where only the receiver is terminated. When both ends of the transmit line 607 are terminated, Voff = 0.5·Iamp·R0, and the bias resistor Rbias attached to the source of n0 is set to R0/2. In practice, this offset voltage is smaller than the signal swing Vs. Note that the bias circuit p0/n0 and the bias resistor of the bias generator 602 can be scaled down to save power.
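The offset relations Voff = Iamp·R0 (receiver-only termination) and Voff = 0.5·Iamp·R0 (both ends terminated) can be captured directly; the Iamp value below is illustrative, not from the specification:

```python
def receiver_offset(i_amp: float, r0: float, double_terminated: bool) -> float:
    """Offset of the transmit-line voltage caused by the amplifier bias
    current Iamp flowing through the termination(s); double termination
    halves the effective resistance seen by the bias current."""
    return (0.5 if double_terminated else 1.0) * i_amp * r0

print(receiver_offset(0.5e-3, 50.0, double_terminated=False))  # 0.025
print(receiver_offset(0.5e-3, 50.0, double_terminated=True))   # 0.0125
```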
In a typical process technology, if the MOSFETs of the grounded-gate amplifier 600 are implemented using low-threshold devices, the gain of the input amplifier 605 is approximately 5, so when the signal amplitude on the transmission line 607 is 100 mV, the input amplifier 605 can generate an output voltage level that is nearly full swing, enough to directly drive a CMOS sampler. A second-stage amplifier 610 can be added if higher gain is required. The outputs of the input amplifier 605 and of any subsequent amplifier (e.g., the second-stage amplifier 610) swing approximately symmetrically about Vbias, the inverter switching threshold, roughly halfway between Vdd and GND. The preceding explanation ignores one effect inherent in the grounded-gate amplifier. Because a grounded-gate amplifier drives current into its input, here the transmission line, it has a finite input impedance of approximately 1/gm of the input transistor n1. This impedance appears in parallel with the termination resistor R0, so the input will be under-terminated unless the termination resistor is adjusted upward. In practice, the input amplifier 605 can be made small enough that this effect is relatively minor. The description of the grounded-gate amplifier 600 above implicitly assumes that p0 and p1 are matched and that n0 and n1 are matched. Inevitably, process variation and the difference between the input source resistances Rbias and R0 will move Vbias away from the actual switching threshold of p1/n1, causing a voltage offset at the output of the input amplifier 605. As shown in FIG. 6A, the output of the input amplifier 605 then does not swing symmetrically about Vbias and will be biased above or below the ideal swing. To remove this offset, an offset-trimming mechanism can be provided, together with a procedure for adjusting that mechanism.
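The loading effect just described, the amplifier's roughly 1/gm input impedance appearing in parallel with R0, can be sketched as below; the numeric values in the usage note are illustrative assumptions only.

```python
def effective_termination(r0, gm):
    """Effective termination seen by the line when the grounded-gate
    amplifier's input impedance (about 1/gm of transistor n1)
    appears in parallel with the termination resistor R0."""
    r_in = 1.0 / gm                    # approximate amplifier input impedance
    return (r0 * r_in) / (r0 + r_in)   # parallel combination
```

For example, a 50-ohm R0 in parallel with a 200-ohm input impedance (gm = 5 mS) yields an effective 40 ohms, which is why the termination may need to be adjusted upward, or the amplifier kept small.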
One of several possible ways to implement the offset-trimming mechanism is to modify the bias generator. FIG. 6B illustrates an adjustable bias generator 620 in accordance with an embodiment of the present invention. The adjustable bias generator 620 can replace the bias generator 602 in the grounded-gate amplifier 600. The single source resistor Rbias in the bias generator 602 of the grounded-gate amplifier 600 is replaced by an adjustable resistor that can be digitally trimmed by changing the value on the Radj[n] signal bus. A fixed resistor Rfixed sets the maximum resistance of the adjustable resistor, and a set of binary-weighted resistors Ra, 2Ra, ..., (n-1)Ra can be selectively placed in parallel with Rfixed to reduce the effective resistance. These resistors are nominally sized such that when Radj[n] is at mid-range, the overall resistance matches the termination resistor R0. Additionally, instead of (or in addition to) adjusting Rbias, one or both of the transistors p0/n0 can be adjusted in a similar manner by providing multiple bias transistors, each in series with a control transistor, so that the effective width of the bias transistors can be changed digitally. Given such a digital trimming mechanism, a procedure for adjusting it is needed to remove the resulting offset. The method described herein requires no hardware beyond what the grounded-gate amplifier 600 already needs to receive data, except for a finite state machine to analyze the data from the receiver sampler and set the trim value on the adjustment bus Radj[n]. FIG. 6C illustrates a method 640 for adjusting an offset-trimming mechanism in accordance with an exemplary embodiment of the present invention.
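As a rough behavioral model of the trimmable resistor, the sketch below parallels Rfixed with whichever legs the Radj bits enable. The specification names the leg values Ra, 2Ra, ...; this sketch instead takes an explicit list of leg resistances, so the weighting scheme and function name here are assumptions.

```python
def adjustable_resistance(r_fixed, legs, radj):
    """Effective resistance of Rfixed in parallel with the legs whose
    Radj bit is set. With all bits clear the result is Rfixed (the
    maximum); each enabled leg lowers the effective resistance."""
    conductance = 1.0 / r_fixed
    for bit, r_leg in zip(radj, legs):
        if bit:
            conductance += 1.0 / r_leg   # leg switched in parallel
    return 1.0 / conductance
```

The legs are sized so that a mid-range Radj code lands the combined value on the nominal target (R0 in the text), leaving headroom to trim in either direction.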
Although the method steps are described in conjunction with the system of FIG. 6B, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention. Method 640 is performed to adjust the adjustable bias generator 620. At step 645, the transmitter driving the transmission line is turned off, so the transmission line settles to its midpoint voltage Voff. At step 650, the Radj code that changes the resistor Rbias is set. At step 655, the finite state machine controlling Radj records a number of consecutive samples from the receiver sampler or samplers attached to the input amplifier, with the receiver clock toggling. Next, at step 660, the values are filtered to determine whether their average is greater than or less than 0.5. At step 665, the Radj code is changed to drive the offset toward the point where half of the sampler values are "1" and half are "0". Until this point is reached, the process returns to step 650 and the Radj code is adjusted again. When the average from the samplers is determined to be approximately 0.5, the receiver is assumed to be trimmed and the procedure exits at step 670. At step 650, the finite state machine starts with the Radj code at one limit and steps the code toward the other limit. At this starting point, all of the samplers should output "1" or "0", depending on the details of the receiver. As the code on Radj moves toward the other limit, there will be a point at which the bits from the samplers begin to toggle to the opposite value. At step 660, the finite state machine filters the values from the samplers by averaging over a number of clock cycles. When, averaged over some number of sampling clock cycles, half of the values from the four samplers are "1" and half are "0", the receiver can be assumed to be trimmed.
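The sweep-and-average loop of steps 650-670 can be expressed as a small state machine. This Python sketch is behavioral only: the sampler model, the tolerance, and all names are assumptions, not taken from the specification.

```python
def trim_receiver(sample_fn, codes, n_samples=64, tol=0.1):
    """Step the Radj code from one limit toward the other, averaging
    n_samples sampler decisions per code, and stop at the first code
    whose average is within tol of 0.5 (half "1"s, half "0"s)."""
    for code in codes:
        avg = sum(sample_fn(code) for _ in range(n_samples)) / n_samples
        if abs(avg - 0.5) <= tol:
            return code   # receiver assumed trimmed at this code
    return None           # no code balanced the sampler within tolerance
```

A toy sampler that outputs all "0"s below some threshold code, all "1"s above it, and dithers at the threshold would be trimmed exactly at that threshold code.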
The finite state machine can be implemented in hardware, in software running on a control processor, or in a combination of the two. Variations on this approach may include equal weighting of the Ra resistors and thermometer coding of the Radj bus. Such an implementation would be helpful if a trimming operation had to be performed while live data is being received, using a procedure other than the one described above. Another possible variation is that, if the multiple samplers in the receiver require individual offset trimming, four copies of the input amplifier can be provided (while sharing a common termination resistor), together with four trimmable Vbias generators, each with its own Radj bus. The trimming procedure is essentially the same as described above, except that multiple finite state machines, or a time-multiplexed procedure, would be needed to perform the adjustment. FIG. 7A is a block diagram illustrating a processor/chip 740, in accordance with one or more aspects of the present invention, that includes a ground-referenced single-ended signaling transmitter, such as the data-driven charge pump transmitter 410 of FIG. 4B, 420 of FIG. 4C, or 430 of FIG. 4D, the switched capacitor transmitter 500 of FIG. 5A, the bridged charge pump transmitter 540 of FIG. 5C, or the bridge transmitter with pre-calculated gate 550 of FIG. 5D. Receiver circuit 765 can include a receiver configured to receive a single-ended input signal from other devices in a system, such as the grounded-gate amplifier 600 of FIG. 6A. A single-ended output signal 755 is generated by transmitter circuit 775. Receiver circuit 765 provides inputs to core circuit 770. Core circuit 770 can be configured to process those inputs and produce outputs. The outputs of core circuit 770 are received by transmitter circuit 775 and used to generate the single-ended output signal 755.
FIG. 7B is a block diagram of a computer system 700 configured to implement one or more aspects of the present invention. Computer system 700 includes a central processing unit (CPU) 702 and a system memory 704 communicating via an interconnection path that includes a memory bridge 705. Memory bridge 705, which may be, e.g., a Northbridge chip, is connected via a bus or other communication path 706 (e.g., a HyperTransport link) to an I/O (input/output) bridge 707. I/O bridge 707, which may be, e.g., a Southbridge chip, receives user input from one or more user input devices 708 (e.g., keyboard, mouse) and forwards the input to CPU 702 via communication path 706 and memory bridge 705. A parallel processing subsystem 712 is coupled to memory bridge 705 via a bus or second communication path 713 (e.g., PCI (Peripheral Component Interconnect) Express, Accelerated Graphics Port, or HyperTransport link); in one embodiment, parallel processing subsystem 712 is a graphics subsystem that delivers pixels to a display device 710 (e.g., a conventional cathode ray tube or liquid crystal monitor). A system disk 714 is also connected to I/O bridge 707. A switch 716 provides connections between I/O bridge 707 and other components such as a network adapter 718 and various add-in cards 720 and 721. Other components (not explicitly shown), including universal serial bus (USB) or other port connections, CD drives, digital video disc (DVD) drives, film recording devices, and the like, may also be connected to I/O bridge 707. The communication paths interconnecting the various components in FIG. 7B (including the specifically named communication paths 706 and 713) may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect), PCI Express
(PCI-E), AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol, and connections between different devices may use different protocols, as is known in the art. One or more of the devices shown in FIG. 7B can use single-ended signaling to receive or transmit signals. In particular, a transmitting device can be configured to include a ground-referenced single-ended transmitter, such as the data-driven charge pump transmitter 410 of FIG. 4B, 420 of FIG. 4C, or 430 of FIG. 4D, the switched capacitor transmitter 500 of FIG. 5A, the bridged charge pump transmitter 540 of FIG. 5C, or the bridge transmitter with pre-calculated gate 550 of FIG. 5D. A receiving device can be configured to include the grounded-gate amplifier 600 of FIG. 6A. In one embodiment, parallel processing subsystem 712 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes a graphics processing unit (GPU). In another embodiment, parallel processing subsystem 712 incorporates circuitry optimized for general-purpose processing while preserving the underlying computational architecture, described in greater detail herein. In yet another embodiment, parallel processing subsystem 712 may be integrated with one or more other system elements in a single subsystem, such as joining memory bridge 705, CPU 702 and I/O bridge 707 to form a system on chip (SoC). It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs 702, and the number of parallel processing subsystems 712, may be modified as desired.
For instance, in some embodiments, system memory 704 is connected to CPU 702 directly rather than through a bridge, and other devices communicate with system memory 704 via memory bridge 705 and CPU 702. In other alternative topologies, parallel processing subsystem 712 is connected to I/O bridge 707 or directly to CPU 702, rather than to memory bridge 705. In still other embodiments, I/O bridge 707 and memory bridge 705 might be integrated into a single chip instead of existing as one or more discrete devices. Large embodiments may include two or more CPUs 702 and two or more parallel processing subsystems 712. The particular components shown herein are optional; for instance, any number of add-in cards or peripheral devices might be supported. In some embodiments, switch 716 is eliminated, and network adapter 718 and add-in cards 720, 721 connect directly to I/O bridge 707. In summary, the dual-trigger low-energy flip-flop circuit 300 or 350 is fully static, because all nodes are driven high or low during all steady-state periods of the circuits. Because the internal nodes toggle only when the data changes, and the clock load is only three transistor gates, the flip-flop circuit is low energy. Since the data inputs d and dN may change one gate delay after the rising edge of the clock, the hold time is relatively short. Moreover, the dual-trigger low-energy flip-flop circuit 300 or 350 does not rely on sizing relationships between different transistors to function properly. Therefore, operation of the flip-flop circuit remains robust even when transistor characteristics vary due to the manufacturing process.

Differential Signaling

Differential signaling avoids most of the problems associated with single-ended signaling. The data-driven charge pumps described in conjunction with FIGS. 4A, 4B, 4C and 4D can also be used to implement low-swing differential signaling.
To implement differential signaling, two replicas of the switched capacitor transmitter (along with the associated auxiliary equalizer transmitter) may be used, wherein a first switched capacitor transmitter is driven by the positive-polarity version of a data bit and a second switched capacitor transmitter is driven by the negative version of the data bit. Each of the first and second switched capacitor transmitters drives one of the two conductors of the differential transmission line. The first and second switched capacitor transmitters can operate at lower power because the receiver detects the differential voltage between the two lines. In addition, one or more of the devices shown in FIG. 7B may use differential signaling to receive or transmit signals, and a receiving device can be configured to include an amplifier arranged to receive the differential signal. FIG. 8A illustrates a differential version of a data-driven switched capacitor transmitter 800 in accordance with an embodiment of the present invention. In the data-driven switched capacitor transmitter 800, the differential signal is transmitted on lineP 805 and lineN 810, where lineN 810 is complementary to lineP 805. To transmit a signal, one of line{P,N} is driven high by the positive data transmitter 812 or the negative data transmitter 814, while the other line is pulled to LO by its individual termination resistor R0. The common-mode voltage is therefore 1/2 Vs rather than 0. In transmitter 800, the voltage on each of lineP 805 and lineN 810 toggles between a positive voltage Vs and ground. Additionally, the common-mode voltage will have a ripple equal to one-half of the ripple on either of the line terminals 801 and 803. The equalizer 830 can include the same circuitry as the equalizer 530 shown in FIG. 5A.
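The common-mode claims for the two differential transmitters (Vs/2 for transmitter 800, and zero for the bridged transmitter of FIG. 8B that drives the lines to ±Vs) reduce to simple averaging. The Python check below is illustrative; the 0.2 V swing used in the test is an arbitrary assumption.

```python
def common_mode(v_linep, v_linen):
    """Common-mode voltage implied by a differential pair: the
    average of the two line voltages."""
    return 0.5 * (v_linep + v_linen)
```

Driving one line to Vs while the other rests at 0 V gives a common mode of Vs/2, as stated for transmitter 800; driving the lines to +Vs and -Vs gives a common mode of zero.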
Capacitor precharge sub-circuits 802 and 804 precharge capacitor C1p to the power supply voltage during the negative phase of the clock and precharge capacitor C0p to the power supply voltage during the positive phase of the clock. The transistors that are not within the capacitor precharge sub-circuits 802 and 804 or the equalizer 830 form a multiplexer and discharge circuit that drives the differential transmission lines, lineP 805 and lineN 810, based on dat1 during the positive phase of the clock and based on dat0 during the negative phase of the clock. The current paths of the data-driven switched capacitor transmitter 800 are arranged such that the current of the positive data transmitter 812 or the negative data transmitter 814 flowing into one of the two differential transmission lines returns only in the ground network. FIG. 8B illustrates a switched capacitor differential transmitter 820 in accordance with an embodiment of the present invention. The switched capacitor differential transmitter 820 is derived from the bridged charge pump transmitter 540 of FIG. 5C. The switched capacitor differential transmitter 820 is fully balanced and therefore draws no net current from the power supply into the differential signal lines lineP 825 and lineN 830, except for current caused by unbalanced parasitic capacitance at the terminals of the flying capacitors Cf1 and Cf0. The parasitic capacitances on the GND network should therefore be balanced as closely as possible. In accordance with an embodiment of the invention, the output current flows only in the signal current return network 847, and the precharge current flows only in the internal power supply ground network.
Although the internal power supply ground network and the signal current return network 847 are separate networks for noise-isolation purposes, they are typically maintained at the same potential, because the external portion of the signal current return network in the channel is shared with the power supply ground. A single flying capacitor, Cf1 or Cf0, is used for each phase of the clock (clk, where clkN is the inverse of clkP). In the switched capacitor converter 854, the flying capacitor Cf0 in the flying-capacitor precharge sub-circuit 843 is precharged to the power supply voltage when clk=LO. When clk=HI, the flying capacitor Cf0 dumps its charge into one of the differential transmission lines: lineP 845 is pulled up to HI if dat0=HI, and lineN 850 is pulled up to HI if dat0=LO. Also when clk=HI, lineN 850 is pulled to LO via R0 when dat0=HI, and lineP 845 is pulled to LO via R0 when dat0=LO. The switched capacitor converter 852 performs the same operation on the opposite phase of the clock and is controlled by dat1. In particular, the flying capacitor Cf1 within the flying-capacitor precharge sub-circuit 841 is precharged to the power supply voltage when clk=HI. When clk=LO, the flying capacitor Cf1 dumps its charge into one of the differential transmission lines: lineP 845 is pulled up to HI if dat1=HI, and lineN 850 is pulled up to HI if dat1=LO. The transistors within the switched capacitor converters 852 and 854 that are not in the flying-capacitor precharge sub-circuits 841 and 843 individually form a 2:1 multiplexer and discharge circuit.
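The steering described for converters 852 and 854 amounts to a small truth table. The sketch below models it; the function name and the tuple representation are assumptions for illustration only.

```python
def bridged_line_select(clk_hi, dat0, dat1):
    """Which differential line the discharging flying capacitor pulls
    HI, and which is pulled LO through R0: when clk=HI, Cf0
    discharges under control of dat0; when clk=LO, Cf1 discharges
    under control of dat1. The active bit steers charge into lineP
    (bit HI) or lineN (bit LO).
    Returns (line_driven_HI, line_pulled_LO_via_R0)."""
    bit = dat0 if clk_hi else dat1
    if bit:
        return ("lineP", "lineN")
    return ("lineN", "lineP")
```

On every half-cycle exactly one line receives charge while the other is returned to its rail through the termination, which is what keeps the pair complementary.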
Unlike the differential transmitter 800 of FIG. 8A, the transmitter 840 of FIG. 8B drives lineP and lineN (845 and 850, respectively) between positive and negative voltages of approximately equal magnitude, namely +Vs and -Vs. The common-mode voltage on lineP and lineN is therefore zero. FIG. 8C illustrates a method for precharging a flying capacitor sub-circuit and driving a differential transmission line on different phases of the clock, in accordance with an exemplary embodiment of the present invention. Although the method steps are described in conjunction with the data-driven switched capacitor transmitter 800 of FIG. 8A and the bridged charge pump transmitter 840 of FIG. 8B, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention. The two clock phases comprise a positive phase, when clkP is HI, and a negative phase, when clkN is HI. The data is split into two signals, dat0 and dat1, where dat0 is valid when clkN is HI and dat1 is valid when clkP is HI. At step 885, a first flying capacitor Cf1 is precharged during the positive phase of the clock by a flying-capacitor precharge sub-circuit. When the data-driven switched capacitor transmitter 800 is used, the capacitor C0p is precharged to the power supply voltage during the positive phase of the clock. At step 887, during the positive phase of the clock, a second flying capacitor Cf0 is discharged, and one of the differential signaling lines is driven to HI by a multiplexer circuit. The transmission line that is not driven high by the multiplexer circuit is pulled to LO by the termination resistor R0. At step 890, the second flying capacitor Cf0 is precharged by the flying-capacitor precharge sub-circuit during the negative phase of the clock.
When the data-driven switched capacitor transmitter 800 is used, the capacitor C1p is precharged to the power supply voltage during the negative phase of the clock. At step 892, during the negative phase of the clock, the first flying capacitor Cf1 is discharged, and one of the differential signaling lines is driven to HI by the multiplexer circuit. The transmission line that is not driven high by the multiplexer circuit is pulled to LO by the termination resistor R0. FIG. 9A illustrates a differential version of a data-driven charge pump data transmission system 900 in accordance with an embodiment of the present invention. The data-driven charge pump data transmission system 900 includes a transmitter 901 comprising a capacitor precharge sub-circuit 910 and a multiplexer and discharge circuit 912. Transmitter 901 is a data-driven charge pump that drives a differential signal onto the differential transmission line 903 (lineP and lineN) to a receiver 906. Transmitter 901 includes pre-calculated gates configured to combine positive and negative versions of a data signal with positive and negative versions of the clock, producing outputs coupled to the discharge and multiplexer sub-circuit 912. Receiver 906 includes a receiver input amplifier 907 configured to receive the differential signal and regenerate the original data stream, producing rxdat. In the data-driven charge pump data transmission system 900, only positive charge pumps are used in the capacitor precharge sub-circuit 910, and the signals on the differential transmission line 903 nominally swing between 0 volts (ground) and some desired line voltage Vline (assumed to be a fraction of the power supply voltage available on the chip containing the transmitter 901).
Thus, the data-driven charge pump data transmission system 900 avoids the problems associated with driving a signal below ground potential, such as forward-biased source/drain junctions and unwanted transistor turn-on. As in the single-ended transmitters, i.e., the data-driven charge pump transmitters 400, 410, 420 and 430 of FIGS. 4A, 4B, 4C and 4D, the switched capacitor transmitter 500 of FIG. 5A, the bridged charge pump transmitter 540 of FIG. 5C, and the bridge transmitter with pre-calculated gate 550, dat0 determines the current driven into lineP and lineN during the clkN=1 period, and dat1 determines the current driven into lineP and lineN during the clkP=1 period. For example, during clkN=1, if dat0P=1 (dat0N=0), then during the previous half-cycle, when clkN=0, the capacitor Cp0 was charged to the power supply voltage, and during the present half-cycle the portion of the capacitor precharge sub-circuit 910 that includes Cp0 delivers a current pulse to lineP. No current pulse is injected into lineN during this period, and the termination resistors 902 at the transmitter 901 and 904 at the receiver 906 supply a pull-down current that pulls lineN toward 0 volts (ground). Note that during clkN=0, Cn0 is also charged to the power supply voltage, but Cn0 remains charged through the following clkN=1 period. Typical voltage waveforms that can be observed on lineP and lineN during operation of the data-driven charge pump data transmission system 900 are shown as waveforms 905. Note that if consecutive bits to be transmitted have the same HI value, a voltage ripple appears on lineP. As previously mentioned, this ripple is at the bit rate, so it does not materially affect the ability of the receiver 906 to distinguish the correct data value at the center of each unit interval of the data transmission.
If consecutive bits have the same LO value, the voltage on lineP is pulled to zero volts (ground) by termination resistors 902 and 904, and no ripple is observed. Although the individual line voltages have different waveforms for the HI and LO data values on lineP and lineN, the differential voltage V(lineP, lineN) is completely symmetric for the two data values, and its amplitude is approximately twice Vline. Persons skilled in the art will readily recognize that the transmitter 901 of FIG. 9A can be simplified and reduced in area, as shown in FIG. 9B. FIG. 9B illustrates another differential version of a data-driven charge pump transmitter 915 in accordance with an embodiment of the present invention. Like the transmitter 901 of FIG. 9A, the data-driven charge pump transmitter 915 also includes pre-calculated gates configured to combine the positive and negative versions of a data signal with the positive and negative versions of the clock, producing outputs coupled to the discharge and multiplexer sub-circuit portion of the data-driven charge pump transmitter 915. The capacitors Cp0 and Cn0 of the capacitor precharge sub-circuit 910 have been merged into a single device C0 in the capacitor precharge sub-circuit 920 of the data-driven charge pump transmitter 915, and C0 is shared to drive the differential transmission line 913, i.e., lineP and lineN. Similarly, capacitors Cp1 and Cn1 of the capacitor precharge sub-circuit 910 have been merged into a single device C1 in the capacitor precharge sub-circuit 921 of the data-driven charge pump transmitter 915. C0 of the capacitor precharge sub-circuit 920 is precharged during clkN=0. When clkN=1, one of the two NFETs driven by clkN‧dat0{P,N} is turned on and drives current into lineP (dat0P=1) or lineN (dat0N=1).
C1 of the capacitor precharge sub-circuit 921 is precharged on the other phase of the clock, under control of dat1{P,N}. When clkP=1, one of the two NFETs driven by clkP‧dat1{P,N} is turned on and drives current into lineP (dat1P=1) or lineN (dat1N=1). The transmitter 901 and the data-driven charge pump transmitter 915 of FIG. 9B can each be modeled as a current source. Therefore, the output currents of the transmitter 901 and of the data-driven charge pump transmitter 915 can be added and subtracted, so any equalization or other signal-conditioning technique that can be implemented by summing differential currents can be implemented using the transmitter 901 or the data-driven charge pump transmitter 915. FIG. 9C illustrates an equalizing data-driven charge pump transmitter 930 that performs equalization using a 2-tap pre-emphasis FIR filter, in accordance with an embodiment of the present invention. The equalizing data-driven charge pump transmitter 930 has two portions: a main data transmitter 932, which outputs a differential current in the same manner as the data-driven charge pump transmitter 915 of FIG. 9B, and an equalizer 935. The internal capacitors of the equalizer 935 are drawn (and fabricated) smaller than those of the transmitter 932, so the equalizer 935 delivers a smaller current in each cycle of operation. In particular, C0e<C0 and C1e<C1, while C0e=C1e and C0=C1. The internal structure of the equalizer 935 is the same as that of the transmitter 932, except that the equalizer 935 receives data delayed by half a clock cycle. So, for example, during clkN=1 the equalizer 935 outputs a current representing the value of dat1 from the previous half clock cycle. Note that the output connections of the equalizer 935 are reversed relative to the output connections of the transmitter 932, so the current from the equalizer 935 "opposes" the current of the transmitter 932.
FIG. 9D illustrates typical voltage waveforms 940 on the differential transmission lines of the data-driven charge pump transmitter 930, i.e., lineP and lineN, in accordance with an embodiment of the present invention. When the previous bit differs from the current bit to be transmitted, the equalizer 935 and the transmitter 932 drive current in the same direction, so the line voltage on lineP or lineN rises all the way to Vline (or falls all the way to 0 volts). On the other hand, if consecutive bits have the same value, the voltages on the differential lines are driven to intermediate values. For example, suppose dat0P=0 and the previous dat1P=0; in this case, transmitter 932 drives lineP toward 0 volts, but equalizer 935 drives a small current into lineP, so that its voltage lies between 0 volts and Vline/2. Similarly, if dat0P=1 and the previous dat1P=1, the transmitter 932 drives lineP to HI, but the equalizer 935 does not drive current into lineP, so lineP rises only to a voltage between Vline/2 and Vline. The differential output voltage V(lineP)-V(lineN) can therefore take one of four values: +2‧Vline, -2‧Vline, +2‧A‧Vline and -2‧A‧Vline, where A is a constant between 0 and 1. The value of the constant A sets the strength of the 2-tap filter used to equalize a channel with frequency-dependent attenuation. A practical transmitter requires a means of adjusting the value of the constant A defined above. This can be achieved at design time by appropriately sizing the capacitances C{0,1} and C{0,1}e. In other embodiments, both the transmitter 932 and the equalizer 935 can be decomposed into equal or weighted segments, and segments can be turned on or off to set the relative strength of the transmitter 932 versus the equalizer 935, thereby changing the constant A.
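The four-level differential output described above follows directly from the 2-tap pre-emphasis rule; the sketch below models it. The function name and the example numbers (Vline = 0.1 V, A = 0.6) are assumptions for illustration, not values from the specification.

```python
def equalized_output(cur_bit, prev_bit, v_line, a):
    """Differential output V(lineP) - V(lineN) of the 2-tap
    pre-emphasis transmitter: a bit transition yields the full
    +/-2*Vline swing, while a repeated bit yields the attenuated
    +/-2*A*Vline level, with 0 < A < 1."""
    sign = 1.0 if cur_bit else -1.0
    scale = 1.0 if cur_bit != prev_bit else a   # de-emphasize repeats
    return sign * 2.0 * v_line * scale
```

With the assumed Vline = 0.1 V and A = 0.6, the four levels are ±0.2 V on transitions and ±0.12 V on repeated bits, which is the frequency-dependent boost that compensates a lossy channel.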
In other embodiments, the capacitively coupled or "linear" equalizer 435, shown in FIG. 4, is implemented in a differential version and connected in parallel with a transmitter, i.e., in place of the equalizer 935 of FIG. 9C, to form an equalizing differential transmitter. In still other embodiments, the unipolar charge pump, i.e., the capacitor precharge sub-circuit 910 shown in FIG. 9A, is replaced by a bipolar charge pump, such as the flying-capacitor precharge sub-circuits 433 and 434 of FIG. 4D, the flying-capacitor precharge sub-circuits 541 and 543 shown in FIG. 5C, or the flying-capacitor precharge sub-circuit 555. The bipolar charge pump can be configured to generate a differential signal pair having a common-mode voltage midway between the chip supply voltage (Vdd) and ground. Because a differential signaling system requires no fixed reference voltage or plane, the reference being implied by the average of the two signal voltages on line{P,N}, the common-mode voltage can be set to any convenient value. One embodiment of the invention can be implemented as a program product for use with a computer system. The program(s) of the program product define the functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer, such as CD-ROM discs readable by a CD-ROM drive, flash memory, ROM chips, or any other type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive, or a hard-disk drive, or any type of solid-state random-access semiconductor memory) on which alterable information is stored.
The invention has been described above with reference to specific embodiments. It will be apparent to those skilled in the art that various modifications and changes can be made without departing from the spirit and scope of the invention. Accordingly, the foregoing description and drawings are to be regarded as illustrative rather than restrictive.

100‧‧‧single-ended signaling system
101‧‧‧transmitting device
102‧‧‧receiving device
103‧‧‧pin
105‧‧‧transmit line
135‧‧‧transmission line
203‧‧‧transmitter current flow
204‧‧‧signal current flow
205‧‧‧bypass capacitor
206‧‧‧termination resistor
211‧‧‧common ground impedance
300‧‧‧ground-referenced signaling system
301‧‧‧transmitting device
304‧‧‧ground
305‧‧‧symmetric voltage pair
310‧‧‧converter
320‧‧‧control loops and converters
322, 324‧‧‧control loop
400‧‧‧data-driven charge pump transmitter
401, 402, 403, 404‧‧‧sub-circuit
407‧‧‧clock and data signals
413, 414‧‧‧sub-circuits pre-charged with flying capacitors
416‧‧‧multiple transistors
418, 419‧‧‧precharge transistors
430‧‧‧data-driven charge pump transmitter
435‧‧‧equalizer
450‧‧‧equalized transmitter
500‧‧‧switched-capacitor transmitter
512, 514‧‧‧positive converter
522, 523‧‧‧positive converter
540‧‧‧bridged charge pump transmitter
542, 544‧‧‧switched-capacitor converter
547‧‧‧signal current return network
547‧‧‧signal ground
550‧‧‧bridge transmitter with pre-calculated gate
555‧‧‧sub-circuit pre-charged with flying capacitors
560‧‧‧regulator loop
570‧‧‧transmitter
572‧‧‧replica switched-capacitor converter
600‧‧‧grounded-gate amplifier
602‧‧‧bias generator
605‧‧‧input amplifier
610‧‧‧second-stage amplifier
620‧‧‧adjustable bias generator
700‧‧‧computer system
702‧‧‧central processing unit
704‧‧‧system memory
705‧‧‧memory bridge
706, 713‧‧‧communication path
707‧‧‧input/output bridge
708‧‧‧input device
710‧‧‧display device
712‧‧‧parallel processing subsystem
714‧‧‧system disk
716‧‧‧switch
718‧‧‧network adapter
720, 721‧‧‧add-in card
740‧‧‧processor/chip
751‧‧‧single-ended input signal
755‧‧‧single-ended output signal
765‧‧‧receiver circuit
770‧‧‧core circuit
775‧‧‧transmitter circuit
800‧‧‧data-driven switched-capacitor transmitter
801, 803‧‧‧line terminal
802, 804‧‧‧sub-circuits pre-charged with capacitors
812‧‧‧positive data transmitter
814‧‧‧negative data transmitter
840‧‧‧bridged charge pump transmitter
900‧‧‧data-driven charge pump data transmission system
902, 904‧‧‧termination resistor
903‧‧‧differential transmission line
905‧‧‧typical voltage waveforms
906‧‧‧receiver
907‧‧‧receiver input amplifier
910‧‧‧sub-circuit pre-charged with capacitors
912‧‧‧discharge and multiplexer circuit
920, 921‧‧‧sub-circuits pre-charged with capacitors
930‧‧‧equalized data-driven charge pump transmitter

Therefore, a more detailed description of the embodiments of the present invention, briefly summarized above, may be had by reference to the specific embodiments, which are illustrated in the accompanying drawings. It should be noted, however, that the accompanying drawings illustrate only typical embodiments of the invention and are therefore not intended to limit its scope. FIG. 1A shows an exemplary single-ended signaling system illustrating a reference voltage problem in accordance with the prior art; FIG. 1B shows an exemplary single-ended signaling system using an internal reference voltage in accordance with the prior art; FIG. 1C shows an exemplary single-ended signaling system using an accompanying reference voltage in accordance with the prior art; FIG. 2A shows current flow in a single-ended signaling system according to the prior art, wherein the ground plane is required as the common signal return conductor; FIG. 2B shows the current flow in the single-ended signaling system of FIG. 2A when a "1" is transmitted by the transmitting device to the receiving device according to the prior art; FIG. 3A illustrates a ground-referenced single-ended signaling system in accordance with the prior art; and FIG.
3B illustrates a switched-capacitor DC-DC converter configured to generate +Vs according to the prior art; FIG. 3C illustrates a switched-capacitor DC-DC converter configured to generate -Vs according to the prior art; FIG. 3D illustrates the pair of switched-capacitor DC-DC converters of FIGS. 3B and 3C together with a summary control circuit; FIG. 4A illustrates a data-driven charge pump transmitter in accordance with an embodiment of the present invention; FIG. 4B illustrates a data-driven charge pump transmitter using CMOS transistors, simplified compared to the data-driven charge pump of FIG. 4A, in accordance with an embodiment of the present invention; FIG. 4C illustrates a data-driven charge pump transmitter implemented using CMOS gates and transistors according to an embodiment of the invention; FIG. 4D illustrates a data-driven charge pump transmitter including an equalizer in accordance with an embodiment of the present invention; FIG. 4E illustrates an equalizer that boosts the signal line voltage without an additional gate delay, according to an embodiment of the present invention; FIG. 5A illustrates a switched-capacitor transmitter that directs return current into the ground network in accordance with an embodiment of the present invention; FIG. 5B illustrates an averaging circuit that maintains the precharge current drawn from the GND network, in accordance with an embodiment of the present invention; FIG. 5C illustrates a "bridged" charge pump transmitter according to an embodiment of the invention, wherein the output current flows only in the GND network; FIG. 5D illustrates a bridge transmitter having a pre-calculated gate in accordance with an embodiment of the present invention; FIG. 5E illustrates a regulator circuit for controlling the voltage on the transmission line, in accordance with an embodiment of the present invention; FIG. 5F illustrates a method for precharging a flying-capacitor sub-circuit and driving the transmit line on different phases of the clock, in accordance with an exemplary embodiment of the present invention; FIG. 6A illustrates a grounded-gate amplifier according to an embodiment of the present invention; FIG. 6B illustrates an adjustable bias generator according to an embodiment of the present invention; FIG. 6C illustrates a method for adjusting an offset trimming mechanism according to an exemplary embodiment of the present invention; FIG. 7A is a block diagram of a processor/chip including transmitter and receiver circuits for ground-referenced single-ended signaling, according to one or more aspects of the present invention; FIG. 7B is a block diagram of a computer system configured to implement one or more aspects of the present invention; FIG. 8A illustrates a data-driven switched-capacitor transmitter for differential signaling according to an embodiment of the present invention; FIG. 8B illustrates a bridged charge pump transmitter for differential signaling according to an embodiment of the present invention; FIG. 8C illustrates a method for precharging a capacitor sub-circuit and driving a differential transmit line on different phases of the clock, in accordance with an exemplary embodiment of the present invention; FIG. 9A illustrates a differential version of a data-driven charge pump data transmission system in accordance with an embodiment of the present invention; FIG. 9B illustrates another differential version of a data-driven charge pump transmitter in accordance with an embodiment of the present invention; FIG. 9C illustrates an equalized transmitter using a 2-tap pre-emphasis FIR filter in accordance with an embodiment of the present invention; and FIG. 9D illustrates the differential transmit lines of such a data-driven charge pump transmitter, i.e., the typical voltage waveforms of lineP and lineN, in accordance with an embodiment of the present invention.

A transmitter circuit comprising: a sub-circuit pre-charged with a capacitor, comprising a first capacitor configured to be precharged to a substantially constant Vdd supply voltage during a positive phase of a clock, and a second capacitor configured to be precharged to the substantially constant Vdd supply voltage during a negative phase of the clock, wherein the first capacitor is configured as a first flying capacitor and the second capacitor is configured as a second flying capacitor; and a discharge and multiplexer sub-circuit configured to couple the first capacitor, during the negative phase of the clock, to a first transmit line of a differential transmit pair including the first transmit line and a second transmit line, to drive the first transmit line, and configured to couple the second capacitor to the first transmit line of the differential transmit pair during the positive phase of the clock to drive the first transmit line, wherein the second transmit line is a negative transmit line, and the discharge and multiplexer sub-circuit is further configured to transfer charge from the first flying capacitor and generate a current into the second transmit line to drive the second transmit line high during the negative phase of the clock when a data signal is low.
The transmitter circuit of claim 1, wherein the first transmit line is a positive transmit line, and the discharge and multiplexer sub-circuit is further configured to transfer charge from the first flying capacitor and generate a current into the first transmit line to drive the first transmit line high during a negative phase of the clock when a data signal is high.

A transmitter circuit comprising: a sub-circuit pre-charged with a capacitor, comprising a first capacitor configured to be precharged to a substantially constant Vdd supply voltage during a positive phase of a clock, and a second capacitor configured to be precharged to the substantially constant Vdd supply voltage during a negative phase of the clock, wherein the first capacitor is configured as a first flying capacitor and the second capacitor is configured as a second flying capacitor; and a discharge and multiplexer sub-circuit configured to couple the first capacitor, during the negative phase of the clock, to a first transmit line of a differential transmit pair including the first transmit line and a second transmit line, to drive the first transmit line, and configured to couple the second capacitor to the first transmit line of the differential transmit pair during the positive phase of the clock to drive the first transmit line, wherein the second transmit line is a negative transmit line, and the discharge and multiplexer sub-circuit is further configured to transfer charge from the second flying capacitor and generate a current into the second transmit line to drive the second transmit line high during a positive phase of the clock when a data signal is low.
A transmitter circuit comprising: a sub-circuit pre-charged with a capacitor, comprising a first capacitor configured to be precharged to a substantially constant Vdd supply voltage during a positive phase of a clock, and a second capacitor configured to be precharged to the substantially constant Vdd supply voltage during a negative phase of the clock, wherein the first capacitor is configured as a first flying capacitor and the second capacitor is configured as a second flying capacitor; and a discharge and multiplexer sub-circuit configured to couple the first capacitor, during the negative phase of the clock, to a first transmit line of a differential transmit pair including the first transmit line and a second transmit line, to drive the first transmit line, and configured to couple the second capacitor to the first transmit line of the differential transmit pair during the positive phase of the clock to drive the first transmit line, wherein the sub-circuit pre-charged with a capacitor includes: the first flying capacitor, having a first terminal coupled to the substantially constant Vdd supply voltage via a first clock-enabled transistor that is activated during the positive phase of the clock, and a second terminal coupled to a ground supply voltage via a second clock-enabled transistor that is also activated during the positive phase of the clock; and the second flying capacitor, having a first terminal coupled to the ground supply voltage via a third clock-enabled transistor that is activated during the negative phase of the clock, and a second terminal coupled to the substantially constant Vdd supply voltage via a fourth clock-enabled transistor that is also activated during the negative phase of the clock.
The transmitter circuit of claim 1, further comprising an equalizer coupled to the first transmit line and the second transmit line of the differential transmit pair, respectively, and configured to boost an additional current into one of the first transmit line and the second transmit line when a data signal transmitted on the differential transmit pair transitions from low to high or from high to low.

The transmitter circuit of claim 1, further comprising a pre-calculated gate configured to combine the positive and negative versions of a data signal with the positive and negative versions of the clock to generate outputs coupled to the discharge and multiplexer sub-circuit.

A method for differential signaling, the method comprising: precharging a first capacitor to a substantially constant Vdd supply voltage during a positive phase of a clock; coupling a second capacitor, during the positive phase of the clock, to a first transmit line of a differential transmit pair, the differential transmit pair including the first transmit line and a second transmit line, wherein the first capacitor is configured as a first flying capacitor and the second capacitor is configured as a second flying capacitor; precharging the second capacitor to the substantially constant Vdd supply voltage during a negative phase of the clock; and coupling the first capacitor, during the negative phase of the clock, to the first transmit line of the differential transmit pair, wherein the second transmit line is a negative transmit line, and a discharge and multiplexer sub-circuit is further configured to transfer charge from the first flying capacitor and generate a current into the second transmit line to drive the second transmit line high during the negative phase of the clock when the data signal is low.
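The method claim above amounts to a two-phase flying-capacitor scheme: precharge a capacitor on one clock phase, then dump its charge onto a transmit line on the other. A rough lumped-element sketch of the charge transfer involved (my own toy model; the capacitor values are illustrative, and a real transmitter drives a terminated transmission line rather than a lumped load):

```python
# Toy charge-sharing model of one flying-capacitor discharge event.
# Assumed values (illustrative only): the flying capacitor c_fly is precharged
# to Vdd, then connected in parallel with a lumped line capacitance c_line.
def share(vdd, v_line, c_fly, c_line):
    # Total charge is conserved when the two capacitors are connected.
    q_total = c_fly * vdd + c_line * v_line
    return q_total / (c_fly + c_line)

v = 0.0
for _ in range(5):  # repeated clock phases pump the line voltage toward Vdd
    v = share(1.0, v, c_fly=1e-12, c_line=4e-12)
print(round(v, 3))  # 0.672: approaches Vdd geometrically, 1 - (4/5)**5
```

Each phase moves the line a fixed fraction of the remaining distance to Vdd, which is why the precharge/discharge phases alternate at the bit rate.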
TW101136887A (filed 2012-10-05, priority US13/361,843 of 2012-01-30): "Data-driven charge-pump transmitter for differential signaling"; published as TW201332290A (2013-08-01) and granted as TWI514767B (2015-12-21); the US counterpart was granted as US9338036B2.
Does basic QM allow for superluminal "particle movement" during wavefunction collapse?

Can particles move superluminally away from their "expected values" using basic quantum theory? Here's an example: The eigenstates of a harmonic oscillator are defined on $(-\infty, \infty)$. This means there's a nonzero chance of measuring the particle at any arbitrarily far distance (with exponentially small, but nonzero, probability). It seems as though nothing stops a wavefunction from collapsing arbitrarily far away from its expected value. For example, the ground state of a harmonic oscillator is: $$\psi_0(x) = C e^{-x^2}$$ with an expected value of $\langle x \rangle = 0$. When this wavefunction collapses, there's a probability of the particle "jumping" arbitrarily far away from its expected value. (As a side note, things don't make a ton of sense trying to get a hold of it simply: if we crudely define "speed" as $\frac{\Delta x}{\Delta t} = \frac{x_f - x_i}{t_f - t_i}$, and use $\langle x \rangle$ in place of $x_i$, then basically any nonzero value that our system collapses to implies some superluminal jump from the point where the wavefunction was still uncollapsed and had an expected value of zero.) You might argue that using the expected value as the particle's location is the "mistake" here, and that to identify superluminal travel I must make two separate measurements separated in time. But intuitively, the location of an electron in a ground state is very, very certain within a certain area. You would think that if it appeared to 'jump' very far away from that region, this would count as some sort of "motion". What if I have a grid of independent electrons, all in ground states?
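(To put a number on "exponentially small": in units where $\hbar = m = \omega = 1$, $|\psi_0(x)|^2 = e^{-x^2}/\sqrt{\pi}$, and the probability of a collapse landing beyond $|x| > d$ is exactly $\operatorname{erfc}(d)$. A quick sketch of my own:)

```python
import math

# Probability of finding the oscillator ground state beyond |x| > d,
# in units where |psi_0(x)|^2 = exp(-x**2) / sqrt(pi)  (hbar = m = omega = 1).
def tail_probability(d):
    return math.erfc(d)

print(tail_probability(1.0))   # ~0.157: one width out is still fairly likely
print(tail_probability(10.0))  # ~2.1e-45: "collapsing on the moon" scales like this
```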
I can individually prepare information in the z-components of their spin (spin up or down), and I can do so without disturbing their wavefunction in x (note: I'm talking about position, not the x-component of spin, which is an independent degree of freedom). Immediately after I prepare these states, I make a measurement of position for all of the electrons. Each electron has a super small but finite probability of collapsing very far away (say, on the moon). So there is a nonzero chance that my entire encoded message travels superluminally to a target destination. It's small, sure, but it's nonzero. This suggests to me that causality is only preserved probabilistically (the average can't violate causality, but individual events can), which seems a little bit too 'out there' to be true.

quantum-mechanics quantum-information faster-than-light causality wavefunction-collapse

Steven Sagona

If you had an equation for the wave function collapse, then we could say how it happened. As to me, I think each measurement gives a certain point, and many-many measurements represent a process of "collecting points" to figure out what $|\psi(x)|^2$ is, which is determined by the preparing apparatus. – Vladimir Kalitvianski Dec 2 '18 at 6:23

It has a different drift, but maybe you find this article interesting: arxiv.org/abs/quant-ph/0107025 (notice the author). – lalala Dec 2 '18 at 9:55

No, there is no superluminal movement. The "wave function collapse" is not a "movement". To define a movement, you would first need an initial position and a final position - i.e. two fixed real numbers (or real vectors) - and then to define its speed you moreover need the path taken between them to get the distance, which you divide by the time elapsed from the point at which the particle departs the initial position to the point it arrives at the final position.
There are many different quantum interpretations of what constitutes the precise physical meaning (or not) of the wave function, and they might all understand this in different ways, and all of them are, as far as we can tell, equivalent. However, the key here - which is independent of interpretation - is that in the collapse you do not have a movement as just defined. There is simply not a real-number value that we can say is the initial position, even if the final position can more "confidently" be assigned a value, since in this case it is obtained through a measurement. The expectation value is not a position you can say the particle is actually at any more than any other possible value (though this again may depend on your interpretation - if you use Bohmian mechanics, then there are "hidden" positions "underneath" the wave functions, but I'm speaking of the common mathematical framework); it's just the average that many repeated measurements on many copies of that quantum state will produce. Thus its change from one value to another, likewise, does not constitute a motion. You can't measure at the initial position, wait, and then measure again, because measurements change things in proportion to the amount of information they extract from the system. After you did that first measurement, your second one won't even be statistically the same anymore as the original scenario. The only way to measure a quantum system's physical parameters like position and not disturb it is to extract zero information, which is, as one can imagine, rather useless and would not allow you to define any notion of "motion". Causality is not violated. To properly treat causality, you need to upgrade from usual particle-and-wave QM to relativistic quantum field theory, because the former is essentially formulated on the background of Newtonian absolute space and time, while the latter takes the proper Einsteinian view.
In this case, quantum particles become energized states of a quantum field (thus how that "mass can be a form of energy"), and measurements are performed on the field. Thus each point in space gets its own quantum operator representing a field measurement (like holding a voltmeter between two points) at that point in space. Wave function collapse occurs to the field as a whole. The operators representing the field at space-like - i.e. "faster than light"-reachable, or causality-violating - positions commute, meaning that collapse of one does not lead to collapse in the other. (Namely the field wave functions for each operator only change phase, which is unobservable, but not magnitude.) The collapses occur within a "meta" probability space of field values, not physical space (e.g. a probability distribution of voltages you can measure with your voltmeter when it's connected between two points.). Conversely, from the viewpoint of basic particle-and-wave QM, there is no violation of causality either. Basic QM assumes Newtonian absolute space and time. There is no speed limit of causality any more than in classical mechanics. Neither theory predicts causality violations. It's just that the real-life Universe is not based on Newtonian absolute space and time, and we can only treat it as being such under suitably limited circumstances. Using the instant collapse of spatial wave function in particle-and-wave QM to argue for a causality violation is actually a fallacy of equivocation, equivocating on the meaning of space and time. You're having it play the role of both Newtonian and Einsteinian space-time at once. The problem of causality vs. relativity in QM is a done problem thanks to this and thanks to QFT, which is the correct framework, in terms of what we have available, to address this problem since it fundamentally is a relativity-related one and thus you need a proper relativistic version of QM to make any sense out of it. QFT is that relativistic version. 
It's only in interpretations where one tries to assign some hidden classical states "underneath", like Bohmian mechanics, that it might be a problem. (Generalizations of BM to QFT are significantly non-unique and subject to considerable dispute, and QFT, not QM, is the more fundamental theory.) Even then, posited causality violations would be unobservable because the mathematics that actually describes what you can observe is unchanged between interpretations. ("Interpretations" that do change it are not, strictly speaking, interpretations. They are separate physical theories.)

The_Sympathizer

Can particles move superluminally away from their "expected values" using basic quantum theory? It seems as though nothing stops a wavefunction from collapsing arbitrarily far away from its expected value. [...] This suggests to me that causality is only preserved probabilistically.

Yes, in the basic QM taught in a first course, particles can move faster than light. That is, wavefunctions can spread arbitrarily faster than light, and it's possible to detect a particle on Andromeda one second after it's detected on Earth, and these effects do not cancel out probabilistically. This is, of course, not in contradiction with relativity, because basic QM is explicitly nonrelativistic. It knows absolutely nothing about the speed of light. You only have causality in the theory of relativistic quantum mechanics, also known as quantum field theory. Unfortunately, this point is muddled because of the famous EPR thought experiment, which uses entangled spins. Since spins and spin measurements behave the same way in both nonrelativistic quantum mechanics (NRQM) and quantum field theory, one often treats the system in NRQM for simplicity. One can then prove that no information can be transmitted faster than light in NRQM using spin measurements alone.
(In fact, no information at all can be transmitted by spin measurements, faster or slower than $c$, because NRQM doesn't know what $c$ is.) Many textbooks and all popular books then oversimplify this to "no information can be transmitted faster than light in NRQM", but this is absolutely false. – knzhou

$\begingroup$ Of course you can encode information in spins, and transmit information by spin measurements. It's just the specific spin measurements in the EPR experiment that can't transmit information. You need to explain your answer better. $\endgroup$ – Peter Shor Dec 2 '18 at 12:55

$\begingroup$ @PeterShor I'm not familiar with that, can you give an example? $\endgroup$ – knzhou Dec 2 '18 at 12:56

$\begingroup$ Suppose I want to send you a bit. I prepare an electron with a $+\frac{1}{2}$ spin if I want to send a $0$ and a $-\frac{1}{2}$ spin if I want to send you a $1$. You measure the spin of the electron. I have just transmitted information to you via a spin measurement. I understand what you're trying to say (my measurements can't affect the probabilities of your measurements), but you've explained it very confusingly. $\endgroup$ – Peter Shor Dec 2 '18 at 13:04

$\begingroup$ @PeterShor Hmm, to me that's just another example of things moving in position space faster than $c$. The spin isn't even important, as you could e.g. send an electron or a positron. I was trying to say that the only source of nonlocality in NRQM is that things can move in position space faster than $c$, but it definitely is tricky to phrase that in terms of measurements. $\endgroup$ – knzhou Dec 2 '18 at 13:04

$\begingroup$ @PeterShor I want to say something like, spin measurements won't transmit information as long as the spin and position are decoupled. Maybe that's still not right, I need to think about it a bit more. $\endgroup$ – knzhou Dec 2 '18 at 13:06
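knzhou's point that NRQM wavefunctions spread with no speed limit can be made quantitative with a short calculation. Below is a minimal Python sketch (not from the thread itself; the width formula for a free Gaussian packet is standard, and the specific numbers are chosen purely for illustration):

```python
import math

# A free nonrelativistic Gaussian wave packet has position spread
#   sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0**2))**2),
# with no upper bound on the spreading speed; the Schroedinger
# equation knows nothing about c.
HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # kg (electron mass)
C = 299792458.0          # m/s

def sigma_t(sigma0, t, m=M_E):
    """Width of a free Gaussian packet of initial width sigma0 after time t."""
    return sigma0 * math.sqrt(1.0 + (HBAR * t / (2.0 * m * sigma0**2))**2)

def prob_outside_light_cone(sigma0, t, m=M_E):
    """P(|x| > c*t) for a packet that started centred at x = 0."""
    s = sigma_t(sigma0, t, m)
    return math.erfc(C * t / (s * math.sqrt(2.0)))

# A packet squeezed to 1e-13 m spreads (formally) faster than light,
# so a sizeable fraction of the probability ends up outside the light cone:
t = 1e-18  # s
p = prob_outside_light_cone(1e-13, t)
print(f"sigma(t) = {sigma_t(1e-13, t):.2e} m, c*t = {C * t:.2e} m")
print(f"P(|x| > c*t) = {p:.2f}")
```

For an electron localized this tightly, the nonrelativistic spreading speed ħ/(2mσ₀) already exceeds c, which is exactly the kind of superluminal spreading the answer describes; in QFT this pathology goes away.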
CommonCrawl
Computer Science Stack Exchange is a question and answer site for students, researchers and practitioners of computer science.

Can expected "depth" of an element and expected "height" differ significantly?

When analysing treaps (or, equivalently, BSTs or Quicksort), it is not too hard to show that $\qquad\displaystyle \mathbb{E}[d(k)] \in O(\log n)$ where $d(k)$ is the depth of the element with rank $k$ in the set of $n$ keys. Intuitively, this seems to imply that also $\qquad\displaystyle \mathbb{E}[h(T)] \in O(\log n)$ where $h(T)$ is the height of treap $T$, since $\qquad\displaystyle h(T) = \max_{k \in [1..n]} d(k)$. Formally, however, there does not seem to be an (immediate) relationship. We even have $\qquad\displaystyle \mathbb{E}[h(T)] \geq \max_{k \in [1..n]} \mathbb{E}[d(k)]$ by Jensen's inequality. Now, one can show expected logarithmic height via tail bounds, using more insight into the distribution of $d(k)$. It is easy to construct examples of distributions that screw with the above intuition, namely extremely asymmetric, heavy-tailed distributions. The question is, can/do such occur in the analysis of algorithms and data structures? Are there examples of data structures $D$ (or algorithms) for which $\qquad\displaystyle \mathbb{E}[h(D)] \in \omega(\max_{e \in D} \mathbb{E}[d(e)])$? Of course, we have to interpret "depth" and "height" liberally if we consider structures that are not trees. Based on the posts Wandering Logic links to, "expected average search time" (for $1/n \cdot \sum_{e \in D} \mathbb{E}[d(e)]$) and "expected maximum search time" (for $\mathbb{E}[h(D)]$) seem to be used. A related question on math.SE has yielded an interesting answer that may allow deriving useful bounds on $\mathbb{E}[h(D)]$ given suitable bounds on $\mathbb{E}[d(e)]$ and $\mathbb{V}[d(e)]$.
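For concreteness, the gap between $\max_k \mathbb{E}[d(k)]$ and $\mathbb{E}[h(T)]$ is easy to observe empirically. A quick simulation sketch in Python (inserting keys in uniformly random order into a plain BST, which yields the same tree distribution as a treap; the parameters are arbitrary):

```python
import random

def random_bst_depths(n, rng):
    """Insert a random permutation of 0..n-1 into a plain (unbalanced) BST
    and return {key: depth}; random insertion order gives the same tree
    distribution as a treap on n keys."""
    left, right = {}, {}
    keys = list(range(n))
    rng.shuffle(keys)
    root = keys[0]
    depth = {root: 0}
    for k in keys[1:]:
        node, d = root, 0
        while True:
            child = left if k < node else right
            if node in child:
                node, d = child[node], d + 1
            else:
                child[node] = k
                depth[k] = d + 1
                break
    return depth

rng = random.Random(1)
n, trials = 128, 200
sum_depth = [0.0] * n
sum_height = 0.0
for _ in range(trials):
    depth = random_bst_depths(n, rng)
    for k, d in depth.items():
        sum_depth[k] += d
    sum_height += max(depth.values())

max_expected_depth = max(s / trials for s in sum_depth)  # estimates max_k E[d(k)]
expected_height = sum_height / trials                    # estimates E[h(T)]
print(f"max_k E[d(k)] ~ {max_expected_depth:.2f}")
print(f"E[h(T)]       ~ {expected_height:.2f}")
```

Per trial $h(T) \geq d(k)$ for every $k$, so the second estimate is never below the first; the simulation shows the gap is noticeable even at moderate $n$, consistent with the known constants ($\approx 2\ln n$ vs. $\approx 4.31\ln n$ asymptotically).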
algorithms data-structures algorithm-analysis probability-theory average-case – Raphael♦

In chained hash tables with uniform hashing a similar question arises. Given a table with $n$ elements hashed into $m$ buckets, the expected number of elements per bucket is $n/m$, but what is the expected length of the longest chain? I found a Stackoverflow question about chained hash tables. The answer by btilly argues that for fixed ratio $n/m$ the expected worst-case longest chain is $\Theta(\log n/ \log\log n)$. (Since $n/m$ is fixed, $m$ doesn't need to come into the answer.) And the answer by templatetypedef references the following paper for a more detailed analysis: Gaston H. Gonnet: Expected Length of the Longest Probe Sequence in Hash Code Searching. J. ACM 28(2):289-304, 1981. [Free techreport version at University of Waterloo] Slightly more generally I found a math.stackexchange question about the balls and bins problem. The answer by Yuval Filmus cites the following paper, which gives even tighter bounds: Martin Raab; Angelika Steger: Balls into Bins - A Simple and Tight Analysis. 2nd Intl Wkshp on Randomization and Approximation Techniques in Computer Science, pp. 159-170, 1998. Finally, there is apparently a branch of statistics called extreme value theory culminating in the Fisher-Tippett-Gnedenko theorem, which, I think, gives expected maxima for a huge variety of distributions. – Wandering Logic

$\begingroup$ While nice, the Fisher-Tippett-Gnedenko theorem only applies to i.i.d. random variables; in this setting here we don't have that, and that's probably true for most comparable situations in data structure analysis. Or is it? $\endgroup$ – Raphael♦ Jun 23 '13 at 20:58 $\begingroup$ I really am out of my depth. i.i.d. seems to be an okay approximation in the hash-table case.
btilly's answer, for example, assumes that you have $m$ independent samples of a Poisson random variable with mean $n/m$, and argues that the approximation is "good enough" (for large $n$.) I started googling "extreme value theory" and it looks like extreme value theory for non-stationary processes is a pretty active area of research. $\endgroup$ – Wandering Logic Jun 24 '13 at 0:46
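btilly's $\Theta(\log n / \log\log n)$ figure for the longest chain is also easy to probe empirically. A minimal balls-into-bins sketch in Python (uniform hashing idealized as uniform random bin choices; the sizes and trial counts are arbitrary):

```python
import random
from collections import Counter

def longest_chain(n, m, rng):
    """Throw n balls into m bins uniformly at random; return the fullest bin's load."""
    counts = Counter(rng.randrange(m) for _ in range(n))
    return max(counts.values())

rng = random.Random(7)
results = {}
for n in (10**3, 10**4, 10**5):
    trials = [longest_chain(n, n, rng) for _ in range(30)]
    results[n] = sum(trials) / len(trials)
    print(f"n = m = {n:>6}: avg longest chain ~ {results[n]:.1f}")
# The averages grow, but much more slowly than log n, which is consistent
# with the Theta(log n / log log n) bound for the expected maximum load.
```

Note the contrast with the question's setup: the average load here is constant ($n/m = 1$), yet the expected maximum still diverges slowly.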
Analysis & PDE, Volume 10, Number 6 (2017), 1455–1495. Structure of sets which are well approximated by zero sets of harmonic polynomials. Matthew Badger, Max Engelstein, and Tatiana Toro. The zero sets of harmonic polynomials play a crucial role in the study of the free boundary regularity problem for harmonic measure. In order to understand the fine structure of these free boundaries, a detailed study of the singular points of these zero sets is required. In this paper we study how "degree-k points" sit inside zero sets of harmonic polynomials in $\mathbb{R}^n$ of degree d (for all n ≥ 2 and 1 ≤ k ≤ d) and inside sets that admit arbitrarily good local approximations by zero sets of harmonic polynomials. We obtain a general structure theorem for the latter type of sets, including sharp Hausdorff and Minkowski dimension estimates on the singular set of degree-k points (k ≥ 2) without proving uniqueness of blowups or aid of PDE methods such as monotonicity formulas. In addition, we show that in the presence of a certain topological separation condition, the sharp dimension estimates improve and depend on the parity of k. An application is given to the two-phase free boundary regularity problem for harmonic measure below the continuous threshold introduced by Kenig and Toro.
Received: 1 February 2017 First available in Project Euclid: 16 November 2017 Permanent link to this document https://projecteuclid.org/euclid.apde/1510843529 doi:10.2140/apde.2017.10.1455 Mathematical Reviews number (MathSciNet) MR3678494 Primary: 33C55: Spherical harmonics 49J52: Nonsmooth analysis [See also 46G05, 58C50, 90C56] Secondary: 28A75: Length, area, volume, other geometric measure theory [See also 26B15, 49Q15] 31A15: Potentials and capacity, harmonic measure, extremal length [See also 30C85] 35R35: Free boundary problems Reifenberg-type sets harmonic polynomials Łojasiewicz-type inequalities singular set Hausdorff dimension Minkowski dimension two-phase free boundary problems harmonic measure NTA domains Badger, Matthew; Engelstein, Max; Toro, Tatiana. Structure of sets which are well approximated by zero sets of harmonic polynomials. Anal. PDE 10 (2017), no. 6, 1455--1495. doi:10.2140/apde.2017.10.1455. https://projecteuclid.org/euclid.apde/1510843529 F. J. Almgren, Jr., "Dirichlet's problem for multiple valued functions and the regularity of mass minimizing integral currents", pp. 1–6 in Minimal submanifolds and geodesics: proceedings of the Japan-United States Seminar (Tokyo, 1977), edited by M. Obata, North-Holland, New York, 1979. Mathematical Reviews (MathSciNet): MR574247 Zentralblatt MATH: 0439.49028 S. Axler, P. Bourdon, and W. Ramey, Harmonic function theory, 2nd ed., Graduate Texts in Mathematics 137, Springer, 2001. Mathematical Reviews (MathSciNet): MR1805196 J. Azzam and M. Mourgoglou, "Tangent measures and absolute continuity of harmonic measure", preprint, 2015. To appear in Rev. Mat. Iberoam. arXiv: 1507.00926 J. Azzam, M. Mourgoglou, X. Tolsa, and A. Volberg, "On a two-phase problem for harmonic measure in general domains", preprint, 2016. J. Azzam, S. Hofmann, J. M. Martell, K. Nyström, and T. Toro, "A new characterization of chord-arc domains", J. Eur. Math. Soc. $($JEMS$)$ 19:4 (2017), 967–981.
Zentralblatt MATH: 06705845 Digital Object Identifier: doi:10.4171/JEMS/685 J. Azzam, M. Mourgoglou, and X. Tolsa, "Mutual absolute continuity of interior and exterior harmonic measure implies rectifiability", Comm. Pure Appl. Math. (online publication February 2017). M. Badger, "Harmonic polynomials and tangent measures of harmonic measure", Rev. Mat. Iberoam. 27:3 (2011), 841–870. Digital Object Identifier: doi:10.4171/RMI/654 Project Euclid: euclid.rmi/1312906779 M. Badger, "Null sets of harmonic measure on NTA domains: Lipschitz approximation revisited", Math. Z. 270:1-2 (2012), 241–262. Digital Object Identifier: doi:10.1007/s00209-010-0795-1 M. Badger, "Flat points in zero sets of harmonic polynomials and harmonic measure from two sides", J. Lond. Math. Soc. $(2)$ 87:1 (2013), 111–137. Digital Object Identifier: doi:10.1112/jlms/jds041 M. Badger and S. Lewis, "Local set approximation: Mattila–Vuorinen type sets, Reifenberg type sets, and tangent sets", Forum Math. Sigma 3 (2015), art. id. e24. Digital Object Identifier: doi:10.1017/fms.2015.26 G. Beer, Topologies on closed and closed convex sets, Mathematics and its Applications 268, Kluwer Academic Publishers Group, Dordrecht, 1993. C. J. Bishop, "Some questions concerning harmonic measure", pp. 89–97 in Partial differential equations with minimal smoothness and applications (Chicago, IL, 1990), edited by B. Dahlberg et al., IMA Vol. Math. Appl. 42, Springer, 1992. S. Bortz and S. Hofmann, "A singular integral approach to a two phase free boundary problem", Proc. Amer. Math. Soc. 144:9 (2016), 3959–3973. Digital Object Identifier: doi:10.1090/proc/13035 L. Capogna, C. E. Kenig, and L. Lanzani, Harmonic measure: geometric and analytic points of view, University Lecture Series 35, Amer. Math. Soc., Providence, RI, 2005. J. Cheeger, A. Naber, and D. Valtorta, "Critical sets of elliptic equations", Comm. Pure Appl. Math. 68:2 (2015), 173–209. Digital Object Identifier: doi:10.1002/cpa.21518 G. David and T. 
Toro, "Reifenberg flat metric spaces, snowballs, and embeddings", Math. Ann. 315:4 (1999), 641–710. Digital Object Identifier: doi:10.1007/s002080050332 G. David and T. Toro, Reifenberg parameterizations for sets with holes, Mem. Amer. Math. Soc. 1012, Amer. Math. Soc., Providence, RI, 2012. Digital Object Identifier: doi:10.1090/S0065-9266-2011-00629-5 G. David, C. Kenig, and T. Toro, "Asymptotically optimally doubling measures and Reifenberg flat sets with vanishing constant", Comm. Pure Appl. Math. 54:4 (2001), 385–449. Digital Object Identifier: doi:10.1002/1097-0312(200104)54:4<385::AID-CPA1>3.0.CO;2-M M. Engelstein, "A two-phase free boundary problem for harmonic measure", Ann. Sci. Éc. Norm. Supér. $(4)$ 49:4 (2016), 859–905. Digital Object Identifier: doi:10.24033/asens.2297 J. B. Garnett and D. E. Marshall, Harmonic measure, New Mathematical Monographs 2, Cambridge University Press, 2005. D. Girela-Sarrión and X. Tolsa, "The Riesz transform and quantitative rectifiability for general Radon measures", preprint, 2016. Q. Han, "Nodal sets of harmonic functions", Pure Appl. Math. Q. 3:3 (2007), 647–688. Digital Object Identifier: doi:10.4310/PAMQ.2007.v3.n3.a2 D. S. Jerison and C. E. Kenig, "Boundary behavior of harmonic functions in nontangentially accessible domains", Adv. in Math. 46:1 (1982), 80–147. Digital Object Identifier: doi:10.1016/0001-8708(82)90055-X P. W. Jones, "Rectifiable sets and the traveling salesman problem", Invent. Math. 102:1 (1990), 1–15. Digital Object Identifier: doi:10.1007/BF01233418 C. E. Kenig and T. Toro, "Free boundary regularity for harmonic measures and Poisson kernels", Ann. of Math. $(2)$ 150:2 (1999), 369–454. Digital Object Identifier: doi:10.2307/121086 C. Kenig and T. Toro, "Free boundary regularity below the continuous threshold: 2-phase problems", J. Reine Angew. Math. 596 (2006), 1–44. Digital Object Identifier: doi:10.1515/CRELLE.2006.050 C. Kenig, D. Preiss, and T. 
Toro, "Boundary structure and size in terms of interior and exterior harmonic measures in higher dimensions", J. Amer. Math. Soc. 22:3 (2009), 771–796. Digital Object Identifier: doi:10.1090/S0894-0347-08-00601-2 J. Kollár, "An effective Łojasiewicz inequality for real polynomials", Period. Math. Hungar. 38:3 (1999), 213–221. Digital Object Identifier: doi:10.1023/A:1004806609074 J. L. Lewis, G. C. Verchota, and A. L. Vogel, "Wolff snowflakes", Pacific J. Math. 218:1 (2005), 139–166. Digital Object Identifier: doi:10.2140/pjm.2005.218.139 H. Lewy, "On the minimum number of domains in which the nodal lines of spherical harmonics divide the sphere", Comm. Partial Differential Equations 2:12 (1977), 1233–1244. Digital Object Identifier: doi:10.1080/03605307708820059 A. Logunov and E. Malinnikova, "On ratios of harmonic functions", Adv. Math. 274 (2015), 241–262. Digital Object Identifier: doi:10.1016/j.aim.2015.01.009 S. Łojasiewicz, "Sur le problème de la division", Studia Math. 18 (1959), 87–136. Digital Object Identifier: doi:10.4064/sm-18-1-87-136 P. Mattila, Geometry of sets and measures in Euclidean spaces: fractals and rectifiability, Cambridge Studies in Advanced Mathematics 44, Cambridge University Press, 1995. P. Mattila and M. Vuorinen, "Linear approximation property, Minkowski dimension, and quasiconformal spheres", J. London Math. Soc. $(2)$ 42:2 (1990), 249–266. Digital Object Identifier: doi:10.1112/jlms/s2-42.2.249 P. Mörters and Y. Peres, Brownian motion, Cambridge Series in Statistical and Probabilistic Mathematics 30, Cambridge University Press, 2010. A. Naber and D. Valtorta, "Volume estimates on the critical sets of solutions to elliptic PDEs", preprint, 2014. arXiv: 1403.4176 T. S. Phạm, "An explicit bound for the Łojasiewicz exponent of real polynomials", Kodai Math. J. 35:2 (2012), 311–319. Digital Object Identifier: doi:10.2996/kmj/1341401053 Project Euclid: euclid.kmj/1341401053 D.
Preiss, "Geometry of measures in $\mathbb{R}^n$: distribution, rectifiability, and densities", Ann. of Math. $(2)$ 125:3 (1987), 537–643. Digital Object Identifier: doi:10.2307/1971410 E. R. Reifenberg, "Solution of the Plateau Problem for $m$-dimensional surfaces of varying topological type", Acta Math. 104:1-2 (1960), 1–92. Project Euclid: euclid.acta/1485889149 R. T. Rockafellar and R. J.-B. Wets, Variational analysis, Grundlehren der Mathematischen Wissenschaften 317, Springer, 1998. A. Szulkin, "An example concerning the topological character of the zero-set of a harmonic function", Math. Scand. 43:1 (1978), 60–62. Digital Object Identifier: doi:10.7146/math.scand.a-11763
Monday, 2 March 2015

Bengtsson and Pierrehumbert: Back Radiation, Fossil Fuel and Pre-Industrial Hell

Lennart Bengtsson is Sweden's leading climate scientist and main author of the Statement by the Royal Swedish Academy of Sciences on the Scientific Basis of Climate Change, which is the scientific rationale of Swedish climate politics aiming at a fossil free society by 2050. When I ask Bengtsson, as the top Swedish expert, about the scientific origin and documentation of the element of radiative heat transfer from the atmosphere to the Earth's surface named "back radiation", which serves as a key element to support CO2 global warming alarmism, Bengtsson responds by saying that he is not able to give an answer and then kindly suggests that I instead pose my questions to Raymond Pierrehumbert, presently holding the King Carl XVI Gustaf Visiting Professorship in Environmental Science at the University of Stockholm: Chance took Ray Pierrehumbert to Stockholm, where he fell for the city and for Sweden. Holding the King's environmental professorship, he now works to illuminate the complex systems that explain the earth's climate and to get decision makers to switch to a fossil free society. "We are most likely going to pass the point where the earth's climate will be two degrees warmer." According to Ray Pierrehumbert, that does not mean it's automatically a disaster for humanity, but the risk of a catastrophic development increases the more the temperature rises. "The decisions we make today determine how the climate will be for the next ten thousand years!" "When the total amount of coal burned and ended up in the atmosphere amounts to three trillion tonnes, the earth's average temperature rises by two degrees. So far, we have transferred two trillion tonnes of carbon into the atmosphere. Reducing the consumption of fossil fuels is humanity's greatest challenge." Ray Pierrehumbert practices what he preaches.
He has not owned a car in 20 years, he travels by public transport or by bicycle, lives in a small apartment and does not buy things unnecessarily. The problem from a climate perspective for him, and many other researchers, is air travel. But he is trying to use telephone conferencing and Skype as much as possible. Accordingly, he has informed King Carl XVI Gustaf that heating of the Royal Castle will no longer be allowed. We read that Prof Pierrehumbert has invested heavily in CO2 alarmism. When Bengtsson suggests that I ask Prof Pierrehumbert to check if my analysis of "back radiation" as non-physics is correct, he can safely count on a negative answer or no answer. This is clever, but why does Bengtsson bother at all? Why does Bengtsson not simply stick to his earlier statement that my work is BS? Why involve Pierrehumbert? Here you can listen to Pierrehumbert crushing Lindzen, Spencer and Christy. What would he say about LB? Or watch: We are climate scientists, Chicago style. A fossil free Sweden may be possible, with our nuclear and hydro power, but what about the rest of the world, now surviving on 80% fossil energy? How much will the human population have to be reduced to save itself by reducing fossil fuel use to zero by 2050? What if the King would read The Moral Case for Fossil Fuels by Alex Epstein: Renouncing oil and its byproducts would plunge civilization into a pre-industrial hell—a fact developing countries keenly realize. Following Pierrehumbert, Sweden will be returned to a restart as an early developing country, without even a King.
PS1 Here is a copy of a letter I sent to Profs Raymond Pierrehumbert and Henning Rodhe, KVA:

Dear Professors Raymond Pierrehumbert and Henning Rodhe, Upon suggestion by Prof Lennart Bengtsson I want to direct your attention to an analysis of the concept of "back radiation", as a key element in the standard description of atmospheric radiative heat transfer summarized in the Kiehl-Trenberth energy budget, which I present in detail on the web site https://computationalblackbody.wordpress.com and in posts on http://claesjohnson.blogspot.se under the categories "myth of back radiation" and "DLR". The analysis is based on a new proof of Planck's radiation law which indicates that "back radiation" has no physical reality. This is in direct contradiction to the Kiehl-Trenberth energy budget, where "back radiation" has a key role. I asked Prof Bengtsson about the original scientific sources documenting "back radiation" as a real physical phenomenon, but I did not receive an answer. Instead Prof Bengtsson suggested that I contact Prof Pierrehumbert, as a world-leading expert on planetary climate, and Prof Rodhe, in charge of the new statement on the scientific basis of climate change to be made by the Royal Swedish Academy of Sciences. I thus pose the same question to you and I hope to get a clear answer. Since "back radiation" is such a fundamental part of the standard conception of CO2 global warming, documentation must exist. I also hope that you will seriously consider my evidence of the non-physics of "back radiation", and of course I am willing to present this evidence directly to you in a meeting to get your view, if you would prefer. Fossil free society.
PS2 Key posts on the non-physics of Back Radiation and Downwelling Longwave Radiation (DLR) are: The Inventors of DLR and thus GHE; Josef Stefan: Back Radiation and DLR Pure Fiction; The Big DLR/Back Radiation Bluff; The Big Bluff of CO2 Alarmism: DLR.

PS3 Recall that Stefan-Boltzmann's radiation law for radiative transfer of heat energy $E>0$ between two blackbodies of temperature $T_1$ and $T_2$ with $T_1>T_2$ reads:

$E =\sigma\times (T_1^4-T_2^4)$, (1)

where $\sigma$ is the Stefan-Boltzmann constant. Note that with $T_2=0$, (1) takes the form $E=\sigma\times T_1^4$ as the radiation from a blackbody of temperature $T_1$ into a background at 0 K. Back radiation/DLR results from a misrepresentation of (1) of the form

$E = E_1-E_2\equiv\sigma\times T_1^4 - \sigma\times T_2^4$, (2)

where $E$ comes out as the difference between two fictitious opposite heat transfers $E_1$ and $E_2$. Here $E_1$ and $E_2$ are fictitious because they represent heat transfers into a background of 0 K, which is not the case. Thus the algebraic decomposition of (1) into (2) has no physical reality as a decomposition of one-way heat transfer into a net transfer of two opposite heat transfers, because the transfer from cold to warm violates the 2nd law of thermodynamics. What is possible with symbols on a piece of paper does not have to correspond to any reality. The fate of human civilization may depend on understanding this basic fact of science.

PS4 Read the Wall Street Journal: Political Assault on Climate Skeptics. Of course I do not expect to get any response from Pierrehumbert and Rodhe. Climate skeptics are subject to suppression not only in the US.

PS5 Notice that Svante Arrhenius, the Swedish semi-god of CO2 alarmism, uses (1) and not (2) in his legendary article from 1896 with the triggering title On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground, as observed in an earlier post from 2010.
PS6 The "effective blackbody temperature" of the Earth+atmosphere system is 255 K, as the temperature of a blackbody emitting the 240 W/m2 absorbed by the Earth+atmosphere out of a total of 340 W/m2 coming in from the Sun, mostly as visible and ultraviolet light with a smaller portion as infrared. The "effective emission altitude" is about 5 km, as the altitude where the temperature is 255 K. The difference of 33 K between the ground temperature of 288 K and 255 K is commonly termed the "total greenhouse effect", as the ground temperature difference between an Earth with and without an atmosphere. But to compare Earths with and without atmosphere is not reasonable, unless you want to find a "greenhouse effect" which is as large as possible. It is more reasonable to compare Earths with different "atmospheric windows", the window being the part of the total emission from the Earth-atmosphere system emitted from the ground. With a full window and a ground albedo of 0.3, the ground temperature emitting the 240 W/m2 would be about 280 K. The corresponding "total greenhouse effect" would thus be 8 K, to be compared with the standard estimate of 33 K. Closing the current window of 40 W/m2 would then increase the effect by 2 K. With a "total greenhouse effect" of 8 K instead of 33 K, perturbations in the greenhouse effect will be correspondingly smaller: the 1 K upon doubling of CO2 in the standard perspective would be reduced to 0.25 C and thus not be measurable. This analysis can be viewed as reflecting a "double albedo", a first from absorption of sunlight and a second from emission of infrared, as a double "transaction cost".
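The arithmetic behind (1), (2) and the 255 K figure in PS6 can be checked in a few lines of Python; this is a plain numerical check only, with $\sigma \approx 5.67\times 10^{-8}$ W/m2/K4 assumed:

```python
# Numerical check of the Stefan-Boltzmann arithmetic in PS3 and PS6.
SIGMA = 5.67e-8  # W/m^2/K^4

def sb_net(T1, T2):
    """One-way radiative heat transfer (1): E = sigma*(T1^4 - T2^4)."""
    return SIGMA * (T1**4 - T2**4)

# The algebraic split (2) reproduces the same net number; the question
# raised in the post is whether the two opposite "flows" are physical
# or mere bookkeeping, which arithmetic alone cannot decide.
T1, T2 = 288.0, 255.0
E = sb_net(T1, T2)
E_split = SIGMA * T1**4 - SIGMA * T2**4
print(f"E = {E:.1f} W/m^2, split form = {E_split:.1f} W/m^2")

# Effective blackbody temperature emitting the absorbed 240 W/m^2:
T_eff = (240.0 / SIGMA) ** 0.25
print(f"T_eff = {T_eff:.1f} K")  # close to the 255 K quoted above
```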
Labels: Lennart Bengtsson, myth of backradiation

Christopher Hoen-Sorteberg, 4 March 2015 10:24: An often cited paper by Ray Pierrehumbert is "Infrared radiation and planetary temperature" (http://geosci.uchicago.edu/~rtp1/papers/PhysTodayRT2011.pdf). The following phrases are from the section of the paper defining "A Few Fundamentals": "At planetary energy densities, photons do not significantly interact with each other; their distribution evolves only through interaction with matter. The momentum of atmospheric photons is too small to allow any significant portion of their energy to go directly into translational kinetic energy of the molecules that absorb them. Instead, it goes into changing the internal quantum states of the molecules. A photon with frequency ν has energy hν, so for a photon to be absorbed or emitted, the molecule involved must have a transition between energy levels differing by that amount. Coupled vibrational and rotational states are the key players in IR absorption. An IR photon absorbed by a molecule knocks the molecule into a higher-energy quantum state. Those states have very long lifetimes, characterized by the spectroscopically measurable Einstein A coefficient. For example, for the CO2 transitions that are most significant in the thermal IR, the lifetimes tend to range from a few milliseconds to a few tenths of a second. In contrast, the typical time between collisions for, say, a nitrogen-dominated atmosphere at a pressure of 10^4 Pa and temperature of 250 K is well under 10^-7 s. Therefore, the energy of the photon will almost always be assimilated by collisions into the general energy pool of the matter and establish a new Maxwell–Boltzmann distribution at a slightly higher temperature. That is how radiation heats matter in the LTE limit." As I read the paper, this is how Pierrehumbert describes the mechanisms for absorption of IR by CO2 and the transfer of the IR energy to the other constituents of the atmosphere.
I am not able to follow his reasoning. To me it seems to be a kind of adjusting the physics to fit his cause, by using elements from quantum mechanics in a rather "artistic" manner. I would appreciate your comments on the paper in general, and on the way he applies basic physics in particular.

Claes Johnson, 4 March 2015 16:04: Notably, the total effect of CO2 is claimed to be 1/3 of the "greenhouse effect" of 33 C, however without any support but hand waving. Neither is any estimate made of the warming effect of the CO2 "ditch".

Jane, 4 March 2015 17:46: The problem with the actual world is the fact that fossil fuels will be used in the future even more intensely than today, because many countries (especially developing ones), due to an increased demand for energy of any kind (fossil + green) produced by overpopulation, will burn more coal, oil and gas to cover this increased energy demand. I saw a recent prediction for the year 2035 that shows how the global energy mix will look twenty years from now: http://www.alternative-energies.net/a-prediction-regarding-the-global-electricity-mix-in-2035/ Luckily, developed areas like North America, the EU, Eurasia and Japan will decrease their fossil fuel usage, so we might have a chance.
First large-scale ethnobotanical survey in the province of Uíge, northern Angola

Thea Lautenschläger (ORCID: orcid.org/0000-0003-4013-9456), Mawunu Monizi, Macuntima Pedro, José Lau Mandombe, Makaya Futuro Bránquima, Christin Heinze & Christoph Neinhuis

Journal of Ethnobiology and Ethnomedicine, volume 14, Article number: 51 (2018)

Angola suffered a long-lasting military conflict. Therefore, traditional knowledge of plant usage is still an important part of cultural heritage, especially concerning the still very poor health care system in the country. Our study documents for the first time, on a large scale, the traditional knowledge of plant use of local Bakongo communities in the northern province of Uíge, with a focus on medicinal plants, and relates the data to the parameters age, gender and distance to the provincial capital. Field work was carried out during nine field trips in 13 municipalities between October 2013 and October 2016. In 62 groups, 162 informants were interviewed. Herbarium specimens were taken for later identification. The database was analysed using the Relative Frequency of Citations, the Cultural Importance Index, and the Informant Consensus Factor. Furthermore, the significance of the influence of age, gender and distance was calculated. Our study presents 2390 use-reports, listing 358 species in 96 plant families. While just three of the 358 species mentioned are endemic to Angola, about one-fifth are neophytes. The larger the distance, the higher the number of use citations of medicinal plants. Although women represent just a fifth of all citations (22%), their contribution to medicinal plants was proportionally even higher (83%) than that of men (74%). Fifty percent of all plants mentioned in the study were listed only by men, 12% only by women. We made some new discoveries; for example, Gardenia ternifolia seems to be promising for the treatment of measles, and Annona stenophylla subsp.
cuneata has never been investigated ethnobotanically or phytochemically. Although the study area is large, no significant influence of distance on the species composition used in the traditional healers' concepts of the respective villages could be detected. Although several plants were mentioned only by women or only by men, no significant restriction of medicinal plant use to gender-specific illnesses could be found. Only with respect to the age of informants could a slight shift be detected. Visual representation of the ethnobotanical study in Uíge, northern Angola. Angola is regarded as a country with an unusually rich biodiversity covering a large number of vegetation zones and habitats [1, 2]. Although several botanists, among them Friedrich Welwitsch (1806–1872), Hugo Baum (1867–1950) and John Gossweiler (1873–1952), visited and studied this richness, the war that lasted 40 years did not allow continuous botanical or ethnobotanical investigations [1]. Bossard (1987, 1993) investigated Ovimbundu traditional medicine, listing plant names only in the Ovimbundu language without identifying botanical species [3, 4]. Nowadays, the considerable work of Figueiredo and Smith [1], a plant checklist for the country with about 7000 species, represents a useful database for subsequent and future studies. While quite a number of surveys were conducted in southern Angola, just a few are located in the northern part [5, 6]. Göhre et al. [7] collected ethnobotanical data in disturbed areas around the city of Uíge. Monizi et al. [8] described a wide variety of wild plants used for securing human survival in Ambuila, one of the 16 municipalities of the province of Uíge [8]. Heinze et al. [9] conducted the first ethnobotanical studies in the neighbouring province of Cuanza Norte. Specific descriptions of fibre uses were given by Senwitz et al. [10].
According to the distribution of the Bakongo ethnic group, covering northern Angola as well as the adjacent Bas-Congo area, ethnobotanical studies conducted in the Democratic Republic of Congo should reveal results comparable to ethnobotanical uses in Angola [11]. Traditional knowledge is essential for a healthy cultural and social life within a society [12]. It is generally assumed that indigenous traditional knowledge is being lost because it is, at least partly, no longer essential for the survival of people. This is either due to influences such as the rapid development of rural areas or because of the displacement of indigenous people [13, 14]. Although several infrastructure measures were undertaken in Angola, development is still slow, especially regarding the public health sector. Even if child mortality in Africa decreased during the last two decades, it is still very high. More specifically, Angola has the highest rate in Africa and worldwide and, after Sierra Leone, the lowest life expectancy for women and men worldwide [15, 16]. Sousa-Figueiredo et al. [17] identified malnutrition and anaemia as public health problems. Smith et al. [18] documented that the overall prevalence of malnutrition is higher in rural than in urban areas. In this context, ethnobotanical studies in northern Angola seemed not only reasonable in terms of documenting the current state but urgently needed to record still-existing knowledge. Furthermore, Moyo et al. [19] stated that the rich flora of sub-Saharan Africa suggests enormous potential for the discovery of new chemical components with therapeutic value. In our large-scale survey in the northern province of Uíge, covering about 60,000 km2, 13 out of 16 municipalities were visited, including both savannah and forest formations.
Therefore, this study for the first time (i) provides an overview of traditional plant uses and healing methods in the province of Uíge, (ii) highlights native as well as introduced plant species used in traditional medicine, and (iii) analyses the influence of gender, age and distance from the provincial capital Uíge with regard to uses and methods. The studies were conducted in the province of Uíge, located in the very north of Angola, bordering the Democratic Republic of the Congo in the north and east, the provinces of Malanje, Cuanza Norte, and Bengo in the south, and Zaire province in the west (Fig. 1). According to the Köppen climate classification, the province has a tropical wet-and-dry or savannah climate (Aw) [20, 21]. This so-called Guineo-Congolian rainforest climate is characterized by a rainy season lasting at least 6 months, relative air humidity above 80% and typical dense fog, locally called Cacimbo [22,23,24]. According to the global ecoregions map defined by the World Wildlife Fund (WWF), the province of Uíge belongs to the ecoregion called the Western Congolian Forest-Savannah Mosaic [25]. A more precise description of the region was given by White [24], who placed northern Angola between the Guineo-Congolian and the Zambezian Regions, i.e., in the Guineo-Congolian/Zambezian Regional Transition Zone. According to that classification, this zone is characterized by a high complexity, since elements of both formations are present. Edaphic conditions and a diverse topography strongly influence the formation of the distinctive patterns of mosaic vegetation shown in Fig. 1c. Barbosa [26] subdivided the area into six vegetation zones, shown in Fig. 1d.
a Location of Angola in Africa, b province of Uíge in Angola, c mosaic of forest and savannah patches in the municipality of Ambuila, d map of the study area with vegetation zones, collection sites marked with black dots and circles representing the distance to Uíge city: inner circle ≤ 160 km, outer circle > 160 km; vegetation zones according to Barbosa [26], Carta fitogeográfica de Angola, Instituto de Investigação Científica de Angola, Luanda. Graphic: Andreas Kempe The long-lasting war in Angola had a highly negative impact on biodiversity [27]. But even prior to the conflicts, several species of economic value on international timber markets, like Milicia excelsa (Welw.) C.C.Berg or species of Entandrophragma, were historically exploited and are still under increasing pressure [22]. This rising forest loss is confirmed by global analyses of satellite data [28]. Moyo et al. [19] calculated that only 15% of the Guinean Forests in West Africa remain. On the other hand, the National Report on forest resources by the FAO [29], based on data captured by Horsten, reported not more than 4% of the Uíge area as productive [7, 30]. Besides deforestation, Göhre et al. [7] reported uncontrolled burning caused by growing agricultural activities. Hence, large areas are heavily disturbed anthropogenically, resulting in an increased abundance of Zambezian floristic elements following the destruction of the original vegetation, leaving only secondary grass- and woodland [24]. Recordings in the remaining forest patches exhibit tropical rainforest and savannah species assemblages comparable with those of the Bas-Congo region [11, 31, 32]. Since the vegetation formations are very heterogeneous, traditional use of plants is prevalent and manifold. The province comprises 16 municipalities, covering an area of 58,698 km2 inhabited by more than 1.4 million people [33], the majority of whom belong to the Kikongo-speaking Bakongo ethnic group [33].
As this Bantu group also lives in the neighbouring countries Democratic Republic of the Congo, Republic of the Congo, and Gabon, manifold influences caused by migration due to political problems and conflicts are part of its culture. Very little is known about the health care system in Angola. The contribution of faith-based organizations to Angola's health care system is very low compared to other sub-Saharan countries [34]. In turn, the government is cutting the health budget due to falling oil prices [35]. The lack of health infrastructure, especially in rural areas, is a serious problem, resulting in the continuing importance of traditional healers and herbal medicines [36]. Data sampling was carried out between 5° 58′ 59.2″ and 7° 56′ 59.4″ southern latitude and between 14° 33′ 53.7″ and 16° 17′ 04.5″ eastern longitude, covering 35 localities in 13 municipalities (Fig. 1). According to the distance from the provincial capital Uíge, two distance levels A (≤ 160 km) and B (170–330 km) were defined. During nine field trips between October 2013 and October 2016, 162 informants were involved in the study; 30 of them were interviewed on their own, and 132 were interviewed in groups of two to five persons, bringing the total number of interviews to 62. In advance, the University Kimpa Vita prepared credentials to inform the mayors of the municipalities about the planned activities. To establish contact with potential informants, the local authorities of the visited villages (called soba and seculo) were informed about the aims and methods of the study and asked to suggest persons with experience in traditional medicine who might participate (prior informed consent). Hence, all interviews were conducted with at least one traditional herbalist, sometimes accompanied by laypeople. We aimed at a gender-balanced study without violating cultural and/or sacred taboos [37]. The level of detail of the knowledge obtained varied from location to location and from person to person.
Information was collected during semi-structured interviews, transect walks and group discussions [38]. The criteria used to define the reported uses are based on the informants' statements. Since Silva et al. [39] recommended vegetation inventories to guarantee correct identification of species and better recognition by informants, walks into the traditionally used plant collecting areas were always part of the interviews, including forest and savannah formations, since these two habitats alternate very frequently. During fieldwork, Portuguese was mainly used; in some cases, however, Angolan colleagues translated into Kikongo. Gender and age of every informant were documented wherever possible. In those cases where informants did not know their exact age, it was estimated whether the person was younger or older than 40. The following data were requested: local plant name, usage, used plant part and preparation techniques. In the case of medicinal plants, administration techniques were also documented. Local market surveys and field trips for collecting herbarium specimens completed the investigations. All processes of the surveys were permitted and accompanied by the local authorities. Following the advice of Ramirez [14] to allow a better contribution and exchange of knowledge, we invited several informants to our presentations and discussions at the University Kimpa Vita in Uíge city. The code of ethics of the International Society of Ethnobiology was followed. The study was carried out in compliance with the agreement on Access and Benefit Sharing. For identification, plants were photographed and voucher specimens were collected, dried and stored at the Dresden herbarium (Herbarium Dresdense), Technische Universität Dresden, Germany.
In a Memorandum of Understanding between the Instituto Nacional da Biodiversidade e Áreas de Conservação (INBAC), Angola, and the Technische Universität Dresden, Germany, signed in 2014, it was agreed that duplicates will be returned to Angola as soon as appropriate conditions to store the herbarium vouchers are established. The Ministry of Environment of Angola and the Provincial Government of Uíge issued the required collection and export permits. Identification of the collected plant specimens and data analysis were completed in Dresden, Germany. For identification, several floristic works were used: Conspectus Florae Angolensis [40], Plantas de Angola [1], Flore Analytique du Bénin [41], Flora of West Tropical Africa [42,43,44,45,46], and Flora Zambesiaca [47]. Additional information was retrieved from the Kew Herbarium Catalogue [48] and the Naturalis Biodiversity Center [49]. Furthermore, specialists were consulted for some plant families. The Herbario LISC and Herbario COI were visited in July 2016 and 2017 for comparing plant samples [50]. Use-reports of identified plants were only included in the results if the specimen was determined at least to genus level. The nomenclature used follows Plantlist.org. Voucher specimen numbers of the Herbarium Dresdense as well as photo voucher numbers are given in Table 1. Due to the poor availability of data on endangered species, Table 1 includes only additional details on endemism and neophyte status. Table 1 Overview of all collected and identified useful plants from the province of Uíge: species listed alphabetically; additional information on usage, used plant part (PP), preparation and administration, use category (UC), number of citations and number of informants. Species information provided: Origin: E = endemic; + = listed; (-) = not listed; * = naturalised according to Plants of Angola (Figueiredo and Smith, 2008); vernacular names in Portuguese (Port.)
and Kikongo (Kik.); voucher number according to Herbarium Dresdense or photo voucher (F); Plant parts: B = bark, BU = bulb, F = fruit, FL = flower, L = leaf, LA = latex, MY = mycelium, R = root, RE = resin, RH = rhizome, S = seed, SS = stem sap, ST = stem, W = whole plant, WO = wood; Use category: C = drugs and cigarettes, D = domestic and charcoal, F = hunting and fishing, H = handicrafts, L = ludic, M = medicine, N = nutrition, O = other, R = ritual, T = dental care and cosmetics Data analysis and ethnobotanical indices All collected data sets were entered into a database using Microsoft Excel. Corresponding to the research questions, the use of pivot tables allowed the systematic processing of the large and detailed data set (nearly 40,000 data fields) and the correlation of different features with each other. Tableau Software was used to create selected diagrams. The basic structure of the use-reports follows the principle "informant i mentions the use of species s in the use category u" [51, 52]. From the collected data, 10 use categories were defined: "medicinal use (M)"; "nutrition, spices and herbal teas (N)"; "domestic and charcoal (D)"; "hunting, fishing and animal feed (F)"; "dental care and cosmetics (T)"; "drugs and cigarettes (C)"; "handicrafts (H)"; "ludic, children's toys (L)"; and "rituals (R)". Uses mentioned less than eight times were summarized under "others (O)", including soaps, toilet paper, glue and agricultural purposes such as soil improvement, inter alia. Since the majority of the data refers to medicinal plants, this category was differentiated into 41 secondary categories according to the treated illnesses (Table 5). We used this detailed classification to enable later pharmaceutical studies, because the local people who provide the information cannot classify illnesses into the subcategories of modern medicine; indigenous ethnobotanical knowledge in several cases does not clearly distinguish between them.
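The use-report model described above ("informant i mentions the use of species s in the use category u") maps naturally onto a long-format table, and the pivot-table aggregation mentioned in the methods can be sketched in Python with pandas. This is a minimal illustration with hypothetical records and column names, not the study's actual database:

```python
import pandas as pd

# Hypothetical use-reports: one row per (informant, species, use category).
# Category codes follow the study's scheme, e.g. M = medicinal use, N = nutrition.
reports = pd.DataFrame(
    [
        ("inf01", "Annona stenophylla", "M"),
        ("inf01", "Annona stenophylla", "N"),
        ("inf02", "Annona stenophylla", "M"),
        ("inf02", "Elaeis guineensis", "N"),
        ("inf03", "Elaeis guineensis", "M"),
    ],
    columns=["informant", "species", "category"],
)

# Count use-reports per species and use category
# (rows: species, columns: use category codes).
pivot = reports.pivot_table(
    index="species", columns="category", aggfunc="size", fill_value=0
)
print(pivot)
```

From such a table, per-species citation totals and per-category sums fall out as simple row and column sums, which is essentially what an Excel pivot table provides.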
Statistical methods were applied to examine the influence of age, gender, plant habitat, distance to Uíge city, use categories and application forms on each other. The chi-square test of independence was used to determine whether a significant relation between two variables exists [53]. Using the Checklist of Plants of Angola [1], the proportion of neophytes was determined. To allow comparison of the recorded data with other studies, the following quantitative ethnobotanical indices were calculated: the Relative Frequency of Citations (RFC), the Cultural Importance Index (CI), and the Informant Consensus Factor (FIC) for the secondary categories of illnesses. The Relative Frequency of Citations expresses the local significance of each plant species and is calculated for each species as the quotient of the frequency of citations (FC) and the total number of informants (N) [54] (Formula 1). Tardío and Pardo-de-Santayana [51] introduced the CI to make data from different studies comparable with respect to the versatility of species use. If a species were mentioned in every use category (ten in our study), its CI would equal the total number of use categories, i.e. 10 [51]. If a species is used in just one use category, its CI equals its RFC (Formula 2). Since interviews were often conducted in groups of informants, the number of groups (62) instead of the number of informants (162) was used to calculate the indices. The FIC indicates the homogeneity of the knowledge of the informants [55] (Formula 3). Values range from 0 (no concordance) to 1 (full concordance). High values therefore illustrate that healers use the same species for the treatment of the same illness. $$ {\mathrm{RFC}}_{\mathrm{s}}=\frac{{\mathrm{FC}}_{\mathrm{s}}}{N}=\frac{\sum \limits_{i={i}_1}^{i_N}{UR}_i}{N} $$ Formula 1: Calculation of the Relative Frequency of Citations (RFC): s = species, FCs = frequency of citation of species s; N = total number of informants [54].
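Formula 1 amounts to a simple normalized citation count. A minimal sketch in Python, using hypothetical citation frequencies (the species names and counts below are illustrative, not the study's data; N = 62 interview groups as described above):

```python
# Hypothetical frequencies of citation (FC_s): number of interview groups
# citing each species; N = 62 interview groups, the informant unit of the study.
N = 62
frequency_of_citation = {"species_a": 23, "species_b": 4, "species_c": 1}

# RFC_s = FC_s / N  (Formula 1)
rfc = {species: fc / N for species, fc in frequency_of_citation.items()}
print({species: round(value, 2) for species, value in rfc.items()})
```

For example, a species cited by 23 of 62 groups has RFC = 23/62 ≈ 0.37, while a single citation gives 1/62 ≈ 0.02.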
$$ {\mathrm{CI}}_s=\sum \limits_{u={u}_1}^{u_{\mathrm{NC}}}\sum \limits_{i={i}_1}^{i_{\mathrm{NI}}}\frac{{\mathrm{UR}}_{ui}}{NI} $$ Formula 2: Calculation of the Cultural Importance Index (CI): s = species, u = use category, i = informant, NC = number of use categories, NI = total number of informants, URui = use-report of informant i in use category u [51]. $$ {F}_{ic}=\frac{n_{\mathrm{ur}}-{n}_t}{n_{\mathrm{ur}}-1} $$ Formula 3: Calculation of the Informant Consensus Factor (FIC): nur = number of use-reports in each use category; nt = number of taxa used [56]. The literature available on medicinal applications of the listed plant species was used for comparison: Neuwinger [57], Iwu [58], and Latham and Konda ku Mbuta [11], of which the latter two reported data from the adjacent Democratic Republic of Congo [11, 57, 58]. In the following, the term citation is used synonymously with use-report. General findings on vegetation of used plants The heterogeneity of Uíge's landscapes and vegetation formations is mirrored by a high variability of the data. Nevertheless, several tendencies can be postulated. Our study presents 2390 use-reports (Table 1). Three hundred fifty-eight species representing 96 plant families were identified, 17 of them only to genus level. Of these used plant species, 35% were trees, 26% perennial herbs, 16% shrubs, 12% climbers, 10% annuals and less than 1% parasites. In contrast to a study in southern Angola [6] and one in Namibia [59], woody plants are not used more frequently than herbs in our study area, since herbaceous plants are available all year round in the humid forest habitats, and because the much shorter dry season results in a higher availability of plants from savannah areas [6, 24]. Apparently, men (13%) use more climbers than women (8%), presumably because climbers are a characteristic element of the forest and transition zones where men regularly go hunting.
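Gender comparisons like this one were evaluated with the chi-square test of independence described in the methods. A sketch with scipy, using a hypothetical 2×2 contingency table (these counts are placeholders, not reconstructed from the study's data):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (NOT the study's actual counts):
# rows = men, women; columns = climber citations, citations of other growth forms.
observed = [
    [120, 800],  # men
    [30, 330],   # women
]

# For 2x2 tables, chi2_contingency applies Yates' continuity correction by default.
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```

A p value above the conventional 0.05 threshold means the observed gender difference is consistent with chance, which is how the non-significant climber result reported next should be read.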
However, the difference is not significant (chi-square test, P = 0.108, χ2 = 2.578). The use patterns of the other growth forms do not differ between genders, in contrast to, e.g., Eastern Tanzania, where women are mainly responsible for collecting herbaceous plants while men work with arborescent species [60]. Overall, 27% of the used plants grow in different savannah types, 24% in forests, and 21% in the transition zone connecting these two ecosystems. Furthermore, 20% of the used plants are cultivated, 7% were collected in disturbed areas and 1% are water plants. Comparing habitat and growth form data, some features become apparent. Forty-five percent of the forest species are trees, 21% climbers. This proportion shifts in the transition zone, where 40% are trees and 31% climbers. These often anthropogenically induced forest edges are characterized by a moist climate combined with high solar radiation, imitating natural gaps caused by treefall. As tropical rainforest disturbance increases, the relative abundance of climbers increases as well [61, 62]. In contrast, of the plant species collected in the studied savannah formations, 42% are trees and 32% perennial herbs [24]. Fifty percent of the species collected in disturbed areas are annual herbs, confirming that annuals are typical of disturbed areas [63]. While just three of the 358 mentioned species are endemic to Angola, 71 species (one fifth) are naturalized, 73% of which are still cultivated. In total, 15% of all citations refer to these species. This high number is not surprising: different studies document the integration of introduced plants into the ethnobotanical repertoires of people [7, 64, 65]. In a study in Brazil, Santos et al. [66] even found that invasive species overall were considered useful more often than non-invasive species. A closer look reveals that the naturalized species do not fill a gap as described in Alencar et al. or Medeiros et al.
[67, 68]. They make up only a small part of all medicinal categories, on average 14%, with one exception: in the category "fevers, malaria" they represent 36%. Of the 53 citations in this disease category, 15 are based on just two species, Chromolaena odorata (L.) R.M.King & H.Rob. (8) and Dysphania ambrosioides (L.) Mosyakin & Clemants (7). Although a wide range of species exists to treat stomach ache, the most frequently used species is Senna occidentalis (L.) Link, introduced from tropical America and used for various applications worldwide [69]. Angola's turbulent history as a Portuguese colony and the resulting cultural influences from other Portuguese colonies such as Brazil led to an interchange of plant use and knowledge, as for Nicotiana tabacum L., which arrived in Africa in the 1600s, or Arachis hypogaea L., which was incorporated into African ethnomedical systems at the same time [70]. In particular, certain arable crops from the New World were introduced in Angola, especially from the Solanaceae and Euphorbiaceae. Current international listings and reports on neophytes and invasive species are still very incomplete for Angola [71, 72]. According to the list of invasive species in Eastern Africa [73], 24 species of our study are considered to have invasive potential. Based on our observations in northern Angola, six plant species display invasive behaviour: Chromolaena odorata (L.) R.M.King & H.Rob., Inga edulis Mart., Lantana camara L., Senna occidentalis (L.) Link, Solanum mauritianum Scop., and Tithonia diversifolia (Hemsl.) A.Gray. The most invasive species, Chromolaena odorata, forms dense thickets in savannah and forest gaps, disrupting forest succession. Local people are aware that this plant is not native to their region. Different myths surround its arrival, suggesting that Chromolaena odorata was introduced rather recently [32, 72].
Nevertheless, in terms of its traditional use in our study, it ranks 6th by RFC value (Table 2). Table 2 List of the 11 species with the highest Relative Frequency of Citation (RFC) including habitat, used plant parts (PP), use categories (UC), number of citations (NC), and Cultural Importance Index (CI). Habitat (Hab.): C cultivated, F forest, S savannah. Plant parts: B bark, F fruit, L leaf, R root, S seed, SS stem sap, ST stem, W whole plant, Wo wood. Use category: C drugs and cigarettes, D domestic and charcoal, F hunting, fishing and animal feed, H handicrafts, L ludic, children's toys, M medicinal use, N nutrition, spices and herbal teas, R rituals, T dental care and cosmetics, O others; *neophyte With regard to species number, the predominantly used plant families are Fabaceae (11.7%), Asteraceae (6.1%) and Rubiaceae (5.6%), followed by Apocynaceae, Malvaceae and Euphorbiaceae (4.2%). The distribution of plant families is difficult to discuss without referring to the vegetation units present. Our results therefore confirm the mosaic-like heterogeneity of the studied area, influenced by Guineo-Congolian rain forests, Zambezian dry evergreen forests, Miombo woodlands and secondary (wooded) grasslands [24]. The families show respective preferences: species of Fabaceae and Asteraceae include a high percentage of used savannah plants (> 50%), while the percentage of forest plants increases in the other families, especially in Rubiaceae (26%). The quotient of citations and species number within one plant family (C/S) emphasizes the weight of citations within a family, accounting for the fact that large families like Fabaceae or Asteraceae inherently show high citation numbers. As illustrated in Fig. 2, some plant families were mentioned with just a few species but a high citation rate (high C/S). For example, in Annonaceae, 120 citations for 5 species yield a C/S of 24.
While the families Annonaceae and Asteraceae exhibit an equally high number of citations, the number of species is considerably higher in Asteraceae; the proportion per species is therefore much higher in Annonaceae than in Asteraceae. By contrast, in Solanaceae, 34 citations for 11 species yield a C/S of 3.1. Plant family distribution correlating species number with the number of citations, with the C/S quotient depicted by the size of the circles. Abbreviations of families: ANN Annonaceae, ACA Acanthaceae, ANA Anacardiaceae, APO Apocynaceae, ARE Arecaceae, AST Asteraceae, EUP Euphorbiaceae, HYP Hypericaceae, LAM Lamiaceae, MAL Malvaceae, PHY Phyllanthaceae, POL Polygalaceae, RUB Rubiaceae, SOL Solanaceae, ZIN Zingiberaceae Ethnobotanical results The willingness of the people visited to collaborate was very high. One hundred sixty-two informants were interviewed in 62 groups; two thirds were older than 40 years. Some healers specialized in only one or two diseases, while others demonstrated broad knowledge of healing a large variety of diseases (Additional file 1). Seventy-six percent of the citations collected in our study refer to medicinal uses, 10% to nutritional uses and 4% to use as fodder plants. The remaining 10% are divided among the other seven use categories. Although the unequal split of citations among the 10 use categories suggests a low use of plants in some of them, plenty of species are used for several purposes and daily needs. Thus, 41 species are used for domestic applications, 33 species for rituals, 29 species as drugs or cigarettes, 21 species for handicrafts and 9 for ludic purposes. Compared to other studies (e.g. Vodouhê [74]), the percentage of medicinal uses is very high, although Göhre et al. [7] detected quite similar use category distributions. One reason might be that our study design required at least one person with knowledge of traditional medicine to accompany the interview.
On the other hand, this split is an indication of the crucial role of plants in rural health care. In general, the predominantly used plant part is the leaf (37%, 890 citations, 220 species), followed by the different stem tissues wood, bark, bast fibres and resins (17%, 407 citations, 110 species), underground organs like roots, tubers and rhizomes (15%, 367 citations, 140 species), as well as fruits and seeds with 15% (354 citations, 114 species). In some cases, the whole plant (56 citations, 27 species) or flowers (16 citations, 12 species) were used. Regarding only the medicinal use category, the proportion of citations describing the use of leaves remains almost unchanged at 39% (689 citations, 178 species), while the proportion of underground organs more than doubles to 32% (582 citations, 137 species). These plant part percentages are consistent with those observed by Urso et al. [6], Giday et al. [75], and Cheikhyoussef and Embashu [76]. As already mentioned and discussed in Urso et al. [6], the intensive use of underground organs in medical applications may be due to the fact that underground organs need effective defence strategies based on a high content of secondary metabolites [6, 77]. By contrast, the studies of Upadhyay and Kumar [78] and Panghal et al. [79] confirm leaves as the most frequently used plant part in remedies [78, 79]. In this context, the strain on, and increasing mortality of, species whose bark, roots or bulbs are primarily collected for remedies are discussed [80,81,82,83]. During our study, no awareness of this emerging problem was detected among the people interviewed in Uíge province. As expected, in the category "nutrition", the main plant parts used are fruits (57%), leaves (31%) and seeds (5%), in concordance with the literature [6, 76]. Fruits are consumed fresh, except the fruits of Adansonia digitata L., Piper guineense Schumach. & Thonn. and Xylopia aethiopica (Dunal) A.Rich.
which can also be dried. Tubers, although an important source of starch, were seldom mentioned. This may be because tubers are not encountered in abundance during field trips but are normally cultivated. For fodder purposes, mainly leaves are used (75%); fruits and stem tissues play only a subordinate role. Stems and timber, respectively, are the main plant parts used domestically (67%). Except for the significantly more frequent use of fruits and seeds by women (chi-square test, P = 7 × 10−5, χ2 = 15.8), no other gender-specific difference was detected. This result could be explained by women's daily activities and responsibilities: among other things, they walk to and work in the fields, and carry and take care of the children while collecting edible fruits along the wayside. For Ibo women in Nigeria, for instance, ownership of fruit trees has been described [84]. Ethnobotanical indices of used plants RFC and CI were calculated for all mentioned species to evaluate the importance of species use. Here, 67% of the species have an RFC below 0.05, 14% between 0.05 and 0.1, and 20% more than 0.1. The values range from 0.37 to 0.02. The species with the highest RFC also show a high variety of used plant parts and use categories (Table 2). The calculated CI covers values from 0.47 to 0.02 with an average value of 0.07, while Göhre et al. [7] calculated an average value of 0.09 in savannah regions near Uíge city. Eight of the 11 species listed in Table 2 are typical savannah species, demonstrating the importance of this vegetation in traditional plant usage [7]. The most important species is Annona stenophylla subsp. cuneata (Oliv.) N. Robson, a subshrub which, due to its woody rhizome, is able to regrow after periodic fires. Its fruits are edible and abundant and therefore known to everyone. Its medicinal use is broad, with a focus on gastrointestinal disorders. This application was also mentioned for the related A. stenophylla and A. stenophylla subsp. nana [57]. Hymenocardia acida Tul.
as well as Psorospermum febrifugum Spach are frequent small savannah trees, often used for treating bloody diarrhoea, bleeding or anaemia due to their red root bark, which produces a reddish decoction and is therefore associated with blood in local tradition. We noticed a comparable relationship for the bark of Erythrina abyssinica DC., which produces a yellow decoction and is used to treat yellow fever, and for the pulverized trunk thorns of Zanthoxylum gilletii (De Wild.) P.G.Waterman, used to treat injuries to the feet. Hence, for some plants, appearance is related to functionality, comparable to the doctrine of signatures developed by Paracelsus in the sixteenth century [85, 86]. The shrub Vitex madiensis Oliv. produces edible fruits and has a wide variety of healing properties. Another frequent tree is Sarcocephalus latifolius (Sm.) E.A.Bruce, whose roots are often sold at local markets as a tonic. Aframomum alboviolaceum (Ridl.) K.Schum. is a common perennial that produces edible fruits sold at local markets during the late rainy season. Smilax anceps Willd. is the only climbing plant in this list, widespread in African savannahs and therefore used in diverse ways [57]. Furthermore, three species (Elaeis guineensis Jacq., Raphia matombe De Wild., Xylopia aethiopica (Dunal) A.Rich) are an important part of remedy mixtures and thus quite well known in the literature [57]. Several liposoluble substances can be dissolved in the oil of Elaeis guineensis fruits, which is therefore used for skin diseases [57, 58]. At the same time, palm fruits provide food for better nutrition and health due to components such as a palmitic-oleic-rich semi-solid fat as well as vitamin E, carotenoids and phytosterols [87]. Xylopia aethiopica is most commonly used as an addition to remedy mixtures in the form of pulverized seeds, due to its diverse constituents [58, 88].
In contrast, the other palm species, Raphia matombe, is one of the most important species in Bakongo culture, inter alia because it is traditionally used to produce palm wine [89]. In addition, alcohol also serves as a solvent for active ingredients [90,91,92]. Some plants are traditionally macerated in alcoholic beverages and used in medical applications, mainly as aphrodisiacs or against pain [57]. However, parts of Raphia also serve as the basis for other applications, such as the leaf rachis for domestic use, the edible fruits, or the fibres for handicrafts [32]. Interestingly, the invasive species Chromolaena odorata, native to Central America, is part of the list but is also used worldwide for the same purpose or other applications [90, 91, 93,94,95]. Its biochemical and antimicrobial activities as well as its anticancer properties are already well studied [96, 97].

Ethnobotanical indices of medicinal plants

If we consider only medicinal plants, the selected ethnobotanical indices attain values similar to those for the useful plants in general. Based on the 1813 medicinal use reports, 68% of the listed plant species exhibit an RFC below 0.05 (corresponding to a maximum of 3 citations), 13% between 0.05 and 0.1 (4 to 6 citations), and 19% more than 0.1 (7 citations or more). The values range from 0.34 to 0.02. The species with the highest RFC in this use category are shown in Table 3. Eight of them are already mentioned in Table 2. The calculated CI values range from 0.44 to 0.02 with an average value of 0.08. Table 3 List of 11 medicinal plant species with the highest Relative Frequency of Citation (RFC) including habitat, used plant parts (PP), and Cultural Importance Index (CI). Habitat: C cultivated, F forest, S savannah.
Plant parts: B bark, F fruit, L leaf, R root, S seed, SS stem sap, ST stem, W whole plant; *neophyte Ten percent of the 1813 citations for medicinal uses refer to stomach ache (183 citations), 8% to respiratory diseases (140 citations), 7% to pain and rheumatism (124 citations), 6% to diarrhoea (115 citations) and 6% to headache and weakness (101 citations). According to Heinrich et al. [55], informant consensus can help to select plant species for further pharmaceutical analyses. The calculated Informant Consensus Factor (FIC) of the 41 secondary use categories ranged between 0 and 0.78. The disease "measles" has the highest FIC (0.78), followed by the disease groups "diarrhoea" (0.61), "skeletal deformation" (0.6), "anaemia" (0.58) and "stomach ache" (0.58). For 14 of the 41 defined disease categories, the FIC was below 0.2. Table 4 shows the plant species that were cited at least five times for one disease, sorted by the Informant Consensus Factor (FIC) of each disease. Statistical analysis with the chi-square test of independence did not detect any significant gender-specific differences in the treatment of the 41 disease categories, except in the treatment of scoliosis (chi-square test, P = 1 × 10−9, χ2 = 37.1). Table 4 Diseases with at least one species mentioned with 5 citations, listed in order of their Informant Consensus Factor (FIC). In square brackets: the number of citations per disease category (UR) and the FIC; Known to literature: + known, – not known, *indirectly related; Literature used: Neuwinger, Iwu, Latham and Konda ku Mbuta [11, 57, 58] The importance of traditional medicinal plants is demonstrated by the high number of medical use-reports (76%). This value coincides with those of former studies in this area [7, 9]. The relatively low FIC values could be explained by the heterogeneity of vegetation forms in the studied area. If one plant species is unavailable, another is chosen to treat the same disease. Cheikhyoussef et al.
[59] reported much higher FIC values due to the considerably lower numbers of citations, described species and disease categories. The FIC encouraged us to select reliable data on plants that could be analysed in medical or phytochemical studies. The majority of the medical applications mentioned at least five times (Table 4) are already known and documented [11, 57, 58], but a few citations are new to science (18%). For example, Gardenia ternifolia subsp. jovis-tonantis (Welw.) Verdc. seems to be promising for the treatment of measles; the use of Brillantaisia owariensis P.Beauv. for cardiovascular diseases had, except in Göhre et al. [7], not been documented; and Annona stenophylla subsp. cuneata has been neither ethnobotanically nor phytochemically investigated, although several studies document the use of related species [7]. The number of previously unknown uses increases as the number of citations decreases. The disease category skeletal deformation/scoliosis is rarely mentioned in the ethnobotanical literature, as its management is dominated by physiotherapy and bracing rather than by herbal preparations. Hulse mentioned the use of deer antlers to cure skeletal deformities in Chinese medicine and called it of dubious credibility [98]. A study from Namibia mentioned Ximenia americana L. as a cure for scoliosis [59]. The standard reference Neuwinger [57] mentions neither skeletal deformation nor scoliosis as traditionally treated diseases. Nevertheless, we documented this traditional healing concept as part of Bakongo health treatment culture. Administration methods vary from community to community, from healer to healer and from disease to disease. Using a decoction is the most frequent method of preparation (45%), followed by the manufacture of an ointment (13%), maceration (12%) and application as raw material. Nearly half of all preparations are administered orally (45%), followed by dermal application (20%); an enema is used in only 16%.
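The index values reported in this and the preceding subsections all derive from simple counts over the use-report table. As a sketch, assuming the standard definitions (RFC = FC/N per species, CI = UR/N per species, FIC = (Nur − Nt)/(Nur − 1) per disease category; cf. Tardío and Pardo-de-Santayana, and Trotter and Logan, both in the reference list), they can be recomputed in a few lines of Python. The records below are purely illustrative, not the study's data:

```python
# Hypothetical use-report records: (informant_id, species, use_category).
reports = [
    (1, "Annona stenophylla", "food"),
    (1, "Annona stenophylla", "medicine"),
    (2, "Annona stenophylla", "medicine"),
    (2, "Hymenocardia acida", "medicine"),
    (3, "Hymenocardia acida", "medicine"),
]
N = 3  # total number of informants interviewed

def rfc(species):
    """Relative Frequency of Citation: informants citing the species / N."""
    return len({i for i, s, _ in reports if s == species}) / N

def ci(species):
    """Cultural Importance index: all use-reports for the species / N."""
    return sum(1 for _, s, _ in reports if s == species) / N

def fic(category):
    """Informant Consensus Factor: (Nur - Nt) / (Nur - 1)."""
    taxa = [s for _, s, c in reports if c == category]
    nur, nt = len(taxa), len(set(taxa))          # use-reports, distinct taxa
    return (nur - nt) / (nur - 1) if nur > 1 else 0.0

print(rfc("Annona stenophylla"))  # 2 of 3 informants -> 0.666...
print(ci("Annona stenophylla"))   # 3 use-reports / 3 informants -> 1.0
print(fic("medicine"))            # (4 - 2) / (4 - 1) -> 0.666...
```

An FIC near 1 thus indicates that many use-reports in a category converge on few taxa (high consensus), while an FIC near 0 indicates that almost every report names a different plant.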
The comparatively low proportion of enemas contrasts with West African traditional health systems, in which enemas are commonly used [99]. According to these analyses of administration, the four most important combinations of preparation and application of medicinal plants are (1) decoction taken orally (21%); (2) raw material crushed and taken orally, chewed or swallowed (14%); (3) maceration of plant parts taken orally (11%); and (4) ointment applied to the skin (11%). These findings are in line with those of several studies [6, 7, 100].

Nutritional plants

Thirty percent of the mentioned plant species have a certain nutritional value for local people. Out of the 107 species used for nutrition, 10 were cited more than five times. Besides the species already listed above (Aframomum alboviolaceum (F), Annona stenophylla subsp. cuneata (F), Vitex madiensis (F)), these are as follows: Anisophyllea quangensis Engl. ex Henriq. (F), Dialium englerianum Henriq. (F), Mondia whitei (Hook.f.) Skeels (L), Parinari capensis Harv. (F), Pteridium aquilinum subsp. africanum (L.) Kuhn (L), Strychnos cocculoides Baker (F) and Syzygium guineense (Willd.) DC. (F). The use of these species is comparable to Biloso and Lejoly [101], who found very similar results in Kinshasa province, Democratic Republic of the Congo. Termote and Van Damme [102] as well as Latham and Konda ku Mbuta [11] also point out the economic importance of these species. On the other hand, 12% of the citations (13 species) refer to plants whose use has so far not been reported in the literature [7, 9, 11, 103,104,105]. One species in particular should be highlighted: Dracaena camerooniana, whose leaves, locally known as nsalabayakala, are also sold at local markets and are therefore of economic value. By contrast, fruits like those of Cnestis ferruginea or Renealmia africana may be edible but are not palatable, so that only a few people consume these wild fruits, found in the forests.
Furthermore, toxicity studies of the consumed aerial parts of Hilleria latifolia showed histopathological changes at high doses [106]. As several species were cited only once, further studies on the reliability of the data as well as on the distribution of the species and their nutritive values and toxicities are recommended.

Influence of gender, age and distance

It is postulated that women and men have separate and unique relationships with biodiversity [37]. Different studies detected either gender-specific plant use [59, 107] or gender-independent knowledge [108]. In our study, two thirds of the informants were male and one third female. On average, female informants concentrate on using plants from savannahs (49%) and villages (38%), while male interviewees focus on the use of forest (40%) and savannah (44%) species. Although women account for just a fifth of all citations (22%), the proportion of medicinal plants among their citations (83%) was even higher than among those of men (74%) (chi-square test, P = 9 × 10−6, χ2 = 19.7). Excluding the use categories "medicinal plants" and "nutritional plants", the remaining use categories can be broken down in detail. It appears that all use categories are nearly homogeneously distributed between genders with regard to their numbers of citations and do not differ significantly from each other (chi-square test, P > 0.05). Fifty percent of all plants mentioned in the study were listed only by men, 12% only by women. A closer look at the use category "medicinal plants" shows a similar pattern: 48% of the plants were brought up only by men and 14% only by women. The ten most important species mentioned for medical application by women and men, respectively, with a percentage of more than 50% and the highest numbers of use-reports, are shown in Table 5. One might therefore suspect that these species have medical applications for illnesses specific to women, as in Cheikhyoussef et al. [59] or as mentioned by Kamatenesi-Mugisha [109].
By contrast, our analyses do not confirm this assumption. Medicinal plant applications specifically for women's illnesses (menstruation problems, birth, pregnancy, open cervix, lactation, and abortive use) are not quoted significantly more frequently by women than by men (chi-square test, P > 0.05). On the other hand, men's specific illnesses (erectile dysfunction, impotence) and the associated plants are mentioned not only by men but by women too; the percentages are almost evenly distributed. Table 5 List of 10 species representing ≥50% of the citations of women and men, respectively (%), with the highest numbers of use-reports (UR) and their habitat (H). Habitat: S = savanna, F = forest, V = village; *neophyte In Bakongo culture, both sexes play a plurality of roles. Nevertheless, a majority of the men hunt while the women maintain the household, take care of the children and work in the field. However, individual differences from person to person blur these culturally loosely fixed boundaries, so that men also help in the fields. Our study of the influence of gender on plant usage in all areas of daily life did not show prominent gender differences in the traditional plant usage of the Bakongo tribes. Handicraft and house-construction activities are performed by both sexes, depending on the transfer of knowledge within the families rather than on gender. Not even in the context of gender-specific illnesses could significant differences be detected. That all adds up to the conclusion that the treatment of illnesses is open and pragmatic and not biased by gender. This notion also contradicts the self-perception of male healers who "use plants whose effects women do not know". However, further studies should be undertaken to support this observation, not least because the percentage of female informants was low.
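The gender comparisons above all rest on Pearson's chi-square test of independence applied to citation counts. A minimal pure-Python sketch of how the reported χ2 statistics arise, using a hypothetical 2 × 2 table (rows: women/men; columns: citations of one use category vs. all others; the counts are illustrative, not the study's data):

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom
    for an r x c contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

# Hypothetical citation counts (women vs. men, category vs. other uses).
table = [[30, 70],
         [10, 90]]
stat, dof = chi_square(table)
print(round(stat, 2), dof)  # 12.5 1 -> exceeds the 5% critical value of 3.84
```

In practice one would use scipy.stats.chi2_contingency, which additionally returns the P value; the hand-rolled version only shows how a χ2 value follows from observed and expected counts.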
Distance to Uíge city

As the study was conducted in the whole province, covering an area of 59,000 km2, different vegetation zones are included which merge seamlessly, forming a complex mosaic. For this reason, it is difficult to detect a clear influence of distance on the species composition in traditional healers' concepts. What could be detected with respect to the distance to Uíge city are significant differences in two use categories. The larger the distance, the higher the number of use citations of medicinal plants, ranging from 72% (zone A) to 80% (zone B) (chi-square test, P = 9 × 10−6, χ2 = 19.6), while the use of nutritional plants decreases from 12% (zone A) to 8% (zone B) (chi-square test, P = 0.002, χ2 = 9.6). Neither plant part utilization nor medicinal plant explanation nor the age of informants differed significantly. Beyond this, with increasing distance from the city of Uíge and its manifold offers of modern society, such as health centres or supermarkets, no significant difference in plant usage could be detected (chi-square test, P > 0.05). Similar results were achieved by Ávila et al. [64], who documented the maintenance of a similar ethnobotanical repertoire across different urbanization levels in Brazilian Quilombola groups. In contrast, Pirker et al. [110] reported an influence of urbanization and globalization processes on traditional knowledge. This should be investigated more fully, especially in view of the shift from traditional healing to modern health care in Angola. Nearly one third of the informants were younger than 40 years, whereas only a quarter of all citations were contributed by this group. The older informants therefore show significantly greater knowledge (chi-square test, P = 0.000955, χ2 = 10.913). Especially in the use category "medicine", significantly more uses were mentioned by the older informants (chi-square test, P = 0.00097, χ2 = 10.877).
Voeks [107] described a similar situation in northeast Brazil, where greater knowledge of medicinal plant properties was linked to the greater age of the participants. The comparatively low number of young healers is explained by the slow process of transferring knowledge from one generation to the next [59]. Further studies should compare, firstly, younger and older people and, secondly, people from urban and rural areas, regardless of their knowledge. Despite (or because of) the long-lasting military conflict in Angola, traditional knowledge of plant usage is still an important part of the cultural heritage. Plants are essential elements in all areas of livelihood, especially in the medical sector. This is reinforced by the still very poor health care system in the country, especially in rural areas. The study reveals the following key messages:
- A considerable heterogeneity in plant usage in the studied area could be detected, influenced by the high complexity of the flora, composed of both Guineo-Congolian and Zambesian elements, and by the diverse topography.
- Although the area is large, no significant influence of the distance of the respective village on the species composition in traditional healers' concepts was found.
- Although several plants were mentioned only by women or only by men, no significant restriction to gender-specific illnesses in medicinal plant use could be found.
- Merely concerning the age of the informants could a slight shift be detected: one third of the informants were younger than 40 years, whereas only one fourth of all citations were contributed by this group. Within individual use categories, this tendency could not be substantiated significantly.
- At least three species are worth evaluating for their pharmacological potential due to their high FIC values regarding the following diseases: Gardenia ternifolia subsp.
jovis-tonantis seems to be promising for the treatment of measles; Brillantaisia owariensis has not yet been analysed for treating cardiovascular diseases; Annona stenophylla subsp. cuneata was mentioned for treating anaemia. People in Angola still depend very much on the natural environment, and the knowledge of how to use plants in their daily life is fundamental; even people living in the large cities or urban areas have family in the rural regions or have at least lived part of their life there. However, given the existing and expected future urbanization and the resulting loss of direct dependence upon nature, traditional knowledge is expected to be lost [111], especially considering that Angola has a high amount of unused land suitable for crops, which will be converted in the near future with a negative impact on biodiversity [112]. The study therefore also provides an important contribution to the documentation of traditional knowledge, which is so far very rare for the area investigated here. The collected data are a worthwhile basis for the establishment of a botanical garden with a focus on useful plants, integrated into the Universidade Kimpa Vita in Uíge. Furthermore, ethnopharmacological studies of several selected plant species might usefully be undertaken.

Abbreviations
CI: Cultural Importance Index
COI: Herbarium Coimbra
FIC: Informant Consensus Factor
LISC: Herbarium Lisbon
RFC: Relative Frequency of Citations

References
Figueiredo E, Smith G. Plants of Angola = Plantas de Angola. South African National Biodiversity Institute (SANBI); 2008. Caldecott JO, Jenkins MD, Johnson TH, Groombridge B. Priorities for conserving global species richness and endemism. Biodivers Conserv. 1996;5:699–727. Bossard E. La medecine traditionnelle au centre et a l'ouest de l'Angola. Lisboa: Ministério da Ciência e da Tecnologia, Instituto de Investigação Científica Tropical; 1987. Bossard E. Angolan medicinal plants used also as piscicides and/or soaps.
J Ethnopharmacol. 1993;40:1–19. Bruschi P, Urso V, Solazzo D, Tonini M, Signorini MA. Traditional knowledge on ethno-veterinary and fodder plants in South Angola: an ethnobotanic field survey in Mopane woodlands in Bibala, Namibe province. J Agric Environ Int Dev. 2017; Available from: http://www.jaeid.it/ojs/index.php/JAEID/article/view/559. [cited 24 Apr 2018] Urso V, Signorini MA, Tonini M, Bruschi P. Wild medicinal and food plants used by communities living in mopane woodlands of southern Angola: results of an ethnobotanical field investigation. J Ethnopharmacol. 2016;177:126–39. Göhre A, Toto-Nienguesse ÁB, Futuro M, Neinhuis C, Lautenschläger T. Plants from disturbed savannah vegetation and their usage by Bakongo tribes in Uíge, Northern Angola. J Ethnobiol Ethnomed. 2016;12. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC5030725/. [cited 3 Mar 2017] Mawunu M, Bongo K, Eduardo A, Vua MMZ, Ndiku L, Mpiana PT, et al. Contribution à la connaissance des produits forestiers non ligneux de la Municipalité d'Ambuila (Uíge, Angola): Les plantes sauvages comestibles [Contribution to the knowledge of no-timber forest products of Ambuila Municipality (Uíge, Angola): The wild edible plants]. Int J Innov Sci Res. 2016;26 Heinze C, Ditsch B, Congo MF, Lautenschläger T, Neinhuis C. First ethnobotanical analysis of useful plants in Cuanza Norte, North Angola. Res Rev J Bot Sci. 2017;6:44–53. Senwitz C, Kempe A, Neinhuis C, Mandombe JL, Branquima MF, Lautenschläge T. Almost forgotten resources – biomechanical properties of traditionally used Bast fibers from northern Angola. Bioresources. 2016;11:7595–607. Latham P, Konda ku Mbuta A. Useful plants of Bas-Congo province, Democratic Republic of Congo. 2014. Lorenz K, Mündl K. "Rettet die Hoffnung": Konrad Lorenz im Gespräch mit Kurt Mündl. 2. Aufl. Wien u.a: Jugend- u. Volk-Verl.-Ges; 1989. Gandolfo ES, Hanazaki N. 
Etnobotânica e urbanização: conhecimento e utilização de plantas de restinga pela comunidade nativa do distrito do Campeche (Florianópolis, SC). Acta Bot Bras. 2011;25:168–77. Ramirez CR. Ethnobotany and the loss of traditional knowledge in the 21st century. Ethnobot Res Appl. 2007;5:245–7. WHO. Health status and trends [Internet]. 2015. Available from: http://www.aho.afro.who.int/sites/default/files/publications/5101/Atlas2016-en_Health-status-and-trends.pdf. [cited 29 May 2017] WHO | World Health Statistics 2016: Monitoring health for the SDGs [Internet]. WHO. 2016. Available from: http://www.who.int/gho/publications/world_health_statistics/2016/en/. [cited 29 May 2017] Sousa-Figueiredo JC, Gamboa D, Pedro JM, Fançony C, Langa AJ, Soares Magalhães RJ, et al. Epidemiology of malaria, schistosomiasis, geohelminths, anemia and malnutrition in the context of a demographic surveillance system in Northern Angola. Noor AM, editor. PLoS One 2012;7:e33189. Smith LC, Ruel MT, Ndiaye A. Why is child malnutrition lower in urban than in rural areas? Evidence from 36 developing countries. World Dev. 2005;33:1285–305. Moyo M, Aremu AO, Van Staden J. Medicinal plants: an invaluable, dwindling resource in sub-Saharan Africa. J Ethnopharmacol. 2015;174:595–606. Peel MC, Finlayson BL, McMahon TA. Updated world map of the Köppen-Geiger climate classification. Hydrol Earth Syst Sci. 2007;11:1633–44. Briggs DJ, Smithson P. Fundamentals of physical geography: Rowman & Littlefield; 1986. Romeiras MM, Figueira R, Duarte MC, Beja P, Darbyshire I. Documenting biogeographical patterns of African timber species using herbarium records: a conservation perspective based on native trees from Angola. Vendramin GG, editor PLoS ONE 2014;9:e103403. Marquardsen H, Stahl A. Angola. Berlin: Dietrich Reimer; 1928. White F. The vegetation of Africa / a descriptive memoir to accompany the Unesco/AETFAT/UNSO vegetation map of Africa [Internet]. Paris: Unesco; 1983. 
Available from: http://primoproxy.slub-dresden.de/cgi-bin/permalink.pl?libero_mab213695056 Olson DM, Dinerstein E, Wikramanayake ED, Burgess ND, Powell GVN, Underwood EC, et al. Terrestrial ecoregions of the world: a new map of life on earth. Bioscience. 2001;51:933. Barbosa LAG. Carta fitogeográfica de Angola: Instituto de Investigação Científica de Angola; 1970. McNeely JA. Biodiversity, war, and tropical forests. J Sustain For. 2008;16:1–20. Hansen MC, Potapov PV, Moore R, Hancher M, Turubanova SA, Tyukavina A, et al. High-resolution global maps of 21st-century forest cover change. Science. 2013;342:850–3. FAO. Evaluation des ressources forestières mondiales 2010: rapport national Angola [Internet]. Rome; 2010. Available from: http://www.fao.org/docrep/013/al442F/al442f.pdf Horsten F, Natureza DN da C da, Por L (Angola). Madeira: uma análise da situação actual. 1983; Available from: http://agris.fao.org/agris-search/search.do?recordID=XF2015031279. [cited 26 May 2017] Pauwels L. Nzayilu N'ti, guide des arbres et arbustes de la région de Kinshasa--Brazzaville: Jardin botanique national de Belgique; 1993. Lautenschläger T, Neinhuis C, editors. Riquezas naturais de Uíge: uma breve introdução sobre o estado atual, a utilização, a ameaça e a preservação da biodiversidade. Dresden: Techn. Univ; 2014. Censo. Censo 2014 [Internet]. 2014. Available from: http://censo.ine.gov.ao/xportal/xmain?xpid=censo2014&xpgid=provincias&provincias-generic-detail_qry=BOUI=10504841&actualmenu=10504841&actualmenu=8377485. [cited 3 Mar 2017] Kagawa RC, Anglemyer A, Montagu D. The scale of faith based organization participation in health service delivery in developing countries: systemic review and meta-analysis. Beck EJ, editor. PLoS One 2012;7:e48457. Kristof N. Deadliest Country for Kids. N Y Times [Internet]. 2015 Mar 19; Available from: https://www.nytimes.com/2015/03/19/opinion/nicholas-kristof-deadliest-country-for-kids.html. [cited 26 May 2017] Queza AJ.
Sistema Nacional de Saúde Angolano e Contributos à Luz da Reforma do SNS Português [Internet]. 2010. Available from: https://repositorio-aberto.up.pt/bitstream/10216/50407/2/Sistema%20Nacional%20de%20Sade%20Angolano%20e%20Contributos%20%20Luz%20da%20Reforma%20do%20SNS%20Portugus.pdf. [cited 26 May 2017] Pfeiffer JM, Butz RJ. Assessing cultural and ecological variation in ethnobiological research: the importance of gender. J Ethnobiol. 2005;25:240–78. Cunningham AB. Applied ethnobotany: people, wild plant use, and conservation. London: Earthscan; 2001. Silva HCH, Caraciolo RLF, Marangon LC, Ramos MA, Santos LL, Albuquerque UP. Evaluating different methods used in ethnobotanical and ecological studies to record plant biodiversity. J Ethnobiol Ethnomedicine. 2014;10:48. Carrisso LW. Conspectus florae angolensis. 1937. Akoègninou A, van der Burg WJ, van der Maesen LJG. Flore analytique du Bénin: Backhuys; 2006. Hutchinson J, Dalziel JM, Hepper F. Flora of West Tropical Africa: The British West African territories, Liberia, the French and Portuguese territories South of latitude 18°N. to Lake Chad, and Fernando Po. 2. London: Crown agents for Oversea Governments and Administrations; 1954. Hutchinson J, Dalziel JM, Hepper F. Flora of West Tropical Africa: All territories in West Africa South of latitude 18°N. and to the West of Lake Chad, and Fernando Po. 2nd ed. London: Crown agents for Oversea Governments and Administrations; 1963. Hutchinson J, Dalziel JM, Hepper F. Flora of West Tropical Africa: All territories in West Africa South of latitude 18°N. and to the West of Lake Chad, and Fernando Po. 2. London: Crown agents for Oversea Governments and Administrations; 1968. Royal Botanic Gardens Kew. Flora Zambesiaca [Internet]. 2007. Available from: http://apps.kew.org/efloras/search.do;jsessionid=112B8733022D60552751FB72E19EC0A6. [cited 29 May 2017] Royal Botanic Gardens Kew. Kew Herbarium Catalogue [Internet]. 2014.
Available from: http://apps.kew.org/herbcat/navigator.do. [cited 29 May 2017] Naturalis Biodiversity Center [Internet]. Available from: http://www.naturalis.nl/en/. [cited 29 May 2017] Instituto de Investigação Científica Tropical Portugal. Herbário LISC [Internet]. 2007. Available from: http://maerua.iict.pt/colecoes/herb_simplesearch.php. [cited 29 May 2017] Tardío J, Pardo-de-Santayana M. Cultural importance indices: a comparative analysis based on the useful wild plants of Southern Cantabria (Northern Spain). Econ Bot. 2008;62:24–39. Kufer J, Heinrich M, Förther H, Pöll E. Historical and modern medicinal plant uses - the example of the Ch'orti' Maya and Ladinos in eastern Guatemala. J Pharm Pharmacol. 2005;57:1127–52. McDonald J. Handbook of biological statistics: Sparky House Publishing; 2015. Ahmad M, Sultana S, Fazl-I-Hadi S, Ben Hadda T, Rashid S, Zafar M, et al. An ethnobotanical study of medicinal plants in high mountainous region of Chail valley (District Swat-Pakistan). J Ethnobiol Ethnomed. 2014;10:36. Heinrich M, Ankli A, Frei B, Weimann C, Sticher O. Medicinal plants in Mexico: healers' consensus and cultural importance. Soc Sci Med. 1998;47:1859–71. Trotter RT, Logan M. Informant consensus: a new approach for identifying potentially active medicinal plants. In: Plants in indigenous medicine and diet: biobehavioural approaches; 1986. p. 91–112. Neuwinger HD. African traditional medicine: a dictionary of plant use and applications with supplement: search system for diseases. Stuttgart: Medpharm Scientific Publishers; 2000. Iwu M. Handbook of African medicinal plants. 2nd ed. Boca Raton: CRC Press; 2014. Cheikhyoussef A, Shapi M, Matengu K, Ashekele HM. Ethnobotanical study of indigenous knowledge on medicinal plant use by traditional healers in Oshikoto region, Namibia. J Ethnobiol Ethnomed. 2011;7:10. Luoga EJ, Witkowski ETF, Balkwill K. Differential utilization and ethnobotany of trees in Kitulanghalo forest reserve and surrounding communal lands, eastern Tanzania. Econ Bot. 2000;54:328–43. Schnitzer SA, Bongers F.
The ecology of lianas and their role in forests. Trends Ecol Evol. 2002;17:223–30. Dewalt SJ, Schnitzer SA, Denslow JS. Density and Diversity of Lianas along a Chronosequence in a central Panamanian lowland Forest. J Trop Ecol. 2000;16:1–19. Sitte P, Weiler EW, Kadereit JW, Bresinsky A, Körner C. Strasburger Lehrbuch der Botanik. Mit CD-ROM. Heidelberg: Spektrum Akademischer Verlag; 2002. Avila JV da C, Zank S, Valadares KM de O, Maragno JM, Hanazaki N. The traditional knowledge of Quilombola about plants: does urbanization matter? Ethnobot Res Appl. 2015;14:453–62. Bennett BC, Prance GT. Introduced plants in the indigenous pharmacopoeia of northern South America. Econ Bot. 2000;54:90–102. Dos Santos LL, do Nascimento ALB, Vieira FJ, da Silva VA, Voeks R, Albuquerque UP. The cultural value of invasive species: a case study from semi–arid northeastern Brazil. Econ Bot. 2014;68:283–300. Alencar NL, Santoro FR, Albuquerque UP. What is the role of exotic medicinal plants in local medical systems? A study from the perspective of utilitarian redundancy. Rev Bras Farmacogn. 2014;24:506–15. Medeiros PM, Júnior WSF, Ramos MA, da Silva TC, Ladio AH, Albuquerque UP. Why do people use exotic plants in their local medical systems? A systematic review based on Brazilian local communities. PLoS One. 2017;12:e0185358. Yadav JP, Arya V, Yadav S, Panghal M, Kumar S, Dhankhar S. Cassia occidentalis L.: a review on its ethnobotany, phytochemical and pharmacological profile. Fitoterapia. 2010;81:223–30. Voeks R. African medicine and magic in the Americas. Geogr Rev. 1993;83:66. GISD [Internet]. Available from: http://www.iucngisd.org/gisd/. [cited 24 May 2017]. Boy G, Witt A. INVASIVE ALIEN PLANTS. 2013; Available from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.403.962&rep=rep1&type=pdf. [cited 23 May 2017] Akol AM, Chidege MY, Talwana HAL, Mauremootoo JR. Invertebrate pests of maize in East Africa (Kenya, Uganda and Tanzania), Lucid v. 
3.5 key and fact sheets [Internet]. Makerere University, TPRI, BioNET-EAFRINET, CABI & The University of Queensland (September 2011); 2011. Available from: keys.lucidcentral.org/keys/v3/EAFRINET Vodouhê FG, Coulibaly O, Greene C, Sinsin B. Estimating the local value of non-timber forest products to Pendjari biosphere reserve dwellers in Benin. Econ Bot. 2009;63:397. Giday M, Teklehaymanot T, Animut A, Mekonnen Y. Medicinal plants of the Shinasha, Agew-awi and Amhara peoples in northwest Ethiopia. J Ethnopharmacol. 2007;110:516–25. Cheikhyoussef A, Embashu W. Ethnobotanical knowledge on indigenous fruits in Ohangwena and Oshikoto regions in northern Namibia. J Ethnobiol Ethnomed. 2013;9:34. Balick MJ, Cox PA. Plants, people, and culture: the science of ethnobotany. New York: W H Freeman & Co; 1996. Parveen, Upadhyay B, Roy S, Kumar A. Traditional uses of medicinal plants among the rural communities of Churu district in the Thar Desert, India. J Ethnopharmacol. 2007;113:387–99. Panghal M, Arya V, Yadav S, Kumar S, Yadav JP. Indigenous knowledge of medicinal plants used by Saperas community of Khetawas, Jhajjar District, Haryana, India. J Ethnobiol Ethnomed. 2010;6:4. Mander M. Marketing of Indigenous Medicinal Plants in South Africa: A case study in Kwazulu-Natal [Internet]. ResearchGate. 1998. Available from: https://www.researchgate.net/publication/267266779_Marketing_of_Indigenous_Medicinal_Plants_in_South_Africa_A_case_study_in_Kwazulu-Natal. [cited 29 May 2017] Cunningham AB. African medicinal plants. U N Educ Sci Cult Organ Paris Fr [Internet]. 1993; Available from: http://www.academia.edu/download/7985865/wp1e.pdf. [cited 29 May 2017] Gupta R, Reid M. Macroeconomic surprises and stock returns in South Africa. Stud Econ Finance. 2013;30:266–82. Jusu A, Sanchez AC. Medicinal plant trade in Sierra Leone: threats and opportunities for conservation. Econ Bot. 2014;68:16–29. Obi SNC. Women's rights and interests in trees. In: Fortmann L, Bruce JW, editors. Whose trees? Proprietary dimensions of forestry. Boulder: Westview Press; 1988. p. 240–2. Pearce JMS. The doctrine of signatures. Eur Neurol. 2008;60:51–2. Böhme J; Ellistone J, translator. Signatura rerum, or, The signature of all things: shewing the sign and signification of the severall forms and shapes in the creation, and what the beginning, ruin, and cure of every thing is [Internet]. London: Printed by John Macock for Gyles Calvert; 1651. Available from: http://trove.nla.gov.au/work/7123615. [cited 24 May 2017] Sundram K. Palm fruit chemistry and nutrition. Asia Pacific J Clin Nutr. 2003; Poitou F, Masotti V, de SSG, Viano J, Gaydou EM. Composition of the essential oil of Xylopia aethiopica dried fruits from Benin. J Essent Oil Res. 1996;8:329–30. Loubelo E. Impact des produits forestiers non ligneux (PFNL) sur l'économie des ménages et la sécurité alimentaire : cas de la République du Congo [Internet] [PhD thesis]. Université Rennes 2; 2012. Available from: https://tel.archives-ouvertes.fr/tel-00713758/document. [cited 24 May 2017] Adjanohoun E, Organization of African Unity, Scientific, Technical and Research Commission. Contribution to ethnobotanical and floristic studies in Uganda. Place of publication not identified: Organization of African Unity, Scientific Technical & Research Commission = Organisation de l'unité africaine, Commission scientifique technique et de la recherche; 1993. Chamratpan S. Biodiversity of medicinal mushrooms in Northeast Thailand. In: Proc 2nd Int Conf med mushroom Int Conf biodivers bioact Compd; 2003. Hartke K, Mutschler E, Rücker G. Deutsches Arzneibuch (DAB 10). 10. Ausgabe 1991: Wissenschaftliche Erläuterungen zum Deutschen Arzneibuch. 1991. Jiofack T, Fokunang C, Guedje N, Kemeuze V, Fongnzossie E, Nkongmeneck BA, et al. Ethnobotanical uses of medicinal plants of two ethnoecological regions of Cameroon. Int J Med Med Sci. 2010;2:60–79. Erinoso SM, Aworinde DO.
Ethnobotanical survey of some medicinal plants used in traditional health care in Abeokuta areas of Ogun State, Nigeria. Afr J Pharm Pharmacol [Internet]. 2012;6. Available from: http://www.academicjournals.org/ajpp/abstracts/abstracts/abstract%202012/15%20May/Erinoso%20and%20Aworinde.htm. [cited 23 May 2017] Prusti AB, Behera KK. Ethnobotanical exploration of Malkangiri district of Orissa. India Ethnobot Leafl. 2007;2007:14. Kouamé PB-K, Jacques C, Bedi G, Silvestre V, Loquet D, Barillé-Nion S, et al. Phytochemicals isolated from leaves of Chromolaena odorata : impact on viability and Clonogenicity of Cancer cell lines: ANTICANCER ACTIVITY OF CHROMOLAENA ODORATA LEAF EXTRACTS. Phytother Res. 2013;27:835–40. Suriyavathana M, Parameswari G, Shiyan SP. Biochemical and antimicrobial study of Boerhavia erecta and Chromolaena odorata (L.) King & Robinson. Int J Pharm Sci Res. 2012;3:465. Hulse JH. Biotechnologies: past history, present state and future prospects. Trends Food Sci Technol. 2004;15:3–18. van Andel T, van Onselen S, Myren B, Towns A, Quiroz D. "The medicine from behind": the frequent use of enemas in western African traditional medicine. J Ethnopharmacol. 2015;174:637–43. Mesfin F, Demissew S, Teklehaymanot T. An ethnobotanical study of medicinal plants in Wonago Woreda, SNNPR, Ethiopia. J Ethnobiol Ethnomed. 2009;5:28. Biloso A, Lejoly J. Etude de l'exploitation et du marché des produits forestiers non ligneux à Kinshasa. Tropicultura. 2006;24:183–8. Termote C, Van Damme P. Wild edible plant use in Tshopo District, DR Congo: Universiteit Gent; 2012. Kibungu Kembelo A, Kibungu Kembelo P. Contribution à l'étude des plantes alimentaires des populations du territoire de Madimba. Jard Bot Kisantu. 2010; Latham P, Konda ku Mbuta A. Some honeybee plants of Bas-Congo province, Democratic Republic of Congo. 2011. PROTA4U. Prota4U.org [Internet]. 2017. 
The analysis and discussion contained within this article would not have been possible without the contribution of knowledge from the villagers of the province of Uíge. The University Kimpa Vita was an essential base for operations and provided logistical support. The authors would like to thank Barbara Ditsch, Gerard van der Weerden, Paul Latham, Christian Schulz, Daniel Nickrent, Anne Göhre, Roseli Buzanelli Torres, Thomas Couvreur and Ulrich Meve for assistance in the identification of selected herbarium specimens.
We are also grateful to the Herbarium LISC in Lisbon and the Herbarium COI in Coimbra, Portugal for the assistance as well as to the Botanical Garden of the TU Dresden for cultivating plants until essential characters for identification appeared. Thanks to Andreas Kempe for preparing Fig. 1 and Stefan Bischoff for preparing Fig. 2. We acknowledge support by the Open Access Publication Funds of the SLUB/TU Dresden. The fieldwork in Angola was supported by a travel fund from the German Academic Exchange Service (DAAD) and the program "Strategic Partnerships" of the TU Dresden. These published results were obtained in collaboration with the Instituto Nacional da Biodiversidade e Áreas de Conservação (INBAC) of the Ministério do Ambiente da República de Angola. All data are available from the corresponding author. All voucher specimens are deposited in the Herbarium Dresdense (DD) of the Institute of Botany, Technische Universität Dresden, Germany. As soon as suitable conditions are established, parts of the collection will be deposited at University Kimpa Vita, Uíge, Angola. Department of Biology, Institute of Botany, Faculty of Science, Technische Universität Dresden, 01062, Dresden, Germany: Thea Lautenschläger, Christin Heinze & Christoph Neinhuis. University Kimpa Vita, Province of Uíge, Rua Henrique Freitas No. 1, Bairro Popular, Uíge, Angola: Mawunu Monizi, Macuntima Pedro, José Lau Mandombe & Makaya Futuro Bránquima. TL carried out field work, analysed the collected data and drafted the manuscript. MM, MP, JLM and MFB participated in field work and established contact with local people. CH and CN participated in the design of the study and helped to draft the manuscript. All authors read and approved the final manuscript.
Correspondence to Thea Lautenschläger. Since 2012, the Universidade Kimpa Vita in Uíge, Angola and the Technische Universität Dresden, Germany, have a multifaceted cooperation including the establishment of a Botanical Garden with the focus on local medicinal plants as well as biodiversity assessments. Short movie: Two of the authors during field studies in Uíge. A traditional healer demonstrates the preparation and application of an herbal funnel. (MOV 142656 kb) Lautenschläger, T., Monizi, M., Pedro, M. et al. First large-scale ethnobotanical survey in the province of Uíge, northern Angola. J Ethnobiology Ethnomedicine 14, 51 (2018) doi:10.1186/s13002-018-0238-3 Received: 10 February 2018 Accepted: 29 May 2018
Modelling installed jet noise due to the scattering of jet instability waves by swept wings Benshuai Lyu, Ann P. Dowling Journal: Journal of Fluid Mechanics / Volume 870 / 10 July 2019 Published online by Cambridge University Press: 14 May 2019, pp. 760-783 Print publication: 10 July 2019 Jet noise is a significant contributor to aircraft noise, and on modern aircraft it is considerably enhanced at low frequencies by a closely installed wing. Recent research has shown that this noise increase is due to the scattering of jet instability waves by the trailing edge of the wing. Experimentalists have recently shown that noise can be reduced by using wings with swept trailing edges. To understand this mechanism, in this paper, we develop an analytical model to predict the installed jet noise due to the scattering of instability waves by a swept wing. The model is based on the Schwarzschild method and Amiet's approach is used to obtain the far-field sound. The model can correctly predict both the reduction in installed jet noise and the change to directivity patterns observed in experiments due to the use of swept wings. The agreement between the model and experiment is very good, especially for the directivity at large azimuthal angles. It is found that the principal physical mechanism of sound reduction is due to destructive interference. It is concluded that in order to obtain an effective noise reduction, both the span and the sweep angle of the wing have to be large. Such a model can greatly aid in the design of quieter swept wings and the physical mechanism identified can provide significant insight into developing other innovative noise-reduction strategies.
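The destructive-interference mechanism named in this abstract can be illustrated with a toy calculation (not the authors' Schwarzschild/Amiet model): a convecting instability wave is summed coherently along a swept trailing edge, and the sweep-induced streamwise phase lag reduces the net amplitude. The Strouhal number, convection speed and span below are illustrative assumptions.

```python
import numpy as np

def edge_response(sweep_deg, St=0.5, Uc=0.6, span=3.0, n=400):
    """Net scattered amplitude of a convecting wave along a swept edge.

    Toy model: the instability wave exp(1j*k_h*x) is sampled along a
    trailing edge at x(y) = y*tan(sweep) and summed coherently (observer
    overhead; spanwise retarded-time differences neglected). Lengths are
    scaled on the jet diameter, speeds on the jet velocity. St, Uc, span
    and n are illustrative assumptions, not values from the paper.
    """
    kh = 2.0 * np.pi * St / Uc                    # hydrodynamic wavenumber
    y = np.linspace(0.0, span, n)                 # spanwise edge stations
    dy = y[1] - y[0]
    x = y * np.tan(np.radians(sweep_deg))         # streamwise edge offset
    return abs(np.sum(np.exp(1j * kh * x)) * dy)  # coherent spanwise sum

unswept = edge_response(0.0)    # all edge sources in phase
swept = edge_response(30.0)     # sweep de-phases the sources
```

Under these assumptions the unswept edge, with all sources in phase, gives the largest response, and the response decays as the sweep angle grows; this is consistent with the abstract's conclusion that both the span and the sweep angle must be large for effective reduction.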
A conditional space–time POD formalism for intermittent and rare events: example of acoustic bursts in turbulent jets Oliver T. Schmidt, Peter J. Schmid Journal: Journal of Fluid Mechanics / Volume 867 / 25 May 2019 Published online by Cambridge University Press: 02 April 2019, R2 Print publication: 25 May 2019 We present a conditional space–time proper orthogonal decomposition (POD) formulation that is tailored to the eduction of the average, rare or intermittent events from an ensemble of realizations of a fluid process. By construction, the resulting spatio-temporal modes are coherent in space and over a predefined finite time horizon, and optimally capture the variance, or energy of the ensemble. For the example of intermittent acoustic radiation from a turbulent jet, we introduce a conditional expectation operator that focuses on the loudest events, as measured by a pressure probe in the far field and contained in the tail of the pressure signal's probability distribution. Applied to high-fidelity simulation data, the method identifies a statistically significant 'prototype', or average acoustic burst event that is tracked over time. Most notably, the burst event can be traced back to its precursor, which opens up the possibility of prediction of an imminent burst. We furthermore investigate the mechanism underlying the prototypical burst event using linear stability theory and find that its structure and evolution are accurately predicted by optimal transient growth theory. The jet-noise problem demonstrates that the conditional space–time POD formulation applies even for systems with probability distributions that are not heavy-tailed, i.e. for systems in which events overlap and occur in rapid succession. 
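The conditional space–time POD idea can be sketched on synthetic data (the field, event times and dimensions below are invented for illustration; the paper applies the method to LES pressure data): finite-time windows centred on detected events form an ensemble, and an SVD of that ensemble yields spatio-temporal modes, the leading one acting as the average "prototype" event.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic surrogate field q(x, t): background noise plus repeated,
# nominally identical "burst" events (a stand-in for intermittent
# acoustic radiation; all dimensions and event times are invented).
nx, nt, half = 32, 4000, 40
x = np.linspace(0.0, 1.0, nx)
q = 0.1 * rng.standard_normal((nx, nt))
event_times = [500, 1500, 2600, 3400]   # in practice: peaks in the tail
for t0 in event_times:                  # of a far-field probe's PDF
    envelope = np.hanning(2 * half)     # temporal shape of each burst
    q[:, t0 - half:t0 + half] += np.outer(np.sin(2 * np.pi * x), envelope)

# Conditional space-time POD sketch: each realization is one flattened
# space-time window centred on an event; the SVD of the ensemble yields
# modes coherent in space and over the finite time horizon.
ensemble = np.stack([q[:, t - half:t + half].ravel() for t in event_times])
U, s, Vt = np.linalg.svd(ensemble, full_matrices=False)
prototype = Vt[0].reshape(nx, 2 * half)       # leading mode: average burst
energy_fraction = s[0] ** 2 / np.sum(s ** 2)  # variance captured by mode 1
```

Because the windows are time-aligned on the events, the leading mode recovers the common burst shape, while subdominant modes encode the variability between events; with real data the event times would be educed from the tail of the probe signal's probability distribution.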
The acoustic impedance of a laminar viscous jet through a thin circular aperture David Fabre, Raffaele Longobardi, Paul Bonnefis, Paolo Luchini Journal: Journal of Fluid Mechanics / Volume 864 / 10 April 2019 Published online by Cambridge University Press: 01 February 2019, pp. 5-44 Print publication: 10 April 2019 The unsteady axisymmetric flow through a circular aperture in a thin plate subjected to harmonic forcing (for instance under the effect of an incident acoustic wave) is a classical problem first considered by Howe (Proc. R. Soc. Lond. A, vol. 366, 1979, pp. 205–223), using an inviscid model. The purpose of this work is to reconsider this problem through a numerical resolution of the incompressible linearized Navier–Stokes equations (LNSE) in the laminar regime, corresponding to $Re\in [500,5000]$. We first compute a steady base flow which allows us to describe the vena contracta phenomenon in agreement with experiments. We then solve a linear problem allowing us to characterize both the spatial amplification of the perturbations and the impedance (or equivalently the Rayleigh conductivity), which is a key quantity to investigate the response of the jet to acoustic forcing. Since the linear perturbation is characterized by a strong spatial amplification, the numerical resolution requires the use of a complex mapping of the axial coordinate in order to enlarge the range of Reynolds number investigated. The results show that the impedances computed with $Re\gtrsim 1500$ collapse onto a single curve, indicating that a large Reynolds number asymptotic regime is effectively reached. However, expressing the results in terms of conductivity leads to substantial deviation with respect to Howe's model. Finally, we investigate the case of finite-amplitude perturbations through direct numerical simulations (DNS).
We show that the impedance predicted by the linear approach remains valid for amplitudes up to order $10^{-1}$ , despite the fact that the spatial evolution of the perturbations in the jet is strongly nonlinear. Impact of coherence decay on wavepacket models for broadband shock-associated noise in supersonic jets Marcus H. Wong, Peter Jordan, Damon R. Honnery, Daniel Edgington-Mitchell Journal: Journal of Fluid Mechanics / Volume 863 / 25 March 2019 Published online by Cambridge University Press: 29 January 2019, pp. 969-993 Print publication: 25 March 2019 Motivated by the success of wavepackets in modelling the noise from subsonic and perfectly expanded supersonic jets, we apply the wavepacket model to imperfectly expanded supersonic jets. Recent studies with subsonic jets have demonstrated the importance of capturing the 'jitter' of wavepackets in order to correctly predict the intensity of far-field sound. Wavepacket jitter may be statistically represented using a two-point coherence function; accurate prediction of noise requires identification of this coherence function. Following the analysis of Cavalieri & Agarwal (J. Fluid Mech., vol. 748, 2014. pp. 399–415), we extend their methodology to model the acoustic sources of broadband shock-associated noise in imperfectly expanded supersonic jets using cross-spectral densities of the turbulent and shock-cell quantities. The aim is to determine the relationship between wavepacket coherence-decay and far-field broadband shock-associated noise, using the model as a vehicle to explore the flow mechanisms at work. Unlike the subsonic case where inclusion of coherence decay amplifies the sound pressure level over the whole acoustic spectrum, we find that it does not play such a critical role in determining the peak sound amplitude for shock-cell noise. When higher-order shock-cell modes are used to reconstruct the acoustic spectrum at higher frequencies, however, the inclusion of a jittering wavepacket is necessary. 
These results suggest that the requirement for coherence decay identified in prior broadband shock-associated noise (BBSAN) models is in reality the statistical signature of jittering wavepackets. The results from this modelling approach suggest that nonlinear jittering effects of wavepackets need to be included in dynamic models for broadband shock-associated noise. Nozzle external geometry as a boundary condition for the azimuthal mode selection in an impinging underexpanded jet Joel L. Weightman, Omid Amili, Damon Honnery, Daniel Edgington-Mitchell, Julio Soria The role of the external boundary conditions of the nozzle surface on the azimuthal mode selection of impinging supersonic jets is demonstrated for the first time. Jets emanating from thin- and infinite-lipped nozzles at a nozzle pressure ratio of $3.4$ and plate spacing of $5.0D$ , where $D$ is the nozzle exit diameter, are investigated using high resolution particle image velocimetry (PIV) and acoustic measurements. Proper orthogonal decomposition is applied to the PIV fields and a difference in dominant instability mode is found. To investigate possible explanations for the change in instability mode, additional nozzle external boundary conditions are investigated, including the addition of acoustic dampening foam. A difference in acoustic feedback path is suggested to be the cause for the change in dominant azimuthal modes between the flows. This is due to the thin-lip case containing a feedback path that is concluded to be closed exclusively by a reflection from the nozzle base surface, rather than directly to the nozzle lip. The ability of the flow to form a feedback path that maximises the impingement tone gain is discussed with consideration of the numerous acoustic feedback paths possible for the given nozzle external boundary conditions. Nozzles, turbulence, and jet noise prediction Jonathan B. 
Freund Journal: Journal of Fluid Mechanics / Volume 860 / 10 February 2019 Published online by Cambridge University Press: 29 November 2018, pp. 1-4 Print publication: 10 February 2019 Jet noise prediction is notoriously challenging because only subtle features of the flow turbulence radiate sound. The article by Brès et al. (J. Fluid Mech., vol. 851, 2018, pp. 83–124) shows that a well-constructed modelling procedure for the nozzle turbulence can provide unprecedented sub-dB prediction accuracy with modest-scale large-eddy simulations, as confirmed by detailed comparison with turbulence and sound-field measurements. This both illuminates the essential mechanisms of the flow and facilitates prediction for engineering design. On noise generation in low Reynolds number temporal round jets at a Mach number of 0.9 Christophe Bogey Journal: Journal of Fluid Mechanics / Volume 859 / 25 January 2019 Published online by Cambridge University Press: 27 November 2018, pp. 1022-1056 Print publication: 25 January 2019 Two temporally developing isothermal round jets at a Mach number of 0.9 and Reynolds numbers of 3125 and 12 500 are simulated in order to investigate noise generation in high-subsonic jet flows. Snapshots and statistical properties of the flow and sound fields, including mean, root-mean-square and skewness values, spectra and auto- and cross-correlations of velocity and pressure, are presented. The jet at a Reynolds number of 12 500 develops more rapidly, exhibits more fine turbulent scales and generates more high-frequency acoustic waves than the other. In both cases, however, when the jet potential core closes, mixing-layer turbulent structures intermittently intrude on the jet axis and strong low-frequency acoustic waves are emitted in the downstream direction. These waves are dominated by the axisymmetric mode and are significantly correlated with centreline flow fluctuations. 
These results are similar to those obtained at the end of the potential core of spatially developing jets. They suggest that the mechanism responsible for the downstream noise component of these jets also occurs in temporal jets, regardless of the Reynolds number. This mechanism is revealed by averaging the flow and pressure fields of the present jets using a sample synchronization with the minimum values of centreline velocity at potential-core closing. A spot characterized by a lower velocity and a higher level of vorticity relative to the background flow field is found to develop in the interfacial region between the mixing layer and the potential core, to strengthen rapidly and reach a peak intensity when arriving on the jet axis, and then to break down. This is accompanied by the growth and decay of a hydrodynamic pressure wave, propagating at a velocity which, initially, is close to 65 per cent of the jet velocity and slightly increases, but quickly decreases after the collapse of the high-vorticity spot in the flow. During that process, sound waves are radiated in the downstream direction. Spectral analysis of jet turbulence Oliver T. Schmidt, Aaron Towne, Georgios Rigas, Tim Colonius, Guillaume A. Brès Journal: Journal of Fluid Mechanics / Volume 855 / 25 November 2018 Published online by Cambridge University Press: 21 September 2018, pp. 953-982 Print publication: 25 November 2018 Informed by large-eddy simulation (LES) data and resolvent analysis of the mean flow, we examine the structure of turbulence in jets in the subsonic, transonic and supersonic regimes. Spectral (frequency-space) proper orthogonal decomposition is used to extract energy spectra and decompose the flow into energy-ranked coherent structures. The educed structures are generally well predicted by the resolvent analysis. 
Over a range of low frequencies and the first few azimuthal mode numbers, these jets exhibit a low-rank response characterized by Kelvin–Helmholtz (KH) type wavepackets associated with the annular shear layer up to the end of the potential core and that are excited by forcing in the very-near-nozzle shear layer. These modes too have been experimentally observed before and predicted by quasi-parallel stability theory and other approximations – they comprise a considerable portion of the total turbulent energy. At still lower frequencies, particularly for the axisymmetric mode, and again at high frequencies for all azimuthal wavenumbers, the response is not low-rank, but consists of a family of similarly amplified modes. These modes, which are primarily active downstream of the potential core, are associated with the Orr mechanism. They occur also as subdominant modes in the range of frequencies dominated by the KH response. Our global analysis helps tie together previous observations based on local spatial stability theory, and explains why quasi-parallel predictions were successful at some frequencies and azimuthal wavenumbers, but failed at others. Upstream-travelling acoustic jet modes as a closure mechanism for screech Daniel Edgington-Mitchell, Vincent Jaunet, Peter Jordan, Aaron Towne, Julio Soria, Damon Honnery Published online by Cambridge University Press: 20 September 2018, R1 Experimental evidence is provided to demonstrate that the upstream-travelling waves in two jets screeching in the A1 and A2 modes are not free-stream acoustic waves, but rather waves with support within the jet. Proper orthogonal decomposition is used to educe the coherent fluctuations associated with jet screech from a set of randomly sampled velocity fields. A streamwise Fourier transform is then used to isolate components with positive and negative phase speeds. 
The component with negative phase speed is shown, by comparison with a vortex-sheet model, to resemble the upstream-travelling jet wave first studied by Tam & Hu (J. Fluid Mech., vol. 201, 1989, pp. 447–483). It is further demonstrated that screech tones are only observed over the frequency range where this upstream-travelling wave is propagative. Jet–flap interaction tones Peter Jordan, Vincent Jaunet, Aaron Towne, André V. G. Cavalieri, Tim Colonius, Oliver Schmidt, Anurag Agarwal Journal: Journal of Fluid Mechanics / Volume 853 / 25 October 2018 Published online by Cambridge University Press: 23 August 2018, pp. 333-358 Print publication: 25 October 2018 Motivated by the problem of jet–flap interaction noise, we study the tonal dynamics that occurs when an isothermal turbulent jet grazes a sharp edge. We perform hydrodynamic and acoustic pressure measurements to characterise the tones as a function of Mach number and streamwise edge position. The observed distribution of spectral peaks cannot be explained using the usual edge-tone model, in which resonance is underpinned by coupling between downstream-travelling Kelvin–Helmholtz wavepackets and upstream-travelling sound waves. We show, rather, that the strongest tones are due to coupling between Kelvin–Helmholtz wavepackets and a family of trapped, upstream-travelling acoustic modes in the potential core, recently studied by Towne et al. (J. Fluid Mech. vol. 825, 2017) and Schmidt et al. (J. Fluid Mech. vol. 825, 2017). We also study the band-limited nature of the resonance, showing the high-frequency cutoff to be due to the frequency dependence of the upstream-travelling waves. Specifically, at high Mach number, these modes become evanescent above a certain frequency, whereas at low Mach number they become progressively trapped with increasing frequency, which inhibits their reflection in the nozzle plane. 
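The "usual edge-tone model" that this abstract contrasts against can be sketched with the generic round-trip phase-closure argument: a tone occurs when the downstream travel time of the wavepacket plus the upstream travel time of the feedback wave fits an integer number of periods. The distance, convection speed and upstream wave speed below are assumed values for illustration only.

```python
import numpy as np

def closure_frequencies(L, Uc, a, nmax=5):
    """Tone frequencies from the usual edge-tone phase-closure argument.

    A downstream-travelling wavepacket (convection speed Uc) and an
    upstream-travelling wave (speed a) close a loop over the
    nozzle-to-edge distance L; tones occur near f_n = n / (L/Uc + L/a).
    Quantities are non-dimensionalized on the jet diameter and jet
    velocity; L, Uc and a are illustrative assumptions.
    """
    loop_time = L / Uc + L / a          # downstream + upstream travel time
    return np.arange(1, nmax + 1) / loop_time

# Example: edge 6 diameters downstream, Uc = 0.6, upstream speed set to
# the ambient sound speed of a Mach 0.9 jet (a = 1/0.9 in these units).
tones = closure_frequencies(L=6.0, Uc=0.6, a=1.0 / 0.9)
```

The paper's point is that the strongest observed tones do not fit this picture with a free-stream sound speed for the upstream branch: replacing the constant a by the frequency-dependent phase speed of the trapped upstream-travelling acoustic mode shifts the predicted tones and, because that mode is propagative only over a finite band, explains why the resonance is band-limited.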
A volume integral implementation of the Goldstein generalised acoustic analogy for unsteady flow simulations Vasily A. Semiletov, Sergey A. Karabasov A new volume integral method based on the Goldstein generalised acoustic analogy is developed and directly applied with large-eddy simulation (LES). In comparison with the existing Goldstein generalised acoustic analogy implementations, the current method does not require the computation and recording of the expensive fluctuating stress autocovariance function in the seven-dimensional space–time. Until now, the multidimensional complexity of the generalised acoustic analogy source term has been the main barrier to using it in routine engineering calculations. The new method only requires local pointwise stresses as an input that can be routinely computed during the flow simulation. On the other hand, the new method is mathematically equivalent to the original Goldstein acoustic analogy formulation, and, thus, allows for a direct correspondence between different effective noise sources in the jet and the far-field noise spectra. The implementation is performed for conditions of a high-speed subsonic isothermal jet corresponding to the Rolls-Royce SILOET experiment and uses the LES solution based on the CABARET solver. The flow and noise solutions are validated by comparison with experiment. The accuracy and robustness of the integral volume implementation of the generalised acoustic analogy are compared with those based on the standard Ffowcs Williams–Hawkings surface integral method and the conventional Lighthill acoustic analogy. As a demonstration of its capabilities to investigate jet noise mechanisms, the new integral volume method is applied to analyse the relative importance of various noise generation and propagation components within the Goldstein generalised acoustic analogy model. Importance of the nozzle-exit boundary-layer state in subsonic turbulent jets Guillaume A. 
Brès, Peter Jordan, Vincent Jaunet, Maxime Le Rallic, André V. G. Cavalieri, Aaron Towne, Sanjiva K. Lele, Tim Colonius, Oliver T. Schmidt Journal: Journal of Fluid Mechanics / Volume 851 / 25 September 2018 Published online by Cambridge University Press: 19 July 2018, pp. 83-124 Print publication: 25 September 2018 To investigate the effects of the nozzle-exit conditions on jet flow and sound fields, large-eddy simulations of an isothermal Mach 0.9 jet issued from a convergent-straight nozzle are performed at a diameter-based Reynolds number of $1\times 10^{6}$ . The simulations feature near-wall adaptive mesh refinement, synthetic turbulence and wall modelling inside the nozzle. This leads to fully turbulent nozzle-exit boundary layers and results in significant improvements for the flow field and sound predictions compared with those obtained from the typical approach based on laminar flow in the nozzle. The far-field pressure spectra for the turbulent jet match companion experimental measurements, which use a boundary-layer trip to ensure a turbulent nozzle-exit boundary layer to within 0.5 dB for all relevant angles and frequencies. By contrast, the initially laminar jet results in greater high-frequency noise. For both initially laminar and turbulent jets, decomposition of the radiated noise into azimuthal Fourier modes is performed, and the results show similar azimuthal characteristics for the two jets. The axisymmetric mode is the dominant source of sound at the peak radiation angles and frequencies. The first three azimuthal modes recover more than 97 % of the total acoustic energy at these angles and more than 65 % (i.e. error less than 2 dB) for all angles. For the main azimuthal modes, linear stability analysis of the near-nozzle mean-velocity profiles is conducted in both jets. 
The analysis suggests that the differences in radiated noise between the initially laminar and turbulent jets are related to the differences in growth rate of the Kelvin–Helmholtz mode in the near-nozzle region. Vortex dynamics and sound emission in excited high-speed jets Michael Crawley, Lior Gefen, Ching-Wen Kuo, Mo Samimy, Roberto Camussi This work aims to study the dynamics of and noise generated by large-scale structures in a Mach 0.9 turbulent jet of Reynolds number $6.2\times 10^{5}$ using plasma-based excitation of shear layer instabilities. The excitation frequency is varied to produce individual or periodic coherent ring vortices in the shear layer. First, two-point cross-correlations are used between the acoustic near field and far field in order to identify the dominant noise source region. The large-scale structure interactions are then investigated by stochastically estimating time-resolved velocity fields using time-resolved near-field pressure traces and non-time-resolved planar velocity snapshots (obtained by particle image velocimetry) by means of an artificial neural network. The estimated time-resolved velocity fields show multiple mergings of large-scale structures in the shear layer, and indicate that disintegration of coherent ring vortices is the dominant aeroacoustic source mechanism for the jet studied here. However, the merging of vortices in the initial shear layer is also identified as a non-trivial noise source mechanism. On the hydrodynamic and acoustic nature of pressure proper orthogonal decomposition modes in the near field of a compressible jet Matteo Mancinelli, Tiziano Pagliaroli, Roberto Camussi, Thomas Castelain Published online by Cambridge University Press: 13 December 2017, pp. 998-1008 In this work an experimental investigation of the near-field pressure of a compressible jet is presented. 
The proper orthogonal decomposition (POD) of the pressure fluctuations measured by a linear array of microphones is performed in order to provide the streamwise evolution of the jet structure. The wavenumber–frequency spectrum of the space–time pressure fields re-constructed using each POD mode is computed in order to provide the physical interpretation of the mode in terms of hydrodynamic/acoustic nature. Specifically, non-radiating hydrodynamic, radiating acoustic and 'hybrid' hydro-acoustic modes are found based on the phase velocity associated with the spectral energy bumps in the wavenumber–frequency domain. Furthermore, the propagation direction in the far field of the radiating POD modes is detected through the cross-correlation with the measured far-field noise. Modes associated with noise emissions from large/fine scale turbulent structures radiating in the downstream/sideline direction in the far field are thus identified. Modelling of noise reduction in complex multistream jets Dimitri Papamoschou Published online by Cambridge University Press: 17 November 2017, pp. 555-599 The paper presents a low-order prediction scheme for the noise change in multistream jets when the nozzle geometry is altered from a known baseline. The essence of the model is to predict the changes in acoustics due to the redistribution of the mean flow as computed by a Reynolds-averaged Navier–Stokes (RANS) solver. A RANS-based acoustic analogy framework is developed that addresses the noise in the polar direction of peak emission and uses the Reynolds stress as a time-averaged representation of the action of the coherent turbulent structures. The framework preserves the simplicity of the Lighthill acoustic analogy, using the free-space Green's function, while accounting for azimuthal effects via special forms for the space–time correlation combined with source–observer relations based on the Reynolds stress distribution in the jet plume. 
Results are presented for three-stream jets with offset secondary and tertiary flows that reduce noise in specific azimuthal directions. The model reproduces well the experimental noise reduction trends. Principal mechanisms of noise reduction are elucidated. The near-field pressure radiated by planar high-speed free-shear-flow turbulence David A. Buchta, Jonathan B. Freund Journal: Journal of Fluid Mechanics / Volume 832 / 10 December 2017 Print publication: 10 December 2017 Jets with Mach numbers $M\gtrsim 1.5$ are well known to emit an intense, fricative, so-called crackle sound, having steep compressions interspersed with weaker expansions that together yield a positive pressure skewness $S_{k}>0$ . Its shock-like features are obvious hallmarks of nonlinearity, although a full explanation of the skewness is lacking, and wave steepening alone is understood to be insufficient to describe its genesis. Direct numerical simulations of high-speed free-shear flows for Mach numbers $M=0.9$ , $1.5$ , $2.5$ and $3.5$ in the Reynolds number range $60\leqslant Re_{\unicode[STIX]{x1D6FF}_{m}}\leqslant 4200$ are used to examine the mechanisms leading to such pressure signals, especially the pressure skewness. For $M=2.5$ and $3.5$ , the pressure immediately adjacent the turbulence already has the large $S_{k}\gtrsim 0.4$ associated with jet crackle. It also has a surprisingly complex three-dimensional structure, with locally high pressures at compression-wave intersections. This structure is transient, and it simplifies as radiating waves subsequently merge through nonlinear mechanisms to form the relatively distinct and approximately two-dimensional Mach-like waves deduced from laboratory visualizations. A transport equation for $S_{k}$ is analysed to quantify factors affecting its development. 
The viscous dissipation that decreases $S_{k}$ is balanced by a particular nonlinear flux, which is (of course) absent in linear acoustic propagation and confirmed to be independent of the simulated Reynolds numbers. Together these effects maintain an approximately constant $S_{k}$ in the near acoustic field. High-frequency wavepackets in turbulent jets Kenzo Sasaki, André V. G. Cavalieri, Peter Jordan, Oliver T. Schmidt, Tim Colonius, Guillaume A. Brès Wavepackets obtained as solutions of the flow equations linearised around the mean flow have been shown in recent work to yield good agreement, in terms of amplitude and phase, with those educed from turbulent jets. Compelling agreement has been demonstrated, for the axisymmetric and first helical mode, up to Strouhal numbers close to unity. We here extend the range of validity of wavepacket models to Strouhal number $St=4.0$ and azimuthal wavenumber $m=4$ by comparing solutions of the parabolised stability equations with a well-validated large-eddy simulation of a Mach 0.9 turbulent jet. The results show that the near-nozzle dynamics can be correctly described by the homogeneous linear model, the initial growth rates being accurately predicted for the entire range of frequencies and azimuthal wavenumbers considered. Similarly to the lower-frequency wavepackets reported prior to this work, the high-frequency linear waves deviate from the data downstream of their stabilisation locations, which move progressively upstream as the frequency increases. Wavepackets and trapped acoustic modes in a turbulent jet: coherent structure eduction and global stability Oliver T. Schmidt, Aaron Towne, Tim Colonius, André V. G. Cavalieri, Peter Jordan, Guillaume A. Brès Journal: Journal of Fluid Mechanics / Volume 825 / 25 August 2017 Print publication: 25 August 2017 Coherent features of a turbulent Mach 0.9, Reynolds number $10^{6}$ jet are educed from a high-fidelity large eddy simulation. 
Besides the well-known Kelvin–Helmholtz instabilities of the shear layer, a new class of trapped acoustic waves is identified in the potential core. A global linear stability analysis based on the turbulent mean flow is conducted. The trapped acoustic waves form branches of discrete eigenvalues in the global spectrum, and the corresponding global modes accurately match the educed structures. Discrete trapped acoustic modes occur in a hierarchy determined by their radial and axial order. A local dispersion relation is constructed from the global modes and found to agree favourably with an empirical dispersion relation educed from the simulation data. The product between direct and adjoint modes is then used to isolate the trapped waves. Under certain conditions, resonance in the form of a beating occurs between trapped acoustic waves of positive and negative group velocities. This resonance explains why the trapped modes are prominently observed in the simulation and as tones in previous experimental studies. In the past, these tones were attributed to external factors. Here, we show that they are an intrinsic feature of high-subsonic jets that can be unambiguously identified by a global linear stability analysis. Experimental characterisation of the screech feedback loop in underexpanded round jets Bertrand Mercier, Thomas Castelain, Christophe Bailly Published online by Cambridge University Press: 05 July 2017, pp. 202-229 Near-field acoustic measurements and time-resolved schlieren visualisations are performed on 10 round jets with the aim of analysing the different parts of the feedback loop related to the screech phenomenon in a systematic fashion. The ideally expanded Mach number of the studied jets ranges from $M_{j}=1.07$ to $M_{j}=1.50$ . The single source of screech acoustic waves is found at the fourth shock tip for A1 and A2 modes, and at either the third or the fourth shock tip for the B mode, depending on the Mach number. 
The phase of the screech cycle is measured through schlieren visualisations in the shear layer from the nozzle to the source. Estimates of the convective velocities are deduced for each case, and a trend for the convective velocity to grow with the axial distance is pointed out. These results are used together with source localisation deduced from a two-microphone survey to determine the number of screech periods contained in a screech loop. For the A1 and B modes, four periods are contained in a loop for cases in which the radiating shock tip is the fourth, and three periods when the radiating shock tip is the third, whereas the loop of the A2 mode contains five periods. Feedback loop and upwind-propagating waves in ideally expanded supersonic impinging round jets Christophe Bogey, Romain Gojon Published online by Cambridge University Press: 22 June 2017, pp. 562-591 The aeroacoustic feedback loop establishing in a supersonic round jet impinging on a flat plate normally has been investigated by combining compressible large-eddy simulations and modelling of that loop. At the exit of a straight pipe nozzle of radius $r_{0}$, the jet is ideally expanded, and has a Mach number of 1.5 and a Reynolds number of $6\times 10^{4}$. Four distances between the nozzle exit and the flat plate, equal to $6r_{0}$, $8r_{0}$, $10r_{0}$ and $12r_{0}$, have been considered. In this way, the variations of the convection velocity of the shear-layer turbulent structures according to the nozzle-to-plate distance are shown. In the spectra obtained inside and outside of the flow near the nozzle, several tones emerge at Strouhal numbers in agreement with measurements in the literature. At these frequencies, by applying Fourier decomposition to the pressure fields, hydrodynamic-acoustic standing waves containing a whole number of cells between the nozzle and the plate and axisymmetric or helical jet oscillations are found.
The tone frequencies and the mode numbers inferred from the standing-wave patterns are in line with the classical feedback-loop model, in which the loop is closed by acoustic waves outside the jet. The axisymmetric or helical nature of the jet oscillations at the tone frequencies is also consistent with a wave analysis using a jet vortex-sheet model, providing the allowable frequency ranges for the upstream-propagating acoustic wave modes of the jet. In particular, the tones are located on the part of the dispersion relations of the modes where these waves have phase and group velocities close to the ambient speed of sound. Based on the observation of the pressure fields and on frequency–wavenumber spectra on the jet axis and in the shear layers, such waves are identified inside the present jets, for the first time to the best of our knowledge, for a supersonic jet flow. This study thus suggests that the feedback loop in ideally expanded impinging jets is completed by these waves.
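The classical feedback-loop model mentioned above can be sketched numerically: a disturbance convects downstream over the nozzle-to-plate distance L at the convection velocity Uc, and the acoustic wave closing the loop returns upstream at roughly the ambient sound speed c0, so N loop periods satisfy N/f = L/Uc + L/c0 (a Powell/Ho-Nosseir-type resonance condition). The numerical values below are illustrative assumptions for a Mach 1.5 jet, not data from the study:

```python
# Classical aeroacoustic feedback-loop model:
#   N / f_N = L / U_c + L / c0, where N is the mode number.
# All numerical values below are illustrative assumptions.

def feedback_tone_frequency(N, L, Uc, c0):
    """Frequency (Hz) of the N-th feedback mode for path length L (m)."""
    return N / (L / Uc + L / c0)

c0 = 340.0      # ambient speed of sound (m/s), assumed
r0 = 0.01       # nozzle radius (m), assumed
L = 8 * r0      # nozzle-to-plate distance, e.g. 8 r0
Uj = 1.5 * c0   # ideally expanded jet velocity for Mach 1.5
Uc = 0.6 * Uj   # convection velocity, a common ~0.5-0.7 Uj estimate

D = 2 * r0      # nozzle diameter, used for the Strouhal number
for N in (1, 2, 3):
    f = feedback_tone_frequency(N, L, Uc, c0)
    St = f * D / Uj  # Strouhal number based on D and Uj
    print(f"mode N={N}: f = {f:7.0f} Hz, St = {St:.3f}")
```

Under this model, the tone frequencies are integer multiples of the fundamental loop frequency, which is consistent with the whole-number standing-wave cell counts reported in the abstract.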
Let's not close questions as homework except for blatantly obvious cases (at least for a while) There have been some concerns lately about whether we close too much. So we're starting to give leniency a chance. Scroll down to What we do to get to the fun part. Read the rest if you're interested. It's all about answers. Whatever we do here, whether it is commenting, asking, answering, closing, reopening, deleting, or editing, is to make all the knowledge that's in your minds or out there more accessible. We might even take a step further in the future and actually produce knowledge, but in the end, it's only about the knowledge in the answers you provide. So why do we close, really? Is it because we hold a grudge? Is it because we'd like to feel superior? Is it because we're elitist, and only want the more veteran chemists to be on this site? Those are some of the accusations that tend to get thrown at close voters around the network I poke my head around. They were even thrown at us at least once. I don't know if those are the reasons you guys vote to close, but the reason I do is that I want the providers of that knowledge to stay. That's why the system of closing is designed in the first place, and what it's meant to be used for. Of course, all of that would work provided that we have a stream of valuable and/or interesting questions. I wouldn't mind, merely care, if someone came to rant on meta about how poorly their posts were received. I've seen too many of those for them to have any effect now. At times, they even manage to show how well the current system is working. Stack Overflow, the largest site of this yard, continues to grow exponentially. However, when one of our only two Socrateses is concerned about excessive closure, we have a problem. In fact, this was not realized just today.
We ourselves had some concerns, expressed before on meta, and there were some fruitless efforts to come up with more objective ways of closing questions as homework. Well, buckle up. It's time for real action. So, while I don't see myself in the place of someone who gives orders, and while I realize this defining and redefining is going to take forever to reach meaningful results, what do you say we stop dipping our toes in the water? An experiment. For one month, effective Thursday, 2016 Sep 1. Here's what we do, and what we don't: We will only close blatantly obvious homework questions. That is, the ones that ask for calculations, without any effort on behalf of the asker. Here's one such example. (Imgur mirror for <10k users.) We won't close easily Googleable, or rudimentary, questions. Instead, go ahead and provide an exemplary answer. If your answer will hardly be anything more than a link to the first Google result and a quote, but sufficiently addresses the question, it's okay. You can mark it as Community Wiki if that would make you feel better. You can also downvote if you think the question's quality isn't high enough. But no close votes. Regarding mechanism questions: the ones that are obviously from a textbook, for instance a screenshot with no effort provided, will be closed and linked to the homework policy. However, it's not obvious whether the typed out ones are homework. In those cases, only vote for closure if one of the bronze tag badge holders, who are trusted to make the right decision, says that the question is scraped from a textbook. It's worth explicitly mentioning that lack of research in itself is no longer a valid close reason. Why would we? The biggest cause of the uneasy feeling of closing something we didn't have an objective reason to close was questions without demonstrated effort. Sometimes, further unnecessary but required context only clutters space.
The grayer areas of detecting homework land, where the question was only conceptual but showed no effort, caused some questions to be closed that shouldn't have been. Ones that could've triggered interesting discussions. Why deprive ourselves of the chance to learn something new, or add a shiny post to the already full basket of Chemistry Stack Exchange? Hence we're giving what we culled before a chance, to see how it plays out. After a month, we'll be able to evaluate whether the experience on Chem.SE has been more positive for us. We'd be able to discern whether sending more people home with answers is something we can take, or it's just too much and hinders the better questions from getting the attention they so fully deserve. One month isn't too short a period to show the results well, and not too long to cause damage that we can't heal, if there was any to come from this. So just lend me a hand and trust me. We'll dive into water that may be deep, but it's much more likely that we'll have a good swim instead of a tragedy. What's the worst that could happen? In the answer and the comments below, please let me know whether you have anything to add to the bullet points, or have some insight to share. There are a few days left until the beginning of the experiment, and we can tweak things until, say, the 30th. discussion homework vote-to-close experiment It's Over $\begingroup$ Is this mechanism question valid: chemistry.stackexchange.com/questions/34766/…? $\endgroup$ – RBW Aug 29 '16 at 17:27 $\begingroup$ The badge holders should give some input on that, but it does seem like a legitimate question. Reopening. $\endgroup$ – It's Over Aug 29 '16 at 17:38 $\begingroup$ No it's not. It's an advanced question, but heck, my organics professor often gave homework questions we couldn't solve with the help of his assistants & doctoral candidates.
Plus it's posed in a lazy, offhanded way, without even a question mark, and with no hint of any effort by the OP. $\endgroup$ – Karl Aug 29 '16 at 19:05 $\begingroup$ Can you tell us how to agree on what a blatantly obvious homework question is? My degrees are not in chemistry, and I still see many blatant homework questions. Heaven forbid if I actually knew more chemistry (likely meaning I'd done more chemistry homework). $\endgroup$ – Jon Custer Aug 29 '16 at 20:23 $\begingroup$ @Jon as a guideline, stuff asking for calculations or picking a choice is homework, but conceptual questions aren't. The problem we're trying to solve is that some conceptual questions that are not homework tend to get closed as homework if there is no demonstrated effort by the OP, hence the close reason and the guidance won't fit. Bottom line is, if you can be certain that it's homework, it is; but we won't be closing as homework in the gray areas. It would be helpful to comment in such cases so the close reviewers agree. $\endgroup$ – It's Over Aug 30 '16 at 4:59 $\begingroup$ @Marko You might (in general) want to review your questions to actually have a meaningful title, especially for this one. Also consider that a question should be helpful for everyone, and even though you might use common abbreviations, it would be much better to actually include the full one in the body. $\endgroup$ – Martin - マーチン♦ Aug 30 '16 at 8:12 $\begingroup$ @DEAD - I'm in! Thanks for taking the lead on all of this! $\endgroup$ – Todd Minehardt Sep 1 '16 at 0:51 $\begingroup$ Will a question that explicitly says "This is my homework" be closed under these guidelines, regardless of what the question is? $\endgroup$ – Matthew Read Sep 25 '16 at 19:33 $\begingroup$ Well, I established these guidelines so we'd be a bit more lenient on closing. If someone explicitly states that it's their homework, lack of effort will likely result in closure.
However, we have a history of leaving questions with a potentially interesting answer open. So we'd decide more or less on a case-by-case basis. $\endgroup$ – It's Over Sep 25 '16 at 19:58 Pre-experiment stats up to 2016/08/31 16:00 UTC. Post-experiment stats up to 2016/10/01 16:00 UTC. Total question statistics $$\begin{array}{|c|c|c|} \hline & \textbf{Pre-experiment} & \textbf{Post-experiment} \\ \hline \text{No. of questions} & 16\,753 & 17\,385 \\ \text{Qns without accepted answers} & 8\,376 \,(50.00\%) & 8\,718\,(50.15\%) \\ \text{Qns with no answers} & 3\,460 \,(20.65\%) & 3\,660\,(21.05\%) \\ \text{Qns with no answers, not closed} & 2\,830\,(16.89\%) & 2\,993\,(17.22\%) \\ \hline \end{array}$$ Answer statistics $$\begin{array}{|c|c|c|} \hline & \textbf{Pre-experiment} & \textbf{Post-experiment} \\ \hline \text{No. of answers} & 19\,299 & 19\,928 \\ \hline \end{array}$$ Closure statistics $$\begin{array}{|c|c|c|} \hline & \textbf{Pre-experiment} & \textbf{Post-experiment} \\ \hline \text{No. of qns closed} & 1\,641 & 1\,723 \\ \text{No. closed as not duplicate} & 1\,124\,(68.49\%) & 1\,171\,(67.96\%) \\ \text{No. closed as duplicate} & 517\,(31.51\%) & 552\,(32.04\%) \\ \hline \end{array}$$ Homework statistics $$\begin{array}{|c|c|c|} \hline & \textbf{Pre-experiment} & \textbf{Post-experiment} \\ \hline \text{No. of HW qns open} & 1\,929 & 1\,997 \\ \text{No. of HW qns closed} & 321 & 341 \\ \text{No. 
of HW qns closed as duplicate} & 55 & 57 \\ \hline \end{array}$$ Close reason statistics (10k) Pre-experiment: Last 90-day period until start of the experiment During experiment: Last 30-day period until 2016/10/01 17:00 UTC $$\begin{array}{|c|c|c|} \hline & \textbf{Pre-experiment} & \textbf{During experiment} \\ \hline \text{Total asked} & 2\,058 & 943 \\ \text{Total closed} & 716\,(34.79\%\text{ of total asked}) & 319\,(33.83\%) \\ \text{Closed as HW} & 339\,(47.35\%\text{ of total closed}) & 133\,(41.69\%) \\ \text{Closed as HW then edited} & 27\,(7.96\%\text{ of closed as HW}) & 9\,(6.77\%) \\ \text{Closed as HW then reopened} & 8\,(2.36\%\text{ of closed as HW}) & 2\,(1.50\%) \\ \text{Closed as HW, edited, then reopened} & 6\,(22.22\%\text{ of edited}) & 1\,(11.11\%) \\ \hline \end{array}$$ orthocresol♦
Effect of the Lights4Violence intervention on the sexism of adolescents in European countries Belén Sanz-Barbero1,2, Alba Ayala1,3, Francesca Ieracitano4, Carmen Rodríguez-Blázquez5,6, Nicola Bowes7, Karen De Claire7, Veronica Mocanu8, Dana-Teodora Anton-Paduraru8, Miriam Sánchez-SanSegundo9, Natalia Albaladejo-Blázquez9, Ana Sofia Antunes das Neves10, Ana Sofia da Silva Queirós11, Barbara Jankowiak12, Katarzyna Waszyńska12 & Carmen Vives-Cases2,9 A Correction to this article was published on 11 May 2022 Sexism results in a number of attitudes and behaviors that contribute to gender inequalities in social structure and interpersonal relationships. The objective of this study was to evaluate the effectiveness of Lights4Violence, an intervention program based on promoting health assets to reduce sexist attitudes in young European people. We carried out a quasi-experimental study in a non-probabilistic population of 1146 students, aged 12–17 years. The dependent variables were the difference in the wave 1 and wave 2 values in the subscales of the Ambivalent Sexism Inventory: benevolent sexism (BS) and hostile sexism (HS). The effect of the intervention was evaluated through linear regression analyses stratified by sex. The models were adjusted by baseline subscale scores, socio-demographic and psychological variables. In girls, we observed a decrease in BS in the intervention group compared to the control group (β = − 0.101; p = 0.006). In wave 2, BS decreased more in the intervention group than in the control group among girls whose mothers had a low level of education (β = − 0.338; p = 0.001), with a high level of social support (β = − 0.251; p < 0.001), with greater capacity for conflict resolution (β = − 0.201; p < 0.001) and with lower levels of aggressiveness (β = − 0.232, p < 0.001). In boys, the mean levels of HS and BS decreased in wave 2 in both the control and intervention groups.
The changes observed in wave 2 were the same in the control group and in the intervention group. No significant differences were identified between the two groups. The implementation of the Lights4Violence intervention was associated with a significant reduction in BS in girls, which highlights the potential of interventions aimed at supporting personal competencies and social support. It is necessary to reinforce the inclusion of educational contents that promote reflection among boys about the role of gender and the meaning of the attributes of masculinity. ClinicalTrials.gov: NCT03411564. Unique Protocol ID: 776905. Date registered: 26-01-2018. Sexism has been defined in different ways over time and across cultures [1]. Traditionally, sexism towards women has been defined as a series of discriminatory behaviors based on values and attitudes that consider women to be by nature inferior to men [2]. This is defined by Glick as hostile sexism (HS) and represents a sexist antipathy toward women [3], who are perceived as competing with men when they behave in a "non-traditional" way [4]. In recent decades, a more subtle dimension of sexism, benevolent sexism (BS), has emerged [5]. BS consists of protective attitudes towards women that are expressed socially as positive signs of courtesy and respect, but that place women on an inferior plane. In this dimension, women are considered to be weaker and in need of protection, care and companionship by men. These two dimensions of sexism, HS and BS, have been found to be positively correlated with each other and comprise what is referred to as ambivalent sexism [1]. Sexism emphasizes gender roles and defines heterosexual romantic relationships as the only way to feel complete and reach happiness. In this sense, people with an LGB sexual orientation, as well as transsexual and transgender people, also experience discrimination due to sexist attitudes and behaviors [6].
Sexism, along with other axes of oppression, such as racism, class, disability, homophobia and transphobia, gives rise to multiple forms of discrimination [7]. Sexist attitudes, behaviors and values generate gender inequalities that are manifest in the social structure and in interpersonal relationships. There is currently a consolidated body of evidence that shows an association between sexist attitudes and risky sexual relationships among young people [8], emotional dependence [9], low levels of education among women [10] and poor quality of affective relationships [11]. In both sexes, HS predicted more favorable attitudes toward bullying [12]. Men and women with high scores in both BS and HS were more likely to minimize dating violence (DV), whereas those high only in BS were more likely to blame the victim [13]. When men had high relational dependence and perceived that women had low relationship commitment, HS was related to more aggression toward their female partner [14]. From a health promotion perspective, encouraging non-sexist attitudes and personal abilities in the adolescent population is key to the development of positive, healthy and non-sexist interpersonal relationships, as well as to the prevention of dating violence [15]. The magnitude and prevalence of the adverse consequences of teen dating violence among young people, both short and long term [16, 17], justify the need to identify prevention policies and interventions that promote their well-being. According to the European Violence against Women Survey [18], the prevalence of current physical and/or sexual intimate partner violence (IPV) among young women ages 18–29 is 6%, and it is 48% in the case of lifetime psychological IPV. In contrast, the registered prevalence among adult women over age 30 was around 4 and 32%, respectively.
Even though studies involving boys are less frequent, it has been observed that adolescent boys also experience dating violence victimization, but the dynamics and consequences may be more severe for girls [19]. Stereotyped attitudes about the protective role of men and women's need for protection and care reflect fundamental components of benevolent sexism that are sometimes present from childhood [20]. Adolescence is a developmental stage that is especially relevant for questioning negative social models learned in infancy [21]. It is in this developmental stage that men and women define their identities and behavior styles in intimate relationships. Adolescence also represents an opportunity to promote healthy relationships. Nowadays, most dating violence prevention programs whose results have been published in the scientific literature have been carried out in the United States [22]. A recent meta-analysis highlighted the importance of these interventions being integral, although the context where they are most effective is the educational field [23]. These interventions have targeted both the general young population and youth at risk of suffering from or committing gender violence, such as children exposed to domestic violence or witnesses of maternal abuse. Most interventions have been focused on increasing knowledge of violence and traditional gender roles [24], in addition to providing skills that allow young people to confront frustration and resolve conflicts in a non-violent way [22], and increasing awareness and skills about appropriate bystander interventions among adolescents [25, 26]. Although the evaluations of these interventions are heterogeneous, the results show that it is possible to reduce victimization and perpetration of dating violence [23, 27, 28].
This study was carried out in the context of the Lights4Violence Project, an educational intervention which aims to promote dating violence protective assets among secondary school students from different European cities (Alicante, Rome, Iasi, Matosinhos, Poznan and Cardiff) [29]. This project was based on the model for positive youth development, which emphasizes youth strengths, stressing the development of capacities (personal, moral, cognitive, conceptual and social) that support young people in resisting risk factors, and reducing or coping with problems such as violence perpetration or victimization [30]. We integrated the aim of supporting adolescents in challenging sexism due to the promising results previously observed in interventions that bring factors such as gender equality, violence acceptability, non-violent conflict resolution, and other healthy development skills to bear in preventing dating violence among young people [23]. In this study, we aimed to evaluate the effectiveness of the Lights4Violence project in decreasing sexism among secondary school students. The study is based on the hypothesis that the intervention implemented in the Lights4Violence project will produce a decrease in the mean values of sexism in boys and girls in the intervention group but not in the control group. Also, given that prior studies have shown that the existence of sexist attitudes among Spanish adolescents may differ by sex [31], with boys showing a higher degree of HS than girls, the analyses were stratified by sex. Lights4Violence intervention The intervention "Filming Together to See Ourselves in a New Present" (Lights4Violence) aimed to provide adolescents with assets and skills that promote positive couple relationships. This intervention is framed by the Positive Youth Development Model [32, 33].
The model integrates different areas of intervention for the promotion of positive adolescent development (personal, such as empathy; cognitive, such as the ability to solve problems; emotional, such as aggressiveness; social, such as the ability to relate to others or social support; and moral, such as sexism). We selected for the project those that were related to the promotion of positive intimate relationships or dating violence protective factors [34]. In this study, we selected, from the full set of project variables, covariates that were also associated with sexism [35]. The Lights4Violence project implemented an educational intervention in secondary schools, delivered by project members and/or teachers with prior training. The intervention consisted of two different parts. Students first participated in 10 theoretical and practical sessions lasting an average of 55 min per session. Later, during 5 practical sessions lasting 55–60 min, students filmed a series of video capsules put together into short films in which, using the knowledge and skills they had acquired, they resolved situations of conflict between couples. The intervention ended with a public showing of the films with the support of city councils and other public institutions. The intervention was carried out between October 2018 and April 2019. More information about the intervention can be found in the following article [29]. This was a quasi-experimental study using a convenience sample of secondary students (age range: 12–17) in six European cities: Alicante (Spain), Matosinhos (Portugal), Cardiff (United Kingdom), Rome (Italy), Poznań (Poland) and Iasi (Romania). The intervention group included students from 12 school centers selected using viability criteria. The control group was made up of students from six schools (different from the intervention schools) that were located in the same cities where the intervention was carried out.
Data collection was carried out through an online questionnaire, with an average duration of 45 min. The questionnaire was administered to both the case and control groups prior to the beginning of the intervention (wave 1), and approximately 6 months after the end of the intervention (wave 2). The project control group received no alternative intervention during this time period. A statistical power analysis was performed for sample size estimation (initial sample designed for 1300 students), based on data from a previously published random-effects meta-analysis of 23 studies about school-based interventions that aimed to prevent violence and negative attitudes in teen dating relationships [36]. The percentage of cases lost during follow-up was 25.7%. Given that the analysis was stratified by sex, 0.6% of cases (n = 9) were eliminated because respondents answered "other" to the question asking about their sex. Sample size was determined before any data analysis and was not increased after analysis. The final sample used for this study included 1146 people. All the obtained measures are discussed in the results section. We did not perform any additional data manipulations or exclusions. The outcome variable was sexism, measured with the Ambivalent Sexism Inventory (ASI) scale [5]. The ASI includes two subscales: benevolent sexism (BS) and hostile sexism (HS). Each of the subscales is made up of 11 items scoring the level of agreement or disagreement on a six-category Likert-type scale. Higher scores on the scale indicate greater levels of sexism (range: 0–110). Based on prior evidence [37] and the differences between cases and controls identified in our sample (Table 1), the following were included as covariables: Table 1 Sample description for wave 1, Lights4Violence project Assets related to external support resources Perceived social support was evaluated using the Student Social Support Scale (SSSS) [38].
Students answered 60 questions on a six-category Likert-type scale that measure the level of social support in five areas: parents, teachers, classmates, close friends and other people at the school. Example items include "My parents show they are proud of me"; "My teachers explain things that I don't understand"; "My close friend spends time with me when I'm lonely". A higher score indicates greater social support (range: 60–360). The variable was divided into tertiles: low social support (< 232 points), medium (232–276 points), high (> 276 points). Assets related to personal competencies Assertiveness was measured using the Assertive Interpersonal Schema Questionnaire (AISQ) [39]. The scale contains 21 items with five Likert-type response categories, ranging from completely false to completely true (range: 21–105). Example items are "I feel I am special to some people"; "I possess as many skills as most people"; "When I am sad, angry, or upset, I have someone to support me and help me feel". The variable was categorized into tertiles: low assertiveness (< 82 points), medium (82–92 points), high (> 92 points). To evaluate the capacity of students to resolve social conflicts, we used the Social Problem-Solving Inventory-Revised Scale (SPSI-R) [40]. The scale consists of 25 items with Likert-type responses that range from 0 to 4 (range: 0–100). Example items: "I try to see my problems as challenges"; "When I solve problems, I try to predict the pros and cons of each option". The capacity to resolve conflicts was categorized into tertiles: low capacity to resolve conflicts (< 54 points), medium (54–64 points) and high (> 64 points). Aggressiveness of the students was measured using the abbreviated version of the Aggression Questionnaire (AQR-12) [41, 42]. The questionnaire consists of 12 items with five Likert-type response categories (range: 12–60).
Example items include "Given enough provocation, I may hit another person"; "I can't help getting into arguments when people disagree with me". The variable was categorized into tertiles; higher scores indicated greater aggressiveness: low (< 23 points), medium (23–31 points) and high (> 31 points).

Other adjustment variables

Sociodemographic variables: sex, age, mother's education level. Variables related to relationships and inter-partner violence that include both control and fear, such as exposure to physical and sexual violence by a partner. Benevolent sexism or hostile sexism at baseline was included to control for its effects as a possible moderator.

Descriptive analysis

All the analyses were stratified by sex. First, we performed a descriptive study to calculate the baseline differences between the control and intervention groups using a chi-square test. Then we analyzed the mean differences in BS and HS between wave 1 (W1) and wave 2 (W2) in the control and intervention groups through paired t-tests for repeated measures. The magnitude of the mean difference over time was measured using the effect size coefficient, calculated using Cohen's d. The effect size was considered small when Cohen's d was around 0.2, medium when it was around 0.5 and large when it was around 0.8. Later, we analyzed differences in the mean values of BS and HS between W1 and W2 for each of the covariables, both in the control group and in the intervention group.

Intervention effect analysis

In order to analyze the intervention's effects, we fitted linear regression models. The outcome variable was the difference in the value obtained in the ASI subscales (BS and HS) between W1 and W2 (Yi2 − Yi1), where Yi2 is the observation for student i in W2 and Yi1 is the observation for student i in W1 (Eq. 1). The intervention effect is identified by the variable group (control/intervention).
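The tertile categorization used for the asset scales above (SSSS, AISQ, SPSI-R, AQR-12) can be sketched in Python. This is a minimal illustration, not the study's code (the analyses were run in Stata); the function names are ours, and only the SSSS cut points reported in the text are used in the example.

```python
from statistics import quantiles

def tertile_cuts(scores):
    """Return the two cut points that split `scores` into tertiles."""
    q1, q2 = quantiles(scores, n=3)  # ~33rd and ~67th percentiles
    return q1, q2

def categorize(score, low_cut, high_cut):
    """Map a raw scale score to a 'low' / 'medium' / 'high' category."""
    if score < low_cut:
        return "low"
    if score <= high_cut:
        return "medium"
    return "high"

# Using the SSSS cut points reported in the text (232 and 276 points):
assert categorize(200, 232, 276) == "low"
assert categorize(250, 232, 276) == "medium"
assert categorize(300, 232, 276) == "high"
```

In the study, each asset scale was recoded this way before being entered as a covariable.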
Models were adjusted for the following covariates: the outcome value at baseline (Yi1), country, age, mother's education and dating violence, and the following scales measured in W1: SSSS, AISQ, SPSI-R, AQR-12, BS/HS.

$$Y_{i2}-Y_{i1}=\beta_0+\beta_1 Y_{i1}+\dots+\varepsilon_i \qquad (1)$$

where $\beta_0$ and $\beta_1$ are the coefficients of the model and $\varepsilon_i$ are the random errors. To analyze whether the intervention had a different effect for each of the categories of the covariables included in the model, we explored the interactions between the group variable (control vs. intervention) and each covariable. For those interactions that were significant, we analyzed the differences by group (intervention and control) in the different categories of the covariables. All of the analyses were stratified by sex and were performed using the software program Stata 14.0. We collected 1555 questionnaires in W1 and 1434 questionnaires in W2, resulting in 1155 questionnaires matched using the student's code (578 control and 577 intervention). Nine questionnaires were excluded because they had missing values for the sex variable. The final dataset included 1146 questionnaires, 575 from the intervention group (59.1% girls) and 571 from the control group (62.7% girls). Table 1 describes the sociodemographic characteristics and the distribution in tertiles of the scales used at baseline (W1) for each group (intervention and control), stratified by sex.

Results on benevolent sexism

Table 2 shows the mean values of BS for the variables included in the analysis, in both periods (W1 and W2), by group (control/intervention) and by sex. In W2, there was a significant reduction in BS in both groups (control/intervention) and among both sexes.
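The paired effect size (Cohen's d) and the change-score model of Eq. 1 described in the methods can be sketched as follows. This is a hedged Python illustration with toy data; the study itself used Stata 14.0, and the variable names and the small pure-Python OLS solver are our own.

```python
from statistics import mean, stdev

def cohens_d_paired(w1, w2):
    """Effect size for repeated measures: mean difference / SD of differences."""
    diffs = [b - a for a, b in zip(w1, w2)]
    return mean(diffs) / stdev(diffs)

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting; each row of
    X already includes a leading 1 for the intercept."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)] for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                           # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Toy change-score model: change regressed on baseline value and
# group (0 = control, 1 = intervention).
baseline = [40, 55, 60, 35, 50, 45]
wave2 = [38, 45, 50, 34, 41, 40]
group = [0, 0, 0, 1, 1, 1]
change = [y2 - y1 for y1, y2 in zip(baseline, wave2)]
X = [[1.0, y1, g] for y1, g in zip(baseline, group)]
beta0, beta_baseline, beta_group = ols(X, change)
```

Here `beta_group` plays the role of the intervention-effect coefficient reported in the paper, and `cohens_d_paired` corresponds to the W2−W1 effect sizes in Tables 2 and 4.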
The effect size was greater in the intervention group (effect size total: 0.33; effect size girls: 0.23; effect size boys: 0.18) than in the control group (effect size total: 0.11; effect size girls: 0.07; effect size boys: 0.09). Among the girls in the intervention group, in W2 there was a significant reduction in the mean values of BS with respect to W1 in all of the studied variables. Among the boys in the intervention group, there was a significant reduction in BS in the extreme categories of those variables that indicate higher family socioeconomic level [mother with university studies (p < 0.001)], greater social support [SSSS high (p = 0.004)], and greater social competencies and skills [AISQ high (p = 0.029), SPSI-R high (p = 0.005) and AQR-12 low (p = 0.002)]. BS decreased in some of the categories of the control group in both sexes.

Table 2 Means and standard deviations for benevolent sexism by demographic, socioeconomic and violence-related variables at baseline

Table 3 shows the variables independently associated with a change in the mean values of BS over time (W2-W1). In girls in W2, there was a significant reduction in the levels of BS in the intervention group compared to the control group (β = − 0.101; p = 0.006). This decrease was not observed for boys (p = 0.537). For both sexes, the change in mean BS values was associated with the baseline values for BS: greater baseline values for BS (W1) were associated with a greater reduction in W2. This effect was independent of belonging to the control or intervention group.

Table 3 Linear regression for change in subscales

For the reduction in BS in the subsample of girls, we identified a significant interaction between group (control/intervention) and the following variables: mother's education (p = 0.032), Student Social Support (p = 0.009), Social Problem-Solving (p = 0.002) and Aggression (p = 0.019).
In W2 there was a significant reduction in the average values of BS in the intervention group, compared to the control group, in girls whose mothers had primary or lower levels of education (Fig. 1a; β = − 0.338; p = 0.001), in girls who had a high level of social support at W1 (Fig. 1b; β = − 0.251; p < 0.001), a high capacity to resolve social conflicts at W1 (Fig. 1c; β = − 0.201; p < 0.001) and low levels of aggressiveness at W1 (Fig. 1d; β = − 0.232; p < 0.001). There were no significant interactions in BS in boys.

Fig. 1 Graphs of significant interactions between the control group and intervention group and the linear regression model variables, benevolent sexism in girls

Results on hostile sexism

Table 4 shows average values for HS in W1 and W2, by group (control/intervention) and by sex. The effect size was similar in the intervention group (effect size girls: 0.07; effect size boys: 0.03) and in the control group (effect size girls: 0.11; effect size boys: 0.06). A significant decrease in HS was observed among girls in both groups (control/intervention). However, HS remained constant in boys. For girls there was a significant reduction in the levels of HS in some categories of the variables analyzed, both in the control and intervention groups. Among boys in the intervention group, HS decreased only for those whose mothers had university studies (p = 0.032). For boys in the control group, there was no significant change in average levels of HS between W1 and W2. Table 3 shows the variables independently associated with a change in the mean values of HS over time (W2-W1). There was no significant change in the average levels of HS over time between the intervention and control groups for either of the sexes (pgirls = 0.319; pboys = 0.472). For both sexes, the change in mean HS values was associated with the baseline values for HS. Greater baseline values for HS (W1) were associated with a greater reduction in W2.
This effect was independent of belonging to the control or intervention group. There were no significant interactions in HS for either boys or girls.

Table 4 Means and standard deviations for hostile sexism by demographic, socioeconomic and violence-related variables at baseline

The present study shows three main results: a) during the W2 period there was a decrease in BS (in both sexes) and in HS (in girls), both in the control group and the intervention group; b) the decrease in BS in girls, and not in boys, was significantly greater in the intervention group than in the control group; c) the intervention decreased BS in girls whose mothers had a low level of education and in those with high levels of social support, high capacity for conflict resolution and low levels of aggressiveness. In boys, the intervention did not produce significant changes in the average levels of BS or HS.

Possible explanations

The decrease in sexism observed during the W2 period, both in the cases and in the controls, could indicate that the act of responding to the questionnaire itself has an effect on sexism levels. The study published by Montañés [43] showed how responding to the Ambivalent Sexism Inventory modified the self-perceptions of the interviewees regarding their experiences with inter-partner violence. Interviewees recognized greater exposure to inter-partner violence when they had responded to the ASI first than when the information had been collected in the reverse order (exposure to inter-partner violence, then the ASI questionnaire). The authors argued that the questionnaire instrument itself could facilitate the recognition of inter-partner violence. In our study this effect is more visible for BS. This is a subtle form of sexism that is, on occasions, difficult to recognize, given that it is culturally incorporated into positive behaviors.
It is possible that responding to the questionnaire acts as an external stimulus that allows the participants to become conscious of the attitudes and values that underlie those behaviors and to take a more critical position in terms of rejecting them. Although the time between W1 and W2 was only 6 months, it is possible that this change could be due to an age effect, given that BS seems to present a U-shaped trajectory for women across the lifespan and a positive linear trajectory in men [4, 44]. It could also be due to sociocultural and contextual influences outside of the intervention (publicity campaigns, participation in extracurricular activities, etc.) to which both the cases and controls could be exposed. On the other hand, we cannot rule out the presence of social desirability bias. In terms of our second result, we found that the Lights4Violence intervention promotes a significant decrease in BS among girls. As far as we know, there are few interventions with adolescents whose main result is a decrease in sexism [45, 46]. However, there is evidence that allows us to highlight the reach of these results. Both HS and BS have been identified as predictors of dating violence in adolescents [47]. Women with BS attitudes, beliefs and behaviors are more likely to maintain traditional gender roles [48], an attitude that is linked to acceptance of IPV [49, 50]. Greater BS among young girls has been associated with a greater number of sexual partners in adolescence [8], a greater idealization of the myth of romantic love, and greater attraction to partners with BS [37], among others. It is for this reason that implementing interventions that reduce BS in girls could promote egalitarian relationships in adolescence.
In this sense, given that sexism places men and boys in a position of domination and power, it is necessary to identify interventions that decrease sexism among men and boys, thus supporting equality and diminishing the need to promote the rejection of sexism among women. The decrease in the mean values of BS and HS in W2 was associated with baseline levels of sexism: when these were high, the reduction was greater. This effect was independent of group (intervention/control) and sex. Finally, our results show a decrease in BS among girls with mothers with low levels of education and in those with high levels of social support, high capacity for conflict resolution and low aggressiveness. This decrease in BS in girls with lower socioeconomic levels at home (those whose mothers have lower education levels) is important, as it could indicate that the Lights4Violence intervention promotes equity among young people. A recent systematic review published by Tinner [51] shows that there are very few public health interventions with young people that allow for analyzing the effect on different social strata. Thus, if these interventions are not reaching the most disadvantaged groups, they could be contributing to an increase in health inequalities. In terms of the decrease in BS in young people with greater capacities and social support, it is possible that these characteristics predispose young people to a more egalitarian change in attitudes. In this sense, the study by Ferragut [52] shows how positive psychological values are inversely related to sexist attitudes. Specifically, among adolescent girls there is lower sexism among those with greater empathy and understanding of the social reality. Leaper's studies support this idea and affirm that learning about feminist theory could provide girls with a conceptual model that allows them to become more conscious and position themselves against sexist attitudes [53].
The greater permeability of girls to theories that promote equality between men and women could explain the decrease in BS observed in girls. In fact, the intervention did not produce a substantial change in the average levels of sexism in boys. Given that the concept of sexism refers to attitudes, beliefs and behaviors that emphasize the inferiority of women to men based on their sex, the response prompted by questioning these attitudes through educational interventions could differ between the sexes [37]. It is possible that boys are less receptive to change, and that in questioning the concept of hegemonic masculinity, boys could perceive a loss of privileges. In this sense, the studies carried out by Becker and Swim (2011) [54] show that, in men, increasing awareness of sexist behavior is not enough to promote change. Changes in sexist attitudes in men occur when empathy towards women who suffer from sexist behaviors and attitudes increases. In our work, we could not test this hypothesis because empathy was evaluated in a generic way (not specifically toward women who suffer sexism), and it was measured in W1 but not in W2, so we could not monitor the change in empathy during the intervention. It is important to integrate content that helps boys reflect on gender roles and the meaning attributed to hegemonic masculinity into these types of interventions. In this sense, a gender-transformative approach is promising for preventing risky health behaviors and violence, since it works to foster equitable attitudes, behaviors and community structures that support women and men and help them to break with gender stereotypes [55]. This work should be interpreted taking into account certain limitations. The study is based on a non-probabilistic sample. Schools were selected for their feasibility. The distribution of the schools into the intervention and control groups was not carried out randomly.
Even so, the differences identified between the intervention and control groups were introduced into the statistical models to avoid spurious associations. Social desirability bias may be present when information is collected on socially sensitive topics. This bias is minimized when the information is self-completed online using anonymous questionnaires. If it existed, it would be present both in the baseline questionnaire (W1) and in the follow-up questionnaire (W2). If present, a decrease in sexism would be observed in the intervention group in W2, giving rise to an overestimation of the effect of the intervention. However, this greater decrease in sexism in the intervention group compared to the control group was not observed in boys, which is where social desirability bias would be expected. The follow-up time of the participants after the intervention was limited. We do not know whether the effect of the intervention on sexism can be maintained over time, and likewise, it is possible that the intervention has a long-term effect on sexism that we could not observe in this study. Because it was not considered culturally appropriate in some of the participating countries, we did not include the gender identity variable or the sexual orientation variable. Therefore, we could not estimate the impact of gender identity and sexual orientation on the intervention's results on sexism. Given the association of these variables with IPV, it is possible that they were also associated with sexism and with the intervention results. Despite this, the teaching of sexual and gender diversity was taken into account in the implementation of the intervention. Ethnicity was measured in the study, but the low percentage of people born abroad or with parents born abroad did not allow us to include it in the analysis. Thus we could not carry out an analysis that included the intersectionality framework.
This study suggests that it is possible to carry out interventions that promote a decrease in sexism in young women. There is a need to create spaces for reflection that allow for integrating young men into these changes towards more egalitarian relationships. Social support and encouraging social capacities could facilitate this decrease in sexism and thus the promotion of positive interpersonal relationships. Data will be available upon reasonable request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

BS: Benevolent sexism; HS: Hostile sexism; IPV: Inter-partner violence; ASI: Ambivalent Sexism Inventory; SSSS: Student Social Support Scale; AISQ: Assertive Interpersonal Schema Questionnaire; SPSI-R: Social Problem-Solving Inventory-Revised Scale; AQR-12: Aggression Questionnaire; W1: Wave 1.

Glick P, Fiske ST, Mladinic A, Saiz JL, Abrams D, Masser B, et al. Beyond prejudice as simple antipathy: hostile and benevolent sexism across cultures. J Pers Soc Psychol. 2000;79:763. López-Sáez MÁ, García-Dauder D, Montero G-CI. El sexismo como constructo en psicología: una revisión de teorías e instrumentos. Quad Psicol. 2019;21:008. Glick P, Fiske ST. Ambivalent sexism. In: Advances in experimental social psychology: Elsevier; 2001. p. 115–88. de Lemus S, Moya M, Glick P. When contact correlates with prejudice: adolescents' romantic relationship experience predicts greater benevolent sexism in boys and hostile sexism in girls. Sex Roles. 2010;63:214–25. Glick P, Fiske ST. The ambivalent sexism inventory: differentiating hostile and benevolent sexism. J Pers Soc Psychol. 1996;70:491–512. Bostwick WB, Boyd CJ, Hughes TL, West BT, McCabe SE. Discrimination and mental health among lesbian, gay, and bisexual adults in the United States. Am J Orthop. 2014;84:35–45. Hester N, Payne K, Brown-Iannuzzi J, Gray K. On intersectionality: how complex patterns of discrimination can emerge from simple stereotypes. Psychol Sci. 2020;31:1013–24.
Ramiro-Sánchez T, Ramiro MT, Bermúdez MP, Buela-Casal G. Sexism and sexual risk behavior in adolescents: gender differences. Int J Clin Health Psychol. 2018;18:245–53. Pradas E, Perles F. Relación del sexismo y dependencia emocional en las estrategias de resolución de conflictos de los adolescentes. Quad Psicol. 2012;14:45–60. Malonda E, Llorca A, Tur-Porcar A, Samper P, Mestre M. Sexism and aggression in adolescence—how do they relate to perceived academic achievement? Sustainability. 2018;10:3017. Viejo C, Ortega-Ruiz R, Sánchez V. Adolescent love and well-being: the role of dating relationships for psychological adjustment. J Youth Stud. 2015;18:1219–36. Carrera-Fernández M-V, Lameiras-Fernández M, Rodríguez-Castro Y, Vallejo-Medina P. Bullying among Spanish secondary education students: the role of gender traits, sexism, and homophobia. J Interpers Violence. 2013;28:2915–40. Yamawaki N, Ostenson J, Brown CR. The functions of gender role traditionality, ambivalent sexism, injury, and frequency of assault on domestic violence perception: a study between Japanese and American college students. Violence Women. 2009;15:1126–42. Cross EJ, Overall NC, Hammond MD, Fletcher GJ. When does men's hostile sexism predict relationship aggression? The moderating role of partner commitment. Soc Psychol Personal Sci. 2017;8:331–40. Schubert K. Building a culture of health: promoting healthy relationships and reducing teen dating violence. J Adolesc Health. 2015;56:S3–4. Foshee VA, Reyes HLM, Gottfredson NC, Chang L-Y, Ennett ST. A longitudinal examination of psychological, behavioral, academic, and relationship consequences of dating abuse victimization among a primarily rural sample of adolescents. J Adolesc Health. 2013;53:723–9. Exner-Cortens D, Eckenrode J, Bunge J, Rothman E. Revictimization after adolescent dating violence in a matched, national sample of youth. J Adolesc Health. 2017;60:176–83. FRA – European Union Agency for Fundamental Rights. 
Violence against women: an EU-wide survey; 2014. Taquette SR, Monteiro DLM. Causes and consequences of adolescent dating violence: a systematic review. J Inj Violence Res. 2019;11:137–47. Gutierrez BC, Halim MLD, Martinez MA, Arredondo M. The heroes and the helpless: the development of benevolent sexism in children. Sex Roles. 2020;82:558–69. Schiavon R, Troncoso E, Billings D. El papel de la sociedad civil en la prevención de la violencia contra la mujer. Salud Pública México. 2007;49:337–40. Fellmeth GL, Heffernan C, Nurse J, Habibula S, Sethi D. Educational and skills-based interventions for preventing relationship and dating violence in adolescents and young adults: a systematic review. Campbell Syst Rev. 2013;9:i–124. Lundgren R, Amin A. Addressing intimate partner violence and sexual violence among adolescents: emerging evidence of effectiveness. J Adolesc Health. 2015;56:S42–50. Hickman LJ, Jaycox LH, Aronoff J. Dating violence among adolescents: prevalence, gender distribution, and prevention program effectiveness. Trauma Violence Abuse. 2004;5:123–42. Debnam KJ, Mauer V. Who, when, how, and why bystanders intervene in physical and psychological teen dating violence. Trauma Violence Abuse. 2021;22:54–67. O'Brien KM, Sauber EW, Kearney MS, Venaglia RB, Lemay EP. Evaluating the effectiveness of an online intervention to educate college students about dating violence and bystander responses. J Interpers Violence. 2021;36:NP7516–46. Foshee VA, Bauman KE, Ennett ST, Suchindran C, Benefield T, Linder GF. Assessing the effects of the dating violence prevention program "safe dates" using random coefficient regression modeling. Prev Sci. 2005;6:245. Wolfe DA, Crooks C, Jaffe P, Chiodo D, Hughes R, Ellis W, et al. A school-based program to prevent adolescent dating violence: a cluster randomized trial. Arch Pediatr Adolesc Med. 2009;163:692–9. Vives-Cases C, Davó-Blanes MC, Ferrer-Cascales R, Sanz-Barbero B, Albaladejo-Blázquez N, Sánchez-San Segundo M, et al.
Lights4Violence: a quasi-experimental educational intervention in six European countries to promote positive relationships among adolescents. BMC Public Health. 2019;19:389. Benson PL, Scales PC, Hamilton SF, Sesma A Jr. Positive youth development: theory, research, and applications. Handb Child Psychol. 2007. del Silván-Ferrero M, Bustillos-López A. Benevolent sexism toward men and women: justification of the traditional system and conventional gender roles in Spain. Sex Roles. 2007;57:607–14. Oliva Delgado A, Pertegal Vega MÁ, Antolín Suárez L, Reina Flores M d C, Ríos Bermúdez M, Hernando Gómez Á, et al. Desarrollo positivo adolescente y los activos que lo promueven: un estudio en centros docentes andaluces; 2011. Lerner JV, Phelps E, Forman Y, Bowers EP. Positive youth development: Wiley; 2009. Rubio-Garay F, Carrasco MÁ, Amor PJ, López-González MA. Factores asociados a la violencia en el noviazgo entre adolescentes: una revisión crítica. Anu Psicol Juríd. 2015;25:47–56. Ayala A, Vives-Cases C, Davó-Blanes C, Rodríguez-Blázquez C, Forjaz MJ, Bowes N, et al. Sexism and its associated factors among adolescents in Europe: Lights4Violence baseline results. Aggress Behav. 2021;47:354–63. De La Rue L, Polanin JR, Espelage DL, Pigott TD. A meta-analysis of school-based interventions aimed to prevent or reduce violence in teen dating relationships. Rev Educ Res. 2017;87:7–34. Ramiro-Sánchez T, Ramiro MT, Bermúdez MP, Buela-Casal G. Sexism in adolescent relationships: a systematic review. Psychosoc Interv. 2018;27:123–32. Nolten PW. Conceptualization and measurement of social support: the development of the student social support scale; 1995. Vagos P, Pereira A. A proposal for evaluating cognition in assertiveness. Psychol Assess. 2010;22:657. D'Zurilla TJ, Maydeu-Olivares A, Kant GL. Age and gender differences in social problem-solving ability. Personal Individ Differ. 1998;25:241–52. Bryant FB, Smith BD. 
Refining the architecture of aggression: a measurement model for the Buss–Perry aggression questionnaire. J Res Pers. 2001;35:138–67. Gallardo-Pujol D, Kramp U, García-Forero C, Pérez-Ramírez M, Andrés-Pueyo A. Assessing aggressiveness quickly and efficiently: the Spanish adaptation of aggression questionnaire-refined version. Eur Psychiatry. 2006;21:487–94. Montañés P, Megías JL, De Lemus S, Moya M. Influence of early romantic relationships on adolescents' sexism. Rev Psicol Soc. 2015;30:219–40. Hammond MD, Milojev P, Huang Y, Sibley CG. Benevolent sexism and hostile sexism across the ages. Soc Psychol Personal Sci. 2018;9:863–74. Kilmartin C, Semelsberger R, Dye S, Boggs E, Kolar D. A behavior intervention to reduce sexism in college men. Gend Issues. 2015;32:97–110. Becker JC, Zawadzki MJ, Shields SA. Confronting and reducing sexism: a call for research on intervention. J Soc Issues. 2014;70:603–14. Garaigordobil M, Aliri J. Ambivalent sexism inventory: standardization and normative data in a sample of the Basque Country. Psicol Conduct. 2013;21:173. Napier JL, Thorisdottir H, Jost JT. The joy of sexism? A multinational investigation of hostile and benevolent justifications for gender inequality and their relations to subjective well-being. Sex Roles. 2010;62:405–19. Expósito F, Herrera MC, Moya M, Glick P. Don't rock the boat: Women's benevolent sexism predicts fears of marital violence. Psychol Women Q. 2010;34:36–42. Masser B, Lee K, McKimmie BM. Bad woman, bad victim? Disentangling the effects of victim stereotypicality, gender stereotypicality and benevolent sexism on acquaintance rape victim blame. Sex Roles. 2010;62:494–504. Tinner L, Caldwell D, Hickman M, MacArthur GJ, Gottfredson D, Perez AL, et al. Examining subgroup effects by socioeconomic status of public health interventions targeting multiple risk behaviour in adolescence. BMC Public Health. 2018;18:1180. Ferragut M, Blanca MJ, Ortiz-Tallo M.
Psychological values as protective factors against sexist attitudes in preadolescents. Psicothema. 2013;25:38–42. Leaper C, Brown CS. Sexism in childhood and adolescence: recent trends and advances in research. Child Dev Perspect. 2018;12:10–5. Becker JC, Swim JK. Seeing the unseen: attention to daily encounters with sexism as way to reduce sexist beliefs. Psychol Women Q. 2011;35:227–42. Casey E, Carlson J, Two Bulls S, Yager A. Gender transformative approaches to engaging men in gender-based violence prevention: a review and conceptual model. Trauma Violence Abuse. 2018;19:231–46. We want to thank all the schools and students from the different settings involved for their time and valuable contribution to the Lights4Violence project. The project "Lights, Camera and Action against Dating Violence" (Lights4Violence) was funded by the European Commission Directorate-General Justice and Consumers, Rights, Equality and Citizenship Violence Against Women Program 2016 for the period 2017–2019, to promote healthy dating relationship assets among secondary school students from different European countries, under grant agreement No. 776905. It was also co-supported by the CIBER of Epidemiology and Public Health of Spain through its aid to the Gender-based Violence and Youth Research Program. National School of Public Health, Carlos III Institute of Health, Avda.
Monforte de Lemos, 5–28029, Madrid, Spain Belén Sanz-Barbero & Alba Ayala CIBER of Epidemiology and Public Health (CIBERESP), Madrid, Spain Belén Sanz-Barbero & Carmen Vives-Cases Research Network on Health Services for Chronic Diseases (REDISSEC), Madrid, Spain Alba Ayala Department of Human Studies-Communication, Education and Psychology - LUMSA University of Rome, Rome, Italy Francesca Ieracitano National Center of Epidemiology, Carlos III Institute of Health, Madrid, Spain Carmen Rodríguez-Blázquez CIBER in Neurodegenerative Diseases (CIBERNED), Madrid, Spain Department of Applied Psychology, Cardiff Metropolitan University, Wales, UK Nicola Bowes & Karen De Claire Mother and Child Medicine Department, Gr.T. Popa University of Medicine and Pharmacy, Iasi, Romania Veronica Mocanu & Dana-Teodora Anton-Paduraru Faculty of Health Science, University of Alicante, Alicante, Spain Miriam Sánchez-SanSegundo, Natalia Albaladejo-Blázquez & Carmen Vives-Cases University of Maia and CIEG/ISCSP-ULisboa, Porto, Portugal Ana Sofia Antunes das Neves University Institute of Maia – ISMAI, Maia, Portugal Ana Sofia da Silva Queirós Faculty of Educational Studies, Adam Mickiewicz University, Poznań, Poland Barbara Jankowiak & Katarzyna Waszyńska Belén Sanz-Barbero Nicola Bowes Karen De Claire Veronica Mocanu Dana-Teodora Anton-Paduraru Miriam Sánchez-SanSegundo Natalia Albaladejo-Blázquez Barbara Jankowiak Katarzyna Waszyńska Carmen Vives-Cases CVC and BSB contributed to the design of the Lights4violence project and this study; all authors participated in data collection; AA and BS conducted data analysis and interpretation of results; BSB, AA and CVC wrote the first draft of the paper; all authors participated in the implementation of the strategy in their countries and made substantial contributions to the following versions; and all authors approved the final version of the paper. Correspondence to Alba Ayala.
Data was collected by project partners based at universities in the various countries. The data was collected and stored anonymously, and participants created a unique participant code for themselves at the first data collection point. Participation was voluntary, and each partner university was required to obtain the permission of its own ethics committee. Schools provided a signed informed consent document from the school directors, as did the parents of the participants and the students themselves. The Lights4Violence protocol was approved by the ethical committees of the University of Alicante, Instituto Universitário da Maia/Maiêutica Cooperativa de Ensino Superior CRL, Maia, Universitatea de Medicina si Farmacie Grigore T. Popa, Adam Mickiewicz University, Libera Universita Maria SS. Assunta of Rome and Cardiff Metropolitan University. It was also registered in ClinicalTrials.gov by the coordinator (ClinicalTrials.gov: NCT03411564. Unique Protocol ID: 776905. Date registered: 18-01-2018). This project aims to meet the principles of the Convention on the Rights of the Child (art. 19) and the Helsinki Declaration (AMM, 2013) (Vives-Cases et al., 2019). In cases in which a student reported having been abused by an adult, each country used its own protocol to inform the school. Due to the anonymity of the questionnaire, it was impossible to identify the victims. However, it was possible to inform the school about the number of student reports of abuse. Each school was responsible for following its respective protocol to intervene. This article has been updated to correct 2 table citations. Sanz-Barbero, B., Ayala, A., Ieracitano, F. et al. Effect of the Lights4Violence intervention on the sexism of adolescents in European countries. BMC Public Health 22, 547 (2022). https://doi.org/10.1186/s12889-022-12925-3
Image transformation and information hiding technology based on genetic algorithm. Xiaowei Gu & Yiqin Sun. EURASIP Journal on Image and Video Processing, volume 2018, Article number: 115 (2018).

Information hiding technology has always been a hot research field in the area of information security. How to hide information in images is also a major research topic in image transformation. Today's information hiding techniques suffer from complicated encryption procedures and large amounts of information to conceal. This paper studies image information hiding technology based on a genetic algorithm, performs image simulations on different types of cover ("mother") images, and presents the results. For the images that need to be hidden, the mother image achieves good information hiding regardless of changes in the size of the hidden image, and the resulting change in the mother image is small. Compared with the Least Significant Bit (LSB) technique, this method yields a larger peak signal-to-noise ratio (PSNR).

With the increasing popularity of computer network applications, information security has become a hot area, and information hiding technology has become the focus of information security technology. Information hiding conceals information such as plain text and password images within multimedia carriers such as images, videos, and audio. Since information hiding mainly conceals the existence of information, it does not easily attract the attention of an attacker, and only authorized legal users can use the corresponding method to accurately extract the secret information from the carrier. It is precisely this property that gives information hiding technology its strong advantages and development potential in the field of information security. Information hiding technology is a cross-discipline: it involves mathematics, cryptography, information theory, computer vision, and other computer application technologies.
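The PSNR figure of merit used for the comparison with LSB above can be computed as follows. This is a minimal Python sketch with toy pixel data (not from the paper), using the standard definition PSNR = 10·log10(MAX²/MSE) for 8-bit images.

```python
import math

def psnr(cover, stego, max_val=255):
    """Peak signal-to-noise ratio (dB) between two equal-size images,
    given here as flat lists of 8-bit pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(cover, stego)) / len(cover)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

cover = [120, 121, 119, 118, 122, 120]
stego = [120, 120, 119, 119, 122, 121]  # e.g. after LSB-style embedding
print(round(psnr(cover, stego), 2))
```

A higher PSNR means the stego image deviates less visibly from its cover, which is the sense in which the paper's method outperforms plain LSB.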
It is a hot topic that researchers in various countries pay attention to and study. In information hiding research, results on hiding carriers are now abundant. The principle is to use the redundant information in an image to hide the secret object, in order to achieve digital signature and authentication or secure communication. At present, data transmitted over the Internet is easy to attack and intercept. If important secret information is maliciously destroyed or eavesdropped during transmission, it will cause incalculable losses to users. How to provide users with a secure transmission method, that is, to transmit information covertly and avoid the perception of third parties, has become an urgent problem, and this need has driven the rapid development of information hiding technology. After years of research, information hiding technology has developed rapidly, and many information hiding algorithms have been proposed [1,2,3,4]. Although it is relatively mature in some respects, a large number of practical problems remain to be solved, such as hiding capacity, anti-attack capability, and resistance to statistical analysis. Therefore, many aspects of information hiding technology are worth further research for better application in the field of information security. The basic idea of information hiding is to embed secret information into a carrier signal through an embedding algorithm, generating a hidden carrier that is not easily perceived. The secret information is then released through the transmission of the hidden carrier. It is difficult for a person trying to illegally steal information to visually distinguish whether a seemingly ordinary carrier signal conceals secret information.
Even if hidden information is detected in the transmitted data, it is difficult to extract and delete, so encrypted transmission of information can be realized in this way. Information hiding technology plays an important role in protecting important information from being destroyed or stolen. Compared with traditional cryptography, information hiding therefore addresses information security from a different perspective. Traditional cryptography [5,6,7,8] favors protecting the information content itself. After encryption, the secret information becomes garbled and difficult to understand, but the garbled form also plainly announces to others that the information is important. This easily attracts the attention of attackers, stimulates illegal interceptors to crack the confidential information, and increases the risk of interception and attack. Information hiding technology can greatly reduce the probability that information is intercepted and attacked. However, this does not mean that information hiding can replace encryption, because the two protect different aspects of information; each strengthens information security from its own perspective. Encryption mainly protects information by masking its content, while information hiding works by concealing its existence. The encryption process is equivalent to putting a lock on the content of the information so that it cannot be read without being cracked; information hiding disguises the information within another carrier so that it is not discovered by others. Therefore, the two are not in conflict but complementary, and if we combine them we can better protect secret information.
The genetic algorithm [9,10,11] is a computational model designed by simulating Darwinian selection and biological evolution. The most important idea of Darwinian selection is survival of the fittest: when the environment changes, only individuals who can adapt to the environment survive. In the evolutionary process, individuals of good quality are preserved and combined through genetic manipulation, so that new individuals are continuously produced. Each individual inherits the basic characteristics of its parents but exhibits new variations, so that the population evolves forward and continuously approaches or attains the optimal solution. Genetic algorithms have a good ability to solve complex system optimization problems, especially combinatorial optimization problems, and have been successfully applied, with good performance, in fields such as complex problem solving [12], image processing [13, 14], image partitioning [15, 16], machine learning [17, 18], and artificial life [19, 20]. At present, the main topics of information hiding algorithm research include the relationships among embedding strength [21, 22], hiding capacity [23, 24], and imperceptibility. In order to find hiding algorithms that simultaneously satisfy the invisibility and large-capacity requirements of the carrier image, some scholars use genetic algorithms to optimize the embedding parameters and find the optimal solution, thus opening up a new research approach. The genetic algorithm is an optimized search algorithm with strong robustness. It can quickly and efficiently handle complex nonlinear multidimensional data and is good at solving global optimization problems. Genetic algorithms have two distinct characteristics, namely global search and parallelism.
At present, research on genetic algorithms in information hiding is mainly applied in digital watermarking [25]. The goal of genetic-algorithm optimization there is mainly to search for appropriate embedding parameters and embedding positions, with the genetic algorithm optimizing an objective function composed of invisibility and robustness. This chapter proposes an information hiding algorithm based on the genetic algorithm. The implementation of the genetic algorithm in information hiding is introduced, including the coding method, the fitness function, the selection of genetic operators, and the handling of operating parameters and constraints. The hiding algorithm and extraction algorithm are then proposed. Because a spatial-domain information hiding algorithm is applied, the scheme is relatively simple and practical. In order to reduce the influence on the visual quality of the image when hiding information, the optimal embedding of the secret information is found by introducing a genetic algorithm. Moreover, owing to the chosen fitness function, the quality of the image after hiding the information is greatly improved. The main contributions of this article are as follows: this paper establishes an image information hiding method based on the genetic algorithm, covering population initialization, coding, crossover, and mutation, and compares it with the LSB method. The simulation results show that this method achieves a better hiding effect, and the results are analyzed. In addition, this paper transforms the hidden images to test the robustness of the genetic algorithm. Proposed method Because of the uncertainty of genes in the genetic process, it is impossible to crack this process, so using it for information hiding increases the difficulty of cracking. The genetic algorithm was first proposed by Professor J. Holland in 1975.
By simulating the natural processes of selection, mating, reproduction, and mutation that occur in heredity, the algorithm starts from an initial population and applies random selection, crossover, and mutation operations. Individuals better suited to the environment drive the population toward regions of the search space increasingly close to the optimal solution. Through continuous iteration from generation to generation, the population gradually converges to a group of individuals most suitable for the environment and finally reaches, or approaches, the optimal solution. The natural evolution process takes the population as its unit and achieves the continuous development of species through the crossing of genes, variation, and the selection pressure of a competitive environment. In the genetic algorithm, a gene is the basic genetic unit occupying a certain position in a chromosome; it is also called a genetic factor. Several genes compose a chromosome, and the proportions of different genes and their arrangement order determine the individual's manifestation, namely its level of fitness. Chromosomes are the main carriers of genetic material; they are composed of genes, and these internal genes determine the external performance of the individual. In genetic algorithms, a chromosome corresponds to the encoding of a solution. An individual is an entity with a characteristic chromosome, i.e., a solution in the solution space of the genetic algorithm. A population is a collection of genetically encoded individuals, and the number of individuals in a population is called the population size. Throughout the evolutionary process, the genetic algorithm can effectively use the information of the parent individuals to select, and speculate on, better-performing solutions in the next generation.
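The generation-by-generation loop described above can be sketched in a few lines. This is a generic, hedged illustration (function and parameter names are ours, not the paper's); the `fitness`, `crossover`, and `mutate` callables are supplied by the problem at hand, and the best individual is carried over unchanged (elitism) so that the best fitness never decreases.

```python
import random

def evolve(pop, fitness, crossover, mutate, pc=0.5, pm=0.02, generations=200, rng=random):
    """Minimal generational GA loop: evaluate, keep the best individual
    (elitism), then fill the next generation by biased parent selection,
    crossover with probability pc, and mutation with probability pm."""
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = [scored[0]]                          # elitism: best survives
        while len(next_pop) < len(pop):
            # bias parent choice toward the fitter half of the population
            a, b = rng.choices(scored[: max(2, len(pop) // 2)], k=2)
            child = crossover(a, b) if rng.random() < pc else list(a)
            if rng.random() < pm:
                child = mutate(child)
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

Because of the elitist step, the fitness of the returned individual is guaranteed to be at least that of the best individual in the initial population.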
When designing a genetic algorithm, the five major components involved are parameter encoding, initial population setting, fitness function determination, genetic manipulation, and control parameter selection. The genetic algorithm starts from an arbitrarily initialized population and selects the good individuals with high fitness values into the next generation. Crossover is the exchange of information between excellent individuals; mutation introduces new individuals to maintain the diversity of the population. Through many iterations the search approaches the best point in the search space until it converges to the optimal solution. The population size refers to the total number of individuals in any generation. The larger the population size, the more likely it is to find a global solution, but the running time is also correspondingly longer. Generally, a value between 40 and 100 is used, and different values can be selected according to different requirements. The basic process of the genetic algorithm is shown in Fig. 1. Genetic algorithm steps At present, the coding methods of genetic algorithms mainly include binary encoding and real encoding. Binary coding maps the solution of the original problem into a bit string consisting of 0s and 1s and performs genetic manipulation and calculation on the bit string. Real coding uses the decimal method to encode the solution and then performs genetic operations directly on the solution space. When using a genetic algorithm to optimize over a solution space, the first important step is to encode the parameters of the problem. The choice of coding scheme depends on the nature of the problem and the design of the genetic operators. This paper selects real-number encoding, i.e., the decimal code we commonly use, and treats the arrangement positions of the pixels of the secret information as the gene values of the decision variables.
Moreover, real coding avoids the situation in which crossover and mutation operations produce identical gene values within the same chromosome. The two image encodings (S_k, the image to be hidden; C, the mother image) are grouped by a certain length and coded as follows: $$ \begin{array}{l} {S_k}^{\prime} = \left\{ {S_{k1}}^{\prime}, {S_{k2}}^{\prime}, \dots, {S_{k2^k}}^{\prime} \right\}, \quad {S_{ki}}^{\prime} \in \left\{0, 1, \dots, 2^{k-1}-1\right\}, \ 1 \le i \le 2^k, \ 2 \le k \le 4 \\ C^{\prime} = \left\{ {c_1}^{\prime}, {c_2}^{\prime}, \dots, {c_{2^k}}^{\prime} \right\}, \quad {c_i}^{\prime} \in \left\{0, 1, \dots, 2^k - 1\right\}, \ 1 \le i \le 2^k \end{array} $$ Initial population In genetic algorithms, after the coding design, the initial population needs to be set, and iteration starts from the initial population until it terminates according to a certain termination criterion. The larger the initial population, the wider the search range and the longer the genetic manipulation per generation; conversely, the smaller the initial population, the smaller the search range and the shorter the per-generation time. One of the most commonly used initialization methods is unsupervised random initialization. There are (2^k)! possible orderings of S_k. When this value is large, computing every case would be quite difficult and very inefficient. Therefore, one of the (2^k)! sequences is randomly selected as the initial population. Fitness function The fitness function is closely related to the objective function and is often obtained by transforming it. Choosing a correct and reliable fitness function is very important, because it directly affects the convergence speed of the genetic algorithm and whether the optimal solution can be found.
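The random initialization described above, drawing one of the (2^k)! possible orderings, can be sketched as follows (a hedged illustration; the function name is ours, not the paper's):

```python
import random

def init_population(k, pop_size, seed=None):
    """Each individual is a random permutation of the 2**k block positions,
    i.e. one of the (2**k)! possible arrangement orders of the secret data."""
    rng = random.Random(seed)
    n = 2 ** k
    return [rng.sample(range(n), n) for _ in range(pop_size)]
```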
In order to relate the fitness function directly to the quality of individuals in the population, it is specified that the fitness value is always non-negative and that, in all cases, a larger fitness value is better. The fitness scale transformations currently used are mainly linear scaling, power-law scaling, and exponential scaling. In this paper, the peak signal-to-noise ratio (PSNR) is used as the index of image quality evaluation, which allows convenient performance comparison between algorithms. The peak signal-to-noise ratio between the mother image C and the embedded image F is $$ \mathrm{PSNR} = 10 \log_{10} \left[ \frac{M \times N \times 255^2}{\sum_{i=1}^{M} \sum_{j=1}^{N} \left[ C(i,j) - F(i,j) \right]^2} \right] $$ The specific fitness function used in this algorithm is defined as $$ \mathrm{Fitness} = w \times \mathrm{PSNR} + c $$ where w is a scale-up factor used to amplify the differences between individuals for selection, and c is a constant that gives some individuals with lower fitness the opportunity to enter the next generation, in order to maintain the diversity of individuals. Selection: Selection picks individuals with high fitness values in the population to generate the new population, so that the individuals in the population continuously converge toward the optimal solution. The genetic algorithm uses a selection operator to choose individuals; the operation is a survival-of-the-fittest process, and through it the overall quality of the group improves. The selection operator can use algorithms such as roulette-wheel selection, stochastic universal sampling, local selection, truncation selection, and tournament selection. The first step in the selection process is to calculate the fitness.
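The PSNR and fitness formulas above translate directly into code (a NumPy sketch; w and c are left as parameters exactly as in the text, and the formula simplifies to 10·log10(255²/MSE)):

```python
import numpy as np

def psnr(cover, stego):
    """PSNR between the mother image C and the embedded image F,
    per the formula above (8-bit images, peak value 255)."""
    c = np.asarray(cover, dtype=np.float64)
    f = np.asarray(stego, dtype=np.float64)
    mse = np.mean((c - f) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def fitness(cover, stego, w=1.0, c=0.0):
    """Fitness = w * PSNR + c, as defined in the text."""
    return w * psnr(cover, stego) + c
```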
The fitness can be calculated proportionally or by a ranking method. After calculating the fitness, the individuals in the parent population with high fitness values are selected into the next-generation population. Crossover: Crossover selects two individuals from the population and exchanges parts of them; this is the core of the genetic algorithm. When there are many identical chromosomes in the population, or the offspring chromosomes differ little from the previous generation, a new generation of chromosomes can be generated by crossing. When using crossover in a genetic algorithm, first randomly select two individuals to be mated in the new population, let the length of the selected bit string be k, and randomly select one or more integer points n in [1, k-1] as crossover positions. Then the crossover operation is performed according to the crossover probability P: the two selected individuals exchange their respective partial data at the crossover positions according to the set requirements, yielding two new individuals. Commonly used crossover operators include single-point crossover and multi-point crossover; this article chooses the single-point crossover operator. Mutation: Mutation changes the value at some position in the offspring's string with a small probability; in binary coding, it changes 1 to 0 and 0 to 1. Mutation operators provide new information for the population and maintain the diversity of the population's chromosomes. Mutation in genetic algorithms changes the value of the gene at the relevant position according to the set mutation probability P. The mutation operator in this paper uses a multi-point mutation operator over all gene segments. First, determine whether each gene is mutated.
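Roulette-wheel selection and the single-point crossover chosen above can be sketched as follows (a hedged illustration under the paper's assumption that fitness values are non-negative):

```python
import random

def roulette_select(pop, fits, rng=random):
    """Pick one individual with probability proportional to its
    (non-negative) fitness value."""
    r = rng.random() * sum(fits)
    acc = 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc > r:
            return ind
    return pop[-1]          # numerical safety net

def single_point_crossover(a, b, rng=random):
    """Cut both parents at a random point in [1, k-1] and swap the tails."""
    p = rng.randrange(1, len(a))
    return a[:p] + b[p:], b[:p] + a[p:]
```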
If a gene is mutated, a random number is generated in [0, 2^k - 1], which transforms the position of some hidden information. However, the newly generated random number cannot be equal or adjacent to any gene value already contained in the current chromosome; this ensures that the replacement gene value does not conflict with the current gene values. Control parameters The control parameters mainly include population size, code length, number of iterations, mutation probability, and crossover probability. These parameters are all set to fixed values in the basic genetic algorithm. In addition, when a specific operator is chosen, its associated parameters must also be selected. The proper choice of genetic algorithm parameters directly affects the convergence speed and accuracy of the algorithm. However, because many factors influence parameter selection, it is necessary to adapt the parameters to the characteristics of the problem itself, so that the genetic algorithm can solve and optimize different types of problems with powerful global search capability. The choice of evolutionary termination criteria There are two termination conditions for the loop: one is to set a maximum number of generations; the other is to terminate the loop when the variance of the individual fitness values in the population falls below a certain set value. In this paper, the algorithm terminates when the number of generations reaches the upper bound of 200 or when there is no change for four consecutive generations. The individual with the highest fitness value, decoded according to the encoding rules, becomes the optimal information hiding distribution scheme.
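One hedged reading of the constrained mutation described above (replace a gene with a value in [0, 2^k - 1] that is neither equal nor adjacent to any value already in the chromosome) is the following sketch; the function name and the "skip if no legal value exists" fallback are our assumptions, not the paper's:

```python
import random

def mutate_gene(chrom, i, k, rng=random):
    """Replace gene i with a value in [0, 2**k - 1] that is neither equal
    nor adjacent to any value currently in the chromosome."""
    taken = set(chrom)
    forbidden = taken | {v - 1 for v in taken} | {v + 1 for v in taken}
    candidates = [v for v in range(2 ** k) if v not in forbidden]
    if not candidates:              # no legal replacement: leave gene as-is
        return list(chrom)
    new = list(chrom)
    new[i] = rng.choice(candidates)
    return new
```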
Information hiding and extraction Information hiding steps: encrypt the image; select the mother image; optimize the embedding according to the genetic algorithm; calculate the optimal solution and hide the image. Image extraction steps: extract the embedded information; perform the appropriate operations according to the genetic algorithm parameter values to restore the secret information as it was before embedding; restore the secret information to the original image based on the scramble key and the seed. Arnold transform The Arnold transform was proposed by the Russian mathematician Vladimir I. Arnold; the 2D Arnold transform of an N × N digital image is defined as $$ \left[\begin{array}{l}{x}^{\prime}\\ {}{y}^{\prime}\end{array}\right]={\left[\begin{array}{ll}a & b\\ c & d\end{array}\right]}^n\left[\begin{array}{l}x\\ {}y\end{array}\right]\operatorname{mod}N $$ where x, y are the pixel coordinates of the original image, x', y' are the transformed pixel coordinates, and |ad - bc| = 1 is required. Experimental results Picture attributes In order to verify the feasibility and effectiveness of the algorithm, we chose four images as mother images. As shown in Fig. 4, the encrypted image has a size of 512 × 256. One of the four mother images is a life photo shot with an MI 4 phone; the other three are pictures from the Internet, two in color and one in black and white. They are numbered 1 (life photo), 2 (Internet color picture), 3 (Internet color picture), and 4 (Internet black and white picture). The encrypted pictures come from the Internet. Simulation environment The data processing in this paper is based on MATLAB R2014b (8.4) under Windows 7 Ultimate Edition 64-bit SP1. Genetic algorithm parameters In this paper, the genetic algorithm is set to population size = 30, crossover probability = 0.5, mutation probability = 0.02, and maximum iterations to find the optimal solution = 200.
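The Arnold transform defined above is a permutation of pixel positions and, because |ad - bc| = 1, is exactly invertible. A NumPy sketch follows; the classic parameters a = b = c = 1, d = 2 are one common choice and are our assumption, since the paper does not fix them here:

```python
import numpy as np

def arnold(img, n=1, a=1, b=1, c=1, d=2):
    """Apply the Arnold transform n times to an N x N image.
    |a*d - b*c| must be 1 so that the map is a bijection mod N."""
    assert abs(a * d - b * c) == 1
    out = np.asarray(img).copy()
    N = out.shape[0]
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    xp = (a * x + b * y) % N
    yp = (c * x + d * y) % N
    for _ in range(n):
        nxt = np.empty_like(out)
        nxt[xp, yp] = out[x, y]     # pixel (x, y) moves to (x', y')
        out = nxt
    return out
```

Descrambling applies the inverse matrix [[d, -b], [-c, a]] the same number of times.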
For the image hiding technique, first transform the image to be hidden and scramble the original image according to a certain coding method to form the key. For an RGB image, the original image and its R-layer, G-layer, and B-layer images are shown in Fig. 2. Comparison of different layers (the upper left is the original image, the upper right is the R-layer image, the lower left is the G-layer image, and the lower right is the B-layer image) The scrambling of the different layers in Fig. 2 is shown in Fig. 3. Scrambling effect (the upper left is the encrypted R layer, the upper right is the encrypted G layer, the lower left is the encrypted B layer, and the lower right is the three-layer encrypted image) Figure 3 shows that the scrambling effects of the different layers differ. For example, the scrambled R and B layers have black backgrounds and show a certain similarity to each other, while the scrambled G layer is similar to the three-layer encrypted image, the only difference being the color. To recover the original image, the transformation is reversed; this achieves the effect of image encryption. The four mother images and the image to be embedded are shown in Fig. 4. Mother images and embedded image (the first row is the embedded image, the second and third rows are the mother images) In order to compare the effect of the encrypted image embedded in the mother image, this article resizes the photos to be encrypted for testing. The sizes are 512 × 256, 384 × 256, 256 × 256, and 128 × 256. After embedding into the four mother images, the effect is as shown in Fig. 5. Effect after embedding into the mother image Figure 5 shows the results after hidden pictures are coded by this method and hidden in the mother images.
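The LSB baseline used for comparison replaces each pixel's least significant bit with one secret bit, so each pixel value changes by at most 1. A minimal sketch (our own illustration, not the paper's exact implementation):

```python
import numpy as np

def lsb_embed(cover, bits):
    """Write one secret bit into the least significant bit of each pixel."""
    flat = np.asarray(cover, dtype=np.uint8).ravel().copy()
    bits = np.asarray(bits, dtype=np.uint8)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | (bits & 1)
    return flat.reshape(np.shape(cover))

def lsb_extract(stego, n_bits):
    """Read back the first n_bits least significant bits."""
    return np.asarray(stego, dtype=np.uint8).ravel()[:n_bits] & 1
```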
From the displayed results, the hiding method of this article maintains good visual quality of the mother image. The LSB algorithm is a commonly used information hiding algorithm, and the peak signal-to-noise ratio reflects the merit of an algorithm. Compared with the LSB algorithm, the signal-to-noise ratio of this method is obviously higher; the result is shown in Fig. 6, and the embedding effect is shown in Fig. 7. The results in Fig. 7 show that the hiding effect of LSB is obviously worse than that of the method in this paper (Table 1). Comparison of two methods LSB embedding effect Table 1 Peak signal-to-noise ratio after hiding The hiding capacity of the proposed algorithm can be adjusted according to specific conditions, up to the maximum amount of information that can be embedded in the image. Using the idea of the genetic algorithm to obtain the best embedding effect, the experimental results show that the carrier image after embedding is basically consistent with the original image. The algorithm also has poor robustness and is easy to attack; this is a common problem of spatial-domain information hiding. As a non-deterministic quasi-natural algorithm, the genetic algorithm provides a new method for the optimization of complex systems and has proved to be effective. Although the genetic algorithm has wide application value in many fields, it still has some problems, and scholars in various countries have been exploring improvements so that it has an even wider range of applications. LSB: Least Significant Bit PSNR: Peak signal-to-noise ratio RGB: Red Green Blue Y.E. Wei, G.L. Wang, X.G. Gong, Internal information hiding algorithm based on discrete cosine transform in cloud computing environment[J]. Sci. Technol. Engrg. 18(7), 166–171 (2018) Z. Shou, A. Liu, S.
Li, et al., Spatial outlier information hiding algorithm based on complex transformation[C]// International Conference on Security, Privacy and Anonymity in Computation, Communication and Storage (Springer, Cham, 2017), pp. 241–255 Y. Liu, Information hiding algorithm of wavelet transform based on JPEG image[J]. J. Beihua Univ. 18(5), 697–700 (2017) W.A. Shukur, K.K. Jabbar, Information hiding using LSB technique based on developed PSO algorithm[J]. Int. J. Electr. Comput. Engrg. 81(2), 1156–1168 (2018) D. Dolev, C. Dwork, et al., Non-malleable cryptography[J]. SIAM Rev. 45(4), 727–784 (2003) D. Deutsch, A. Ekert, R. Jozsa, et al., Quantum privacy amplification and the security of quantum cryptography over noisy channels[J]. Phys. Rev. Lett. 77(13), 2818–2821 (1996) T. Jennewein, C. Simon, G. Weihs, et al., Quantum cryptography with entangled photons[J]. Phys. Rev. Lett. 84(20), 4729 (2000) O. Goldreich, Foundations of cryptography: basic applications[J]. J. ACM 10(509), 359–364 (2004) D.E. Goldberg, Genetic algorithms in search, optimization and machine learning[J]. Addison Wesley xiii(7), 2104–2116 (1989) D.E. Goldberg, et al., Genetic algorithms in search, optimization and machine learning[J]. xiii(7), 2104–2116 (1989) K. Deb, S. Agrawal, A. Pratap, et al., A fast elitist non-dominated sorting genetic algorithm for multi-objective optimisation: NSGA-II[C]// International Conference on Parallel Problem Solving From Nature (Springer-Verlag, 2000), pp. 849–858 Y. Laalaoui, N. Bouguila, Pre-run-time scheduling in real-time systems: current researches and artificial intelligence perspectives[J]. Expert Syst. Appl. 41(5), 2196–2210 (2014) R. Jennane, A. Almhdie-Imjabber, R. Hambli, et al., Genetic algorithm and image processing for osteoporosis diagnosis[C]// International Conference of the IEEE Engineering in Medicine & Biology. Conf. Proc. IEEE Eng. Med. Biol. Soc., 2010:5597 M. Yoshioka, S.
Omatu, Noise reduction method for image processing using genetic algorithm[C]// IEEE International Conference on Systems, Man, and Cybernetics, 1997. Comput. Cybern. Simul. IEEE, 2650–2655, 3 (2002) C. Alippi, R. Cucchiara, Cluster partitioning in image analysis classification: a genetic algorithm approach[C]// CompEuro'92, 'Computer Systems and Software Engineering', proceedings. IEEE, 139–144 (1992) U. Maulik, S. Bandyopadhyay, Fuzzy partitioning using a real-coded variable-length genetic algorithm for pixel classification[J]. IEEE Trans. Geosci. Remote Sens. 41(5), 1075–1081 (2003) M. Asjad, S. Khan, Analysis of maintenance cost for an asset using the genetic algorithm[J]. Int. J. Syst. Assur. Eng. Manag. 8(2), 1–13 (2016) A.R.F. Pinto, A.F. Crepaldi, M.S. Nagano, A genetic algorithm applied to pick sequencing for billing[J]. J. Intell. Manuf. 1–18 (2017) M. Conner, C. Patel, M. Little, Genetic algorithm/artificial life evolution of security vulnerability agents[C]// Military Communications Conference Proceedings, 1999. Milcom. IEEE 1, 739–743 (1999) X.P. Wang, L.M. Cao, H.B. Shi, Design of artificial life demo system based on genetic algorithm[J]. J. Tongji Univ. 31(2), 224–228 (2003) Q. Zong, W. Guo, A speech information hiding algorithm based on the energy difference between the frequency band[C]// International Conference on Consumer Electronics, Communications and Networks. IEEE, 3078–3081 (2012) Y. Hu, C. Zhang, Y. Su, Information hiding based on intra prediction modes for H.264/AVC[C]// IEEE International Conference on Multimedia and Expo. IEEE, 1231–1234 (2007) J. Xie, C. Yang, D. Huang, High capacity information hiding algorithm for DCT domain of image[C]// International Conference on Intelligent Information Hiding and Multimedia Signal Processing. IEEE Comput. Soc., 269–272 (2008) J. Xie, C. Yang, Q. Xie, et al., High capacity information hiding algorithm in a binary host image[J]. J. Chin. Comput.
Syst. 29(10), 1874–1877 (2008) C. Jiang, A summary to algorithms of digital multimedia information hiding technology and its applications of digital watermarking[J]. J. Shanxi Educ. College 2(3), 25–28 (2003) The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions. I would like to acknowledge all the team members, especially Yiqin Sun. Xiaowei Gu was born in Rudong, Jiangsu, People's Republic of China, in 1980. He received the Ph.D. degree in physical electronics from the University of Electronic Science and Technology of China, Chengdu, China, in 2011. From 2008 to 2011, he was with the University of Electronic Science and Technology of China. Since 2011, he has been an associate professor with the School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China. From 2013 to 2015, he conducted the Post-Doctoral Research with the College of Electrical Engineering, Zhejiang University, Hangzhou. His current research interests include electronic technology, information science, and computational intelligence. Yiqin Sun was born in Rudong, Jiangsu, People's Republic of China, in 1981. She received her Master's degree from the University of Electronic Science and Technology of China, Chengdu, China, in 2008. Now, she works in Key Laboratory for RF Circuits and Systems of Ministry of Education, Hangzhou Dianzi University, Hangzhou, China. Her research interests include electronic science and computer technology, image processing, and information security. This research was supported by the National Natural Science Foundation of China under grant no. 51407156, Public welfare Technology Application Research Programs of Science and Technology Department of Zhejiang province under grant no. 2016C33018, and the Project Grants 521 Talents Cultivation of Zhejiang Sci-Tech University and Key Laboratory for RF Circuits and Systems (Hangzhou Dianzi University), Ministry of Education. 
We can provide the data. School of Information Science and Technology, Zhejiang Sci-Tech University, 928 Second Avenue, Xiasha Higher Education Zone, Hangzhou, Zhejiang, People's Republic of China Xiaowei Gu Key Laboratory for RF Circuits and Systems of Ministry of Education, Hangzhou Dianzi University, 1158, No. 2 Street, Xiasha Higher Education Zone, Hangzhou, Zhejiang Province, 310018, People's Republic of China Yiqin Sun All authors took part in the discussion of the work described in this paper. These authors contributed equally to this work and should be considered co-first authors. Both authors read and approved the final manuscript. Correspondence to Xiaowei Gu. Xiaowei Gu and Yiqin Sun contributed equally and are co-authors of this manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Information hiding Image transformation Visual Information Learning and Analytics on Cross-Media Big Data
On a class of analytic functions associated to a complex domain concerning q-differential-difference operator Rabha W. Ibrahim ORCID: orcid.org/0000-0001-9341-025X1,2 & Maslina Darus3 In our current investigation, we apply the ideas of quantum calculus and the convolution product to modify a generalized Salagean q-differential operator. By considering the new operator together with the classical Janowski function, we define certain new classes of analytic functions in the open unit disk. Significant properties of these classes are studied, and sharp consequences and geometric illustrations are obtained. Applications are given to the existence of solutions of a new class of q-Briot–Bouquet differential equations. The q-calculus motivates the construction of new q-special functions, of new differential and difference operators, and of generalizations of well-known differential and difference equations. The structure of q-calculus also refines many families of orthogonal polynomials and functions relative to their classical counterparts. The connection between differential formulas (equations, operators and inequalities) and their solutions is one of the most useful tools for studying properties of special functions in mathematical analysis and mathematical physics, and the relevance of these considerations to applications in physics hardly needs emphasis. The q-operators typically give rise to q-difference equations (which may involve derivatives), and we show the close connection between such operators and their q-difference equations. We shall present a technique for developing and understanding, from a geometric viewpoint, numerous properties and characteristics of q-operators. Much of the theory of q-calculus was developed only recently, although q-difference equations were already studied extensively by Carmichael [1], Jackson [2], Mason [3], and Trjitzinsky [4].
The combination of geometric function theory with q-theory was first investigated by Ismail et al. [5]. Many differential and integral operators can be written in terms of convolution, such as the Sàlàgean differential operator [6], the Al-Oboudi differential operator [7], and the generalized differential operator [8]; the convolution method helps investigators in the further study of the geometric properties of well-known classes of analytic univalent functions. Recently, Naeem et al. [9] studied classes involving the Sàlàgean q-differential operator. By combining q-calculus with the generalized Sàlàgean differential operator [8], we present a new generalized q-operator, called the generalized Sàlàgean q-differential operator. Using this operator, we formulate some new classes and investigate their geometric properties. Preliminaries We shall use the following notions throughout this study. A function υ analytic in \(\mathbb{U}=\{z \in \mathbb{C}: |z|<1\}\) (the open unit disk) is called univalent in \(\mathbb{U}\) if \(\xi _{1}{\ne }\xi _{2}\) in \(\mathbb{U}\) implies \(\upsilon (\xi _{1}){\ne }\upsilon (\xi _{2})\); equivalently, \(\upsilon ( \xi _{1})=\upsilon (\xi _{2})\) implies \(\xi _{1}=\xi _{2}\). We write Λ for the class of analytic functions normalized by $$ \upsilon (z)=z+ \sum_{n=2}^{\infty } \vartheta _{n} z^{n},\quad z \in \mathbb{U}. $$ We let \(\mathcal{S}\) denote the class of functions \({\upsilon } \in {\varLambda }\) that are univalent in \(\mathbb{U}\). A function \({\upsilon }\in {\mathcal{S}}\) is called starlike with respect to the origin in \(\mathbb{U}\) if the line segment joining the origin to every other point of the image \(\upsilon (\{|z|=r<1\})\) lies entirely inside that image (every point of the image is visible from the origin).
A function \({\upsilon }\in {\mathcal{S}}\) is called convex in \(\mathbb{U}\) if the line segment joining any two points of the image \(\upsilon (\{|z|=r<1\})\) lies entirely inside that image; equivalently (by Alexander's theorem), υ is convex in \(\mathbb{U}\) if and only if \(z\upsilon '(z)\) is starlike. We denote the class of functions \({\upsilon }\in {\mathcal{S}}\) that are starlike with respect to the origin by \({\mathcal{S}}^{*}\) and the class of convex functions in \(\mathbb{U}\) by \(\mathcal{C}\). Connected to the classes \({\mathcal{S}}^{*}\) and \(\mathcal{C}\) is the class \(\mathcal{P}\) of all analytic functions υ in \(\mathbb{U}\) with a positive real part in \(\mathbb{U}\) and \({\upsilon }(0)=1\). In fact, \({\upsilon }\in {\mathcal{S}}^{*}\) if and only if \(z\upsilon '(z)/\upsilon (z) \in {\mathcal{P}}\), and \({\upsilon }\in {\mathcal{C}}\) if and only if \(1+z\upsilon ''(z)/ \upsilon '(z) \in {\mathcal{P}}\). More generally, for \({\sigma } \in [0,1)\), we consider the class \({\mathcal{P}}( \sigma )\) of analytic functions υ in \(\mathbb{U}\) with \({\upsilon }(0)=1\) such that \(\Re ({\upsilon }(z))>{\sigma }\) for all \({z}\in {\mathbb{U}}\). Note that [10] \({\mathcal{P}}({\sigma }_{2}) \subset {\mathcal{P}}({\sigma }_{1}) \subset {\mathcal{P}}(0) \equiv {\mathcal{P}}\) for \(0<{\sigma }_{1}<{\sigma }_{2}\). According to [11], for two functions υ and \(\nu \in \varLambda \), the function υ is subordinate to ν, denoted by \(\upsilon \prec \nu \), if there exists a Schwarz function ς with \(\varsigma (0)=0\) and \(|\varsigma (z)|<1\) such that \(\upsilon (z) = \nu (\varsigma (z)) \) for all \(z \in {\mathbb{U}}\). When ν is univalent, \(\upsilon (z) \prec \nu (z)\) is equivalent to \(\upsilon (0) =\nu (0) \) and \(\upsilon ({\mathbb{U}}) \subset \nu ({\mathbb{U}})\).
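To make the subordination definition concrete, here is a small numerical sanity check (a toy example of ours, not from the paper): with \(\nu (z)=z\) and the Schwarz function \(\varsigma (z)=z^{2}\), the function \(\upsilon (z)=z^{2}=\nu (\varsigma (z))\) is subordinate to ν, and indeed \(\upsilon (0)=\nu (0)\) and \(\upsilon (\mathbb{U})\subset \nu (\mathbb{U})=\mathbb{U}\):

```python
import cmath, math

# Toy subordination check (our example): upsilon(z) = z^2 is subordinate
# to nu(z) = z via the Schwarz function varsigma(z) = z^2.
nu = lambda z: z
varsigma = lambda z: z**2
upsilon = lambda z: nu(varsigma(z))           # equals z**2

assert upsilon(0) == nu(0) == 0               # upsilon(0) = nu(0)
for t in range(16):
    z = 0.95 * cmath.exp(2j * math.pi * t / 16)   # sample |z| = 0.95
    assert abs(varsigma(z)) < 1               # Schwarz: |varsigma(z)| < 1
    assert abs(upsilon(z)) < 1                # upsilon(U) inside nu(U) = U
```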
For any non-negative integer n, the q-integer number n, symbolized by \([n, q] \), is defined by \([n, q] = \frac{1-q ^{n}}{1-q}\), where \([0, q] =0 \), \([1, q] =1\) and \(\lim_{q\rightarrow 1^{-}} [n, q] =n\). The q-difference operator of an analytic function υ is defined by $$ \Delta _{q}\upsilon (z)= \frac{\upsilon (qz)-\upsilon (z)}{qz-z},\quad z \in {\mathbb{U}}. $$ Clearly, \(\Delta _{q} z^{n}= [n,q] z^{n-1}\), and for \(\upsilon \in \varLambda \) we have $$ \Delta _{q} \upsilon (z)= \sum_{n=1}^{\infty } \vartheta _{n} [n,q] z ^{n-1}, \quad z \in {\mathbb{U}}, \vartheta _{1}=1. $$ For \(\upsilon \in \varLambda \), Govindaraj and Sivasubramanian presented the Sàlàgean q-differential operator [12] $$ S^{0}_{q} \upsilon (z)=\upsilon (z),\qquad S^{1}_{q}\upsilon (z)= z \Delta _{q} \upsilon (z), \ldots, S^{k}_{q} \upsilon (z)=z \Delta _{q} \bigl( S^{k-1}_{q}\upsilon (z) \bigr), $$ where k is a positive integer. A computation based on the formula for \(\Delta _{q}\) shows \(S^{k}_{q}\upsilon (z)= \upsilon (z)*\varOmega _{q} ^{k}(z)\), where ∗ is the convolution product, \(\varOmega _{q}^{k}(z)= z+ \sum_{n=2}^{\infty } [n,q]^{k} z^{n}\) and \(S^{k}_{q}\upsilon (z)=z+ \sum_{n=2}^{\infty } [n,q]^{k} \vartheta _{n} z^{n}\). It is clear that $$ \lim_{q\rightarrow 1^{-}}S^{k}_{q}\upsilon (z)=z+ \sum_{n=2}^{\infty } n^{k} \vartheta _{n} z^{n}, $$ which is the normalized Sàlàgean differential operator [6].
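The q-integer and the q-difference operator are easy to verify numerically. The sketch below (helper names are ours, not from the paper) checks the identity \(\Delta _{q} z^{n}=[n,q] z^{n-1}\) at a sample point and the limit \([n,q]\rightarrow n\) as \(q\rightarrow 1^{-}\):

```python
# Numerical check of the q-difference operator Delta_q (a sketch; the
# function names are ours, not from the paper).
def q_int(n, q):
    """q-integer [n, q] = (1 - q**n) / (1 - q)."""
    return (1 - q**n) / (1 - q)

def delta_q(f, z, q):
    """Delta_q f(z) = (f(q z) - f(z)) / (q z - z)."""
    return (f(q * z) - f(z)) / (q * z - z)

q, z, n = 0.7, 0.3 + 0.2j, 5
lhs = delta_q(lambda w: w**n, z, q)
rhs = q_int(n, q) * z**(n - 1)
assert abs(lhs - rhs) < 1e-12                 # Delta_q z^n = [n,q] z^(n-1)
assert abs(q_int(4, 1 - 1e-9) - 4) < 1e-6     # [n,q] -> n as q -> 1^-
```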
For a function \(\upsilon (z) \) and a constant \(\kappa \in \mathbb{R}\), we construct the generalized q-Sàlàgean differential-difference operator (q-SDD) employing the idea of \(\Delta _{q} \) as follows: $$\begin{aligned} \begin{aligned} & \mathcal{S}_{q}^{\kappa ,0}\upsilon (z)= \upsilon (z), \\ & \mathcal{S}_{q}^{\kappa ,1} \upsilon (z)=z \Delta _{q} \upsilon (z)+\frac{ \kappa }{2} \bigl( \upsilon (z)-\upsilon (-z)-2z \bigr) \\ &\phantom{\mathcal{S}_{q}^{\kappa ,1} \upsilon (z)} =z+ \sum_{n=2}^{\infty } \biggl([n,q]+ \frac{\kappa }{2} \bigl(1+(-1)^{n+1}\bigr) \biggr) \vartheta _{n} z^{n}, \\ & \mathcal{S}_{q}^{\kappa ,2}\upsilon (z)= \mathcal{S}_{q}^{\kappa ,1} \bigl[ \mathcal{S}_{q}^{\kappa ,1} \upsilon (z)\bigr]= z+ \sum_{n=2}^{\infty } \biggl([n,q]+\frac{\kappa }{2} \bigl(1+(-1)^{n+1}\bigr) \biggr) ^{2} \vartheta _{n} z^{n}, \\ & \vdots \\ & \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)= \mathcal{S}_{q}^{\kappa ,1} \bigl[ \mathcal{S}_{q}^{\kappa ,k-1} \upsilon (z)\bigr]= z+ \sum_{n=2}^{\infty } \biggl([n,q]+\frac{\kappa }{2} \bigl(1+(-1)^{n+1}\bigr) \biggr)^{k} \vartheta _{n} z^{n} . \end{aligned} \end{aligned}$$ Obviously, \(\lim_{q\rightarrow 1^{-}}\mathcal{S}_{q}^{\kappa ,k} \upsilon (z)\) yields the generalized Sàlàgean differential–difference operator [8], which is a special type of Dunkl operator with Dunkl constant κ in the open unit disk [13]. Moreover, when \(\kappa =0\) and \(q\rightarrow 1^{-}\) (that is, \(\lim_{q\rightarrow 1^{-}} \mathcal{S}_{q}^{0,k}\upsilon (z)\)), we recover the normalized Sàlàgean differential operator [6]. Finally, when \(\kappa =0\) alone, we obtain the q-Sàlàgean differential operator \(\mathcal{S}_{q}^{0,k}\upsilon (z)\) (see [12]). Based on the operator (2), we introduce the following classes.
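Since \(\mathcal{S}_{q}^{\kappa ,k}\) acts on the Taylor coefficients by the multiplier \(([n,q]+\frac{\kappa }{2}(1+(-1)^{n+1}))^{k}\), the operator in (2) can be sketched numerically as follows (function names are ours). Note that the Dunkl term affects only the odd-indexed coefficients, and \(\kappa =0\) recovers the Sàlàgean q-operator:

```python
# Coefficient form of the q-SDD operator S_q^{kappa,k} (our sketch):
# the n-th coefficient is multiplied by
# ([n,q] + (kappa/2)(1 + (-1)^(n+1)))^k.
def multiplier(n, q, kappa, k):
    qn = (1 - q**n) / (1 - q)                  # q-integer [n, q]
    return (qn + 0.5 * kappa * (1 + (-1)**(n + 1)))**k

def q_sdd(coeffs, q, kappa, k):
    """Apply S_q^{kappa,k} to f(z) = z + sum_{n>=2} coeffs[n] z^n.
    coeffs: dict {n: vartheta_n}; returns the transformed coefficients."""
    return {n: multiplier(n, q, kappa, k) * a for n, a in coeffs.items()}

out = q_sdd({2: 1.0, 3: 0.5}, q=0.9, kappa=2.0, k=1)
# kappa = 0 reduces to the Salagean q-operator: multiplier is [n,q]^k
assert abs(multiplier(2, 0.9, 0.0, 1) - 1.9) < 1e-12
# even n: the Dunkl term vanishes, since 1 + (-1)^(n+1) = 0
assert abs(multiplier(2, 0.9, 5.0, 1) - multiplier(2, 0.9, 0.0, 1)) < 1e-12
```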
A function \(\upsilon \in \varLambda \) is in the class \(S_{q}^{*}(\kappa ,k,h) \) if and only if $$ S_{q}^{*}(\kappa ,k,h) = \biggl\{ \upsilon \in \varLambda : \frac{z ( \mathcal{S}_{q}^{\kappa ,k}\upsilon (z))'}{ \mathcal{S}_{q}^{\kappa ,k} \upsilon (z)}\prec h(z), h \in \mathcal{C} \biggr\} . $$ \(S_{q}^{*}(\kappa ,0,h)=\mathcal{S}^{*}(h)\); \(S_{q}^{*}(\kappa ,0,h)=\mathcal{S}^{*}(h), h(z)=\frac{1+Az}{1+Bz}\) (see [14,15,16]); \(S_{q}^{*}(\kappa ,0,h)=\mathcal{S}^{*}(h), h(z)=\frac{2}{1+e^{-z}}\) (see [17]); \(S_{q}^{*}(\kappa ,0,h)=\mathcal{S}^{*}(h), h(z)=\frac{1+\epsilon ^{2}z^{2}}{1-\epsilon z-\epsilon ^{2}z^{2}}, \epsilon = \frac{1- \sqrt{5}}{2}\) (see [18, 19]); \(S_{q}^{*}(\kappa ,0,h)=\mathcal{S}^{*}(h), h(z)=1+\frac{\beta - \alpha }{\pi }i \log (\frac{1-\exp (2\pi i(\frac{1-\alpha }{ \beta -\alpha }) )z}{1-z})\) (see [20]); \(S_{q}^{*}(\kappa ,0,h)=\mathcal{S}^{*}(h), h(z)=1+\frac{2}{\pi (1- \alpha )}i \log (\frac{1-\exp (\pi i(1-\alpha )^{2})z}{1-z})\) (see [21]); \(S_{q}^{*}(\kappa ,0,h)=\mathcal{S}^{*}(h), h(z)=\sqrt{1+z}\) (see [22]); \(S_{q}^{*}(\kappa ,0,h)=\mathcal{S}^{*}(h), h(z)=1+\sin (z)\) (see [23]); \(S_{q}^{*}(\kappa ,0,h)=\mathcal{S}^{*}(h), h(z)=1+\cos (z)\) (see [24]); \(S_{q}^{*}(\kappa ,0,h)=\mathcal{S}^{*}(h), h(z)= (\frac{1+z}{1+( \frac{1-c}{c})z} )^{\frac{1}{\mu }}, \mu \geq 1, c\geq 0.5\) (see [25]). If \(\upsilon \in \varLambda \), then \(\upsilon \in \mathbb{J}_{q}^{\kappa ,b} (A,B,k)\) if and only if $$\begin{aligned} &1+\frac{1}{b} \biggl( \frac{2 \mathcal{S}_{q}^{\kappa ,k+1} \upsilon (z) }{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)- \mathcal{S}_{q}^{ \kappa ,k}\upsilon (-z)} \biggr) \prec \frac{1+Az}{1+Bz} \\ &\quad \bigl(z \in \mathbb{U}, -1\leq B< A\leq 1, k=1,2,\ldots, b \in \mathbb{C} \setminus \{0\}, \kappa \in \mathbb{R} \bigr).
\end{aligned}$$ \(\kappa =0, q\rightarrow 1^{-} \Longrightarrow \) [26]; \(\kappa =0, B=0, q\rightarrow 1^{-} \Longrightarrow \) [27]; \(\kappa =0, A=1,B=-1 , b=2, q\rightarrow 1^{-} \Longrightarrow \) [28]; \(q\rightarrow 1^{-} \Longrightarrow \) [8]. We shall study the geometric significance of the special classes \(S_{q}^{*}(\kappa ,k,h)\) and \(\mathbb{J}_{q}^{\kappa ,b} (A,B,k)\) by using the following preliminaries, which can be found in [11]. Lemma 2.1 Suppose that \({\vartheta }\in {\mathbb{C}}\), n is a positive integer, and $$ \mathfrak{H}[\vartheta ,n] = \bigl\{ \upsilon :\upsilon (z)= \vartheta + \vartheta _{n}z^{n}+\vartheta _{n+1}z^{n+1}+ \cdots \bigr\} . $$ (i) If \(\wp \geq 0\) then \(\Re (\upsilon (z)+ \wp z \upsilon '(z) )>0 \Longrightarrow \Re (\upsilon (z) )>0\). Moreover, if \(\wp >0\) and \({\upsilon }\in \mathfrak{H}[1,n]\), then there are constants \(\ell >0\) and \(\flat >0\) with \(\flat =\flat (\wp ,\ell ,n)\) so that $$ \upsilon (z)+\wp z{\upsilon }'(z) \prec \biggl[ \frac{1+z}{1-z} \biggr] ^{\flat }\quad\Rightarrow\quad {\upsilon }(z)\prec \biggl[\frac{1+z}{1-z} \biggr] ^{\ell }. $$ (ii) If \(\nu \in [0,1)\) and \(\upsilon \in \mathfrak{H}[1,n]\), then there is a constant \({\ell }=\ell (\nu ,n)>0\) so that $$ \Re \bigl(\upsilon ^{2}(z)+2{\upsilon }(z).z{\upsilon }'(z) \bigr)> \nu \quad\Rightarrow\quad \Re \bigl(\upsilon (z)\bigr)> \ell . $$ (iii) If \(\upsilon \in \mathfrak{H}[\vartheta ,n]\) with \(\Re ( \vartheta ) >0\) and either $$ \Re \bigl(\upsilon (z)+z\upsilon '(z)+z^{2} \upsilon ''(z) \bigr)>0 $$ or, for some \(\alpha : \mathbb{U} \rightarrow \mathbb{R}\), $$ \Re \biggl(\upsilon (z)+\alpha (z) \frac{z\upsilon '(z)}{\upsilon (z)} \biggr)>0, $$ then \(\Re (\upsilon (z))> 0\). In this section, we study the geometric properties of the classes \(S_{q}^{*}(\kappa ,k,h)\) and \(\mathbb{J}_{q}^{\kappa ,b} (A,B,k)\) and the consequences of these classes for recent investigations by researchers.
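The Janowski target \(\frac{1+Az}{1+Bz}\) appearing in the class \(\mathbb{J}_{q}^{\kappa ,b} (A,B,k)\) maps the unit disk into the right half-plane for admissible \(-1\leq B<A\leq 1\); a quick numerical sample check (ours, not from the paper) for two such pairs:

```python
import cmath, math

# Sample check (ours): the Janowski function (1+Az)/(1+Bz) keeps a
# positive real part on the unit disk for the tested admissible (A, B).
def janowski(z, A, B):
    return (1 + A * z) / (1 + B * z)

for t in range(12):
    z = 0.99 * cmath.exp(2j * math.pi * t / 12)   # points near |z| = 1
    assert janowski(z, 1.0, -1.0).real > 0        # A=1, B=-1: Re > 0
    assert janowski(z, 0.5, -0.3).real > 0
```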
For \(\upsilon \in \varLambda \), if one of the following statements holds: (i) \(\mathcal{S}_{q}^{\kappa ,k}\upsilon (z)\) is of bounded boundary rotation; (ii) υ satisfies the subordination $$ \bigl( \mathcal{S}_{q}^{\kappa ,k}\upsilon (z) \bigr)' \prec \biggl(\frac{1+z}{1-z} \biggr) ^{\flat },\quad \flat >0, z \in {\mathbb{U}}; $$ (iii) υ fulfills the inequality $$ \Re \biggl(\bigl( \mathcal{S}_{q}^{\kappa ,k}\upsilon (z) \bigr)' \frac{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)}{z} \biggr)> \frac{\varsigma }{2},\quad \varsigma \in [0,1), z \in {\mathbb{U}}; $$ (iv) υ obeys the relation $$ \Re \biggl( z \bigl( \mathcal{S}_{q}^{\kappa ,k} \upsilon (z) \bigr)'' -\bigl( \mathcal{S}_{q}^{\kappa ,k} \upsilon (z)\bigr)' + 2 \frac{\mathcal{S}_{q} ^{\kappa ,k} \upsilon (z) }{z} \biggr)> 0; $$ (v) υ admits the relation $$ \Re \biggl( \frac{z (\mathcal{S}_{q}^{\kappa ,k}\upsilon (z))' }{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z) } + 2 \frac{ \mathcal{S}_{q} ^{\kappa ,k} \upsilon (z)}{z} \biggr)> 1; $$ then \(\frac{\mathcal{S}_{q}^{\kappa ,k} \upsilon (z)}{z} \in {\mathcal{P}}(\sigma )\) for some \(\sigma \in [0,1)\). Consider a function ρ as follows: $$ \rho (z)= \frac{\mathcal{S}_{q}^{\kappa ,k} \upsilon (z)}{z}\quad \Rightarrow \quad z\rho '(z)+ \rho (z)= \bigl( \mathcal{S}_{q}^{\kappa ,k} \upsilon (z)\bigr)'. $$ For the first assertion: since \(\mathcal{S}_{q}^{\kappa ,k}\upsilon (z)\) is of bounded boundary rotation, we have \(\Re ( z\rho '(z)+ \rho (z))>0\). Thus, by Lemma 2.1.i, we obtain \(\Re (\rho (z))>0\), which proves the first part of the theorem. For the second assertion, we have the subordination $$ \bigl( \mathcal{S}_{q}^{\kappa ,k} \upsilon (z) \bigr)'=z\rho '(z)+ \rho (z) \prec \biggl[ \frac{1+z}{1-z}\biggr]^{\flat }. $$ Now, according to Lemma 2.1.i, there is a constant \(\ell >0\) with \(\flat =\flat (\ell )\) such that $$ \frac{\mathcal{S}_{q}^{\kappa ,k} \upsilon (z)}{z}\prec \biggl(\frac{1+z}{1-z} \biggr) ^{\ell }.
$$ This implies that $$ \Re \biggl(\frac{\mathcal{S}_{q}^{\kappa ,k}\upsilon (z)}{z} \biggr)> \sigma , \quad \sigma \in [0,1). $$ Continuing, we address the third assertion, which implies that $$ \Re \bigl(\rho ^{2}(z)+2\rho (z) . z\rho '(z) \bigr)=2\Re \biggl(\bigl( \mathcal{S}_{q}^{\kappa ,k} \upsilon (z)\bigr)' \frac{ \mathcal{S}_{q}^{ \kappa ,k} \upsilon (z)}{z} \biggr) > \varsigma . $$ In virtue of Lemma 2.1.ii, there is a constant \({\ell }>0\) such that \(\Re (\rho (z))>\ell \) and $$ \rho (z) = \frac{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z) }{z} \in {\mathcal{P}}(\sigma ), \quad \sigma \in [0,1). $$ It follows from (4) that \(\Re (( \mathcal{S}_{q}^{\kappa ,k} \upsilon (z))' )>0\), and thus, by the Noshiro–Warschawski and Kaplan theorems, \(\mathcal{S}_{q}^{\kappa ,k}\upsilon (z)\) is univalent and of bounded boundary rotation in \(\mathbb{U}\). By differentiating (3) and taking the real part, we have $$\begin{aligned} \Re \bigl( \rho (z)+z\rho '(z)+z^{2} \rho ''(z) \bigr) = \Re \biggl( z\bigl( \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)\bigr)'' -\bigl(\mathcal{S}_{q}^{\kappa ,k} \upsilon (z) \bigr)' + 2 \frac{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z) }{z} \biggr)> 0. \end{aligned}$$ Thus, in view of Lemma 2.1.iii, we attain \(\Re (\frac{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)}{z})>0\). By logarithmic differentiation of (3) and taking the real part, we obtain the following: $$\begin{aligned} \Re \biggl( 2\rho (z)+\frac{z\rho '(z)}{\rho (z)} \biggr) &=\Re \biggl( \frac{z (\mathcal{S}_{q}^{\kappa ,k}\upsilon (z) )' }{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)} + 2 \frac{ \mathcal{S} _{q}^{\kappa ,k}\upsilon (z) }{z} -1 \biggr)> 0. \end{aligned}$$ Thus, dividing by 2 and applying Lemma 2.1.iii with \(\alpha (z)=\frac{1}{2}\), we get \(\Re ( \frac{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)}{z})>0\). □ Consider \(\upsilon \in S_{q}^{*}(\kappa ,k,h)\), where h is a convex univalent function in \(\mathbb{U}\).
Then $$ \mathcal{S}_{q}^{\kappa ,k} \upsilon (z) \prec z \exp \biggl( \int ^{z} _{0} \frac{h(\eth (w))-1}{w} \,d w \biggr), $$ where \(\eth (z)\) is analytic in \(\mathbb{U}\), with \(\eth (0) = 0\) and \(|\eth (z)| < 1\). Moreover, for \(|z|=\chi \), \(\mathcal{S}_{q}^{ \kappa ,k}\upsilon (z)\) fulfills the formula $$ \exp \biggl( \int ^{1}_{0} \frac{h(\eth (-\chi ))-1}{\chi } \,d \chi \biggr) \leq \biggl\vert \frac{ \mathcal{S}_{q}^{\kappa ,k} \upsilon (z)}{z} \biggr\vert \leq \exp \biggl( \int ^{1}_{0} \frac{h(\eth (\chi ))-1}{\chi } \,d \chi \biggr). $$ Since \(\upsilon \in S_{q}^{*}(\kappa ,k,h)\), we get $$ \biggl(\frac{z( \mathcal{S}_{q}^{\kappa ,k} \upsilon (z))'}{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)} \biggr) \prec h(z), \quad z \in \mathbb{U}, $$ which yields a Schwarz function with \(\eth (0) = 0\) and \(|\eth (z)| < 1\) satisfying the following equality: $$ \biggl(\frac{z( \mathcal{S}_{q}^{\kappa ,k} \upsilon (z))'}{ \mathcal{S}_{q}^{\kappa ,k} \upsilon (z)} \biggr) = h\bigl(\eth (z)\bigr),\quad z \in \mathbb{U}. $$ A calculation gives $$ \biggl(\frac{( \mathcal{S}_{q}^{\kappa ,k} \upsilon (z))'}{ \mathcal{S} _{q}^{\kappa ,k} \upsilon (z)} \biggr)- \frac{1}{z} = \frac{ h(\eth (z))-1}{z}. $$ By integrating both sides, we obtain $$ \log \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)-\log z= \int ^{z}_{0} \frac{h( \eth (w))-1}{w} \,d w. $$ Thus, we have $$ \log \frac{ \mathcal{S}_{q}^{\kappa ,k} \upsilon (z)}{ z}= \int ^{z} _{0} \frac{h(\eth (w))-1}{w} \,d w. $$ By utilizing the meaning of subordination, we conclude that $$ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z) \prec z \exp \biggl( \int ^{z} _{0} \frac{h(\eth (w))-1}{w} \,dw \biggr).
$$ Besides, we find that the function \(h(z)\) maps the disk \(0< |z|< \chi <1\) onto a domain which is convex and symmetric with respect to the real axis, which means $$ h\bigl(-\chi \vert z \vert \bigr) \leq \Re \bigl(h\bigl(\eth (\chi z)\bigr) \bigr) \leq h\bigl(\chi \vert z \vert \bigr), \quad \chi \in (0,1); $$ then we obtain the next relations: $$ h(-\chi ) \leq h\bigl(-\chi \vert z \vert \bigr),\qquad h\bigl(\chi \vert z \vert \bigr) \leq h(\chi ), $$ $$ \int ^{1}_{0} \frac{h(\eth (-\chi \vert z \vert ))-1}{\chi }\,d\chi \leq \Re \biggl( \int ^{1}_{0} \frac{h(\eth (\chi ))-1}{\chi } \,d\chi \biggr)\leq \int ^{1}_{0} \frac{h(\eth (\chi \vert z \vert ))-1}{\chi }\,d\chi . $$ By employing Eq. (5), we deduce that $$ \int ^{1}_{0} \frac{h(\eth (-\chi \vert z \vert ))-1}{\chi }\,d \chi \leq \log \biggl\vert \frac{ \mathcal{S}_{q}^{\kappa ,k} \upsilon (z)}{ z} \biggr\vert \leq \int ^{1}_{0} \frac{h(\eth (\chi \vert z \vert ))-1}{\chi }\,d\chi , $$ which leads to $$ \exp \biggl( \int ^{1}_{0} \frac{h(\eth (-\chi \vert z \vert ))-1}{\chi }\,d\chi \biggr) \leq \biggl\vert \frac{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)}{ z} \biggr\vert \leq \exp \biggl( \int ^{1}_{0} \frac{h(\eth (\chi \vert z \vert ))-1}{ \chi }\,d\chi \biggr). $$ Letting \(|z| \rightarrow 1^{-}\) gives $$ \exp \biggl( \int ^{1}_{0} \frac{h(\eth (-\chi ))-1}{\chi } \,d \chi \biggr) \leq \biggl\vert \frac{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)}{z} \biggr\vert \leq \exp \biggl( \int ^{1}_{0} \frac{h(\eth (\chi ))-1}{\chi } \,d \chi \biggr). $$ ([8]) Let \(q \longrightarrow 1^{-}\) in Theorem 3.2. Then $$ \mathcal{S}_{1}^{\kappa ,k}\upsilon (z) \prec z \exp \biggl( \int ^{z} _{0} \frac{h(\eth (w))-1}{w} \,d w \biggr). $$ Note that all the special cases of the class \(S_{q}^{*}(\kappa ,k,h)\) can be considered as consequences of Theorem 3.2.
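As a sanity check of Theorem 3.2 in the classical case (our computation, not from the paper): for \(h(z)=\frac{1+z}{1-z}\) and ð the identity, \(\frac{h(w)-1}{w}=\frac{2}{1-w}\), so the bound \(z \exp (\int _{0}^{z}\frac{h(\eth (w))-1}{w}\,dw )\) evaluates to \(z\exp (-2\log (1-z))=\frac{z}{(1-z)^{2}}\), the Koebe function. The numerics below confirm this:

```python
import cmath

# Our numerical check: for h(z) = (1+z)/(1-z) and eth(z) = z, the bound
# z * exp(int_0^z (h(w)-1)/w dw) of Theorem 3.2 reduces to the Koebe
# function z/(1-z)^2, since (h(w)-1)/w = 2/(1-w) integrates to -2 log(1-z).
def bound(z, steps=20000):
    # trapezoidal integration of 2/(1-w) along the straight segment [0, z]
    total = 0.0
    for j in range(steps):
        w0 = z * j / steps
        w1 = z * (j + 1) / steps
        total += (2/(1 - w0) + 2/(1 - w1)) / 2 * (w1 - w0)
    return z * cmath.exp(total)

z = 0.4 + 0.1j
koebe = z / (1 - z)**2
assert abs(bound(z) - koebe) < 1e-6
```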
If \(\upsilon \in \mathbb{J}_{q}^{\kappa ,b} (A,B,k)\) then the odd function $$ \mathfrak{B}(z)=\frac{1}{2} \bigl[\upsilon (z)-\upsilon (-z)\bigr],\quad z \in \mathbb{U,} $$ attains the subordination inequalities $$ 1+\frac{1}{b} \biggl( \frac{ \mathcal{S}_{q}^{\kappa ,k+1} \mathfrak{B}(z)}{ \mathcal{S}_{q}^{\kappa ,k} \mathfrak{B}(z)}-1 \biggr) \prec \frac{1+Az}{1+Bz} $$ $$\begin{aligned} &\Re \biggl( \frac{z \mathfrak{B}(z)'}{ \mathfrak{B}(z)} \biggr) \geq \frac{1- \varrho ^{2}}{1+\varrho ^{2}},\quad \vert z \vert =\varrho < 1, \\ &\quad \bigl(z \in \mathbb{U}, -1\leq B< A\leq 1, k=1,2,\ldots, b \in \mathbb{C} \setminus \{0\}, \kappa \in \mathbb{R} \bigr). \end{aligned}$$ Let \(\upsilon \in \mathbb{J}_{q}^{\kappa ,b} (A,B,k)\). Then there exists a function \(P \in \mathbb{J}(A,B) \) with the layout $$ b \bigl(P(z)-1\bigr)= \biggl( \frac{2 \mathcal{S}_{q}^{\kappa ,k+1}\upsilon (z)}{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)- \mathcal{S}_{q}^{\kappa ,k} \upsilon (-z)} \biggr) $$ $$ b \bigl(P(-z)-1\bigr)= \biggl( \frac{-2 \mathcal{S}_{q}^{\kappa ,k+1}\upsilon (-z)}{ \mathcal{S}_{q}^{\kappa ,k} \upsilon (z)- \mathcal{S}_{q}^{\kappa ,k}\upsilon (-z)} \biggr). $$ This yields $$ 1+\frac{1}{b} \biggl( \frac{ \mathcal{S}_{q}^{\kappa ,k+1} \mathfrak{B}(z)}{ \mathcal{S}_{q}^{\kappa ,k} \mathfrak{B}(z)}-1 \biggr)= \frac{P(z)+P(-z)}{2}. $$ In addition, since P fulfills the inequality $$ P(z) \prec \frac{1+Az}{1+Bz}, $$ where \(\frac{1+Az}{1+Bz}\) is univalent, by the idea of subordination, we obtain $$ 1+\frac{1}{b} \biggl( \frac{ \mathcal{S}_{q}^{\kappa ,k+1} \mathfrak{B}(z)}{ \mathcal{S}_{q}^{\kappa ,k} \mathfrak{B}(z)}-1 \biggr) \prec \frac{1+Az}{1+Bz}. 
$$ Also, the odd function \(\mathfrak{B}(z)\) is starlike in \(\mathbb{U}\), which produces the subordination inequality $$ \frac{z \mathfrak{B}(z)'}{ \mathfrak{B}(z)} \prec \frac{1-z^{2}}{1+z ^{2}}, $$ that is, there exists a Schwarz function \(\gamma \) with \(| \gamma (z)| \leq |z|<1, \gamma (0)=0\) such that $$ \varXi (z):=\frac{z \mathfrak{B}(z)'}{ \mathfrak{B}(z)} = \frac{1- \gamma (z)^{2}}{1+\gamma (z)^{2}}, $$ $$ \gamma ^{2}(\zeta )=\frac{1-\varXi (\zeta )}{1+\varXi (\zeta )}, \quad \zeta \in \mathbb{U}, \vert \zeta \vert =r< 1. $$ A computation implies that $$ \biggl\vert \frac{1-\varXi (\zeta )}{1+\varXi (\zeta )} \biggr\vert = \bigl\vert \gamma (\zeta ) \bigr\vert ^{2} \leq \vert \zeta \vert ^{2}. $$ Therefore, we get the following inequalities: $$ \biggl\vert \varXi (\zeta )- \frac{1+ \vert \zeta \vert ^{4}}{1- \vert \zeta \vert ^{4}} \biggr\vert ^{2} \leq \frac{ 4 \vert \zeta \vert ^{4}}{(1- \vert \zeta \vert ^{4})^{2}} $$ and $$ \biggl\vert \varXi (\zeta )- \frac{1+ \vert \zeta \vert ^{4}}{1- \vert \zeta \vert ^{4}} \biggr\vert \leq \frac{2 \vert \zeta \vert ^{2}}{1- \vert \zeta \vert ^{4}}. $$ Consequently, we obtain the result $$ \Re \bigl(\varXi (\zeta )\bigr) \geq \frac{1-\varrho ^{2}}{1+\varrho ^{2}}, \qquad \vert \zeta \vert = \varrho < 1. $$ □ The following consequences of Theorem 3.3 can be found in [26, 27] and [8], respectively. Let \(\kappa =0\) in Theorem 3.3. Then $$ 1+\frac{1}{b} \biggl( \frac{\mathcal{S}_{q}^{0,k+1}\mathfrak{B}(z)}{ \mathcal{S}_{q}^{0,k} \mathfrak{B}(z)}-1 \biggr) \prec \frac{1+Az}{1+Bz}. $$ Let \(\kappa =0, k=1\) and \(q \longrightarrow 1^{-}\) in Theorem 3.3. Then $$ 1+\frac{1}{b} \biggl( \frac{ \mathcal{S}_{q}^{0,2}\mathfrak{B}(z)}{ \mathcal{S}_{q}^{0,1} \mathfrak{B}(z)}-1 \biggr) \prec \frac{1+Az}{1+Bz}. $$ Finally, letting \(q \longrightarrow 1^{-}\) in Theorem 3.3 gives $$ 1+\frac{1}{b} \biggl( \frac{ \mathcal{S}_{q}^{\kappa ,k+1} \mathfrak{B}(z)}{\mathcal{S}_{q}^{\kappa ,k} \mathfrak{B}(z)}-1 \biggr) \prec \frac{1+Az}{1+Bz}.
$$ We now present an application of our results to the solution of the complex Briot–Bouquet (BB) differential equation [11]. The complex Briot–Bouquet differential equations form a family of differential equations whose solutions are studied in the complex plane. Evaluating the resulting integrals requires choosing suitable paths, which must account for the singularities and branch points of the equation under study. Existence and uniqueness theorems rely on upper and lower bounds (subordination and superordination relations) (see [29,30,31,32]). The study of first-order rational ODEs in the complex domain leads to new transcendental special functions, as follows: $$ \beta \upsilon (z)+(1-\beta )\frac{z(\upsilon (z))'}{\upsilon (z)} = h(z), \qquad h(0)=\upsilon (0),\quad \beta \in [0,1]. $$ Many applications of these equations in geometric function theory have recently been studied in [11]. Our goal is to extend this class of equations by applying the suggested operator and to establish its solutions using subordination relations. The q-SDD in (2) generalizes the complex Briot–Bouquet differential equation as follows: $$ \beta \upsilon (z)+ (1-\beta ) \biggl(\frac{z(\mathcal{S}_{q}^{\kappa ,k}\upsilon (z))'}{\mathcal{S}_{q}^{\kappa ,k}\upsilon (z)} \biggr) = h(z),\qquad h(0)=\upsilon (0),\quad z \in \mathbb{U}. $$ Subordination conditions for solutions of this class of equations are given in the next theorem. A trivial solution of (6) is obtained when \(\beta =1\); therefore, our study concerns the case \(\upsilon \in \varLambda \) and \(\beta =0\). Consider Eq. (6) with \(\beta =0\) and \(\upsilon \in \varLambda \) with non-negative coefficients.
If \(h(z), z \in \mathbb{U}\) is univalent convex in \(\mathbb{U}\), then there exists a solution satisfying the subordination (major solution) $$ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z) \prec z \exp \biggl( \int ^{z} _{0} \frac{h(\eth (w))-1}{w} \,d w \biggr), $$ where \(\eth (z)\) is analytic in \(\mathbb{U}\), with \(\eth (0) = 0\) and \(|\eth (z)| < 1\). Assume the hypotheses of Eq. (6) hold and \(\upsilon (z) \in \varLambda \). Then we get the following conclusion: $$\begin{aligned} & \Re \biggl( \frac{z ( \mathcal{S}_{q}^{\kappa ,k}\upsilon (z))'}{ \mathcal{S}_{q}^{\kappa ,k}\upsilon (z)} \biggr)>0 \\ & \quad\Leftrightarrow\quad \Re \biggl( \frac{ z+ \sum_{n=2}^{\infty }n ([n,q]+\frac{ \kappa }{2} (1+(-1)^{n+1}) )^{k} \vartheta _{n} z^{n}}{ z+ \sum_{n=2}^{\infty } ([n,q]+\frac{\kappa }{2} (1+(-1)^{n+1}) ) ^{k} \vartheta _{n} z^{n}} \biggr)>0 \\ &\quad\Leftrightarrow \quad\Re \biggl( \frac{1+ \sum_{n=2}^{\infty }n ([n,q]+\frac{ \kappa }{2} (1+(-1)^{n+1}) )^{k} \vartheta _{n} z^{n-1}}{ 1+ \sum_{n=2}^{\infty } ([n,q]+\frac{\kappa }{2} (1+(-1)^{n+1}) ) ^{k} \vartheta _{n} z^{n-1}} \biggr)>0 \\ &\quad\Leftrightarrow \quad\biggl( \frac{1+ \sum_{n=2}^{\infty }n ([n,q]+\frac{ \kappa }{2} (1+(-1)^{n+1}) )^{k} \vartheta _{n} }{ 1+ \sum_{n=2}^{\infty } ([n,q]+\frac{\kappa }{2} (1+(-1)^{n+1}) ) ^{k} \vartheta _{n}} \biggr)>0,\quad z \rightarrow 1^{-} \\ &\quad\Leftrightarrow\quad \Biggl( 1+ \sum_{n=2}^{\infty }n \biggl([n,q]+\frac{ \kappa }{2} \bigl(1+(-1)^{n+1}\bigr) \biggr)^{k} \vartheta _{n} \Biggr)>0. \end{aligned}$$ Moreover, by the definition of \(\mathcal{S}_{q}^{\kappa ,k} \upsilon (z)\), we see that \((\mathcal{S}_{q}^{\kappa ,k}\upsilon )(0)=0\). Consequently, $$ \frac{z ( \mathcal{S}_{q}^{\kappa ,k} \upsilon (z))'}{ \mathcal{S} _{q}^{\kappa ,k}\upsilon (z)} \in {\mathcal{P}}\quad \Rightarrow \quad\upsilon (z) \in S_{q}^{*}(\kappa ,k,h). $$ Hence, in view of Theorem 3.2, we have the desired result (7).
□ By our method, we have introduced new classes of univalent functions, defined by a q-SDD operator in the open unit disk. We obtained essential conditions for these subclasses. Applications involved the BB equation, whose solutions we investigated in the open unit disk. For further study, we encourage researchers to introduce certain new classes related to other kinds of analytic functions, such as harmonic, symmetric, p-valent and meromorphic functions with respect to symmetric points, associated with (2). Carmichael, R.D.: The general theory of linear q-difference equations. Am. J. Math. 34, 147–168 (1912) Jackson, F.H.: On q-definite integrals. Quart. J. Pure Appl. Math. 41, 193–203 (1910) Mason, T.E.: On properties of the solution of linear q-difference equations with entire function coefficients. Am. J. Math. 37, 439–444 (1915) Trjitzinsky, W.J.: Analytic theory of linear q-difference equations. Acta Math. 61, 1–38 (1933) Ismail, M.E.H., Merkes, E., Styer, D.: A generalization of starlike functions. Complex Var. Theory Appl. 14, 77–84 (1990) Sàlàgean, G.S.: Subclasses of univalent functions. In: Complex Analysis-Fifth Romanian-Finnish Seminar, Part 1, Bucharest, 1981. Lecture Notes in Math., vol. 1013, pp. 362–372. Springer, Berlin (1983) Al-Oboudi, F.M.: On univalent functions defined by a generalized Sàlàgean operator. Int. J. Math. Math. Sci. 2004(27), 1429–1436 (2004) Ibrahim, R.W., Darus, M.: Univalent functions formulated by the Salagean-difference operator. Int. J. Anal. Appl. 17(4), 652–658 (2019) Naeem, M., et al.: A new subclass of analytic functions defined by using Salagean q-differential operator. Mathematics 7(5), 458 (2019) Duren, P.: Univalent Functions. Grundlehren der Mathematischen Wissenschaften, vol. 259. Springer, New York (1983) Miller, S.S., Mocanu, P.T.: Differential Subordinations: Theory and Applications.
CRC Press, Boca Raton (2000) Govindaraj, M., Sivasubramanian, S.: On a class of analytic functions related to conic domains involving q-calculus. Anal. Math. 43, 475–487 (2017) Dunkl, C.F.: Differential-difference operators associated to reflection groups. Trans. Am. Math. Soc. 311(1), 167–183 (1989) Jakubowski, Z.J., Kaminski, J.: On some properties of Mocanu–Janowski functions. Rev. Roum. Math. Pures Appl. 23, 1523–1532 (1978) Obradović, M., Owa, S.: On certain properties for some classes of starlike functions. J. Math. Anal. Appl. 145(2), 357–364 (1990) Ma, W.C., Minda, D.: A unified treatment of some special classes of univalent functions. In: Proceedings of the Conference on Complex Analysis, Tianjin, China, pp. 19–23 (1992) Goel, P., Kumar, S.S.: Certain Class of Starlike Functions Associated with Modified Sigmoid Function. Bull. Malays. Math. Sci. Soc., 1–35 (2019) Dziok, J., Raina, R.K., Sokół, J.: Certain results for a class of convex functions related to a shell-like curve connected with Fibonacci numbers. Comput. Math. Appl. 61, 2605–2613 (2011) Dziok, J., Raina, R.K., Sokół, J.: On a class of starlike functions related to a shell-like curve connected with Fibonacci numbers. Math. Comput. Model. 57, 1203–1211 (2013) Kargar, R., Ebadian, A., Sokół, J.: On subordination of some analytic functions. Sib. Math. J. 57, 599–605 (2016) Kargar, R., Ebadian, A., Sokół, J.: On booth lemniscate and starlike functions. Anal. Math. Phys. 9(1), 143–154 (2019) Sokół, J.: On some subclass of strongly starlike functions. Demonstr. Math. 21, 81–86 (1998) Cho, N.E., Kumar, V., Kumar, S.S., Ravichandran, V.: Radius problems for starlike functions associated with the sine function. Bull. Iran. Math. Soc. 45, 213–232 (2019) Tang, H., et al.: Majorization results for subclasses of starlike functions based on the sine and cosine functions. Bull. Iran. Math.
Soc., 1–8 (2019) Sivasubramanian, S., et al.: Differential subordination for analytic functions associated with left-like domains. J. Comput. Anal. Appl. 26(1) (2019) Arif, M., et al.: A new class of analytic functions associated with Sălăgean operator. J. Funct. Spaces 2019, Article ID 6157394 (2019). https://doi.org/10.1155/2019/6157394 Sakaguchi, K.: On a certain univalent mapping. J. Math. Soc. Jpn. 11, 72–75 (1959) Das, R.N., Singh, P.: On sub classes of Schlicht mapping. Indian J. Pure Appl. Math. 8, 864–872 (1977) Ibrahim, R.W., Darus, M.: Subordination and superordination for univalent solutions for fractional differential equations. J. Math. Anal. Appl. 345(2), 871–879 (2008) Ibrahim, R.W.: On holomorphic solutions for nonlinear singular fractional differential equations. Comput. Math. Appl. 62(3), 1084–1090 (2011) Ibrahim, R.W.: Existence and uniqueness of holomorphic solutions for fractional Cauchy problem. J. Math. Anal. Appl. 380(1), 232–240 (2011) Ibrahim, R.W.: Fractional complex transforms for fractional differential equations. Adv. Differ. Equ. 2012(1), 192 (2012) The authors would like to express their thanks to the reviewers for their deep comments. The work here is partially supported by the Universiti Kebangsaan Malaysia grant: GUP (Geran Universiti Penyelidikan)-2019-032. Informetrics Research Group, Ton Duc Thang University, Ho Chi Minh City, Vietnam Rabha W. Ibrahim Faculty of Mathematics & Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam Centre for Modelling and Data Science, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, Bangi, Malaysia Maslina Darus Conceptualization was by RWI and MD; the methodology by RWI; the validation by RWI and MD; the formal analysis by RWI and MD; investigation by RWI and MD; writing and original draft preparation by RWI; writing, review and editing by MD. All authors read and approved the final manuscript. Correspondence to Rabha W. Ibrahim. Ibrahim, R.W., Darus, M.
On a class of analytic functions associated to a complex domain concerning q-differential-difference operator. Adv Differ Equ 2019, 515 (2019). https://doi.org/10.1186/s13662-019-2446-0 Univalent function Unit disk Analytic function Fractional calculus q-calculus Subordination and superordination
Generalised derivative and derivative of functions of bounded variation Let $f:\mathbb{R}\to\mathbb{C}$ be a function Lebesgue-integrable on any finite interval and let $K$ be the space of infinitely differentiable functions equal to $0$ outside a given finite interval. Let the generalised derivative of $f$ be the distribution $T':K\to\mathbb{C}$ defined by $$T'(\varphi)=-\int_{\mathbb{R}}f(x)\varphi'(x)d\mu$$where the integral is Lebesgue's. I read in Kolmogorov and Fomin's Elements of the Theory of Functions and Functional Analysis that if $f$ is a function of bounded variation and its derivative as a function corresponds to its generalised derivative as a distribution, i.e. if $\int_{\mathbb{R}}f'(x)\varphi(x)d\mu=-\int_{\mathbb{R}}f(x)\varphi'(x)d\mu$ for any $\varphi\in K$, then $f$ is absolutely continuous. I think I have been able to prove, by using the decomposition $f=H+\psi+\chi$ where $H$ is a step function, $\psi$ is absolutely continuous and $\chi$ is singular (i.e. with $\chi'(x)=0$ for almost all $x\in\mathbb{R}$), that, if the derivative of $f$ as a function corresponds to its generalised derivative as a distribution, then $f$ is identical to an absolutely continuous function almost everywhere, but I am not sure the identity holds everywhere and, if it does, I am not able to prove it. Does anybody know more about this fact? $\infty$ thanks! I apologise in advance if I have used some scarcely formal and rigorous language, but the book I am following is quite informal and I fear I am assimilating not-so-good customs. functional-analysis lebesgue-integral Self-teaching worker Actually, this problem doesn't depend on knowing much about $f$. 
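A quick numerical illustration of the defining identity may help fix ideas. The choices below ($f(x)=|x|$, whose a.e. derivative is $\operatorname{sign}(x)$, and a shifted smooth bump for $\varphi$) are purely illustrative, not taken from the book:

```python
import numpy as np

# Numerical check of the identity defining the generalised derivative:
# for the absolutely continuous f(x) = |x|, with f'(x) = sign(x) a.e.,
# and a smooth bump phi in K, we should have
#   integral of f' * phi  ==  - integral of f * phi'.

x = np.linspace(-2.0, 2.0, 200001)
dx = x[1] - x[0]

# Smooth bump supported in (0.3 - 1, 0.3 + 1), a typical element of K.
phi = np.zeros_like(x)
m = np.abs(x - 0.3) < 1.0
phi[m] = np.exp(-1.0 / (1.0 - (x[m] - 0.3) ** 2))
dphi = np.gradient(phi, dx)          # numerical phi'

f = np.abs(x)
fprime = np.sign(x)                  # derivative of f almost everywhere

lhs = np.sum(fprime * phi) * dx      # approximates int f' phi dmu
rhs = -np.sum(f * dphi) * dx         # approximates -int f phi' dmu
print(lhs, rhs)                      # the two sides agree closely
```

The shift of the bump away from $0$ just makes the two integrals non-trivially non-zero; a bump symmetric about the origin would make both sides vanish by parity.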
If $f$, $g$ are locally integrable on $\mathbb{R}$, and if $$ \int f\varphi'\,dx = -\int g\varphi\,dx,\;\;\; \varphi\in C_{0}^{\infty}(\mathbb{R}) $$ then $f$ is equal a.e. to a continuous function $\tilde{f}$, and $\tilde{f}$ is absolutely continuous with $\tilde{f}'=g$ a.e. To see this, choose $\varphi'$ to converge to $\frac{1}{\epsilon}\chi_{[a-\epsilon,a]}-\frac{1}{\delta}\chi_{[b,b+\delta]}$ for $0 < \epsilon,\delta$ and $a \le b$, and $\varphi$ to converge to the integral of this function (which has $0$ total integral). Then you get $$ \frac{1}{\epsilon}\int_{a-\epsilon}^{a}f\,dx-\frac{1}{\delta}\int_{b}^{b+\delta}f\,dx =-\int_{a-\epsilon}^{b+\delta}g\left[\int_{a-\epsilon}^{x}\frac{1}{\epsilon}\chi_{[a-\epsilon,a]}-\frac{1}{\delta}\chi_{[b,b+\delta]}\,dt\right]dx. $$ The inner integral on the right converges to $\chi_{[a,b]}$ as $\epsilon\downarrow 0$ and $\delta\downarrow 0$, and it remains uniformly bounded by $1$ in the process. So the limit of the expression on the right exists as $\epsilon \downarrow 0$, $\delta\downarrow 0$, whether one at a time, or together. That means that the limits on the left also exist at every $a$, $b$. We know that the limits on the left are left- and right-hand derivatives of $\int_{0}^{x}f\,dt$, and by the Lebesgue differentiation theorem, those limits are equal a.e. to $f$. So, we have the a.e. equality $$ f(a)-f(b) = -\int_{a}^{b}g\,dx. $$ The function on the right is continuous in $a$, $b$, which means that $f$ is equal a.e. to a continuous function $\tilde{f}$, and $\tilde{f}$ is absolutely continuous with $\tilde{f}'=g$ a.e. Mollifier: You are concerned that the weak relation $$ \int_{\mathbb{R}}f'\varphi d\mu = -\int_{\mathbb{R}}f\varphi' d\mu,\;\; \varphi\in\mathcal{C}^{\infty}_{0}(\mathbb{R}) $$ cannot necessarily be strengthened to allow $\varphi$ to be a compactly supported absolutely continuous function instead. 
You can prove that such a thing can be done by finding $\eta \in \mathcal{C}^{\infty}_{0}(\mathbb{R})$ which is non-negative, non-vanishing and constant in a neighborhood of $x=0$, symmetric about $x=0$, supported in $[-1,1]$, bounded between $0$ and $\eta(0)$, and normalized so that $\int \eta d\mu =1$. Then one defines $$ \eta_{n}(x) = n\eta(nx) $$ so that $\int_{\mathbb{R}}\eta_{n}\,d\mu =1$ for all $n$. For any compactly supported absolutely continuous function $\varphi$, define $$ \varphi_{n} = \int_{\mathbb{R}}\eta_{n}(x-y)\varphi(y)\,d\mu(y). $$ The function $\varphi_{n}$ is in $\mathcal{C}^{\infty}_{0}(\mathbb{R})$. Because $\eta_{n}$ is supported in $[-1/n,1/n]$ and $\varphi$ is continuous, $\varphi_{n}$ converges uniformly to $\varphi$ as $n\rightarrow\infty$. And, because $\varphi$ is absolutely continuous, $$ \varphi_{n}' = \int_{\mathbb{R}}\eta_{n}'(x-y)\varphi(y)\,d\mu(y)= \int_{\mathbb{R}}\eta_{n}(x-y)\varphi'(y)\,d\mu(y). $$ In my case $\varphi'$ is piecewise continuous, and the right side then converges pointwise everywhere to the mean of the left- and right-hand limits of $\varphi'$, and it remains uniformly bounded by any bound for $\varphi'$. Thus, your weak equation $$ \int f'\varphi_{n}d\mu = -\int f\varphi_{n}'d\mu $$ becomes the following in the limit $$ \int f'\varphi d\mu = -\int f \varphi' d\mu. $$ For a general absolutely continuous $\varphi$, you can show that $\varphi_{n}$ converges uniformly to $\varphi$ and $\varphi_{n}'$ converges in $L^{1}(\mathbb{R})$ to $\varphi'$. Mollifiers are very useful for extending integral equations for $\mathcal{C}^{\infty}_{0}$ functions to more general functions. If $\varphi$ has $k$ continuous derivatives that are compactly supported, then $\varphi_{n}$ and all $k$ derivatives converge uniformly to the corresponding derivatives of $\varphi$. 
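The uniform convergence of $\varphi_n$ to $\varphi$ can be watched numerically. A minimal sketch, using the standard bump for $\eta$ (illustrative only — it is not constant near $0$ as in the construction above, which doesn't affect this check) and the tent function $\varphi(x)=\max(0,1-|x|)$:

```python
import numpy as np

# Numerical sketch of mollification: eta_n(x) = n * eta(n x) with a
# normalized smooth bump eta supported in [-1, 1].  Mollifying the
# compactly supported absolutely continuous "tent" function gives
# phi_n -> phi uniformly as n grows.

x = np.linspace(-3.0, 3.0, 6001)
dx = x[1] - x[0]

def eta(u):
    # Standard C-infinity bump, supported in (-1, 1); normalized below.
    out = np.zeros_like(u)
    m = np.abs(u) < 1.0
    out[m] = np.exp(-1.0 / (1.0 - u[m] ** 2))
    return out

Z = eta(x).sum() * dx                  # normalization: int eta dmu = 1

def mollify(phi_vals, n):
    eta_n = n * eta(n * x) / Z         # int eta_n dmu = 1 for every n
    # Discrete convolution approximating (eta_n * phi)(x).
    return np.convolve(phi_vals, eta_n, mode="same") * dx

phi = np.maximum(0.0, 1.0 - np.abs(x))
errs = [np.abs(mollify(phi, n) - phi).max() for n in (2, 8, 32)]
print(errs)  # sup-norm errors shrink (roughly like 1/n) as n grows
```

The largest error sits at the kinks of the tent, exactly the points an absolutely continuous $\varphi$ is allowed to have; the mollified $\varphi_n$ rounds them off while staying uniformly close.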
Disintegrating By Parts $\begingroup$ @DavideZena : The inner integral on the right is the piecewise linear function that is $0$ for $x \le a-\epsilon$, ramps linearly to $1$ at $x=a$, stays $1$ to $x=b$, and then ramps linearly back to $0$ at $x=b+\delta$, after which it stays $0$. Visualize what happens as either $\epsilon$ or $\delta$ tend down to $0$. You have bounded convergence. $\endgroup$ – Disintegrating By Parts Oct 31 '14 at 22:08 $\begingroup$ @DavideZena : The Bounded Convergence Theorem is a corollary of the Dominated Convergence Theorem: en.wikipedia.org/wiki/… . As the inner integral converges, it does so in a way that it remains uniformly bounded. But you can also argue directly without much trouble. $\endgroup$ – Disintegrating By Parts Nov 1 '14 at 0:03 $\begingroup$ @DavideZena : That's not what exists. It's the integral of that function $h_{a,b,\epsilon,\delta}(x)=\int_{-\infty}^{x}\left[\frac{1}{\epsilon}\chi_{[a-\epsilon ,a]}(t)-\frac{1}{\delta}\chi_{[b,b+\delta]}(t)\right]\,dt$ that has a limit as either or both $\delta$, $\epsilon$ tend to $0$. Therefore, $\int_{\mathbb{R}}g(x)h_{a,b,\epsilon,\delta}(x)dx$ has a limit as either or both of $\epsilon,\delta$ tend to $0$. Hence, the left side of the equation must have limits, which means that $\frac{1}{\epsilon}\int_{a-\epsilon}^{a}fdx$ has a limit as $\epsilon\downarrow 0$, etc. $\endgroup$ – Disintegrating By Parts Dec 31 '14 at 13:29 $\begingroup$ @DavideZena : it doesn't, but the integral with respect to it must converge, because the other side of the integral weak equation converges, which is how you conclude that $\frac{1}{\epsilon}\int_{a-\epsilon}^{a}f\,dx$ converges as $\epsilon\downarrow 0$. I've done nothing illegal in arriving at the equation; then I take limits of that legal equation, knowing that the right side converges, which forces the left side to converge. 
:) $\endgroup$ – Disintegrating By Parts Dec 31 '14 at 14:15 $\begingroup$ @DavideZena : Identify $\varphi$ and identify its derivative $\varphi'$. $\varphi$ is a piecewise linear function. $\endgroup$ – Disintegrating By Parts Dec 31 '14 at 15:51
The chemical Huperzine-A (Examine.com) is extracted from a moss. It is an acetylcholinesterase inhibitor (instead of forcing out more acetylcholine like the -racetams, it prevents acetylcholine from breaking down). My experience report: One for the null hypothesis files - Huperzine-A did nothing for me. Unlike piracetam or fish oil, after a full bottle (Source Naturals, 120 pills at 200μg each), I noticed no side-effects, no mental improvements of any kind, and no changes in DNB scores from straight Huperzine-A. Remembering what Wedrifid told me, I decided to start with a quarter of a piece (~1mg). The gum was pretty tasteless, which ought to make blinding easier. The effects were noticeable around 10 minutes - greater energy verging on jitteriness, much faster typing, and apparent general quickening of thought. Like a more pleasant caffeine. While testing my typing speed in Amphetype, my speed seemed to go up >=5 WPM, even after the time penalties for correcting the increased mistakes; I also did twice the usual number without feeling especially tired. A second dose was similar, and the third dose at 10 PM before playing Ninja Gaiden II seemed to stop the usual exhaustion I feel after playing through a level or so. (It's a tough game, which I have yet to master like Ninja Gaiden Black.) Returning to the previous concern about sleep problems, though I went to bed at 11:45 PM, it still took 28 minutes to fall asleep (compared to my more usual 10-20 minute range); the next day I used 2mg from 7-8PM while driving, going to bed at midnight, where my sleep latency is a more reasonable 14 minutes. I then skipped for 3 days to see whether any cravings would pop up (they didn't). I subsequently used 1mg every few days for driving or Ninja Gaiden II, and while there were no cravings or other side-effects, the stimulation definitely seemed to get weaker - benefits seemed to still exist, but I could no longer describe any considerable energy or jitteriness. 
"As a physical therapist with 30+ years of experience in treating neurological disorders such as traumatic brain injury, I simply could not believe it when Cavin told me the extent of his injuries. His story opened a new door to my awareness of the incredible benefits of proper nutrition, the power of attitude and community to heal anything we have arise in our lives Cavin is an inspiration and a true way-shower for anyone looking to invest in their health and well-being. No matter the state your brain is in, you will benefit from this cutting-edge information and be very glad (and entertained) that you read this fine work." Theanine can also be combined with caffeine as both of them work in synergy to increase memory, reaction time, mental endurance, and memory. The best part about Theanine is that it is one of the safest nootropics and is readily available in the form of capsules. A natural option would be to use an excellent green tea brand which constitutes of tea grown in the shade because then Theanine would be abundantly present in it. No. There are mission essential jobs that require you to live on base sometimes. Or a first term person that is required to live on base. Or if you have proven to not be as responsible with rent off base as you should be so your commander requires you to live on base. Or you're at an installation that requires you to live on base during your stay. Or the only affordable housing off base puts you an hour away from where you work. It isn't simple. The fact that you think it is tells me you are one of the "dumb@$$es" you are referring to above. We have established strict criteria for reviewing brain enhancement supplements. Our reviews are clear, detailed, and informative to help you find supplements that deliver the best results. You can read our reviews, learn about the best nootropic ingredients, compare formulas, and find out how each supplement performed according to specific criteria. 
We can read off the results from the table or graph: the nicotine days average 1.1% higher, for an effect size of 0.24; however, the 95% credible interval (equivalent of confidence interval) goes all the way from 0.93 to -0.44, so we cannot exclude 0 effect and certainly not claim confidence the effect size must be >0.1. Specifically, the analysis gives a 66% chance that the effect size is >0.1. (One might wonder if any increase is due purely to a training effect - getting better at DNB. Probably not25.) Learning how products have worked for other users can help you feel more confident in your purchase. Similarly, your opinion may help others find a good quality supplement. After you have started using a particular supplement and experienced the benefits of nootropics for memory, concentration, and focus, we encourage you to come back and write your own review to share your experience with others. The research literature, while copious, is messy and varied: methodologies and devices vary substantially, sample sizes are tiny, the study designs vary from paper to paper, metrics are sometimes comically limited (one study measured speed of finishing a RAPM IQ test but not scores), blinding is rare and unclear how successful, etc. Relevant papers include Chung et al 2012, Rojas & Gonzalez-Lima 2013, & Gonzalez-Lima & Barrett 2014. Another Longecity user ran a self-experiment, with some design advice from me, where he performed a few cognitive tests over several periods of LLLT usage (the blocks turned out to be ABBA), using his father and towels to try to blind himself as to condition. I analyzed his data, and his scores did seem to improve, but his scores improved so much in the last part of the self-experiment I found myself dubious as to what was going on - possibly a failure of randomness given too few blocks and an temporal exogenous factor in the last quarter which was responsible for the improvement. 
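For what the effect-size bookkeeping in these analyses looks like in practice, here is a minimal sketch with made-up scores (not the experiment's data), using Cohen's d and a percentile bootstrap as a frequentist stand-in for the Bayesian credible interval:

```python
import numpy as np

rng = np.random.default_rng(2023)

# Hypothetical daily scores (percent correct) on "active" vs. "placebo"
# days -- invented for illustration, not the data discussed above.
active = rng.normal(51.1, 4.0, size=60)
placebo = rng.normal(50.0, 4.0, size=60)

def cohens_d(a, b):
    # Standardized mean difference using the pooled standard deviation.
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) \
        / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

d_hat = cohens_d(active, placebo)

# Percentile bootstrap interval for the effect size.
boots = [cohens_d(rng.choice(active, active.size),
                  rng.choice(placebo, placebo.size))
         for _ in range(4000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"d = {d_hat:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

With a few dozen observations per condition the interval is wide, which is exactly the point made above: a small true effect cannot be distinguished from zero at these sample sizes.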
Federal law classifies most nootropics as dietary supplements, which means that the Food and Drug Administration does not regulate manufacturers' statements about their benefits (as the giant "This product is not intended to diagnose, treat, cure, or prevent any disease" disclaimer on the label indicates). And the types of claims that the feds do allow supplement companies to make are often vague and/or supported by less-than-compelling scientific evidence. "If you find a study that says that an ingredient caused neurons to fire on rat brain cells in a petri dish," says Pieter Cohen, an assistant professor at Harvard Medical School, "you can probably get away with saying that it 'enhances memory' or 'promotes brain health.'" Omega-3 fatty acids: DHA and EPA – two Cochrane Collaboration reviews on the use of supplemental omega-3 fatty acids for ADHD and learning disorders conclude that there is limited evidence of treatment benefits for either disorder.[42][43] Two other systematic reviews noted no cognition-enhancing effects in the general population or middle-aged and older adults.[44][45] 
"Cavin's enthusiasm and drive to help those who need it is unparalleled! He delivers the information in an easy to read manner, no PhD required from the reader. 🙂 Having lived through such trauma himself he has real empathy for other survivors and it shows in the writing. This is a great read for anyone who wants to increase the health of their brain, injury or otherwise! Read it!!!" This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, that suggests >38 hours of work, and $38 \times 7.25 = \$275.50$. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so $\frac{365.25}{120} \times 9 \times 5 = \$137$. In contrast to the types of memory discussed in the previous section, which are long-lasting and formed as a result of learning, working memory is a temporary store of information. Working memory has been studied extensively by cognitive psychologists and cognitive neuroscientists because of its role in executive function. 
It has been likened to an internal scratch pad; by holding information in working memory, one keeps it available to consult and manipulate in the service of performing tasks as diverse as parsing a sentence and planning a route through the environment. Presumably for this reason, working memory ability correlates with measures of general intelligence (Friedman et al., 2006). The possibility of enhancing working memory ability is therefore of potential real-world interest. Only two of the eight experiments reviewed in this section found that stimulants enhanced performance, on a nonverbal fluency task in one case and in Raven's Progressive Matrices in the other. The small number of studies of any given type makes it difficult to draw general conclusions about the underlying executive function systems that might be influenced. For proper brain function, our CNS (Central Nervous System) requires several amino acids. These derive from protein-rich foods. Consider amino acids to be protein building blocks. Many of them are dietary precursors to vital neurotransmitters in our brain. Epinephrine (adrenaline), serotonin, dopamine, and norepinephrine assist in enhancing mental performance. A few examples of amino acid nootropics are: Kennedy et al. (1990) administered what they termed a grammatical reasoning task to subjects, in which a sentence describing the order of two letters, A and B, is presented along with the letter pair, and subjects must determine whether or not the sentence correctly describes the letter pair. They found no effect of d-AMP on performance of this task. How much of the nonmedical use of prescription stimulants documented by these studies was for cognitive enhancement? Prescription stimulants could be used for purposes other than cognitive enhancement, including for feelings of euphoria or energy, to stay awake, or to curb appetite. Were they being used by students as smart pills or as "fun pills," "awake pills," or "diet pills"? 
Of course, some of these categories are not entirely distinct. For example, by increasing the wakefulness of a sleep-deprived person or by lifting the mood or boosting the motivation of an apathetic person, stimulants are likely to have the secondary effect of improving cognitive performance. Whether and when such effects should be classified as cognitive enhancement is a question to which different answers are possible, and none of the studies reviewed here presupposed an answer. Instead, they show how the respondents themselves classified their reasons for nonmedical stimulant use. As shown in Table 6, two of these are fluency tasks, which require the generation of as large a set of unique responses as possible that meet the criteria given in the instructions. Fluency tasks are often considered tests of executive function because they require flexibility and the avoidance of perseveration and because they are often impaired along with other executive functions after prefrontal damage. In verbal fluency, subjects are asked to generate as many words that begin with a specific letter as possible. Neither Fleming et al. (1995), who administered d-AMP, nor Elliott et al. (1997), who administered MPH, found enhancement of verbal fluency. However, Elliott et al. found enhancement on a more complex nonverbal fluency task, the sequence generation task. Subjects were able to touch four squares in more unique orders with MPH than with placebo. The question of whether stimulants are smart pills in a pragmatic sense cannot be answered solely by consideration of the statistical significance of the difference between stimulant and placebo. A drug with tiny effects, even if statistically significant, would not be a useful cognitive enhancer for most purposes. We therefore report Cohen's d effect size measure for published studies that provide either means and standard deviations or relevant F or t statistics (Thalheimer & Cook, 2002). 
More generally, with most sample sizes in the range of a dozen to a few dozen, small effects would not reliably be found. One study of helicopter pilots suggested that 600 mg of modafinil given in three doses can be used to keep pilots alert and maintain their accuracy at pre-deprivation levels for 40 hours without sleep.[60] However, significant levels of nausea and vertigo were observed. Another study of fighter pilots showed that modafinil given in three divided 100 mg doses sustained the flight control accuracy of sleep-deprived F-117 pilots to within about 27% of baseline levels for 37 hours, without any considerable side effects.[61] In an 88-hour sleep loss study of simulated military grounds operations, 400 mg/day doses were mildly helpful at maintaining alertness and performance of subjects compared to placebo, but the researchers concluded that this dose was not high enough to compensate for most of the effects of complete sleep loss. Many of these supplements include exotic-sounding ingredients. Ginseng root and an herb called bacopa are two that have shown some promising memory and attention benefits, says Dr. Guillaume Fond, a psychiatrist with France's Aix-Marseille University Medical School who has studied smart drugs and cognitive enhancement. "However, data are still lacking to definitely confirm their efficacy," he adds. There is an ancient precedent to humans using natural compounds to elevate cognitive performance. Incan warriors in the 15th century would ingest coca leaves (the basis for cocaine) before battle. Ethiopian hunters in the 10th century developed coffee bean paste to improve hunting stamina. Modern athletes ubiquitously consume protein powders and hormones to enhance their training, recovery, and performance. The most widely consumed psychoactive compound today is caffeine. Millions of people use coffee and tea to be more alert and focused. 
Taken together, these considerations suggest that the cognitive effects of stimulants for any individual in any task will vary based on dosage and will not easily be predicted on the basis of data from other individuals or other tasks. Optimizing the cognitive effects of a stimulant would therefore require, in effect, a search through a high-dimensional space whose dimensions are dose; individual characteristics such as genetic, personality, and ability levels; and task characteristics. The mixed results in the current literature may be due to the lack of systematic optimization. The hormone testosterone (Examine.com; FDA adverse events) needs no introduction. This is one of the scariest substances I have considered using: it affects so many bodily systems in so many ways that it seems almost impossible to come up with a net summary, either positive or negative. With testosterone, the problem is not the usual nootropics problem that that there is a lack of human research, the problem is that the summary constitutes a textbook - or two. That said, the 2011 review The role of testosterone in social interaction (excerpts) gives me the impression that testosterone does indeed play into risk-taking, motivation, and social status-seeking; some useful links and a representative anecdote: By the end of 2009, at least 25 studies reported surveys of college students' rates of nonmedical stimulant use. Of the studies using relatively smaller samples, prevalence was, in chronological order, 16.6% (lifetime; Babcock & Byrne, 2000), 35.3% (past year; Low & Gendaszek, 2002), 13.7% (lifetime; Hall, Irwin, Bowman, Frankenberger, & Jewett, 2005), 9.2% (lifetime; Carroll, McLaughlin, & Blake, 2006), and 55% (lifetime, fraternity students only; DeSantis, Noar, & Web, 2009). 
Of the studies using samples of more than a thousand students, somewhat lower rates of nonmedical stimulant use were found, although the range extends into the same high rates as the small studies: 2.5% (past year, Ritalin only; Teter, McCabe, Boyd, & Guthrie, 2003), 5.4% (past year; McCabe & Boyd, 2005), 4.1% (past year; McCabe, Knight, Teter, & Wechsler, 2005), 11.2% (past year; Shillington, Reed, Lange, Clapp, & Henry, 2006), 5.9% (past year; Teter, McCabe, LaGrange, Cranford, & Boyd, 2006), 16.2% (lifetime; White, Becker-Blease, & Grace-Bishop, 2006), 1.7% (past month; Kaloyanides, McCabe, Cranford, & Teter, 2007), 10.8% (past year; Arria, O'Grady, Caldeira, Vincent, & Wish, 2008); 5.3% (MPH only, lifetime; DuPont, Coleman, Bucher, & Wilford, 2008); 34% (lifetime; DeSantis, Webb, & Noar, 2008), 8.9% (lifetime; Rabiner et al., 2009), and 7.5% (past month; Weyandt et al., 2009). Nicotine absorption through the stomach is variable and relatively reduced in comparison with absorption via the buccal cavity and the small intestine. Drinking, eating, and swallowing of tobacco smoke by South American Indians have frequently been reported. Tenetehara shamans reach a state of tobacco narcosis through large swallows of smoke, and Tapirape shamans are said to eat smoke by forcing down large gulps of smoke only to expel it again in a rapid sequence of belches. In general, swallowing of tobacco smoke is quite frequently likened to drinking. However, although the amounts of nicotine swallowed in this way - or in the form of saturated saliva or pipe juice - may be large enough to be behaviorally significant at normal levels of gastric pH, nicotine, like other weak bases, is not significantly absorbed. As discussed in my iodine essay (FDA adverse events), iodine is a powerful health intervention as it eliminates cretinism and improves average IQ by a shocking magnitude. 
If this effect were possible for non-fetuses in general, it would be the best nootropic ever discovered, and so I looked at it very closely. Unfortunately, after going through ~20 experiments looking for ones which intervened with iodine post-birth and took measures of cognitive function, my meta-analysis concludes that: the effect is small and driven mostly by one outlier study. Once you are born, it's too late. But the results could be wrong, and iodine might be cheap enough to take anyway, or take for non-IQ reasons. (This possibility was further weakened for me by an August 2013 blood test of TSH which put me at 3.71 uIU/ml, comfortably within the reference range of 0.27-4.20.) The price is not as good as multivitamins or melatonin. The studies showing effects generally use pretty high dosages, 1-4g daily. I took 4 capsules a day for roughly 4g of omega acids. The jar of 400 is 100 days' worth, and costs ~$17, or around 17¢ a day. The general health benefits push me over the edge of favoring its indefinite use, but looking to economize. Usually, small amounts of packaged substances are more expensive than bulk unprocessed, so I looked at fish oil fluid products; and unsurprisingly, liquid is more cost-effective than pills (but like with the powders, straight fish oil isn't very appetizing) in lieu of membership somewhere or some other price-break. I bought 4 bottles (16 fluid ounces each) for $53.31 total (thanks to coupons & sales), and each bottle lasts around a month and a half for perhaps half a year, or ~$100 for a year's supply. (As it turned out, the 4 bottles lasted from 4 December 2010 to 17 June 2011, or 195 days.) My next batch lasted 19 August 2011-20 February 2012, and cost $58.27. Since I needed to buy empty 00 capsules (for my lithium experiment) and a book (Stanovich 2010, for SIAI work) from Amazon, I bought 4 more bottles of 16fl oz Nature's Answer (lemon-lime) at $48.44, which I began using 27 February 2012. So call it ~$70 a year. 
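The per-day cost arithmetic above is mechanical; a quick check using the figures as stated in the text (a 400-capsule jar at ~$17, 4 capsules a day):

```python
# Check of the fish-oil cost arithmetic quoted above; the figures are
# the ones stated in the text, not new data.
capsules_per_jar = 400
capsules_per_day = 4
jar_price_usd = 17.00

days_per_jar = capsules_per_jar / capsules_per_day
cents_per_day = 100 * jar_price_usd / days_per_jar
print(days_per_jar, cents_per_day)  # 100.0 days, 17.0 cents/day
```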
Panax ginseng – A review by the Cochrane Collaboration concluded that "there is a lack of convincing evidence to show a cognitive enhancing effect of Panax ginseng in healthy participants and no high quality evidence about its efficacy in patients with dementia."[36] According to the National Center for Complementary and Integrative Health, "[a]lthough Asian ginseng has been widely studied for a variety of uses, research results to date do not conclusively support health claims associated with the herb."[37]
The Evolutionary Map of the Universe pilot survey Ray P. Norris, Joshua Marvil, J. D. Collier, Anna D. Kapińska, Andrew N. O'Brien, L. 
Rudnick, Heinz Andernach, Jacobo Asorey, Michael J. I. Brown, Marcus Brüggen, Evan Crawford, Jayanne English, Syed Faisal ur Rahman, Miroslav D. Filipović, Yjan Gordon, Gülay Gürkan, Catherine Hale, Andrew M. Hopkins, Minh T. Huynh, Kim HyeongHan, M. James Jee, Bärbel S. Koribalski, Emil Lenc, Kieran Luken, David Parkinson, Isabella Prandoni, Wasim Raja, Thomas H. Reiprich, Christopher J. Riseley, Stanislav S. Shabala, Jaimie R. Sheil, Tessa Vernstrom, Matthew T. Whiting, James R. Allison, C. S. Anderson, Lewis Ball, Martin Bell, John Bunton, T. J. Galvin, Neeraj Gupta, Aidan Hotan, Colin Jacka, Peter J. Macgregor, Elizabeth K. Mahony, Umberto Maio, Vanessa Moss, M. Pandey-Pommier, Maxim A. Voronkov Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021 Published online by Cambridge University Press: 07 September 2021, e046 We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ of an area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$ 11–18 arcsec, resulting in a catalogue of $\sim$ 220 000 sources, of which $\sim$ 180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here.
Neutron Star Extreme Matter Observatory: A kilohertz-band gravitational-wave detector in the global network Gravitational Wave Astronomy K. Ackley, V. B. Adya, P. Agrawal, P. Altin, G. Ashton, M. Bailes, E. Baltinas, A. Barbuio, D. Beniwal, C. Blair, D. Blair, G. N. Bolingbroke, V. Bossilkov, S. Shachar Boublil, D. D. Brown, B. J. Burridge, J. Calderon Bustillo, J. Cameron, H. Tuong Cao, J. B. Carlin, S. Chang, P. Charlton, C. Chatterjee, D. Chattopadhyay, X. Chen, J. Chi, J. Chow, Q. Chu, A. Ciobanu, T. Clarke, P. Clearwater, J. Cooke, D. Coward, H. Crisp, R. J. Dattatri, A. T. Deller, D. A. Dobie, L. Dunn, P. J. Easter, J. Eichholz, R. Evans, C. Flynn, G. Foran, P. Forsyth, Y. Gai, S. Galaudage, D. K. Galloway, B. Gendre, B. Goncharov, S. Goode, D. Gozzard, B. Grace, A. W. Graham, A. Heger, F. Hernandez Vivanco, R. Hirai, N. A. Holland, Z. J. Holmes, E. Howard, E. Howell, G. Howitt, M. T. Hübner, J. Hurley, C. Ingram, V. Jaberian Hamedan, K. Jenner, L. Ju, D. P. Kapasi, T. Kaur, N. Kijbunchoo, M. Kovalam, R. Kumar Choudhary, P. D. Lasky, M. Y. M. Lau, J. Leung, J. Liu, K. Loh, A. Mailvagan, I. Mandel, J. J. McCann, D. E. McClelland, K. McKenzie, D. McManus, T. McRae, A. Melatos, P. Meyers, H. Middleton, M. T. Miles, M. Millhouse, Y. Lun Mong, B. Mueller, J. Munch, J. Musiov, S. Muusse, R. S. Nathan, Y. Naveh, C. Neijssel, B. Neil, S. W. S. Ng, V. Oloworaran, D. J. Ottaway, M. Page, J. Pan, M. Pathak, E. Payne, J. Powell, J. Pritchard, E. Puckridge, A. Raidani, V. Rallabhandi, D. Reardon, J. A. Riley, L. Roberts, I. M. Romero-Shaw, T. J. Roocke, G. Rowell, N. Sahu, N. Sarin, L. Sarre, H. Sattari, M. Schiworski, S. M. Scott, R. Sengar, D. Shaddock, R. Shannon, J. SHI, P. Sibley, B. J. J. Slagmolen, T. Slaven-Blair, R. J. E. Smith, J. Spollard, L. Steed, L. Strang, H. Sun, A. Sunderland, S. Suvorova, C. Talbot, E. Thrane, D. Töyrä, P. Trahanas, A. Vajpeyi, J. V. van Heijningen, A. F. Vargas, P. J. Veitch, A. Vigna-Gomez, A. Wade, K. Walker, Z. Wang, R. L. Ward, K. 
Ward, S. Webb, L. Wen, K. Wette, R. Wilcox, J. Winterflood, C. Wolf, B. Wu, M. Jet Yap, Z. You, H. Yu, J. Zhang, J. Zhang, C. Zhao, X. Zhu Published online by Cambridge University Press: 05 November 2020, e047 Gravitational waves from coalescing neutron stars encode information about nuclear matter at extreme densities, inaccessible by laboratory experiments. The late inspiral is influenced by the presence of tides, which depend on the neutron star equation of state. Neutron star mergers are expected to often produce rapidly rotating remnant neutron stars that emit gravitational waves. These will provide clues to the extremely hot post-merger environment. This signature of nuclear matter in gravitational waves contains most information in the 2–4 kHz frequency band, which is outside of the most sensitive band of current detectors. We present the design concept and science case for a Neutron Star Extreme Matter Observatory (NEMO): a gravitational-wave interferometer optimised to study nuclear physics with merging neutron stars. The concept uses high-circulating laser power, quantum squeezing, and a detector topology specifically designed to achieve the high-frequency sensitivity necessary to probe nuclear matter using gravitational waves. Above 1 kHz, the proposed strain sensitivity is comparable to full third-generation detectors at a fraction of the cost. Such sensitivity changes expected event rates for detection of post-merger remnants from approximately one per few decades with two A+ detectors to a few per year and potentially allow for the first gravitational-wave observations of supernovae, isolated neutron stars, and other exotica. Gene Editing Sperm and Eggs (not Embryos): Does it Make a Legal or Ethical Difference? I. Glenn Cohen, Jacob S. Sherkow, Eli Y. Adashi Journal: Journal of Law, Medicine & Ethics / Volume 48 / Issue 3 / Fall 2020 Published online by Cambridge University Press: 01 January 2021, pp. 
619-621 Print publication: Fall 2020 LO04: Decreasing emergency department length of stay for patients with acute atrial fibrillation and flutter: a cluster-randomized trial I. Stiell, D. Eagles, J. Perry, P. Archambault, V. Thiruganasambandamoorthy, R. Parkash, E. Mercier, J. Morris, D. Godin, P. Davis, G. Clark, S. Gosselin, B. Mathieu, B. Pomerleau, S. Rhee, G. Kaban, E. Brown, M. Taljaard, Journal: Canadian Journal of Emergency Medicine / Volume 22 / Issue S1 / May 2020 Published online by Cambridge University Press: 13 May 2020, pp. S7-S8 Print publication: May 2020 Introduction: CAEP recently developed the acute atrial fibrillation (AF) and flutter (AFL) [AAFF] Best Practices Checklist to promote optimal care and guidance on cardioversion and rapid discharge of patients with AAFF. We sought to assess the impact of implementing the Checklist into large Canadian EDs. Methods: We conducted a pragmatic stepped-wedge cluster randomized trial in 11 large Canadian ED sites in five provinces, over 14 months. All hospitals started in the control period (usual care), and then crossed over to the intervention period in random sequence, one hospital per month. We enrolled consecutive, stable patients presenting with AAFF, where symptoms required ED management. Our intervention was informed by qualitative stakeholder interviews to identify perceived barriers and enablers for rapid discharge of AAFF patients. The many interventions included local champions, presentation of the Checklist to physicians in group sessions, an online training module, a smartphone app, and targeted audit and feedback. The primary outcome was length of stay in ED in minutes from time of arrival to time of disposition, and this was analyzed at the individual patient-level using linear mixed effects regression accounting for the stepped-wedge design. We estimated a sample size of 800 patients. Results: We enrolled 844 patients with none lost to follow-up. 
Those in the control (N = 316) and intervention periods (N = 528) were similar for all characteristics including mean age (61.2 vs 64.2 yrs), duration of AAFF (8.1 vs 7.7 hrs), AF (88.6% vs 82.9%), AFL (11.4% vs 17.1%), and mean initial heart rate (119.6 vs 119.9 bpm). Median lengths of stay for the control and intervention periods respectively were 413.0 vs. 354.0 minutes (P < 0.001). Comparing control to intervention, there was an increase in: use of antiarrhythmic drugs (37.4% vs 47.4%; P < 0.01), electrical cardioversion (45.1% vs 56.8%; P < 0.01), and discharge in sinus rhythm (75.3% vs. 86.7%; P < 0.001). There was a decrease in ED consultations to cardiology and medicine (49.7% vs 41.1%; P < 0.01), and a small, non-significant increase in anticoagulant prescriptions (39.6% vs 46.5%; P = 0.21). Conclusion: This multicenter implementation of the CAEP Best Practices Checklist led to a significant decrease in ED length of stay along with more ED cardioversions, fewer ED consultations, and more discharges in sinus rhythm. Widespread and rigorous adoption of the CAEP Checklist should lead to improved care of AAFF patients in all Canadian EDs. PL01: Creation of a risk scoring system for emergency department patients with acute heart failure I. Stiell, J. Perry, C. Clement, S. Sibley, A. McRae, B. Rowe, B. Borgundvaag, S. McLeod, L. Mielniczuk, J. Dreyer, J. Yan, E. Brown, J. Brinkhurst, M. Nemnom, M. Taljaard Published online by Cambridge University Press: 13 May 2020, p. S5 Introduction: Acute heart failure (AHF) is a common emergency department (ED) presentation and may be associated with poor outcomes. Conversely, many patients rapidly improve with ED treatment and may not need hospital admission. Because there is little evidence to guide disposition decisions by ED and admitting physicians, we sought to create a risk score for predicting short-term serious outcomes (SSO) in patients with AHF.
Methods: We conducted prospective cohort studies at 9 tertiary care hospital EDs from 2007 to 2019, and enrolled adult patients who required treatment for AHF. Each patient was assessed for standardized real-time clinical and laboratory variables, as well as for SSO (defined as death within 30 days or intubation, non-invasive ventilation (NIV), myocardial infarction, coronary bypass surgery, or new hemodialysis after admission). The fully pre-specified, logistic regression model with 13 predictors (age, pCO2, and SaO2 were modeled using spline functions with 3 knots and heart rate and creatinine with 5 knots) was fitted to the 10 multiple imputation datasets. Harrell's fast stepdown procedure reduced the number of variables. We calculated the potential impact on sensitivity (95% CI) for SSO and hospital admissions and estimated a sample size of 170 SSOs. Results: The 2,246 patients had mean age 77.4 years, male sex 54.5%, EMS arrival 41.1%, IV NTG 3.1%, ED NIV 5.2%, admission on initial visit 48.6%. Overall there were 174 (7.8%) SSOs including 70 deaths (3.1%). The final risk scale comprises five variables (points) and had a c-statistic of 0.76 (95% CI: 0.73-0.80):
1. Valvular heart disease (1)
2. ED non-invasive ventilation (2)
3. Creatinine 150-300 (1); ≥300 (2)
4. Troponin 2x-4x URL (1); ≥5x URL (2)
5. Walk test failed (2)
The probability of SSO ranged from 2.0% for a total score of 0 to 90.2% for a score of 10, showing good calibration. The model was stable over 1,000 bootstrap samples. Choosing a risk model total point admission threshold of >2 would yield a sensitivity of 80.5% (95% CI 73.9-86.1) for SSO with no change in admissions from current practice (48.6% vs 48.7%). Conclusion: Using a large prospectively collected dataset, we created a concise and sensitive risk scale to assist with admission decisions for patients with AHF in the ED.
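The five-variable scale described in this abstract can be sketched as a small scoring helper. This is an illustrative reconstruction from the points listed above, not the published instrument: the function and variable names are my own, and units (creatinine in µmol/L, troponin as a multiple of the upper reference limit, URL) are assumptions.

```python
# Hypothetical sketch of the five-variable AHF risk scale described in the
# abstract. Points are transcribed from the text; names and units are
# illustrative assumptions, not the published instrument.

def ahf_risk_score(valvular_disease, ed_niv, creatinine, troponin_ratio, walk_test_failed):
    """Return total risk points for an acute heart failure patient."""
    score = 0
    if valvular_disease:            # valvular heart disease (1 point)
        score += 1
    if ed_niv:                      # non-invasive ventilation in the ED (2 points)
        score += 2
    if creatinine >= 300:           # assumed umol/L: 150-300 -> 1, >=300 -> 2
        score += 2
    elif creatinine >= 150:
        score += 1
    if troponin_ratio >= 5:         # multiples of URL: 2x-4x -> 1, >=5x -> 2
        score += 2
    elif troponin_ratio >= 2:
        score += 1
    if walk_test_failed:            # failed walk test (2 points)
        score += 2
    return score

def admit(score, threshold=2):
    """The abstract reports a threshold of >2 points for admission."""
    return score > threshold
```

For example, a patient with valvular disease, creatinine of 160 µmol/L, and a failed walk test scores 4 points and would be flagged for admission under the reported >2 threshold.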
Implementation of this risk scoring scale should lead to safer and more efficient disposition decisions, with more high-risk patients being admitted and more low-risk patients being discharged. Risk assessment for recrudescence of avian influenza in caged layer houses following depopulation: the effect of cleansing, disinfection and dismantling of equipment P. Gale, S. Sechi, V. Horigan, R. Taylor, I. Brown, L. Kelly Published online by Cambridge University Press: 13 February 2020, pp. 1536-1545 Following an outbreak of highly pathogenic avian influenza virus (HPAIV) in a poultry house, control measures are put in place to prevent further spread. An essential part of the control measures based on the European Commission Avian Influenza Directive 2005/94/EC is the cleansing and disinfection (C&D) of infected premises. Cleansing and disinfection includes both preliminary and secondary C&D, and the dismantling of complex equipment during secondary C&D is also required, which is costly to the owner and also delays the secondary cleansing process, hence increasing the risk for onward spread. In this study, a quantitative risk assessment is presented to assess the risk of re-infection (recrudescence) occurring in an enriched colony-caged layer poultry house on restocking with chickens after different C&D scenarios. The risk is expressed as the number of restocked poultry houses expected before recrudescence occurs. Three C&D scenarios were considered, namely (i) preliminary C&D alone, (ii) preliminary C&D plus secondary C&D without dismantling and (iii) preliminary C&D plus secondary C&D with dismantling. The source-pathway-receptor framework was used to construct the model, and parameterisation was based on the three C&D scenarios. Two key operational variables in the model are (i) the time between depopulation of infected birds and restocking with new birds (TbDR) and (ii) the proportion of infected material that bypasses C&D, enabling virus to survive the process. 
Probability distributions were used to describe these two parameters for which there was recognised variability between premises in TbDR or uncertainty due to lack of information in the fraction of bypass. The risk assessment estimates that the median (95% credible intervals) number of repopulated poultry houses before recrudescence are 1.2 × 10^4 (50 to 2.8 × 10^6), 1.9 × 10^5 (780 to 5.7 × 10^7) and 1.1 × 10^6 (4.2 × 10^3 to 2.9 × 10^8) under C&D scenarios (i), (ii) and (iii), respectively. Thus for HPAIV in caged layers, undertaking secondary C&D without dismantling reduces the risk by 16-fold compared to preliminary C&D alone. Dismantling has an additional, although smaller, impact, reducing the risk by a further 6-fold and thus around 90-fold compared to preliminary C&D alone. On the basis of the 95% credible intervals, the model demonstrates the importance of secondary C&D (with or without dismantling) over preliminary C&D alone. However, the extra protection afforded by dismantling may not be cost beneficial in the context of reduced risk of onward spread. ASKAP commissioning observations of the GAMA 23 field Australian SKA Pathfinder Denis A. Leahy, A. M. Hopkins, R. P. Norris, J. Marvil, J. D. Collier, E. N. Taylor, J. R. Allison, C. Anderson, M. Bell, M. Bilicki, J. Bland-Hawthorn, S. Brough, M. J. I. Brown, S. Driver, G. Gurkan, L. Harvey-Smith, I. Heywood, B. W. Holwerda, J. Liske, A. R. Lopez-Sanchez, D. McConnell, A. Moffett, M. S. Owers, K. A. Pimbblet, W. Raja, N. Seymour, M. A. Voronkov, L. Wang Published online by Cambridge University Press: 19 July 2019, e024 We have observed the G23 field of the Galaxy And Mass Assembly (GAMA) survey using the Australian Square Kilometre Array Pathfinder (ASKAP) in its commissioning phase to validate the performance of the telescope and to characterise the detected galaxy populations.
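The fold reductions quoted in the avian influenza risk assessment above follow directly from its median estimates. A quick arithmetic check (values transcribed from the abstract; the scenario labels are my own, not the study's):

```python
# Median number of restocked houses before recrudescence, per C&D scenario
# (values from the abstract; dictionary keys are descriptive labels, not official).
medians = {
    "preliminary_only": 1.2e4,             # scenario (i)
    "plus_secondary": 1.9e5,               # scenario (ii), no dismantling
    "plus_secondary_dismantling": 1.1e6,   # scenario (iii)
}

# Relative risk reduction = ratio of expected restockings before recrudescence.
fold_ii_vs_i = medians["plus_secondary"] / medians["preliminary_only"]
fold_iii_vs_ii = medians["plus_secondary_dismantling"] / medians["plus_secondary"]
fold_iii_vs_i = medians["plus_secondary_dismantling"] / medians["preliminary_only"]

print(round(fold_ii_vs_i), round(fold_iii_vs_ii), round(fold_iii_vs_i))
# 16 6 92 — matching the quoted ~16-fold, ~6-fold and "around 90-fold" figures
```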
This observation covers ~48 deg^2 with a synthesised beam of 32.7 arcsec by 17.8 arcsec at 936 MHz, and ~39 deg^2 with a synthesised beam of 15.8 arcsec by 12.0 arcsec at 1320 MHz. At both frequencies, the root-mean-square (r.m.s.) noise is ~0.1 mJy/beam. We combine these radio observations with the GAMA galaxy data, which includes spectroscopy of galaxies that are i-band selected with a magnitude limit of 19.2. Wide-field Infrared Survey Explorer (WISE) infrared (IR) photometry is used to determine which galaxies host an active galactic nucleus (AGN). In properties including source counts, mass distributions, and IR versus radio luminosity relation, the ASKAP-detected radio sources behave as expected. Radio galaxies have higher stellar mass and luminosity in IR, optical, and UV than other galaxies. We apply optical and IR AGN diagnostics and find that they disagree for ~30% of the galaxies in our sample. We suggest possible causes for the disagreement. Some cases can be explained by optical extinction of the AGN, but for more than half of the cases we do not find a clear explanation. Radio sources are more likely (~6%) to have an AGN than radio-quiet galaxies (~1%), but the majority of AGN are not detected in radio at this sensitivity. Early conversion of classic Fontan conversion may decrease term morbidity: single centre outcomes David Blitzer, Asma S. Habib, John W. Brown, Adam C. Kean, Jiuann-Huey I. Lin, Mark W. Turrentine, Mark D. Rodefeld, Jeremy L. Herrmann, William Aaron Kay Journal: Cardiology in the Young / Volume 29 / Issue 8 / August 2019 Published online by Cambridge University Press: 28 June 2019, pp. 1045-1050 The initial classic Fontan utilising a direct right atrial appendage to pulmonary artery anastomosis led to numerous complications. Adults with such complications may benefit from conversion to a total cavo-pulmonary connection, the current standard palliation for children with univentricular hearts.
A single institution, retrospective chart review was conducted for all Fontan conversion procedures performed from July 1999 through January 2017. Variables analysed included age, sex, reason for Fontan conversion, age at Fontan conversion, and early mortality or heart transplant within 1 year after Fontan conversion. A total of 41 Fontan conversion patients were identified. Average age at Fontan conversion was 24.5 ± 9.2 years. Dominant left ventricular physiology was present in 37/41 (90.2%) patients. Right-sided heart failure occurred in 39/41 (95.1%) patients and right atrial dilation was present in 33/41 (80.5%) patients. The most common causes for Fontan conversion included atrial arrhythmia in 37/41 (90.2%), NYHA class II HF or greater in 31/41 (75.6%), ventricular dysfunction in 23/41 (56.1%), and cirrhosis or fibrosis in 7/41 (17.1%) patients. Median post-surgical follow-up was 6.2 ± 4.9 years. Survival rates at 30 days, 1 year, and greater than 1 year post-Fontan conversion were 95.1, 92.7, and 87.8%, respectively. Two patients underwent heart transplant: the first within 1 year of Fontan conversion for heart failure and the second at 5.3 years for liver failure. Fontan conversion should be considered early when atrial arrhythmias become common rather than waiting for severe heart failure to ensue, and Fontan conversion can be accomplished with an acceptable risk profile. Patients with laboratory evidence of West Nile virus disease without reported fever K. Landry, I. B. Rabe, S. L. Messenger, J. K. Hacker, M. L. Salas, C. Scott-Waldron, D. Haydel, E. Rider, S. Simonson, C. M. Brown, S. C. Smole, D. F. Neitzel, E. K. Schiffman, A. K. Strain, S. Vetter, M. Fischer, N. P.
Lindsey Journal: Epidemiology & Infection / Volume 147 / 2019 Published online by Cambridge University Press: 17 June 2019, e219 In 2013, the national surveillance case definition for West Nile virus (WNV) disease was revised to remove fever as a criterion for neuroinvasive disease and require at most subjective fever for non-neuroinvasive disease. The aims of this project were to determine how often afebrile WNV disease occurs and assess differences among patients with and without fever. We included cases with laboratory evidence of WNV disease reported from four states in 2014. We compared demographics, clinical symptoms and laboratory evidence for patients with and without fever and stratified the analysis by neuroinvasive and non-neuroinvasive presentations. Among 956 included patients, 39 (4%) had no fever; this proportion was similar among patients with and without neuroinvasive disease symptoms. For neuroinvasive and non-neuroinvasive patients, there were no differences in age, sex, or laboratory evidence between febrile and afebrile patients, but hospitalisations were more common among patients with fever (P < 0.01). The only significant difference in symptoms was for ataxia, which was more common in neuroinvasive patients without fever (P = 0.04). Only 5% of non-neuroinvasive patients did not meet the WNV case definition due to lack of fever. The evidence presented here supports the changes made to the national case definition in 2013. Evaluation of ELISA and haemagglutination inhibition as screening tests in serosurveillance for H5/H7 avian influenza in commercial chicken flocks M. E. Arnold, M. J. Slomka, A. C. Breed, C. K. Hjulsager, S. Pritz-Verschuren, S. Venema-Kemper, R. J. Bouwstra, R. Trebbien, S. Zohari, V. Ceeraz, L. E. Larsen, R. J. Manvell, G. Koch, I. H. 
Brown Journal: Epidemiology & Infection / Volume 146 / Issue 3 / February 2018 Avian influenza virus (AIV) subtypes H5 and H7 can infect poultry causing low pathogenicity (LP) AI, but these LPAIVs may mutate to highly pathogenic AIV in chickens or turkeys causing high mortality, hence H5/H7 subtypes demand statutory intervention. Serological surveillance in the European Union provides evidence of H5/H7 AIV exposure in apparently healthy poultry. To identify the most sensitive screening method as the first step in an algorithm to provide evidence of H5/H7 AIV infection, the standard approach of H5/H7 antibody testing by haemagglutination inhibition (HI) was compared with an ELISA, which detects antibodies to all subtypes. Sera (n = 1055) from 74 commercial chicken flocks were tested by both methods. A Bayesian approach served to estimate diagnostic test sensitivities and specificities, without assuming any 'gold standard'. Sensitivity and specificity of the ELISA were 97% and 99.8%, and for H5/H7 HI 43% and 99.8%, respectively, although H5/H7 HI sensitivity varied considerably between infected flocks. ELISA therefore provides superior sensitivity for the screening of chicken flocks as part of an algorithm, which subsequently utilises H5/H7 HI to identify infection by these two subtypes. With the calculated sensitivity and specificity, testing nine sera per flock is sufficient to detect a flock seroprevalence of 30% with 95% probability. Follow Up of GW170817 and Its Electromagnetic Counterpart by Australian-Led Observing Programmes I. Andreoni, K. Ackley, J. Cooke, A. Acharyya, J. R. Allison, G. E. Anderson, M. C. B. Ashley, D. Baade, M. Bailes, K. Bannister, A. Beardsley, M. S. Bessell, F. Bian, P. A. Bland, M. Boer, T. Booler, A. Brandeker, I. S. Brown, D. A. H. Buckley, S.-W. Chang, D. M. Coward, S. Crawford, H. Crisp, B. Crosse, A. Cucchiara, M. Cupák, J. S. de Gois, A. Deller, H. A. R. Devillepoix, D. Dobie, E. Elmer, D. Emrich, W. Farah, T. J. Farrell, T.
Franzen, B. M. Gaensler, D. K. Galloway, B. Gendre, T. Giblin, A. Goobar, J. Green, P. J. Hancock, B. A. D. Hartig, E. J. Howell, L. Horsley, A. Hotan, R. M. Howie, L. Hu, Y. Hu, C. W. James, S. Johnston, M. Johnston-Hollitt, D. L. Kaplan, M. Kasliwal, E. F. Keane, D. Kenney, A. Klotz, R. Lau, R. Laugier, E. Lenc, X. Li, E. Liang, C. Lidman, L. C. Luvaul, C. Lynch, B. Ma, D. Macpherson, J. Mao, D. E. McClelland, C. McCully, A. Möller, M. F. Morales, D. Morris, T. Murphy, K. Noysena, C. A. Onken, N. B. Orange, S. Osłowski, D. Pallot, J. Paxman, S. B. Potter, T. Pritchard, W. Raja, R. Ridden-Harper, E. Romero-Colmenero, E. M. Sadler, E. K. Sansom, R. A. Scalzo, B. P. Schmidt, S. M. Scott, N. Seghouani, Z. Shang, R. M. Shannon, L. Shao, M. M. Shara, R. Sharp, M. Sokolowski, J. Sollerman, J. Staff, K. Steele, T. Sun, N. B. Suntzeff, C. Tao, S. Tingay, M. C. Towner, P. Thierry, C. Trott, B. E. Tucker, P. Väisänen, V. Venkatraman Krishnan, M. Walker, L. Wang, X. Wang, R. Wayth, M. Whiting, A. Williams, T. Williams, C. Wolf, C. Wu, X. Wu, J. Yang, X. Yuan, H. Zhang, J. Zhou, H. Zovaro Published online by Cambridge University Press: 20 December 2017, e069 The discovery of the first electromagnetic counterpart to a gravitational wave signal has generated follow-up observations by over 50 facilities world-wide, ushering in the new era of multi-messenger astronomy. In this paper, we present follow-up observations of the gravitational wave event GW170817 and its electromagnetic counterpart SSS17a/DLT17ck (IAU label AT2017gfo) by 14 Australian telescopes and partner observatories as part of Australian-based and Australian-led research programs. We report early- to late-time multi-wavelength observations, including optical imaging and spectroscopy, mid-infrared imaging, radio imaging, and searches for fast radio bursts. 
Our optical spectra reveal that the transient source emission cooled from approximately 6 400 K to 2 100 K over a 7-d period and produced no significant optical emission lines. The spectral profiles, cooling rate, and photometric light curves are consistent with the expected outburst and subsequent processes of a binary neutron star merger. Star formation in the host galaxy probably ceased at least a Gyr ago, although there is evidence for a galaxy merger. Binary pulsars with short (100 Myr) decay times are therefore unlikely progenitors, but pulsars like PSR B1534+12 with its 2.7 Gyr coalescence time could produce such a merger. The displacement (~2.2 kpc) of the binary star system from the centre of the main galaxy is not unusual for stars in the host galaxy or stars originating in the merging galaxy, and therefore any constraints on the kick velocity imparted to the progenitor are poor. The Taipan Galaxy Survey: Scientific Goals and Observing Strategy Elisabete da Cunha, Andrew M. Hopkins, Matthew Colless, Edward N. Taylor, Chris Blake, Cullan Howlett, Christina Magoulas, John R. Lucey, Claudia Lagos, Kyler Kuehn, Yjan Gordon, Dilyar Barat, Fuyan Bian, Christian Wolf, Michael J. Cowley, Marc White, Ixandra Achitouv, Maciej Bilicki, Joss Bland-Hawthorn, Krzysztof Bolejko, Michael J. I. Brown, Rebecca Brown, Julia Bryant, Scott Croom, Tamara M. Davis, Simon P. Driver, Miroslav D. Filipovic, Samuel R. Hinton, Melanie Johnston-Hollitt, D. Heath Jones, Bärbel Koribalski, Dane Kleiner, Jon Lawrence, Nuria Lorente, Jeremy Mould, Matt S. Owers, Kevin Pimbblet, C. G. Tinney, Nicholas F. H. Tothill, Fred Watson Published online by Cambridge University Press: 24 October 2017, e047 The Taipan galaxy survey (hereafter simply 'Taipan') is a multi-object spectroscopic survey starting in 2017 that will cover 2π steradians over the southern sky (δ ≲ 10°, |b| ≳ 10°), and obtain optical spectra for about two million galaxies out to z < 0.4. 
Taipan will use the newly refurbished 1.2-m UK Schmidt Telescope at Siding Spring Observatory with the new TAIPAN instrument, which includes an innovative 'Starbugs' positioning system capable of rapidly and simultaneously deploying up to 150 spectroscopic fibres (and up to 300 with a proposed upgrade) over the 6° diameter focal plane, and a purpose-built spectrograph operating in the range from 370 to 870 nm with resolving power R ≳ 2000. The main scientific goals of Taipan are (i) to measure the distance scale of the Universe (primarily governed by the local expansion rate, H_0) to 1% precision, and the growth rate of structure to 5%; (ii) to make the most extensive map yet constructed of the total mass distribution and motions in the local Universe, using peculiar velocities based on improved Fundamental Plane distances, which will enable sensitive tests of gravitational physics; and (iii) to deliver a legacy sample of low-redshift galaxies as a unique laboratory for studying galaxy evolution as a function of dark matter halo and stellar mass and environment. The final survey, which will be completed within 5 yrs, will consist of a complete magnitude-limited sample (i ⩽ 17) of about 1.2 × 10^6 galaxies supplemented by an extension to higher redshifts and fainter magnitudes (i ⩽ 18.1) of a luminous red galaxy sample of about 0.8 × 10^6 galaxies. Observations and data processing will be carried out remotely and in a fully automated way, using a purpose-built automated 'virtual observer' software and an automated data reduction pipeline. The Taipan survey is deliberately designed to maximise its legacy value by complementing and enhancing current and planned surveys of the southern sky at wavelengths from the optical to the radio; it will become the primary redshift and optical spectroscopic reference catalogue for the local extragalactic Universe in the southern sky for the coming decade.
The emergence of enterovirus D68 in England in autumn 2014 and the necessity for reinforcing enterovirus respiratory screening A. I. CARRION MARTIN, R. G. PEBODY, K. DANIS, J. ELLIS, S. NIAZI, S. DE LUSIGNAN, K. E. BROWN, M. ZAMBON, D. J. ALLEN Journal: Epidemiology & Infection / Volume 145 / Issue 9 / July 2017 Published online by Cambridge University Press: 03 April 2017, pp. 1855-1864 In autumn 2014, enterovirus D68 (EV-D68) cases presenting with severe respiratory or neurological disease were described in countries worldwide. To describe the epidemiology and virological characteristics of EV-D68 in England, we collected clinical information on laboratory-confirmed EV-D68 cases detected in secondary care (hospitals) between September 2014 and January 2015. In primary care (general practitioners), respiratory swabs collected (September 2013–January 2015) from patients presenting with influenza-like illness were tested for EV-D68. In secondary care, 55 EV-D68 cases were detected. Among those, 45 cases had clinical information available and 89% (40/45) presented with severe respiratory symptoms. Detection of EV-D68 among patients in primary care increased from 0.4% (4/1074; 95% CI 0.1–1.0) (September 2013–January 2014) to 0.8% (11/1359; 95% CI 0.4–1.5) (September 2014–January 2015). Characterization of EV-D68 strains circulating in England since 2012 and up to winter 2014/2015 indicated that those strains were genetically similar to those detected in 2014 in the USA. We recommend reinforcing enterovirus surveillance through screening respiratory samples of suspected cases. Abundance ratios & ages of stellar populations in HARPS-GTO sample E. Delgado Mena, M. Tsantaki, V. Zh. Adibekyan, S. G. Sousa, N. C. Santos, J. I. González Hernández, G. Israelian Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S330 / April 2017 Published online by Cambridge University Press: 07 March 2018, pp.
156-159 Print publication: April 2017 In this work we present chemical abundances of heavy elements (Z>28) for a homogeneous sample of 1059 stars from the HARPS planet search program. We also derive ages using parallaxes from Hipparcos and Gaia DR1 to compare the results. We study the [X/Fe] ratios for different populations and compare them with models of Galactic chemical evolution. We find that thick disk stars are chemically disjoint for Zn and Eu. Moreover, the high-alpha metal-rich population presents an interesting behaviour, with clear overabundances of Cu and Zn and lower abundances of Y and Ba with respect to thin disk stars. Several abundance ratios present a significant correlation with age for chemically separated thin disk stars (regardless of their metallicity) but thick disk stars do not present that behaviour. Moreover, at supersolar metallicities the trends with age tend to be weaker for several elements. A global threats overview for Numeniini populations: synthesising expert knowledge for a group of declining migratory birds JAMES W. PEARCE-HIGGINS, DANIEL J. BROWN, DAVID J. T. DOUGLAS, JOSÉ A. ALVES, MARIAGRAZIA BELLIO, PIERRICK BOCHER, GRAEME M. BUCHANAN, ROB P. CLAY, JESSE CONKLIN, NICOLA CROCKFORD, PETER DANN, JAANUS ELTS, CHRISTIAN FRIIS, RICHARD A. FULLER, JENNIFER A. GILL, KEN GOSBELL, JAMES A. JOHNSON, ROCIO MARQUEZ-FERRANDO, JOSE A. MASERO, DAVID S. MELVILLE, SPIKE MILLINGTON, CLIVE MINTON, TAEJ MUNDKUR, ERICA NOL, HANNES PEHLAK, THEUNIS PIERSMA, FRÉDÉRIC ROBIN, DANNY I. ROGERS, DANIEL R. RUTHRAUFF, NATHAN R. SENNER, JUNID N. SHAH, ROB D. SHELDON, SERGEJ A. SOLOVIEV, PAVEL S. TOMKOVICH, YVONNE I. VERKUIL Journal: Bird Conservation International / Volume 27 / Issue 1 / March 2017 Published online by Cambridge University Press: 01 March 2017, pp. 6-34 The Numeniini is a tribe of 13 wader species (Scolopacidae, Charadriiformes) of which seven are Near Threatened or globally threatened, including two Critically Endangered.
To help inform conservation management and policy responses, we present the results of an expert assessment of the threats that members of this taxonomic group face across migratory flyways. Most threats are increasing in intensity, particularly in non-breeding areas, where habitat loss resulting from residential and commercial development, aquaculture, mining, transport, disturbance, problematic invasive species, pollution and climate change were regarded as having the greatest detrimental impact. Fewer threats (mining, disturbance, problematic native species and climate change) were identified as widely affecting breeding areas. Numeniini populations face the greatest number of non-breeding threats in the East Asian-Australasian Flyway, especially those associated with coastal reclamation; related threats were also identified across the Central and Atlantic Americas, and East Atlantic flyways. Threats on the breeding grounds were greatest in Central and Atlantic Americas, East Atlantic and West Asian flyways. Three priority actions were associated with monitoring and research: to monitor breeding population trends (which for species breeding in remote areas may best be achieved through surveys at key non-breeding sites), to deploy tracking technologies to identify migratory connectivity, and to monitor land-cover change across breeding and non-breeding areas. Two priority actions were focused on conservation and policy responses: to identify and effectively protect key non-breeding sites across all flyways (particularly in the East Asian- Australasian Flyway), and to implement successful conservation interventions at a sufficient scale across human-dominated landscapes for species' recovery to be achieved. If implemented urgently, these measures in combination have the potential to alter the current population declines of many Numeniini species and provide a template for the conservation of other groups of threatened species. 
Hydrolysable tannin-based diet rich in gallotannins has a minimal impact on pig performance but significantly reduces salivary and bulbourethral gland size G. Bee, P. Silacci, S. Ampuero-Kragten, M. Čandek-Potokar, A. L. Wealleans, J. Litten-Brown, J.-P. Salminen, I. Mueller-Harvey Journal: animal / Volume 11 / Issue 9 / September 2017 Print publication: September 2017 Tannins have long been considered 'anti-nutritional' factors in monogastric nutrition, shown to reduce feed intake and palatability. However, recent studies revealed that compared with condensed tannins, hydrolysable tannins (HT) appear to have far less impact on growth performance, but may be inhibitory to the total activity of caecal bacteria. This in turn could reduce microbial synthesis of skatole and indole in the hindgut of entire male pigs (EM). Thus, the objective of this study was to determine the impact of a group of dietary HT on growth performance, carcass traits and boar taint compounds of group-housed EM. For the study, 36 Swiss Large White boars were assigned within litter to three treatment groups. Boars were offered ad libitum one of three finisher diets supplemented with 0 (C), 15 (T15) or 30 g/kg (T30) of HT from day 105 to 165 of age. Growth performance, carcass characteristics, boar taint compounds in the adipose tissue and cytochrome P450 (CYP) isoenzymes CYP2E1, CYP1A2 and CYP2A19 gene expression in the liver were assessed. Compared with C, feed efficiency but not daily gain and daily feed intake was lower (P<0.05) in T15 and T30 boars. Except for the percentage carcass weight loss during cooling, which tended (P<0.10) to be greater in T30 than C and T15, carcass characteristics were not affected by the diets. In line with the numerically lower androstenone level, bulbourethral and salivary glands of T30 boars were lighter (P<0.05) than those of T15, with intermediate values for C. Indole level was lower (P<0.05) in the adipose tissue of T30 than C pigs with intermediate levels in T15.
Skatole levels tended (P<0.10) to be lower in T30 and C than in T15 pigs. Hepatic gene expression of CYP isoenzymes did not differ between treatment groups, but was negatively correlated (P<0.05) with androstenone (CYP2E1 and CYP1A2), skatole (CYP2E1, CYP2A) and indole (CYP2A) level. In line with the numerically highest androstenone and skatole concentrations, boar taint odour but not flavour was detected by the panellists in loins from T15 compared with loins from C and T30 boars. These results provide evidence that HT affected metabolism of indolic compounds and androstenone and that they affected the development of accessory sex glands. However, the effects were too small to be detected by sensory evaluation. OS2 - 166 A Novel Model of Human Lung-to-Brain Metastasis and its Application to the Identification of Essential Metastatic Regulatory Genes M Singh, C Venugopal, T Tokar, K R Brown, N McFarlane, D Bakhshinyan, T Vijayakumar, B Manoranjan, S Mahendram, P Vora, M Qazi, M Dhillon, A Tong, K Durrer, N Murty, R Hallett, J A Hassell, D Kaplan, JC Cutz, I Jurisica, J Moffat, S K Singh Journal: Canadian Journal of Neurological Sciences / Volume 43 / Issue S4 / October 2016 Published online by Cambridge University Press: 18 October 2016, p. S2 Print publication: October 2016 Brain Metastases (BM) represent a leading cause of cancer mortality. While metastatic lesions contain subclones derived from their primary lesion, their functional characterization has been limited by a paucity of preclinical models accurately recapitulating the stages of metastasis. This work describes the isolation of a unique subset of metastatic stem-like cells from primary human patient samples of BM, termed brain metastasis initiating cells (BMICs). Utilizing these BMICs we have established a novel patient-derived xenograft (PDX) model of BM that recapitulates the entire metastatic cascade, from primary tumor initiation to micro-metastasis and macro-metastasis formation in the brain.
We then comprehensively interrogated human BM to identify genetic regulators of BMICs using in vitro and in vivo RNA interference screens, and validated hits using both our novel PDX model as well as primary clinical BM specimens. We identified SPOCK1 and TWIST2 as novel BMIC regulators, where in our model SPOCK1 regulated BMIC self-renewal and tumor initiation, and TWIST2 specifically regulated cell migration from lung to brain. A prospective cohort of primary lung cancer specimens was used to establish that SPOCK1 and TWIST2 were only expressed in patients who ultimately developed BM, thus establishing both clinical and functional utility for these gene products. This work offers the first comprehensive preclinical model of human brain metastasis for further characterization of therapeutic targets, identification of predictive biomarkers, and subsequent prophylactic treatment of patients most likely to develop BM. By blocking this process, metastatic lung cancer would effectively become a localized, more manageable disease. DESAlert: Enabling Real-Time Transient Follow-Up with Dark Energy Survey Data A. Poci, K. Kuehn, T. Abbott, F. B. Abdalla, S. Allam, A.H. Bauer, A. Benoit-Lévy, E. Bertin, D. Brooks, P. J. Brown, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, R. Covarrubias, L. N. da Costa, C. B. D'Andrea, D. L. DePoy, S. Desai, J. P. Dietrich, C. E Cunha, T. F. Eifler, J. Estrada, A. E. Evrard, A. Fausti Neto, D. A. Finley, B. Flaugher, P. Fosalba, J. Frieman, D. Gerdes, D. Gruen, R. A. Gruendl, K. Honscheid, D. James, N. Kuropatkin, O. Lahav, T. S. Li, M. March, J. Marshall, K. W. Merritt, C.J. Miller, R. C. Nichol, B. Nord, R. Ogando, A. A. Plazas, A. K. Romer, A. Roodman, E. S. Rykoff, M. Sako, E. Sanchez, V. Scarpine, M. Schubnell, I. Sevilla, C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, J. Thaler, R. C. Thomas, D. Tucker, A. R. Walker, W. 
Wester, (The DES Collaboration) The Dark Energy Survey is undertaking an observational programme imaging 1/4 of the southern hemisphere sky with unprecedented photometric accuracy. In the process of observing millions of faint stars and galaxies to constrain the parameters of the dark energy equation of state, the Dark Energy Survey will obtain pre-discovery images of the regions surrounding an estimated 100 gamma-ray bursts over 5 yr. Once gamma-ray bursts are detected by, e.g., the Swift satellite, the DES data will be extremely useful for follow-up observations by the transient astronomy community. We describe a recently-commissioned suite of software that listens continuously for automated notices of gamma-ray burst activity, collates information from archival DES data, and disseminates relevant data products back to the community in near-real-time. Of particular importance are the opportunities that non-public DES data provide for relative photometry of the optical counterparts of gamma-ray bursts, as well as for identifying key characteristics (e.g., photometric redshifts) of potential gamma-ray burst host galaxies. We provide the functional details of the DESAlert software, and its data products, and we show sample results from the application of DESAlert to numerous previously detected gamma-ray bursts, including the possible identification of several heretofore unknown gamma-ray burst hosts. The Australian Square Kilometre Array Pathfinder: Performance of the Boolardy Engineering Test Array D. McConnell, J. R. Allison, K. Bannister, M. E. Bell, H. E. Bignall, A. P. Chippendale, P. G. Edwards, L. Harvey-Smith, S. Hegarty, I. Heywood, A. W. Hotan, B. T. Indermuehle, E. Lenc, J. Marvil, A. Popping, W. Raja, J. E. Reynolds, R. J. Sault, P. Serra, M. A. Voronkov, M. Whiting, S. W. Amy, P. Axtens, L. Ball, T. J. Bateman, D. C.-J. Bock, R. Bolton, D. Brodrick, M. Brothers, A. J. Brown, J. D. Bunton, W. Cheng, T. Cornwell, D. DeBoer, I. Feain, R. Gough, N. 
Gupta, J. C. Guzman, G. A. Hampson, S. Hay, D. B. Hayman, S. Hoyle, B. Humphreys, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, J. Joseph, B. S. Koribalski, M. Leach, E. S. Lensson, A. MacLeod, S. Mackay, M. Marquarding, N. M. McClure-Griffiths, P. Mirtschin, D. Mitchell, S. Neuhold, A. Ng, R. Norris, S. Pearce, R. Y. Qiao, A. E. T. Schinckel, M. Shields, T. W. Shimwell, M. Storey, E. Troup, B. Turner, J. Tuthill, A. Tzioumis, R. M. Wark, T. Westmeier, C. Wilson, T. Wilson We describe the performance of the Boolardy Engineering Test Array, the prototype for the Australian Square Kilometre Array Pathfinder telescope. Boolardy Engineering Test Array is the first aperture synthesis radio telescope to use phased array feed technology, giving it the ability to electronically form up to nine dual-polarisation beams. We report the methods developed for forming and measuring the beams, and the adaptations that have been made to the traditional calibration and imaging procedures in order to allow BETA to function as a multi-beam aperture synthesis telescope. We describe the commissioning of the instrument and present details of Boolardy Engineering Test Array's performance: sensitivity, beam characteristics, polarimetric properties, and image quality. We summarise the astronomical science that it has produced and draw lessons from operating Boolardy Engineering Test Array that will be relevant to the commissioning and operation of the final Australian Square Kilometre Array Pathfinder telescope. Mob rulers and part-time cleaners: two reef fish associations at the isolated Ascension Island R.A. Morais, J. Brown, S. Bedard, C.E.L. Ferreira, S.R. Floeter, J.P. Quimbayo, L.A. Rocha, I. Sazima Journal: Journal of the Marine Biological Association of the United Kingdom / Volume 97 / Issue 4 / June 2017 Print publication: June 2017 Isolated oceanic islands may give rise not only to new and endemic species, but also to unique behaviours and species interactions.
Multi-species fish interactions, such as cleaning, following, mob-feeding and others, are understudied in these ecosystems. Here we present qualitative and quantitative observations on cleaning and mob-feeding reef fish associations at the isolated Ascension Island, South Atlantic Ocean. Cleaning interactions were dominated by juveniles of the facultative fish cleaners Bodianus insularis and Pomacanthus paru, with lesser contributions of Chaetodon sanctaehelenae, Thalassoma ascensionis and the cleaner shrimp Lysmata grabhami. Two types of feeding mobs were consistently identified: less mobile mobs led by the surgeonfish Acanthurus bahianus and A. coeruleus and the more mobile mobs led by the African sergeant Abudefduf hoefleri. This is the first record of A. hoefleri from outside of the Eastern Atlantic and also the first report of this species displaying mob-feeding behaviour. The principal follower of both mob types was the extremely abundant Melichthys niger, but the main aggressor differed: Stegastes lubbocki, a highly territorial herbivore, was the main aggressor of Acanthurus mobs, whereas Chromis multilineata, which is territorial while engaged in egg parental care, was the principal aggressor towards Abudefduf mobs. Our study enhances the scarce information on reef fish feeding associations at the isolated Ascension Island and at oceanic islands in the Atlantic in general.
From Tohoku University[東北大学](JP): "Research News – Gamma-Ray Bursts' Hidden Energy" December 20, 2022. Kenji Toma, Frontier Research Institute for Interdisciplinary Sciences. Gamma-ray burst. Credit: Cruz Dewilde/NASA SWIFT. Gamma-ray bursts are the most luminous explosions in the universe, allowing astronomers to observe intense gamma rays in short durations. Gamma-ray bursts are classified as either short or long, with long gamma-ray bursts being the result of massive stars dying out; they therefore provide clues about the evolution of the universe. Gamma-ray bursts emit gamma rays as well as radio waves, optical lights, and X-rays. When the conversion of explosion energy to emitted energy, i.e., the conversion efficiency, is high, the total explosion energy can be calculated by simply adding all the emitted energy. But when the conversion efficiency is low or unknown, measuring the emitted energy alone is not enough. Now, a team of astrophysicists has succeeded in measuring a gamma-ray burst's hidden energy by utilizing light polarization. The team was led by Dr. Yuji Urata from the National Central University in Taiwan and MITOS Science CO., LTD and Professor Kenji Toma from Tohoku University's Frontier Research Institute for Interdisciplinary Sciences (FRIS). Details of their findings were published in the journal Nature Astronomy on December 8, 2022.
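The conversion-efficiency relation described above is simple arithmetic: if a fraction η of the explosion energy is radiated, the total energy is the emitted energy divided by η. A minimal sketch of that bookkeeping, with illustrative numbers that are assumptions and not values from the paper:

```python
def total_energy(emitted_energy_erg, efficiency):
    """Infer total explosion energy from the emitted (observed) energy
    and an assumed conversion efficiency in the interval (0, 1]."""
    if not 0 < efficiency <= 1:
        raise ValueError("efficiency must lie in (0, 1]")
    return emitted_energy_erg / efficiency

# Illustrative: 1e52 erg of emitted energy at 10% conversion
# efficiency implies roughly 1e53 erg of total explosion energy.
print(total_energy(1e52, 0.1))
```

When the efficiency is unknown, as the article notes, this division cannot be performed, which is why an independent handle such as polarimetry is needed.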
When an electromagnetic wave is polarized, the oscillation of that wave flows in one direction. While light emitted from stars is not polarized, the reflection of that light is. Many everyday items such as sunglasses and light shields utilize polarization to block out the glare of lights traveling in a uniform direction. Measuring the degree of polarization is referred to as polarimetry. In astrophysical observations, measuring a celestial object's polarimetry is not as easy as measuring its brightness, but it offers valuable information on the physical conditions of objects. The team looked at a gamma-ray burst which occurred on December 21, 2019 (GRB191221B). Using the Very Large Telescope of the European Southern Observatory and the Atacama Large Millimeter/submillimeter Array – some of the world's most advanced optical and radio telescopes – they calculated the polarimetry of fast-fading emissions from GRB191221B. [Image: the European Southern Observatory's Very Large Telescope at Cerro Paranal in the Atacama Desert, with unit telescopes ANTU (UT1), KUEYEN (UT2), MELIPAL (UT3) and YEPUN (UT4), at an elevation of 2,635 m (8,645 ft). Credit: J.L. Dauvergne & G. Hüdepohl.] [Image: the ESO/National Radio Astronomy Observatory/National Astronomical Observatory of Japan ALMA Observatory (CL).] They then successfully measured the optical and radio polarizations simultaneously, finding the radio polarization degree to be significantly lower than the optical one. "This difference in polarization at the two wavelengths reveals detailed physical conditions of the gamma-ray burst's emission region," said Toma. "In particular, it allowed us to measure the previously unmeasurable hidden energy."
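The "degree of polarization" compared at the two wavelengths is conventionally computed from the Stokes parameters I, Q and U; this standard definition is an assumption on my part, since the article does not spell it out:

```python
import math

def linear_polarization_degree(I, Q, U):
    """Degree of linear polarization from the Stokes parameters:
    p = sqrt(Q^2 + U^2) / I, a dimensionless number from 0 to 1."""
    if I <= 0:
        raise ValueError("total intensity I must be positive")
    return math.hypot(Q, U) / I

# Illustrative values only: a ~10% polarized optical signal
# versus a much weaker radio polarization.
print(linear_polarization_degree(1.0, 0.08, 0.06))
print(linear_polarization_degree(1.0, 0.02, 0.0))
```

Comparing two such numbers, as the team did for the optical and radio bands, is what constrains the emission-region physics.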
When accounting for the hidden energy, the team revealed that the total energy was about 3.5 times bigger than previous estimates. With the explosion energy representing the gravitational energy of the progenitor star, being able to measure this figure has important ramifications for determining stars' masses. "Knowing the measurements of the progenitor star's true masses will help in understanding the evolutionary history of the universe," added Toma. "The first stars in the universe could be discovered if we can detect their long gamma-ray bursts." Tohoku University [東北大学] (JP), located in Sendai, Miyagi in the Tōhoku Region, Japan, is a Japanese national university. It was the third Imperial University in Japan, one of the first three Designated National Universities along with The University of Tokyo [東京大学] (JP) and Kyoto University [京都大学] (JP), and was selected as a Top Type university of the Top Global University Project by the Japanese government. In 2020 and 2021, Times Higher Education ranked Tohoku University the No. 1 university in Japan. In 2016, Tohoku University had 10 faculties, 16 graduate schools and 6 research institutes, with a total enrollment of 17,885 students. The university's three core values are "Research First [研究第一主義]," "Open-Doors [門戸開放]," and "Practice-Oriented Research and Education [実学尊重]."
Its schools and institutes include Information and Intelligent Systems; Applied Chemistry, Chemical Engineering and Biomolecular Engineering; Civil Engineering and Architecture; International Cultural Studies; Educational Informatics (Research Division / Education Division); the Accounting School; the Research Institute of Electrical Communication [電気通信研究所]; the Institute of Development, Aging and Cancer [加齢医学研究所]; the Institute of Fluid Science [流体科学研究所]; the Institute for Materials Research (IMR) [金属材料研究所], a National Collaborative Research Institute; the Institute of Multidisciplinary Research for Advanced Materials [多元物質科学研究所]; and the International Research Institute of Disaster Science [災害科学国際研究所]. Posted December 9, 2022. From
Tohoku University[東北大学](JP): "Researchers Develop a Scaled-up Spintronic Probabilistic Computer" Shunsuke Fukami, Research Institute of Electrical Communication. The spintronic path. Researchers at Tohoku University, the University of Messina, and The University of California-Santa Barbara have developed a scaled-up version of a probabilistic computer ("p-computer") with stochastic spintronic devices that is suitable for hard computational problems like combinatorial optimization and machine learning. "Moore's law" predicts that computers get faster every two years because of the evolution of semiconductor chips. Whilst this is what has historically happened, the continued evolution is starting to lag. The revolutions in machine learning and artificial intelligence mean much higher computational ability is required. Quantum computing is one way of meeting these challenges, but significant hurdles to the practical realization of scalable quantum computers remain. A "p-computer" harnesses naturally stochastic building blocks called probabilistic bits (p-bits). Unlike bits in traditional computers, p-bits oscillate between states. A "p-computer" can operate at room temperature and acts as a domain-specific computer for a wide variety of applications in machine learning and artificial intelligence. Just like quantum computers try to solve inherently quantum problems in quantum chemistry, "p-computers" attempt to tackle probabilistic algorithms widely used for complicated computational problems in combinatorial optimization and sampling. Recently, researchers from Tohoku University, Purdue University, and The University of California-Santa Barbara have shown that p-bits can be efficiently realized using suitably modified "spintronic" devices called stochastic magnetic tunnel junctions ("sMTJ").
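The p-bit behaviour described above can be emulated in software. The sketch below uses the update rule commonly quoted in the p-computing literature, where each bit takes the value ±1 with a probability set by the tanh of its weighted input; this is a generic illustration under that assumption, not code from the work reported here, and the coupling matrix J and bias h are invented for the example:

```python
import math
import random

def pbit_sample(J, h, steps=10000, beta=1.0, seed=1):
    """Sequentially update a network of p-bits with the stochastic rule
    m_i = sgn(tanh(beta * I_i) - r), r ~ Uniform(-1, 1),
    where I_i = sum_j J[i][j]*m[j] + h[i]. Repeated sweeps emulate
    Gibbs-like sampling of a Boltzmann distribution in software."""
    rng = random.Random(seed)
    n = len(h)
    m = [rng.choice([-1, 1]) for _ in range(n)]
    samples = []
    for _ in range(steps):
        for i in range(n):
            I = sum(J[i][j] * m[j] for j in range(n)) + h[i]
            m[i] = 1 if math.tanh(beta * I) > rng.uniform(-1, 1) else -1
        samples.append(tuple(m))
    return samples

# Two ferromagnetically coupled p-bits: the aligned states (+1,+1)
# and (-1,-1) should dominate the sampled distribution.
J = [[0.0, 1.0], [1.0, 0.0]]
samples = pbit_sample(J, h=[0.0, 0.0])
aligned = sum(s[0] == s[1] for s in samples) / len(samples)
print(aligned)  # well above 0.5, i.e. mostly aligned
```

In the hardware described in the article, the role of this update rule is played physically by the thermally fluctuating sMTJs, with the FPGA computing the inputs I_i.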
Until now, "sMTJ"-based "p-bits" have been implemented at small scale; and only "spintronic" "p-computer" proof-of-concepts for combinatorial optimization and machine learning have been demonstrated. A photograph of the constructed heterogeneous "p-computer" consisting of stochastic magnetic tunnel junction ("sMTJ") based probabilistic bit ("p-bit') and field-programmable gate array ("FPGA"). ©Kerem Camsari, Giovanni Finocchio, and Shunsuke Fukami et al. The research group has presented two important advances at the 68th International Electron Devices Meeting (IEDM) on December 6th, 2022. First, they have shown how "sMTJ"-based "p-bits" can be combined with conventional and programmable semiconductor chips, namely, Field-Programmable-Gate-Arrays ("FPGAs"). The "sMTJ + FPGA" combination allows much larger networks of "p-bits" to be implemented in hardware going beyond the earlier small-scale demonstrations. Second, the probabilistic emulation of a quantum algorithm, simulated quantum annealing ("SQA"), has been performed in the heterogeneous "sMTJ + FPGA" p-computers with systematic evaluations for hard combinatorial optimization problems. The researchers also benchmarked the performance of "sMTJ"-based "p-computers" with that of classical computing hardware, such as graphics processing units (GPUs) and Tensor Processing Units ("TPUs"). They showed that "p-computers", utilizing a high-performance "sMTJ" previously demonstrated by a team from Tohoku University, can achieve massive improvements in throughput and power consumption than conventional technologies. "Currently, the "s-MTJ + FPGA" "p-computer" is a prototype with discrete components," said Professor Shunsuke Fukami, who was part of the research group. 
"In the future, integrated "p-computers" that make use of semiconductor process-compatible magnetoresistive random access memory ("MRAM") technologies may be possible, but this will require a co-design approach, with experts in materials, physics, circuit design and algorithms needing to be brought in." A comparison of probabilistic accelerators as a function of sampling throughput and power consumption. Graphics Processing Units (GPUs) [plotted as N1-N4], Tensor Processing Units ("TPUs") [plotted as G1-G2], and a simulated annealing machine [plotted as F1] are compared with probabilistic computers, where the demonstrated value and projected value are plotted as P1 and P2, respectively. ©Kerem Camsari, Giovanni Finocchio, and Shunsuke Fukami et al. Title: "Experimental evaluation of simulated quantum annealing with MTJ-augmented p-bits" Authors: Andrea Grimaldi, Kemal Selcuk, Navid Anjum Aadit, Keito Kobayashi, Qixuan Cao, Shuvro Chowdhury, Giovanni Finocchio, Shun Kanai, Hideo Ohno, Shunsuke Fukami and Kerem Y. Camsari Conference: 68th Annual IEEE International Electron Devices Meeting From Tohoku University[東北大学](JP): "Rare Earth Elements Synthesis Confirmed in Neutron Star Mergers" 10.27.22 Neutron Star Merger. A group of researchers has, for the first time, identified rare earth elements produced by neutron star mergers. Details of this milestone were published in The Astrophysical Journal on October 26, 2022. When two neutron stars spiral inwards and merge, the resulting explosion produces a large amount of the heavy elements that make up our Universe. The first confirmed example of this process was an event in 2017 named GW 170817.
Yet, even five years later, identifying the specific elements created in neutron star mergers has eluded scientists, except for strontium identified in the optical spectra. A research group led by Nanae Domoto, a graduate student at the Graduate School of Science at Tohoku University and a research fellow at the Japan Society for the Promotion of Science (JSPS), has systematically studied the properties of all heavy elements to decode the spectra from neutron star mergers. They used this to investigate the spectra of kilonovae – bright emissions caused by the radioactive decay of freshly synthesized nuclei that are ejected during the merger – from GW 170817. Based on comparisons of detailed kilonova spectra simulations, produced by the supercomputer "ATERUI II" at the National Astronomical Observatory of Japan, the team found that the rare elements lanthanum and cerium can reproduce the near-infrared spectral features seen in 2017. NAOJ ATERUI II Cray XC50 supercomputer located at the National Astronomical Observatory [国立天文台] (JP) (NAOJ). Until now, the existence of rare earth elements has only been hypothesized based on the overall evolution of the kilonova's brightness, but not from the spectral features. "This is the first direct identification of rare elements in the spectra of neutron star mergers, and it advances our understanding of the origin of elements in the Universe," Domoto said. "This study used a simple model of ejected material. Looking ahead, we want to factor in multi-dimensional structures to grasp a bigger picture of what happens when stars collide," Domoto added. The observed spectra of a kilonova (gray) and model spectra obtained in this study (blue). The numbers on the left show the days after the neutron star merger occurred. Dashed lines indicate the features of the absorption lines. The names of the elements that produce these features are shown in the same colors as the dashed lines. The spectra are vertically shifted for visualization.
The observed spectra around 1400 nanometers and 1800-1900 nanometers are affected by the Earth's atmosphere. © Nanae Domoto. The Astrophysical Journal. From Tohoku University[東北大学](JP): "Research News" September 30, 2022. Exploring the Plasma Loading Mechanism of Radio Jets Launched from Black Holes. [Figure: Photon spectrum of a reconnection-driven flare from M87. Parameters are $M = 6.3\times 10^{9}\,M_\odot$, $\dot{m}=5\times 10^{-5}$, $f_{l} = 1.5$, and $\xi_{hl} = 0.5$. The blue-dashed, green-dotted, and red-solid lines are for the high-energy flaring state, the low-energy flaring state, and their sum, respectively. The data points are obtained from Table A8 in EHT MWL Science Working Group et al. (2021), which is in the quiescent state. Our model predicts flares of ∼10 times higher luminosity. The black- and gray-dotted lines are sensitivity curves for HiZ-GUNDAM (2 × 10⁴ s: Yonetoku et al. 2020) and AMEGO (10⁶ s: McEnery et al. 2019), respectively.] [Figure: Photon spectrum of a reconnection-driven flare from Sgr A* (solid red line). Parameters are $M = 4.0\times 10^{6}\,M_\odot$, $\dot{m}=6\times 10^{-7}$, $f_{l} = 0.6$, and $\xi_{hl} = 0.5$. The X-ray flare data (cyan and magenta regions) are taken from Nowak et al. (2012) and Barrière et al. (2014), respectively. The black-dashed line is the sensitivity curve for FORCE (100 s: Nakazawa et al. 2018).] Galaxies, including our Milky Way, host supermassive black holes in their centers, and their masses are millions to billions of times larger than that of the Sun. Some supermassive black holes launch fast-moving plasma outflows which emit strong radio signals, known as radio jets.
Radio jets were first discovered in the 1970s, but much remains unknown about how they are produced, especially their energy source and plasma loading mechanism. Recently, the Event Horizon Telescope Collaboration uncovered radio images of a nearby black hole at the center of the giant elliptical galaxy M87. The observation supported the theory that the spin of the black hole powers radio jets, but did little to clarify the plasma loading mechanism. Now, a research team led by Tohoku University astrophysicists has proposed a promising scenario that clarifies the plasma loading mechanism of radio jets. Recent studies have claimed that black holes are highly magnetized, because magnetized plasma inside galaxies carries magnetic fields into the black hole. The neighboring magnetic energy is then transiently released via magnetic reconnection, energizing the plasma surrounding the black hole. Magnetic reconnection likewise provides the energy source for solar flares. Plasmas in solar flares give off ultraviolet light and X-rays, whereas magnetic reconnection around a black hole can cause gamma-ray emission, since the released energy per plasma particle is much higher than that for a solar flare. The present scenario proposes that the emitted gamma rays interact with each other and produce copious electron-positron pairs, which are loaded into the radio jets. This explains the large amount of plasma observed in radio jets, consistent with the M87 observations. Additionally, the scenario notes that radio signal strengths vary from black hole to black hole. For example, radio jets around Sgr A* – the supermassive black hole in our Milky Way – are too faint to be detected by current radio facilities. The scenario also predicts short-term X-ray emission when plasma is loaded into radio jets. These X-ray signals are missed by current X-ray detectors, but they will be observable by planned X-ray detectors.
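The photon-photon pair production invoked above has a well-known kinematic threshold: two photons of energies E₁ and E₂ colliding at angle θ can create an electron-positron pair only if E₁E₂(1 − cos θ) ≥ 2(mₑc²)². This is standard physics, not code from the study; a minimal check:

```python
M_E_C2_MEV = 0.511  # electron rest energy m_e c^2, in MeV

def can_pair_produce(e1_mev, e2_mev, cos_theta=-1.0):
    """Two-photon pair-production threshold condition:
    e1 * e2 * (1 - cos_theta) >= 2 * (m_e c^2)^2.
    cos_theta = -1 corresponds to a head-on collision."""
    return e1_mev * e2_mev * (1.0 - cos_theta) >= 2.0 * M_E_C2_MEV**2

# Head-on, two 0.6 MeV gamma rays clear the threshold...
print(can_pair_produce(0.6, 0.6))        # True
# ...but the same photons at a grazing 60-degree angle do not.
print(can_pair_produce(0.6, 0.6, 0.5))   # False
```

This is why the dense gamma-ray radiation field produced by reconnection flares can load jets with pairs: the gamma-ray energies comfortably exceed the MeV-scale threshold.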
"Under this scenario, future X-ray astronomy will be able to unravel the plasma loading mechanism into radio jets, a long-standing mystery of black holes," points out Shigeo Kimura, lead author of the study. Details of Kimura and his team's research were published in The Astrophysical Journal Letters on September 29, 2022.

richardmitnick, 8:44 pm on April 11, 2022

From Tohoku University[東北大学](JP): "Electron Lens Formed by Light- A New Method for Atomic-resolution Electron Microscopes"

Yuuki Uesugi, IMRAM, Tohoku University, [email protected]

Electron microscopy enables researchers to visualize tiny objects such as viruses, the fine structures of semiconductor devices, and even atoms arranged on a material surface. Focusing the electron beam down to the size of an atom is vital for achieving such high spatial resolution. However, when the electron beam passes through an electrostatic or magnetic lens, the rays of electrons exhibit different focal positions depending on the focusing angle, and the beam spreads out at the focus. Correcting this "spherical aberration" is costly and complex, meaning that only a select few scientists and companies possess electron microscopes with atomic resolution.
Researchers from Tohoku University have proposed a new method to form an electron lens that uses a light field instead of the electrostatic and magnetic fields employed in conventional electron lenses. A ponderomotive force causes electrons traveling in a light field to be repelled from regions of high optical intensity. Using this phenomenon, a doughnut-shaped light beam placed coaxially with an electron beam is expected to produce a lensing effect on the electron beam.

A conceptual illustration of the light-field electron lens. An electron beam (blue) receives the focusing force from a doughnut-shaped light beam (red) at the waist position of the light beam. The inset shows details of the waist area. ⒸYuuki Uesugi et al.

The researchers theoretically assessed the characteristics of a light-field electron lens formed using a typical doughnut-shaped light beam, known as a Bessel or Laguerre-Gaussian beam. From there, they obtained simple formulas for the focal length and spherical aberration coefficients, which allowed them to rapidly determine the guiding parameters necessary for an actual electron lens design. The formulas demonstrated that the light-field electron lens generates a "negative" spherical aberration which opposes the aberration of electrostatic and magnetic electron lenses. Combining a conventional electron lens, which has a "positive" spherical aberration, with a light-field electron lens that offsets the aberration reduced the electron beam's size to the atomic scale. This means that the light-field electron lens could be used as a spherical aberration corrector. "The light-field electron lens has unique characteristics not seen in conventional electrostatic and magnetic electron lenses," says Yuuki Uesugi, assistant professor at the Institute of Multidisciplinary Research for Advanced Materials at Tohoku University and lead author of the study.
"The realization of a light-based aberration corrector will significantly reduce installation costs for electron microscopes with atomic resolution, leading to their widespread use in diverse scientific and industrial fields," adds Uesugi. Looking ahead, Uesugi and colleagues are exploring ways to put next-generation electron microscopes using the light-field electron lens into practical use.

Results of electron trajectory calculations. An electron objective lens with a spherical aberration of 1 nanometer was corrected using a light-field electron lens with negative spherical aberration. The beam radius at the focus (z = 0) was reduced from 1 nm to the atomic scale of 0.3 nm. ⒸYuuki Uesugi et al.

Journal of Optics
01.06.2015 | Research article | Issue 1/2015 | Open Access

The cost of dialysis in low and middle-income countries: a systematic review

BMC Health Services Research > Issue 1/2015

Lawrencia Mushi, Paul Marschall, Steffen Fleßa

All authors contributed equally to the content and writing of the paper. LM was involved in the searching of the data. PM and SF reviewed and edited the paper before submission, and all authors approved the final version. Dr. Lawrencia Mushi is a lecturer in the department of health systems management in the School of Public Administration and Management at Mzumbe University, Tanzania. Dr. Paul Marschall was interim professor for development economics at the University of Bayreuth. Through his involvement in teaching and research projects he has comprehensive experience in the fields of global health and international health economics. Prof. Dr. Steffen Fleßa is professor of business administration and head of the department of health care management at the University of Greifswald. He specializes in international health economics, with 25 years of working experience in different developing countries.

The prevalence of kidney disease is increasing dramatically, and the cost of treating chronic diseases represents a leading threat to health care resources worldwide. In 2008, there were approximately 1.75 million patients worldwide who regularly received renal replacement therapy in the form of dialysis, of whom approximately 1.55 million (89 %) were on hemodialysis (HD) and approximately 197,000 (11 %) were on peritoneal dialysis (PD) [1]. Of the 197,000 patients on PD, 59 % were receiving treatment in low and middle-income countries and the remaining 41 % in high-income countries. In the case of HD, nearly 62 % of patients were being treated in high-income countries and the remaining 38 % in low and middle-income countries [1, 2].
Furthermore, the number of patients receiving dialysis treatment is growing at an annual global average rate of 7 % [3]. The main reasons for this trend are the universal ageing of populations, multi-morbidity, the higher life expectancy of treated end-stage renal disease (ESRD) patients, and increasing access of a generally younger patient population to treatment in countries in which access had previously been limited [4, 5]. Chronic kidney disease and dialysis are not only a medical problem, but also an economic one. Renal replacement therapy (RRT) consumes substantial resources, as the equipment and materials are quite expensive. In addition, dialysis requires considerable personnel input [6]. Costs are generally defined as the monetary value of the resource consumption for producing a commodity or service, frequently expressed as a composite sum of quantities of some activity multiplied by their respective prices [6]. They are generally described in four categories: direct medical costs, direct non-medical costs, indirect costs and intangible costs [7, 8]. Direct medical costs of dialysis include staffing costs, physician fees or salaries, costs of dialyzers and tubing in HD, costs of solutions and tubing in PD, costs associated with radiology, laboratory and medications, capital costs of HD machines and PD cyclers, costs of hospitalizations and costs of outpatient consultations from other specialties. Direct non-medical costs include building costs, facility utilities and other overhead costs. Intangible costs are the costs associated with pain, suffering and impairment in quality of life (QOL), as well as the value of extending life. These costs are often omitted from economic evaluations because they are difficult to quantify and might appear less immediately relevant to payers and providers [9]. Indirect costs, or productivity losses for patients and their families or caretakers, have rarely been assessed and incorporated in dialysis economic evaluations.
A few review studies have been conducted to determine the cost of dialysis around the world [1, 6, 10], but to our knowledge there is no review study conducted specifically for low and middle-income countries. The only review that explicitly included work from low and middle-income countries was published by Just et al. [11]. It reported the ratio of HD and PD costs across high-income countries and in low and middle-income countries. The authors concluded that HD is a more expensive dialysis modality than PD in high-income countries. However, they stated that research in low and middle-income countries was too limited to draw definitive conclusions. The purpose of our analysis is to provide a broad depiction of the cost of dialysis (both PD and HD) in low and middle-income countries to support health care planners, decision makers and other interested parties in making more evidence-based decisions, especially on preventive measures and cost minimization. The next section of the paper explains the methods deployed in this study, followed by the results of the study. The paper closes with a discussion of the findings and some conclusions. We conducted a systematic review of published studies reporting the cost of ESRD treatment modalities in low and middle-income countries. The methodology followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [10] as well as the Centre for Reviews and Dissemination's (CRD) guidance for undertaking reviews in health care [12]. We used the databases PubMed and Embase; other literature was added from a Google Scholar search. The literature was searched using predefined criteria and limited to articles published between 1998 and March 2013. The search terms used included variations across the following terms: PD, HD, dialysis cost, kidney failure, renal dialysis, economics, costs, developing countries, and ESRD.
Two reviewers independently screened all identified titles for possible relevance and excluded studies of low quality. For instance, we required that the methodology was economically sound, that fixed and variable costs were separated in the methodology, that quantities and prices were separated, and that the years of data collection were stated. A full article was obtained if the title was considered relevant and both reviewers came to the same assessment of sufficient quality. The reviewers then met to discuss the commonalities and discrepancies between their assessments and agreed on which articles to include in the study. Countries were categorized as low- and middle-income countries according to the DAC list of ODA recipients [13]. Studies were included if they (i) were published in English or German and contained the terms 'peritoneal dialysis' and 'renal dialysis or HD' and 'economics or health economics or cost or costs or expenditures;' and (ii) addressed 'Dialysis/economics,' or 'Renal dialysis/economics,' or 'Haemodialysis Units, Hospital/economics,' or 'Kidney failure/economics'. As we did not find a sufficient number of cost-of-illness studies, we decided to also include in this analysis the cost information provided by health economic evaluations (e.g. cost-effectiveness analyses, cost-benefit analyses) [14]. The results are presented in Table 1, recording the authors' names, the perspective of the study, the types of cost (direct/indirect), the modalities assessed (HD and PD), and the cost after conversion into US dollars. The original figures were inflated to the year 2012 and converted into equivalent 2012 international dollars (Int$) using the World Bank Purchasing Power Parity (PPP) conversion table [15], based on GDP rather than on the health sector. We present the costs of both HD and PD on an annual basis.
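The standardization steps described above can be sketched in a few lines of code. This is a minimal illustration, assuming made-up PPP conversion factors and inflation adjustments rather than the actual World Bank values used in the review:

```python
def annualize_hd(cost, unit, sessions_per_week=3):
    """Annualize an HD cost reported per session, week, month or year,
    assuming 3 sessions/week and 52 weeks/year as in the review."""
    factors = {
        "session": sessions_per_week * 52,  # 3 sessions/week x 52 weeks/year
        "week": 52,
        "month": 12,
        "year": 1,
    }
    if unit not in factors:
        raise ValueError(f"unknown reporting unit: {unit}")
    return cost * factors[unit]

def to_intl_dollars_2012(local_cost, ppp_factor, inflator):
    """Convert a local-currency cost to 2012 Int$: divide by a PPP
    conversion factor (local currency units per Int$) and apply an
    inflation adjustment to the 2012 price level. Both parameters
    here are hypothetical placeholders, not the paper's data."""
    return local_cost / ppp_factor * inflator

# Example: a cost of 150 local units per HD session, 3 sessions/week
annual_local = annualize_hd(150, "session")  # 150 * 3 * 52 = 23,400
annual_intl = to_intl_dollars_2012(annual_local, ppp_factor=20.0, inflator=1.10)
```

Costs reported as a range are handled the same way, with the range midpoint used as the per-session, per-week or per-month cost before annualizing.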
For studies that presented HD and PD costs over multiple years, or HD costs per session, per week, per month, or as ranges, the following formulas were used:

• Cost of HD per year = 3 sessions/week × 52 weeks/year × HD cost per session, OR HD cost per month × 12 months/year, OR HD cost per week × 52 weeks/year

• Cost of HD or PD per year = cost of HD or PD over several years / number of years for which the cost was presented

• Cost of HD or PD given as a range = average HD or PD cost per session × 3 sessions/week × 52 weeks/year, OR average HD or PD cost per week × 52 weeks/year, OR average HD or PD cost per month × 12 months/year

Table 1. Dialysis cost in low and middle-income countries. Columns: Author(s) and year; OECD
categorization; type of costs; cost items included; annual cost per patient [Int$ 2012].
Abu-Aisha and Elamin 2010 [16] (least developed): HD 11,054.60; PD 12,107.42
Elsharif, Elsharif et al. 2010 [33] (provider^b): HD^a 15,277.75
Li and Chow 2001 [18]: in center 5,758.19; CAPD 7,073.24
Jindali 2011 [34]: HD 4,593.43
El Matri, Elhassan et al. 2008 [17]: PD 27,339.51
(lower middle) PD 7,974.02
Suja, Anju et al. 2012 [31] (direct and indirect; items 3, 12, 14, 13, 12, 15): HD 40,078.25
Khanna 2009 [32] (items 3, 4, 7, 9, 10, 11)
Prodjosudjadi 2006 [30]: HD^a 7,112.73; CAPD 6,987.95
In center 10,504.81; CAPD 7,003.21
HD 25,794.06; PD 25,794.06
Okafor and Kankam 2012 [21]: HD 42,784.91; PD 47,970.96
In center 4,668.80; CAPD 12,450.14
Naqvi 2000 [29]: HD^c 4,003.74
Ranasinghe, Perera et al. 2011 [28] (items 1, 2, 5, 3, 4, 12, 11, 6, 7, 8, 9): in center 5,042.00; CAPD 11,672.00
Abreu, Walker et al. 2013 [27] (upper middle; items 7, 12, 15, 3, 11, 17, 18, 20)
Pacheco, Saffie et al. 2007 [20]: HD 24,461.13; PD 24,389.41
Hu, Lee et al. 1998 [26] (patient^b): in center 7,781.34; CAPD 7,781.34; APD 21,787.75
Arefzadeh, Lessanpezeshki et al. 2009 [25] (provider/patient^b; items 1, 2, 3, 4, 7, 8, 9, 10, 11, 12, 13)
Mahdavi-Mazdeh et al. 2008 [24] (items 10, 9, 11, 7, 14, 15, 2, 3, 4, 1, 5, 8)
Hooi, Lim et al. 2005 [23] (items 1, 2, 7, 10, 11, 6, 8, 12, 13): in center 8,092.59; CAPD 4,902.24; APD 16,574.25
HD 7,369.73; PD 12,633.83
Erek, Sever et al. 2004 [22] (items 3, 7, 10, 12, 14, 15, 16): HD 28,399.77; CAPD 27,889.40

AFRAN: African Association of Nephrology Congress. (a) Costs per 2 sessions; (b) study perspective assumed; (c) study modality assumed. Cost items: 1 administration; 2 cleaning services; 3 drugs and consumables; 4 electricity; 5 laundry and sterilization; 6 security; 7 staff wages; 8 waste disposal; 9 water; 10 capital expenses (buildings, machines, instruments, etc.); 11 maintenance and repair; 12 hospitalization costs; 13 personal costs to patients; 14 procedural expenses; 15 laboratory expenses; 16 outpatient follow-up; 17 transportation; 18 caregiver cost; 19 government aid; 20 productivity losses; 21 reimbursement.

We retrieved a total of 1,639 references from PubMed and Embase, and 13 references were additionally identified from a Google Scholar search. Countries were selected depending on the availability of published peer-reviewed articles. After removal of duplicates, 1,243 references remained. Initial screening yielded 85 studies for full-text review. Sixty-seven (67) of the 85 studies were excluded for a number of reasons (Fig. 1). Our systematic review included 18 peer-reviewed articles published in English and did not take into consideration grey literature. The studies reported the cost of dialysis from different low and middle-income countries. Three studies reported the cost of dialysis from multiple countries: Abu-Aisha and Elamin [16] reported costs for six countries, El Matri et al. [17] for five and Li and Chow [18] for six countries. All studies were published between 1998 and 2012; one study was published in 1998, 11 between 2001 and 2010, and six between 2011 and 2012 (Table 1). Most of the literature did not clearly state the analytic perspective. In those cases we inferred the perspective from the resource items used to calculate the dialysis cost, as indicated in Table 1.
Six articles adopted a provider perspective, two a patient perspective, and one a societal perspective. For six articles we could not determine the perspective used to estimate the costs. Furthermore, four papers mentioned that they included both direct and indirect costs, five papers included only direct costs, and the rest did not mention the types of costs included in the estimation of dialysis cost. The items included in the calculation of dialysis cost varied from one study to another. For instance, the cost items included by the majority of the studies were drugs and consumables (9), staff wages (9), hospitalization (8), capital expenses (6), laboratory expenses (6), and administration (5) (Table 1). Some studies [16, 19–21] did not describe the cost items used in the estimation of dialysis cost. In the following, we go through the articles by country. We found 12 papers on the cost of dialysis in eight different upper middle-income countries, 13 papers on the cost in seven different lower middle-income countries, one in a low-income country and five from four different least developed countries.

Fig. 1: Flow diagram for a systematic review of the literature to select studies evaluating the cost of dialysis in low and middle-income countries

Upper middle-income countries

The upper middle-income countries (e.g. Brazil, Chile, China, Iran, Malaysia, South Africa, Tunisia, and Turkey) frequently have health care systems that are more advanced than those of most other countries on the DAC list of ODA recipients. Consequently, we expected that the cost of dialysis in these countries would be well documented in the literature. However, of 53 upper middle-income countries we found data for only eight. One article gives an insight into the cost of dialysis in Turkey. Erek et al. [22] investigated the cost of renal replacement therapy (RRT) in three medical centres and one private dialysis centre.
Cost-related data accumulated over a 2-year period for 239 patients were analysed. HD costs included staff salaries (physicians, nurses, technicians, and auxiliaries), dialysis equipment, arteriovenous fistulas, specific dialysis-related expenses (dialysers, lines, etc.), drugs, outpatient follow-up and hospitalization costs. The cost of CAPD included staff salaries, procedural expenses, laboratory expenses and expenses for drugs, outpatient follow-up and hospitalization. The annual costs were Int$ 28,399.77 per patient for HD and Int$ 27,889.40 per patient for CAPD. Tunisia is an upper middle-income country with an advanced health care system. In 2005, it had a Gross National Income of 7,900 Int$ per capita (p.c.) and a total health expenditure of 175.00 Int$ p.c. Compared to some other African countries it has a high density of nephrologists, 7 per million population (p.m.p.). El Matri et al. [17] reported the annual cost of HD in Tunisia to be Int$ 11,550.94. South Africa has a well-established health care system and provides quality renal replacement therapy services. It is reported to have the largest PD population in all of Sub-Saharan Africa (SSA) and to spend Int$ 390.00 p.c. as total health expenditure. An annual cost of dialysis of Int$ 7,369.73 for HD and Int$ 12,633.83 for PD has been reported by Abu-Aisha and Elamin [16]. Similarly, El Matri et al. [17] reported the annual cost of dialysis to be Int$ 24,878.98 for HD and Int$ 34,174.38 for PD. Malaysia is another upper middle-income country with a well-known quality of health care services, at least in some parts of the country. However, little is known about the cost of dialysis in Malaysia. Hooi et al. [23] conducted a multi-centre study evaluating the economics of centre HD and CAPD in Ministry of Health hospitals.
The results showed that the cost ranged from RM 79.61 to RM 475.79 per haemodialysis treatment, with a mean cost of RM 169 per HD session, equivalent to Int$ 23,549.42 annually. The cost of CAPD treatment ranged from RM 1,400 to RM 3,200 per patient-month, with a mean of RM 2,186, equivalent to Int$ 23,431.51 annually. Similarly, Li and Chow [18] reported the cost of dialysis in Malaysia to be Int$ 8,092.59 for in-center HD, Int$ 4,902.24 for CAPD, and Int$ 16,574.25 for APD. Iran is in the same category of countries and has a quite advanced health care system, including regular dialysis services. Mahdavi-Mazdeh [24] assessed the health service costs of hemodialysis in hospital settings. The study included a total of 247 dialysis patients. Dialysis data were collected at 2 local and referral general public hospitals in Tehran, Iran, which include 28- and 20-station dialysis units performing HD sessions in 3 shifts per day. Data on lost productivity and on patients' and families' expenses for attendance at the centre were collected from a countrywide sample of 5 dialysis centres from the north, south, west, east, and central areas of Tehran. The data were collected in April and May 2007. The annual HD cost was Int$ 13,624.62. Of the total cost, medical supplies were reported to consume the largest part at 36.18 %, followed by fixed direct capital costs at 21.4 % and staff salaries, which consumed 17 % of the dialysis costs. In the same way, Arefzadeh et al. [25] assessed the cost of dialysis at the Imam Khomeini Hospital. The study included both direct and indirect costs of dialysis treatment and included a total of 63 patients in the analysis. The annual HD cost was estimated at Int$ 12,788.88 (Int$ 74 per HD session). In China, Li and Chow [18] published a paper on the cost barrier to PD in the developing world from an Asian perspective. The costs of HD and PD across Asian countries were reported.
The cost of in-center HD in China was equal to the cost of CAPD, both at Int$ 7,781.34, with APD at Int$ 21,787.75. In the same country, Hu et al. [26] conducted a study examining the difference in medical costs between renal transplantation and hemodialysis. The medical cost for maintenance HD was Int$ 35,425.89 per year for one patient, and these charges included charges for Vitamin D and erythropoietin injections. Finally, Chile and Brazil are in this group of countries. For Chile, Pacheco et al. [20] performed a cost evaluation of PD and HD. The study included both direct and indirect costs of dialysis treatment. The data were collected in August 2005. The annual costs were found to be practically the same, close to Int$ 24,000 for both HD and PD: Int$ 24,461.13 for HD and Int$ 24,389.41 for PD. For Brazil, Abreu et al. [27] evaluated the cost of PD and HD in the treatment of ESRD. Data were collected using a standardized questionnaire and one-on-one interviews. The analysis took a societal perspective, and both direct and indirect costs were included. The average total cost per patient per year was Int$ 30,079.00 for HD and Int$ 28,592.45 for PD. It consisted of direct medical-hospital costs (82.3 % for HD, 86.5 % for PD), direct non-medical costs (5.3 % for HD, 3.7 % for PD), and indirect costs (12.4 % for HD, 9.8 % for PD). Consequently, there is some evidence on the cost of dialysis in upper middle-income countries, but compared with the number of countries in this category the dearth of studies is surprising. The lowest figures were reported from South Africa and Malaysia, the highest also from Malaysia. This indicates that there are tremendous variations in cost even within one country category. In all cases the annual cost of dialysis is higher than the per-capita gross national product in the respective country.

Lower middle-income countries

The next group comprises the lower middle-income countries.
We found 13 articles from seven countries (Egypt, India, Indonesia, Namibia, Nigeria, Pakistan, Sri Lanka), whereas the OECD counts 40 countries in this category. In Sri Lanka, Ranasinghe [28] conducted a multi-centre study evaluating the cost of providing hemodialysis in developing countries. The results showed that the annual cost of hemodialysis for a patient with chronic renal failure undergoing 2–3 dialysis sessions of four hours' duration per week ranged from Int$ 5,869 to 8,804. Drug and consumables costs were reported to account for 70.4 %–84.9 % of the total costs, followed by the wages of the nursing staff at each unit (7.8 %–19.7 %). The cost of dialysis in the same setting is reported by Li and Chow [18] to be Int$ 5,042 for in-center HD and Int$ 11,672 for CAPD. Pakistan is among the lower middle-income countries; it has a per capita income of Int$ 1,260 per annum (p.a.) and is reported to allocate 0.9 % of its gross national product (GNP) to health expenditure. Naqvi calculated the cost of dialysis per patient as approximately Int$ 4,003.03 per year [29]. Li and Chow [18] reported the cost of dialysis in Pakistan to be Int$ 4,668.80 for in-centre HD and Int$ 12,450.14 for CAPD. Namibia is a relatively wealthy country, but as in other lower middle-income countries, cost is a major factor affecting the provision of dialysis treatment. Abu-Aisha and Elamin [16] reported the cost of HD in Namibia to be equal to the cost of PD, at Int$ 24,500. This is one of two exceptional cases in this review. In Nigeria, the cost of RRT was reported by Okafor and Kankam [21] to be 3.3 million naira, equivalent to Int$ 42,784.91, for HD and 3.7 million naira, equivalent to Int$ 47,970.96, for PD. El Matri et al. [17] reported the cost of HD to be Int$ 19,684.44, and Abu-Aisha and Elamin [16] reported the cost to be Int$ 36,322.25 for HD and Int$ 42,112.75 for PD. Egypt is another lower middle-income country.
According to the Egyptian renal registry, in 2008 the prevalence of ESRD was 483 per million population, and the total recorded number of ESRD patients on dialysis was 40,000. 98 % of these patients are on HD. Of the 2 % of patients being treated with PD, 1.9 % are on intermittent PD, less than 0.1 % are on CAPD, and none are on automated peritoneal dialysis (APD). The annual Ministry of Health budget for RRT is Int$ 100 million, which is about 28 % of total healthcare spending. El Matri et al. reported the cost of PD in Egypt at Int$ 7,974.02. Prodjosudjadi [30] analyzed the cost of ESRD in Indonesia. Data were collected from various nephrology centres of the Indonesian Society of Nephrology. The annual cost of dialysis treatment for twice-weekly HD, 5 h per session, was found to be Int$ 7,112.73. The costs for CAPD catheter insertion were Int$ 1,150.00, while annual costs for three to four fluid exchanges were Int$ 6,987.95. Dialysis cost in Indonesia is also reported by Li and Chow [18]; it amounts to Int$ 10,504.81 for in-center HD and Int$ 7,003.21 for CAPD. India is also categorized as a lower middle-income country. Our analysis found three studies on the costs of dialysis in India. Suja et al. [31] performed an economic evaluation of ESRD patients undergoing HD at the Amrita Institute of Medical Sciences, Kerala. A patient perspective was taken for the analysis of the cost components, and the details were collected by direct patient interview. Thirty (30) patients were included in the analysis. Direct medical costs, direct non-medical costs and indirect costs were included in the study. Other costs such as intangible costs and opportunity costs were excluded. The total cost per six months was found to be around Rs. 318,822.48, equivalent to Int$ 40,078.25 annually. Direct medical costs contributed 56 %, direct non-medical costs 20 %, and indirect costs 24 %. The costs provided by Suja et al.
are different from those reported by Khanna [32]. According to him, the cost of each HD session in India varies from Rs. 150 in government hospitals to Rs. 2,000 in some corporate hospitals, and the annual cost is equivalent to Int$ 11,663.56. Li and Chow [18] reported the cost of dialysis in India to be Int$ 3,423.79 for in-center HD and Int$ 5,057.87 for CAPD. The number of studies in the lower middle-income category is limited, and the cost of RRT in most countries is not known. The highest cost was reported from Sri Lanka and the lowest from Pakistan. However, even though Pakistan is reported with the lowest cost, this cost is still a multiple of the average annual per capita income. Unlike in the developed countries, drugs and consumables are the cost driver in these countries.

Low-income countries

The cost of dialysis was reported for only one low-income country. In Kenya, unlike other countries with a similar level of socio-economic development, all RRT modalities are available (incl. transplantation). However, the costs of hemodialysis and continuous ambulatory peritoneal dialysis are prohibitive. Abu-Aisha and Elamin [16] reported the cost of dialysis in Kenya to be Int$ 16,845.10 for HD and Int$ 12,633.83 for PD. Finally, five articles were found from four different least developed countries. Seeing that the OECD counts 48 countries in this category, we can definitely state that we know very little about the cost of dialysis in the poorest nations. For Sudan, Elsharif et al. [33] conducted a cross-sectional study to estimate the costs of kidney transplantation and compare them with the annual costs of haemodialysis. They included 111 patients, and data were collected in August 2009.
The cost analysis included the costs of medications administered by patients on dialysis, all consumed dialysis solutions, drugs utilized during the dialysis session, the transplantation operation, all medications administered after transplantation and other medical procedures, costs of laboratory and radiological investigations, costs related to healthcare staff salaries, non-medical supply costs, depreciation of installations and equipment, and depreciation of the reverse osmosis machine. The study did not include transportation costs of patients and their attendants to the dialysis centre, the cost of elapsed time, the expenses related to absence from work, costs of haemodialysis vascular access, dietary costs, or building rental costs. The annual cost of haemodialysis at 2 sessions per week was found to be SDG 15,747.68, equivalent to Int$ 15,277.75. Similarly, Abu-Aisha and Elamin [16] reported the cost of dialysis in Sudan to be equivalent to Int$ 11,054.60 for HD and Int$ 12,107.42 for PD. In Bangladesh, Li and Chow [18] reported the cost of RRT to be Int$ 5,758.19 for in-center HD and Int$ 7,073.24 for CAPD. Jindali [34] reported the cost of hemodialysis to vary between Int$ 4,000 and 5,500, with an annual average cost of Int$ 4,593.43. These costs are the lowest in this country category. However, they represent a huge burden on the health care system in this country. The Democratic Republic of the Congo (DRC) is another country in this category. It has one of the lowest Gross National Incomes worldwide, at Int$ 160 p.a. p.c. The population size, poverty scale, and decades of conflict have resulted in the lack of a cohesive and functional health system. For the DRC, El Matri et al. [17] reported the cost of dialysis to be Int$ 27,339.51 for PD. Another country in this category is Senegal.
It has an Int$ 1,202 Gross National Income per capita and spends nearly 10–12 % of government expenditure on health care (Bamgboye [ 35 ]). The cost of dialysis in Senegal is reported by Abu-Aisha and Elamin [ 16 ] to be Int$ 28,426.11 for HD and Int$ 20,000.56 for PD. In countries that still struggle to overcome underdevelopment, ESRD is a devastating medical, social and economic problem for patients and their families as well as for national health systems [ 36 ]. However, the lack of studies in least developed countries makes it difficult to understand the cost of dialysis there. The few available results indicate that RRT is out of reach for everyone without social protection, a group that constitutes the vast majority of citizens of these countries. The highest figure was reported from Senegal and the lowest from Bangladesh. Again, a great variation in cost figures was observed in this category of countries. Our survey shows that we have some knowledge about the cost of dialysis in middle-income countries, but we know hardly anything about the cost in low-income and least developed countries. The little data we have clearly indicates that the annual cost per patient is far beyond the average individual's ability to pay for these services. Dialysis is either limited to the richest minority or it must be financed within the public health service. In our review, the annual cost of HD ranged from Int$ 3,423.79 (India) to Int$ 42,784.91 (Nigeria), and PD cost ranged from Int$ 7,974.02 (Egypt) to Int$ 47,970.96 (Nigeria). Compared to other low and middle-income countries, Asian countries had the lowest CAPD costs even though PD fluid is also mostly imported.
However, this cost advantage has been linked to certain Asian patient populations, especially those with small body size and residual renal function, who can benefit from a lower number of CAPD exchanges (3 x 2-L exchanges, as compared with 4 x 2-L exchanges or more in Caucasians) [ 18 , 37 ]. Based on these studies we cannot state which intervention is more expensive. In six cases PD was more expensive than HD, but HD cost more than PD in seven countries. CAPD cost more than HD in four countries, and in Namibia and China HD cost was reported to be equal to PD and CAPD cost, respectively. However, these findings are not very reliable, for several reasons. Firstly, the resource items used in estimating these costs varied significantly. Because of these different methods of allocating and estimating cost, it was difficult to compare the cost of HD and PD from one country to another. Secondly, the papers used in this review varied greatly in quality. For instance, some papers failed to adequately describe their methods, and others failed to include costs that were relevant for their perspective. Thus, these costs might not reflect the true cost of dialysis. The cost structure also differs strongly between studies. The most important cost component is the consumption of drugs and consumables. For instance, in Malaysia, Hooi et al. [ 23 ] found that personnel consumed 18.9 % of total cost while consumables and drugs consumed 26.4 % of all costs. In Sri Lanka, Ranasinghe [ 28 ] found that drug and consumables costs accounted for 70.4 %–84.9 % of the total cost, followed by nursing staff wages at 7.8 %–19.7 %. For most resource-poor countries, consumables have to be imported, so that international prices apply. This means that a very poor country will still have almost the same cost of consumables, whereas the personnel cost will be much lower than in richer countries.
Based on this assumption, it is obvious that poor countries must have a higher share of consumables and drugs in the total cost of dialysis. At the same time, it is obvious that the relative economic burden of providing dialysis services in a poor country is much higher than in a rich country. The relation between the gross national product p.c. (g_i) and the annual direct cost of HD dialysis (c_i) of country i can be expressed (comp. Table 1 and [ 38 ]) with the linear regression equation c_i = 1.0395 × g_i + 8,716.1, with an R² of 0.1341. For this analysis and Fig. 2 the outlier from Amrita Institute of Medical Sciences, India (GNP: Int$ 1,550, HD cost Int$ 40,078.25), was excluded. The average ratio between direct dialysis cost and GNP p.c. in Int$ was 12.5 for least developed countries, 6.2 for lower middle-income countries and 2.9 for upper middle-income countries. Consequently, we can state that dialysis is relatively more expensive for poorer than for richer developing countries. (Fig. 2: Relation between GNP and HD cost. Source: Table 1 and [ 35 ]) If we assume that patients who require dialysis but do not receive it will die, the ratio between cost and GNP also expresses the incremental cost-effectiveness ratio (ICER); i.e., an ICER of one would mean that the average gross national product has to be invested in order to save one year of life. Generally, an ICER of less than one is seen as highly cost-effective [ 39 ], and an ICER of less than three as cost-effective. Based on this cost-effectiveness threshold we can state that dialysis is only cost-effective for upper middle-income countries, where it should definitely be included in the socially protected basic health care package. For all other countries it is very likely that other interventions are more cost-effective and should be included in the basic package before dialysis is supported.
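The threshold logic above can be made concrete with a short sketch (illustrative Python, not part of the original analysis; the cost-to-GNP ratios are the category averages quoted in the text, and the function names are ours):

```python
def icer_ratio(annual_cost_int, gnp_pc_int):
    """Ratio of annual direct dialysis cost to GNP per capita (both in Int$).
    Under the assumption that untreated patients die, this approximates the
    ICER expressed in GNP-per-capita units per life-year saved."""
    return annual_cost_int / gnp_pc_int

def who_class(ratio):
    """WHO-style interpretation: <1 highly cost-effective, <3 cost-effective."""
    if ratio < 1:
        return "highly cost-effective"
    if ratio < 3:
        return "cost-effective"
    return "not cost-effective"

# Average cost/GNP ratios reported in the review:
for group, ratio in [("least developed", 12.5),
                     ("lower middle-income", 6.2),
                     ("upper middle-income", 2.9)]:
    print(f"{group}: {who_class(ratio)}")
```

Only the upper middle-income average (2.9) falls under the threshold of three, matching the conclusion drawn in the text.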
For instance, treatment of diabetes is frequently not part of the social protection system in least developed countries, although its incremental cost-effectiveness ratio is less than the gross national product [ 40 ]. As inhumane as it seems, dialysis might not be the top priority in least developed countries. This result also indicates a need for policy makers and governments in low and middle-income countries to ensure that the costs of dialysis drugs and consumables are no higher than necessary, in particular for CAPD, which involves the use of expensive consumables. Governments can intervene by effectively promoting PD utilization and/or reducing or completely removing the import duty charged on PD materials. This will lower prices and might increase the supply of materials. In this way it might be more feasible for low and middle-income countries to develop both modalities, PD (especially CAPD) and HD. Even with the given uncertainty of data and the poor comparability of papers, it is obvious that dialysis is a very expensive intervention. Thus, the decision- and policy-makers of low- and middle-income countries have to discuss whether they want to include dialysis in their basic package of health care. It is likely that other interventions are more cost-effective. The ultimate goal of universal health coverage will require that dialysis be included in the basic package, but it might not be time for all countries to do this now. Further research on the cost and utility of dialysis in low- and middle-income countries is urgently needed in order to base this decision on evidence. This review has shown that economic evaluation of RRT in low and middle-income countries faces methodological challenges. Authors used different resource items and approaches in the calculation of dialysis costs, even when they used similar perspectives.
Because of this, the cost of dialysis was found to differ from one author to another, and in some countries the cost difference between HD and PD was reported to be insignificant. However, even the limited knowledge about the cost of dialysis in low- and middle-income countries clearly indicates that the costs are beyond the capability of the average individual to pay for these services. Dialysis will have to be included in national social protection schemes, or it will not be available for the majority of cases. Moreover, in order to be able to compare and transfer study results, researchers should base their studies on existing economic evaluation guidelines. The authors thank Frank Rau for his support in the literature review. Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Karopadi AN, Mason G, Rettore E, Ronco C. Cost of peritoneal dialysis and haemodialysis across the world. Nephrol Dial Transplant. 2013;28(10):2553–69. Jain AK, Blake P, Cordy P, Garg AX. Global trends in rates of peritoneal dialysis. J Am Soc Nephrol. 2012;23(3):533–44. Lysaght MJ. Maintenance dialysis population dynamics: current trends and long-term implications. J Am Soc Nephrol. 2002;13:S37–40. Grassmann A, Gioberge S, Moeller S, Brown G. ESRD patients in 2004: global overview of patient numbers, treatment modalities and associated trends.
Nephrol Dial Transplant. 2005;20(12):2587–93. El Nahas AM, Bello AK. Chronic kidney disease: the global challenge. Lancet. 2005;365(9456):331–40. Peeters P, Rublee D, Just PM, Joseph A. Analysis and interpretation of cost data in dialysis: review of Western European literature. Health Policy. 2000;54(3):209–27. Just PM, Charro FT, Tschosik EA, Noe LL, Bhattacharyya SK, Riella MC. Reimbursement and economic factors influencing dialysis modality choice around the world. Nephrol Dial Transplant. 2008;23(7):2365–73. Letsios A. The effect of the expenditure increase in the morbidity and the mortality of patients with end stage renal disease: the USA case. Hippokratia. 2011;15 Suppl 1:16–21. Blake PG, Just PM. Economics of dialysis. In: Hörl W, Koch K, Lindsay R, Ronco C, Winchester J, editors. Replacement of renal function by dialysis. Netherlands: Springer; 2004. p. 1455–86. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9. Just PM, Riella MC, Tschosik EA, Noe LL, Bhattacharyya SK, Charro F. Economic evaluations of dialysis treatment modalities. Health Policy. 2008;86(2–3):163–80. Akers J, Aguiar-Ibáñez R, Baba-Akbari Sari A. CRD's guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination (CRD); 2009. OECD. Development co-operation report 2012: lessons in linking sustainability and development. OECD Publishing; 2012. Drummond MF, Sculpher MJ, Torrance GW, O'Brien BJ, Stoddart GL. Methods for the economic evaluation of health care programmes. 3rd ed. Oxford: Oxford University Press; 2005. Conversion tables. [http://data.worldbank.org/indicator/PA.NUS.PPPC.RF] Abu-Aisha H, Elamin S. Peritoneal dialysis in Africa. Perit Dial Int. 2010;30(1):23–8.
El Matri A, Elhassan E, Abu-Aisha H. Renal replacement therapy resources in Africa. Arab Journal of Nephrology and Transplantation. 2008;1(1):9–14. Li PK, Chow KM. The cost barrier to peritoneal dialysis in the developing world: an Asian perspective. Perit Dial Int. 2001;21 Suppl 3:S307–13. Chow S, Wong F. Health-related quality of life in patients undergoing peritoneal dialysis: effects of a nurse-led case management programme. Journal of Advanced Nursing. 2010;8:1780–92. Pacheco A, Saffie A, Torres R, Tortella C, Llanos C, Vargas D, et al. Cost/utility study of peritoneal dialysis and hemodialysis in Chile. Perit Dial Int. 2007;27(3):359–63. Okafor C, Kankam C. Future options for the management of chronic kidney disease in Nigeria. Gender Medicine. 2012;9(1):S86–93. Erek E, Sever MS, Akoglu E, Sariyar M, Bozfakioglu S, Apaydin S, et al. Cost of renal replacement therapy in Turkey. Nephrology. 2004;9(1):33–8. Hooi LS, Lim TO, Goh A, Wong HS, Tan CC, Ahmad G, et al. Economic evaluation of centre haemodialysis and continuous ambulatory peritoneal dialysis in Ministry of Health hospitals, Malaysia. Nephrology. 2005;10(1):25–32. Mahdavi-Mazdeh M, Zamani M, Zamyadi M, Rajolani H, Tajbakhsh K, Heidary Rouchi A, et al. Hemodialysis cost in Tehran, Iran. Hemodial Int. 2008;12(4):492–8. Arefzadeh A, Lessanpezeshki M, Seifi S. The cost of hemodialysis in Iran. Saudi J Kidney Dis Transpl. 2009;20(2):307–11. Hu R-H, Lee P-H, Tsai M-K, Lee C-Y. Medical cost difference between renal transplantation and hemodialysis. Transplantation Proceedings. 1998;30(7):3617–20. Abreu MM, Walker DR, Sesso RC, Ferraz MB. A cost evaluation of peritoneal dialysis and hemodialysis in the treatment of end-stage renal disease in Sao Paulo, Brazil. Perit Dial Int. 2013;33(3):304–15. Ranasinghe P, Perera YS, Makarim MFM, Wijesinghe A, Wanigasuriya K.
The costs in provision of haemodialysis in a developing country: a multi-centered study. BMC Nephrology. 2011;12(1):42. Naqvi SAJ. Nephrology services in Pakistan. Nephrol Dial Transplant. 2000;15:769–71. Prodjosudjadi W. Incidence, prevalence, treatment and cost of end-stage renal disease in Indonesia. Ethnicity & Disease. 2006;16(Spring):14–6. Suja A, Anju V, Peeyush P, Anju R, Neethu J, Saraswathy R. Economic evaluation of end stage renal disease patients undergoing hemodialysis. Journal of Pharmacy and Bioallied Sciences. 2012;4(2):107. Khanna U. The economics of dialysis in India. Indian Journal of Nephrology. 2009;19(1):1–4. Elsharif ME, Elsharif EG, Gadour WH. Costs of hemodialysis and kidney transplantation in Sudan: a single center experience. Iranian Journal of Kidney Diseases. 2010;4(4):282–4. Jindali MR. Health policy for renal replacement therapy in developing countries. Journal of Health Care, Science and Humanities. 2011;1(1):41–54. Bamgboye EL. End-stage renal disease in sub-Saharan Africa. Ethnicity & Disease. 2006;16(Spring):5–9. Kerr M, Bray B, Medcalf J, O'Donoghue DJ, Matthews B. Estimating the financial cost of chronic kidney disease to the NHS in England. Nephrol Dial Transplant. 2012. Cheng I. Peritoneal dialysis in Asia. Perit Dial Int. 1996;16 Suppl 1:S381–5. World Bank. World Development Indicators 2014. Washington: The World Bank Press; 2014. WHO. Scaling up action against noncommunicable diseases: how much will it cost? Geneva: World Health Organization; 2011. Flessa S. Costing of diabetes mellitus in Cambodia. Phnom Penh: Ministry of Health; 2014.
Lawrencia Mushi, Paul Marschall, Steffen Fleßa. BMC Health Services Research.
Evaluation of rational nonsteroidal anti-inflammatory drugs and gastro-protective agents use; association rule data mining using outpatient prescription patterns Oraluck Pattanaprateep, Mark McEvoy, John Attia & Ammarin Thakkinstian Nonsteroidal anti-inflammatory drugs (NSAIDs) and gastro-protective agents should be co-prescribed following a standard clinical practice guideline; however, adherence to this guideline in routine practice is unknown. This study applied an association rule model (ARM) to estimate rational NSAIDs and gastro-protective agents use in an outpatient prescriptions dataset. A database of hospital outpatients from October 1st, 2013 to September 30th, 2015 was searched for any of the following drugs: oral antacids (A02A), peptic ulcer and gastro-oesophageal reflux disease (GORD) drugs (A02B), and anti-inflammatory and anti-rheumatic products, non-steroids, or NSAIDs (M01A). Data including patient demographics, diagnoses, and drug utilization were also retrieved. An association rule model was used to analyze co-prescription within the same drug class (i.e., prescriptions within A02A-A02B, M01A) and between drug classes (A02A-A02B & M01A) using the Apriori algorithm in R. The lift value was calculated as the ratio of confidence to expected confidence, giving information about the association between drugs in a prescription. We identified a total of 404,273 patients with 2,575,331 outpatient visits over 2 fiscal years. Mean age was 48 years and 34% were male. Among the A02A, A02B and M01A drug classes, 12 association rules were discovered with support and confidence thresholds of 1% and 50%. The highest lift was between Omeprazole and Ranitidine (340 visits); about one-third of these visits (118) were prescriptions to non-GORD patients, contrary to guidelines. Another finding was the concomitant use of COX-2 inhibitors (Etoricoxib or Celecoxib) and PPIs.
35.6% of these were for patients aged less than 60 years with no GI complication and no Aspirin, inconsistent with guidelines. Around one-third of the occasions on which these medications were co-prescribed were inconsistent with guidelines. With the rapid growth of health datasets, data mining methods may help assess quality of care and concordance with guidelines and best evidence. Nonsteroidal anti-inflammatory drugs (NSAIDs) are used to relieve pain and inflammation. However, conventional NSAIDs (e.g., Diclofenac, Meloxicam, Ibuprofen) can induce gastrointestinal (GI) upset and adverse events, especially peptic ulceration [1]. To reduce this risk, gastro-protective agents are commonly co-prescribed with NSAIDs; alternatively, cyclooxygenase (COX)-2 inhibitors (e.g., Etoricoxib, Celecoxib) are used, a newer generation of NSAIDs claimed to cause fewer gastrointestinal adverse events [2,3,4]. Co-prescription of COX-2 inhibitors with gastro-protective agents is recommended only in patients at high risk of GI disease, such as elderly patients (aged ≥ 60 years), those using antiplatelet agents (e.g., Aspirin), or patients with a history of GI events [2, 5]. Commonly used gastro-protective agents are histamine H2-receptor antagonists (H2RAs, e.g., Ranitidine) and proton pump inhibitors (PPIs, e.g., Omeprazole, Pantoprazole, Esomeprazole, Lansoprazole). The H2RAs competitively antagonize the effects of histamine at H2-receptors in the stomach to reduce the amount and concentration of gastric acid. PPIs suppress stomach acid secretion by specific inhibition of the H+/K+-ATPase system found at the secretory surface of gastric parietal cells [6,7,8,9]. Concomitant use of H2RAs and PPIs is recommended only in the treatment of gastro-oesophageal reflux disease (GORD) [10, 11]. In the past, identification of poor-quality drug use in the hospital was not easily done because of the volume and complexity of prescription data.
In our institution (Ramathibodi Hospital, Bangkok, Thailand), data warehouses have been available since 2014, and there has been interest in using these to drive quality improvement in health care practice and service delivery. The data include drug prescriptions, demographic data, diagnoses, laboratory tests, imaging, etc., and are routinely extracted from hospital information systems (HIS). Currently, a wide variety of data mining algorithms (i.e., techniques for big data analysis) are available; they fall into two main categories: supervised and unsupervised learning [12]. Supervised learning algorithms produce a model, using classification or regression, that can predict the response values for a particular outcome or behavior of interest. Unsupervised learning algorithms describe the form and hidden structure of data, using methods such as clustering, anomaly detection, and association rule mining (ARM); ARM has been applied for detecting co-prescription patterns in many studies [13,14,15,16,17]. The Apriori algorithm is a classical ARM technique based on the principle of frequent pattern mining [18,19,20,21]. First, candidate sets are generated to identify items that occur with a frequency exceeding a pre-specified threshold (defined as the support measure). Second, association rules are derived from the conditional probabilities between pairs of item sets; a rule is kept if its conditional probability exceeds a user-defined threshold (called the confidence measure). Our study aimed to assess associations within the gastro-protective agents (H2RAs and PPIs) and the NSAIDs (including COX-2 inhibitors), as well as between these two drug classes, using ARM. Once associations were identified, prescription patterns were explored for congruence with guidelines.
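The two-step Apriori procedure described above (frequent-itemset generation against a support threshold, then rule derivation against a confidence threshold) can be sketched in a few lines. This is an illustrative Python toy, whereas the study itself used the arules implementation in R; the drug codes and visit baskets below are hypothetical:

```python
from itertools import combinations

# Toy visit "baskets" of drug codes (hypothetical data).
transactions = [
    {"OMPZ", "XAND"}, {"OMPZ", "XAND"}, {"OMPZ", "ASA"},
    {"OMPZ", "ASA"}, {"OMPZ", "NAPR"}, {"XAND"},
]

def apriori_frequent(transactions, min_support):
    """Step 1: grow candidate itemsets level by level, keeping only those
    whose support (fraction of transactions containing them) meets the threshold."""
    n = len(transactions)
    candidates = {frozenset([item]) for t in transactions for item in t}
    frequent, k = {}, 1
    while candidates:
        level = {}
        for c in candidates:
            s = sum(1 for t in transactions if c <= t) / n
            if s >= min_support:
                level[c] = s
        frequent.update(level)
        keys = list(level)
        # Join step: build (k+1)-item candidates from frequent k-itemsets.
        candidates = {a | b for a in keys for b in keys if len(a | b) == k + 1}
        k += 1
    return frequent

def derive_rules(frequent, min_confidence):
    """Step 2: split each frequent itemset into antecedent -> consequent and
    keep rules whose confidence = support(X∪Y) / support(X) meets the threshold."""
    rules = []
    for itemset, s in frequent.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, r)):
                conf = s / frequent[lhs]
                if conf >= min_confidence:
                    rules.append((lhs, itemset - lhs, s, conf))
    return rules

freq = apriori_frequent(transactions, min_support=0.3)
rules = derive_rules(freq, min_confidence=0.5)
```

On this toy data the rule XAND ➔ OMPZ survives (confidence 2/3), while OMPZ ➔ XAND is pruned (confidence 0.4), illustrating that confidence is asymmetric even though support is not.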
An electronic database of outpatient records at Ramathibodi Hospital between October 1st, 2013 and September 30th, 2015 was extracted from the hospital data warehouse, focusing on H2RAs and PPIs (A02A and A02B codes) and NSAIDs and COX-2 inhibitors (M01A). Only fields for patient demographics, prescriptions, drug utilization, and diagnoses were retrieved. Two steps of data manipulation and analysis were then performed using R software version 3.3.0 in RStudio® version 0.99.902 (RStudio Inc., Boston, MA, USA). First, the data frame was constructed; then the data were analyzed to identify association rules and evaluate rational drug use. Data retrieval and manipulation Five tables in the hospital data warehouse were retrieved, as follows: 1) physician prescriptions, 2) master drug lists, 3) drug utilization, 4) diagnosis data, and 5) patient demographic data. The study protocol was approved by the ethics committee of Ramathibodi Hospital without requirement of consent for participation. Under the hospital's rules, the data are not publicly available, and thus we could not provide or share individual patient data. The physician prescriptions over 2 fiscal years were retrieved. These data had already been cleaned through an "Extract, Transform, Load" (ETL) process while being loaded into the data warehouse on a daily basis [22]. Master drug lists from the data warehouse were also loaded and merged in RStudio®. To manipulate the data frame, R commands were constructed and run to select ambulatory or outpatient prescriptions with Anatomical Therapeutic Chemical (ATC) classification system codes A02A: Antacids; A02B: Drugs for peptic ulcer and GORD; and M01A: Anti-inflammatory and anti-rheumatic products, non-steroids, or NSAIDs (see Table 1). Table 1 Drug code of 1A and 4 L drugs and their names Two years of data were combined, and drug strength and dosage were ascertained from the leftmost 4 characters of the drug code substring, e.g.
IBUP1T- (Ibuprofen 200 mg tablet), IBUP2T- (Ibuprofen 400 mg tablet) and IBUP-S- (Ibuprofen 100 mg/5 ml syrup) were transformed to the same code, IBUP, for Ibuprofen. HN (patient's hospital number) and date were joined to create HNDate, representing the visit date. The data frame was reshaped from long to wide format, and records with only one drug item per patient per day were excluded. Drug utilization, diagnosis data, and patients' demographic data were also retrieved from tables in the hospital data warehouse to obtain each prescription's dose and frequency, the primary/secondary diagnosis of each visit (coded with the International Classification of Diseases, Tenth Revision, ICD-10), date of birth (to calculate age), and gender. All data were merged with physician prescriptions by HNDate. Patient age and number of OPD visits/person/year were described using mean (SD), together with the numbers of males and of diagnoses with gastrointestinal complications, defined by ICD-10 codes K20-K29.9, K30-K38.9 and K90-K93.8. The Apriori algorithm with ARM was applied to assess patterns of association within the same drug class (i.e., gastro-protective agents, NSAIDs) and between different drug classes (i.e., gastro-protective agents and NSAIDs). Association rules were derived from the prescription data, aiming to detect prescribing patterns of NSAIDs and gastro-protective agents for individual patients in the same visit, with detail as follows: Let I be a set of prescribed drug items (i.e., NSAIDs and gastro-protective agents) listed in the database and P = {P 1, P 2,…, P n} be a set of n prescriptions, where P i (1 ≤ i ≤ n) is the set of drugs in prescription i. Given X and Y as non-overlapping sets of drug items (i.e., X ∩ Y = ∅), ARM measures how often X (called the antecedent, or left-hand side, LHS) and Y (called the consequent, or right-hand side, RHS) appear together in the same prescription (P i).
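The preparation steps just described (truncating drug codes so that different strengths map to one code, keying prescriptions by HN and date, and excluding single-drug visits) can be sketched as follows. This is a minimal Python illustration with hypothetical rows; the study performed the equivalent steps in RStudio®:

```python
from collections import defaultdict

# Hypothetical prescription rows: (hn, date, drug_code).
rows = [
    ("001", "2015-01-05", "OMPZ1C"), ("001", "2015-01-05", "XAND1T"),
    ("002", "2015-01-05", "IBUP1T"),
    ("003", "2015-02-01", "IBUP2T"), ("003", "2015-02-01", "OMPZ1C"),
]

def to_baskets(rows):
    """Group prescriptions into one basket per HN+date visit, truncating drug
    codes to their leading 4 characters (dropping strength/dosage form), and
    exclude visits with only one distinct drug."""
    baskets = defaultdict(set)
    for hn, date, code in rows:
        baskets[(hn, date)].add(code[:4])
    return {visit: drugs for visit, drugs in baskets.items() if len(drugs) > 1}

baskets = to_baskets(rows)
```

Note that IBUP1T and IBUP2T collapse into the single code IBUP, so a visit prescribing two strengths of the same drug would still count as one drug item.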
The association rules use three probability measures: support, confidence, and lift, without adjusting for the derivation of multiple sets of drug items. Support is defined as the probability that a prescription in P contains both X and Y, i.e., support(X➔Y) = P(X∪Y). Confidence is defined as the conditional probability of having Y given X: confidence(X➔Y) = P(Y|X). Lift is the deviation of the support parameter from what would be expected if X and Y were independent: lift(X➔Y) = P(X∪Y) / (P(X) × P(Y)); lift values of <1, >1, and 1 refer to negative, positive, and independent associations between X and Y, respectively [20, 21, 23]. The Apriori algorithm in R was used for analyzing the ARM parameters with the command [24] $$ \mathrm{Apriori}\ \left(\mathrm{data},\mathrm{parameter}=\mathrm{NULL},\mathrm{appearance}=\mathrm{NULL},\mathrm{control}=\mathrm{NULL}\right) $$ Based on the ARM results, related data in three tables, including drug utilization, diagnosis data, and patients' demographic data, were explored to evaluate the rational use of two concomitant drugs. In the first group - concomitant use of H2RAs and PPIs - the dose and frequency appearing in each prescription, along with clinic data, were cross-checked for drug interaction or over-dosage. The number and percentage of prescriptions with any concomitant use of H2RAs and PPIs were compared against a GORD diagnosis (recorded as primary/secondary diagnosis). In the second group - concomitant use of COX-2 inhibitors and PPIs - patients' characteristics and the number and percentage of prescriptions by age group, co-therapy with Aspirin, and GI complication were described. A total of 2,575,331 outpatient visits over 2 fiscal years were retrieved. The mean age and number of OPD visits were 48.4 (SD = 21.4) years and 4.7 (SD = 4.4) per person per year, respectively, and the majority were female (66%). The percentages with GI complications and arthritis were 1.80% and 0.74%, respectively.
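The three measures defined in the Methods above can be illustrated numerically. The sketch below is in Python (the study computed these with the arules package in R), and the marginal visit counts are hypothetical, not values from Table 2:

```python
def rule_measures(n_total, n_x, n_y, n_xy):
    """Support, confidence and lift for a rule X -> Y, computed from counts:
    n_x and n_y are the numbers of visits containing X and Y respectively,
    and n_xy is the number of visits containing both."""
    support = n_xy / n_total             # P(X ∪ Y)
    confidence = n_xy / n_x              # P(Y | X)
    lift = confidence / (n_y / n_total)  # = P(X ∪ Y) / (P(X) * P(Y))
    return support, confidence, lift

# Hypothetical counts: 6,168 visits, X in 400 of them, Y in 900, both in 340.
s, c, l = rule_measures(6168, 400, 900, 340)
# l > 1 indicates a positive association between X and Y.
```

Because lift divides confidence by the base rate of Y, a rule can have high confidence yet a lift near 1 when Y is prescribed in most visits anyway.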
Among them, 134,285 prescriptions included at least one oral antacid (A02A), drug for peptic ulcer and GORD (A02B), or NSAID (M01A) on the same day. A total of 128,117 (95.4%) observations were omitted because only one drug was prescribed per visit, leaving 6,168 observations for ARM analysis. The ARM was applied starting with a threshold of 1% for both the support and confidence parameters, adjusting the thresholds until association rules were found. Twelve rules were identified that passed the thresholds of 1% and 50% for the support and confidence parameters, respectively (see Table 2). The strongest support parameter (0.2244) was between Aspirin and Omeprazole. The strongest confidence parameter (0.9738) was between Naproxen and Omeprazole. Lift values of <1, >1, and 1 refer to negative, positive, and independent associations between antecedent and consequent, respectively; the larger the value, the stronger the association. The most significant association was between Omeprazole and Ranitidine, with the highest lift of 7.6153. The remaining rules showed weaker associations between other drugs and Omeprazole. Table 2 LHS, RHS, support, confidence and lift of 12 rules Among these 12 association rules, the numbers of prescriptions with concomitant use for the first and second lifts (i.e., H2RAs and PPIs, and COX-2 inhibitors and PPIs) were then calculated. For H2RAs and PPIs (i.e., Ranitidine and Omeprazole), the support and number of observations were 0.0552 and 6,168, respectively. As a result, 340 (0.0552 × 6,168) visits were prescribed Omeprazole and Ranitidine on the same day. Since Omeprazole and Ranitidine are in the same drug class (A02B) for the treatment of GORD, the rationality of concomitant drug use in these 340 visits was explored; see Table 3. Drug dose and frequency from each prescription were retrieved.
Among these, one patient was prescribed both drugs by different clinics, 12 patients were prescribed Omeprazole and Ranitidine by the same physician to be taken at the same meals, while the rest received the two drugs from one physician but for different meals. All GI-related diagnoses were further explored among these 340 patients; see Table 4. The results indicate that in 118 visits, or one-third of these patients, the combination was not prescribed for GORD. Table 3 Drug dose and frequency of Omeprazole (OMPZ) and Ranitidine (XAND) Table 4 Diagnoses related to GI complications among visits prescribed Omeprazole and Ranitidine on the same day, frequency (%) In the second group, we looked at concomitant use of COX-2 inhibitors with PPIs, a combination that is indicated only in elderly patients or those who have GI complications or are taking Aspirin. There were a total of 828 visits in which COX-2 inhibitors (i.e., Etoricoxib or Celecoxib) were prescribed with Omeprazole in the same visit. Of these, 295 (35.6%) visits (Table 5) did not comply with the clinical practice guidelines, i.e., they were for patients aged less than 60 years with no GI complication and no Aspirin taken. Table 5 Category of visits prescribed COX-2 inhibitors (Etoricoxib or Celecoxib) with Omeprazole, frequency (%) The study applied ARM to find association rules in prescriptions containing any two drugs from these groups on the same day, i.e., NSAIDs and gastro-protective agents. Data were manipulated and analyzed with the Apriori algorithm in RStudio®. Twelve rules were found with >1% support and >50% confidence thresholds, revealing two non-guideline prescription patterns of NSAIDs and gastro-protective agents in the hospital data warehouse, i.e., Omeprazole with Ranitidine, and COX-2 inhibitors with Omeprazole. The overwhelming majority of prescriptions (95%) were for single agents only, indicating that rational drug prescription was occurring the majority of the time.
However, the remaining 5% still represented over 6,000 prescriptions, and these needed more analysis to ascertain whether they complied with clinical practice guidelines. Among scripts with more than one drug, the strongest association was between Omeprazole and Ranitidine, both of which are in the same drug group (A02B). Although their pharmacological pathways are different [5], most physicians prescribe one or the other. However, evidence from a few studies indicates that taking these two drugs at the same meal can improve gastric acid control [10, 11]. The second prescription pattern was between COX-2 inhibitors and Omeprazole. There is no cost-effectiveness study directly supporting the benefits of this combination strategy [25], and PPIs are not clinically indicated in combination with COX-2 inhibitors, except for high-GI-risk patients [5]. This study showed that ARM can detect potentially poor-quality drug prescription patterns in a hospital data warehouse. Applying ARM to the routine practice of drug prescribing should support and lead to health care improvement. ARM has also proved beneficial in other clinical studies: to identify risk patterns for type 2 diabetes [26], analyze the records of patients diagnosed with essential hypertension [27], identify interesting patterns in infection control [28], find disease association rules in the national health insurance research database in Taiwan [29], and identify product-multiple adverse event associations in the US Vaccine Adverse Event Reporting System (VAERS) [30]. Apriori is one algorithm for generating association rules; other ARM algorithms include the Eclat and FP-Growth algorithms [31, 32]. This study used data in a hospital data warehouse to explore the prescription patterns of two drug groups. The method uses an existing algorithm (Apriori) within an open source package (R) for deriving the association rules.
Twelve rules were found, representing around one-third of visits (i.e., 118 of 340 who were prescribed Omeprazole with Ranitidine and 295 of 828 who were prescribed Omeprazole with Etoricoxib or Celecoxib), where prescriptions were potentially not congruent with guidelines. This Apriori algorithm should be implemented in hospital monitoring systems in order to detect guideline-discordant use of medicines and provide routine feedback to prescribers for increased patient safety. Momeni M, Katz JD. Mitigating GI risks associated with the use of NSAIDs. Pain Med. 2013;14:S18–22. Masclee GMC, Valkhoff VE, Soest EM, Mazzaglia G, Molokhia M, Trifiro G, et al. Cyclo-oxygenase-2 inhibitors or nonselective NSAIDs plus gastroprotective agents: what to prescribe in daily clinical practice? Aliment Pharmacol Ther. 2013;38(2):178–89. Satoh H, Amagase K, Takeuchi K. Mucosal protective agents prevent exacerbation of NSAID-induced small intestinal lesions caused by antisecretory drugs in rats. J Pharmacol Exp Ther. 2014;348:227–35. Targownik LE, Thomson PA. Gastroprotective strategies among NSAID users: guidelines for appropriate use in chronic illness. Can Fam Physician. 2006;52:1100–5. Cryer B. A COX-2-specific inhibitor plus a proton-pump inhibitor: is this a reasonable approach to reduction in NSAIDs' GI toxicity? Am J Gastroenterol. 2006;101:711–3. Huang J, Hunt RH. Pharmacological and pharmacodynamic essentials of H2-receptor antagonists and proton pump inhibitors for the practicing physician. Best Pract Res Cl Ga. 2001;15(3):355–70. Schubert ML, Peura DA. Reviews in basic and clinical gastroenterology: control of gastric acid secretion in health and disease. Gastroenterology. 2008;134:1842–60. Laine L, Takeuchi K, Tarnawski A. Reviews in basic and clinical gastroenterology: gastric mucosal defense and cytoprotection: bench to bedside. Gastroenterology. 2008;135:41–60. Aihara T, Nakamura E, Amagase K, Tomita K, Fujishita T, Furutani K, et al.
Pharmacological control of gastric acid secretion for the treatment of acid-related peptic disease: past, present, and future. Pharmacol Ther. 2003;98:109–27. Abdul-Hussein M, Freeman J, Castell D. Concomitant administration of a histamine 2 receptor antagonist and proton pump inhibitor enhances gastric acid suppression. Pharmacotherapy. 2015;35(12):1124–9. Katz PO, Tutuian R. Histamine receptor antagonists, proton pump inhibitors and their combination in the treatment of gastro-oesophageal reflux disease. Best Pract Res Cl Ga. 2001;15(3):371–84. Han J, Kamber M. Data mining: concepts and techniques. 2nd ed. CA: Morgan Kaufmann Publisher; 2006. Chen TJ, Chou LF, Hwang SJ. Application of a data-mining technique to analyze coprescription patterns for antacids in Taiwan. Clin Ther. 2003;25:2453–63. He Y, Zheng X, Sit C, Loo WT, Wang ZY, Xie T, et al. Using association rules mining to explore pattern of Chinese medicinal formulae (prescription) in treating and preventing breast cancer recurrence and metastasis. J Transl Med. 2012;10(Suppl 1):S12. Yang DH, Kang JH, Park YB, Park YJ, Oh HS, Kim SB. Association rule mining and network analysis in oriental medicine. PLoS One. 2013;8(3):e59241. doi:10.1371/journal.pone.0059241. Yang PR, Shih WT, Chu YH, Chen PC, Wu CY. Frequency and co-prescription pattern of Chinese herbal products for hypertension in Taiwan: a cohort study. BMC Complement Altern Med. 2015;15:163. Yoosofan A, Ghajar FG, Ayat S, Hamidi S, Mahini F. Identifying association rules among drugs in prescription of a single drugstore using Apriori method. Intell Inf Manag. 2015;7:253–9. Lantz B. Machine learning with R. 2nd ed. Birmingham: Packt Publishing; 2015. Hahsler M, Chelluboina S, Hornik K, Buchta C. The arules R-package ecosystem: analyzing interesting patterns from large transaction data sets. J Mach Learn Res. 2011;12:2021–5. Agrawal R, Imieliński T, Swami A. Mining association rules between sets of items in large databases.
In: ACM SIGMOD Record 22(2). ACM; 1993. p. 207–16. Agrawal R, Srikant R. Fast algorithms for mining association rules in large databases. In: Proceedings of the 20th International Conference on Very Large Data Bases (VLDB). 1994;1215:478–99. Reeves LL. A manager's guide to data warehousing. IN: Wiley Publishing; 2009. Hahsler M, Bettina G, Hornik K. Arules - a computational environment for mining association rules and frequent item sets. J Stat Softw. 2005;14(15):1–25. Package 'arules'. April 14, 2016. Version 1.4–1. Date 2016–04-10. URL: https://cran.r-project.org/web/packages/arules/arules.pdf (Accessed 7th Jul 2016). Brown TJ, Hooper L, Elliott RA, Payne K, Webb R, Roberts C, et al. A comparison of the cost-effectiveness of five strategies for the prevention of non-steroidal anti-inflammatory drug-induced gastrointestinal toxicity: a systematic review with economic modelling. Health Technol Assess. 2006;10(38):127–64. Ramezankhani A, Pournik O, Shahrabi J, Azizi F, Hadaegh F. An application of association rule mining to extract risk pattern for type 2 diabetes using Tehran lipid and glucose study database. Int J Endocrinol Metab. 2015;13(2):e25389. Shin AM, Lee IH, Lee GH, Park HJ, Park HS, Yoon KI, et al. Diagnostic analysis of patients with essential hypertension using association rule mining. Healthc Inform Res. 2010;16(2):77–81. Brossette S, Sprague AP, Hardin JM, Waites KB, Jones WT, Moser SA. Association rules and data mining in hospital infection control and public health surveillance. JAMIA. 1998;5:373–81. Kuo RJ, Shih CW. Association rule mining through the ant colony system for national health insurance research database in Taiwan. Int J Comput Math. 2007;54:1303–18. Wei L, Scott J. Association rule mining in the US vaccine adverse event reporting system (VAERS). Pharmacoepidem Dr S. 2015;24:922–33. Data Mining Algorithms In R. URL: http://en.wikibooks.org/w/index.php?oldid=2579992 (Accessed 11th Apr 2017). Hunyadi D.
Performance comparison of Apriori and FP-Growth algorithms in generating association rules. In: Proceedings of the European Computing Conference. Paris, France; 2011. Under our hospital's rules, the data are not publicly available, and thus we could not provide or share individual patient data. Section for Clinical Epidemiology and Biostatistics, The Faculty of Medicine Ramathibodi Hospital, Mahidol University, 270 Rama VI Rd., Ratchathewi, Bangkok, 10400, Thailand Oraluck Pattanaprateep & Ammarin Thakkinstian Centre for Clinical Epidemiology and Biostatistics, and Hunter Medical Research Institute, The University of Newcastle, Newcastle, NSW, Australia Mark McEvoy & John Attia OP contributed to the conception, data acquisition, and analysis. AT participated in the design and interpretation of the results. MM and JA participated in discussing the results and revising the manuscript critically for important intellectual content. All authors read and approved the final manuscript. Correspondence to Oraluck Pattanaprateep. The study protocol was reviewed and approved by the ethics committee of Ramathibodi Hospital without requirement of consent for participation. Pattanaprateep, O., McEvoy, M., Attia, J. et al. Evaluation of rational nonsteroidal anti-inflammatory drugs and gastro-protective agents use; association rule data mining using outpatient prescription patterns. BMC Med Inform Decis Mak 17, 96 (2017) doi:10.1186/s12911-017-0496-3 Keywords: Association rule; Apriori algorithm; Prescription patterns; Rational drug use; Nonsteroidal anti-inflammatory drugs; Gastro-protective agents
High Power Laser Science and Engineering, Volume 2, e7, 01 July 2014. Electromagnetic radiation from laser wakefields in underdense plasma. Yue Liu (a1), Wei-Min Wang (a2) and Zheng-Ming Sheng (a1) (a3) 1Key Laboratory for Laser Plasmas (MoE) and Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai, China 2Beijing National Laboratory of Condensed Matter Physics, Institute of Physics, CAS, Beijing, China 3Department of Physics, SUPA, Strathclyde University, Rottenrow 107, Glasgow, UK The online version of this article is published within an Open Access environment subject to the conditions of the Creative Commons Attribution licence. DOI: https://doi.org/10.1017/hpl.2014.6 Published online by Cambridge University Press: 31 March 2014 Figure 1. (Color online) The electromagnetic radiation waveform (red solid lines) in the wake at time $t=100T_L$ and $t=150T_L$; snapshots of the transverse current densities (blue dashed lines) and the electron densities (black dotted lines) are also plotted, where $a=0.5$, $n_0=0.0001$. The filled red region indicates the incident laser. Figure 2. (Color online) The temporal waveforms of the radiation pulses observed at a fixed position at the right side of the simulation box in vacuum for three different laser intensities with initial plasma density $n_0=0.001$ (a) and the corresponding frequency spectra (b). Frequency spectra of the radiation produced with different plasma densities at a given laser amplitude $a=0.5$ are also plotted in (c). Figure 3.
(Color online) The temporal evolution of the electromagnetic radiation emitted in the wake at three different laser intensities (a–c) and the corresponding frequency spectra (d), where $n_0=0.02$. Figure 4. (Color online) The amplitude of the first pulse normalized by $m\omega _Lc/e$ in the wake radiation (a) and its central frequency (b) versus the normalized laser strength $a$, where $n_0=0.02$. Figure 5. (Color online) 2D simulation results of wake radiation generation: (a) spatial distribution of electromagnetic radiation, (b) waveform of the radiation on the laser axis, and (c) frequency spectrum of the wake radiation, where $a=0.5$ and $n_0=0.02$. Figure 6. The trajectory of a typical electron during and after the laser interaction in phase space (a) and the temporal evolution of its transverse momentum (b), where $a=0.5$, $n_0=0.02$. The inset in (b) shows the transverse momentum at a later time after the laser interaction. It is demonstrated by simulations and analysis that a wakefield driven by an ultrashort intense laser pulse in underdense plasma can emit tunable electromagnetic radiation along the laser propagation direction. The profile of this kind of radiation is closely associated with the structure of the laser wakefield. In general, electromagnetic radiation in the terahertz range, with its frequency a few times the electron plasma frequency, can be generated in the moderate intensity regime. In the highly nonlinear case, a chain of radiation pulses is formed corresponding to the nonlinear structure of the wake. Our study shows that the radiation is associated with the self-modulation process of the laser pulse in the wakefield and the resulting transverse electron momenta from modulated asymmetric laser fields. Radiation generation in relativistic laser–plasma interaction has been a hot issue over the last two decades due to its wide applications in medicine, biology, chemistry, etc.
Nowadays, radiation over a wide range of frequencies, from terahertz (THz) to $X/\gamma $, can be produced. Meanwhile, various scenarios of radiation generation have been developed, such as high-order harmonic generation [1–3], betatron radiation [4, 5], Thomson scattering [6], Cerenkov radiation [7], and linear mode conversion [8–10]. Each scheme for producing radiation sources has its own advantages and disadvantages in terms of pulse duration, spectral range, collimating property, and energy conversion efficiency. It is well known that the laser wakefield driven in homogeneous underdense plasma generally cannot produce radiation by itself. Emission from laser wakefields can be generated only when the wakefield is excited in inhomogeneous plasma [9] or in homogeneous plasma with an external DC magnetic field applied [11, 12]. Recently, we showed that extreme ultraviolet (XUV) radiation can be produced from laser wakefields excited in homogeneous plasma [13]. The emission is caused by a transverse current sheet co-moving with the laser pulse, which is formed by an electron density spike with trapped electrons in the wakefields together with a certain transverse residual momentum of electrons left behind the laser pulse. In this work, we extend our previous work on wakefield emission in homogeneous plasma [13] to a wider parameter range based upon numerical simulations. It is shown that electromagnetic radiation covering a wide frequency range from THz to XUV can be produced simultaneously with the laser wakefield excitation in underdense homogeneous plasmas. The strength and frequency of the radiation change significantly with the laser or plasma parameters. In the case of a weakly relativistic driving laser, the radiation produced turns out to be a multi-cycle THz pulse. If one increases the plasma density and the laser intensity, the radiation shows up in the form of a pulse chain with frequency around the laser frequency.
As the laser intensity is enhanced further, wake wave-breaking occurs and XUV radiation is emitted [13, 14]. A simple theoretical analysis is presented for the physical mechanisms involved. To illustrate the emission from laser wakefield excitation, a series of one-dimensional (1D) particle-in-cell (PIC) simulations were conducted with the code KLAP [15] and OSIRIS 2.0 [16]. We take a laser pulse whose envelope has the form $a_L=a{\rm sin}^2(\pi t/\tau )$, where $a= \sqrt{I\lambda _L^2/(1.37\times 10^{18}~ {\rm W}~ \mu {\rm m}^2\, {\rm cm}^{-2})}$, with $I$ the peak laser intensity. We also take $\lambda _L =1~ \mu {\rm m}$, and $\tau =7T_L$ is the full laser duration, with $T_L$ the laser oscillation period. It is assumed that the laser pulse is linearly polarized along the $z$-direction and is normally incident on a uniform plasma slab with density $n_0=n_e/n_c=0.001$ (corresponding to the electron plasma frequency $\omega _p/2\pi =9.4~ \mathrm{THz}$) along the $x$-direction, where $n_c$ is the critical density. The left vacuum–plasma boundary is set at $x=20\lambda _L$. The length of the plasma slab depends on the laser propagation distance; generally it varies between $100\lambda _L$ and $200\lambda _L$ in the simulations according to the different cases. Simulations were conducted with different time and space resolutions to ensure that the results were not a numerical artefact. Figure 1 displays the wake radiation waveform travelling forward in the wake right behind the laser pulse and the corresponding spatial distribution of the density perturbation (denoted by $\delta {n}=n-n_0$). The corresponding transverse current associated with the radiation is also shown. The frequency and the amplitude of the radiation behind the laser pulse are found to be different for each cycle, i.e., both the frequency and amplitude decrease with the distance from the driving laser pulse. At a later time, $t=150T_L$, more cycles of radiation have formed.
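The normalizations quoted above can be made concrete with a short sketch. The relations below are the standard ones implied by the text: $a$ from the quoted intensity formula, the critical density $n_c \approx 1.1\times 10^{21}/\lambda_{\mu m}^2~{\rm cm}^{-3}$, and $f_p \approx 8980\sqrt{n_e[{\rm cm}^{-3}]}$ Hz; the function names are ours.

```python
import math

def a0(intensity_W_cm2, wavelength_um):
    """Normalized laser vector potential: a = sqrt(I*lambda^2 / 1.37e18 W um^2 cm^-2)."""
    return math.sqrt(intensity_W_cm2 * wavelength_um**2 / 1.37e18)

def critical_density_cm3(wavelength_um):
    """Critical density n_c ~ 1.1e21 / lambda_um^2 in cm^-3."""
    return 1.1e21 / wavelength_um**2

def plasma_freq_THz(n_e_cm3):
    """Electron plasma frequency f_p = omega_p / 2pi ~ 8980*sqrt(n_e) Hz, in THz."""
    return 8980.0 * math.sqrt(n_e_cm3) / 1e12

n0 = 0.001                               # n_e/n_c, as in the 1D simulations
ne = n0 * critical_density_cm3(1.0)      # absolute electron density for lambda_L = 1 um
print(a0(1.37e18, 1.0))                  # -> 1.0 (a = 1 at I = 1.37e18 W/cm^2)
print(plasma_freq_THz(ne))               # ~9.4 THz, matching the value quoted in the text
```

For $n_0 = 0.001$ and $\lambda_L = 1~\mu$m this reproduces the $\omega_p/2\pi = 9.4$ THz quoted in the text, which is a useful sanity check on the simulation parameters.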
It is obvious that the amplitude of the wake radiation is larger at time $t=150T_L$ than at $t=100T_L$. The first cycle of the wake radiation is found to be the strongest. The radiation appears frequency-chirped with the high-frequency component at the front, which remains unchanged during the whole process. This can be understood partially as due to plasma dispersion, such that the high-frequency component propagates at a higher speed. For plasmas with $n_0=n_e/n_c=0.001$, the wake radiation turns out to be in the THz frequency regime. We show the temporal evolution of the radiation observed at the right side of the plasma slab in vacuum for three typical laser intensities and their corresponding spectra in Figures 2(a) and 2(b). The radiation penetrates through the plasma into the vacuum following the laser pulse and has a waveform with the first cycle strongest, as mentioned above. In Figure 2(a), the THz radiation amplitude is found to be up to $9.6~ {\rm MV~ cm}^{-1}$ for an incident laser at $\sim 10^{18}~ {\rm W~ cm}^{-2}$. However, the THz pulse frequency changes weakly with the laser amplitude in Figure 2(b) in the moderate intensity regime. In this case the central frequency of the THz pulses is about $18.9~ \mathrm{THz}$. In addition, if we increase the plasma density, the radiation frequency is also shown to increase. As plotted in Figure 2(c), by increasing the plasma density from $0.001n_c$ to $0.01n_c$, the frequency spectrum is broadened, and its central frequency grows from $0.0625\omega _L$ to $0.1875\omega _L$. This implies that the radiation frequency increases with increasing plasma density. Note that these frequencies are also larger than the corresponding electron plasma frequencies. In the $n_0=0.005$ and $n_0=0.01$ cases, in addition to the main peaks, there are also other subpeaks in the spectra. Next, we increase the laser intensity further.
Figure 3 shows the radiation propagating forward at the right side of the simulation box in vacuum for three different incident laser intensities in plasma with initial density $n_0=0.02$, as well as the corresponding frequency spectra. At weakly relativistic laser intensity, the radiation displays temporal waveforms similar to those shown above. But at higher intensity, for $a\ge 1$, the radiation shows up in the form of a pulse train. This appears to be associated with the highly nonlinear features of the wakefield excitation. Note that, with the increase of the laser intensity, the separation between neighbouring subpulses also increases. This agrees with the fact that the wavelength of a nonlinear plasma wave increases with its amplitude due to the relativistic mass increase of the electrons. Similar to the case of $n_0=0.001$, the central frequency of the radiation is shown not to change much for different laser intensities in the weakly relativistic regime, even for plasma density $n_0=0.02$ in Figure 3(d). In the case of $a=2$, the radiation frequency approaches the laser frequency. Figure 4 illustrates the amplitude of the wake radiation in its first cycle behind the laser pulse and its central frequency as a function of the normalized laser vector potential. In the non-relativistic and weakly relativistic regimes, the radiation amplitude scales with the laser intensity following a power law, as shown in Figure 4(a), whereas it generally changes little with the laser intensity in the relativistic regime. When the laser intensity is high enough that wave-breaking occurs in the driven wakefield, electron injection and trapping in the first wave bucket are found. Thus the intensity of the emitted radiation is strengthened remarkably. As shown in Figure 4(a), the field for $a=5$ is about 20 times that for $a=3$. This is the regime which we have identified before [13, 14].
Likewise, the frequency spectrum differs significantly between the weakly relativistic, fully relativistic, and wave-breaking regimes, as illustrated in Figure 4(b). The central frequency of the radiation pulse remains nearly the same in the weakly relativistic regime, which confirms the previous results in Figure 2(b) under a different plasma density. In the fully relativistic regime, the radiation frequency becomes close to the laser frequency. In the wave-breaking regime, when the laser amplitude is enhanced to $a=5$, the central frequency approaches $50\omega _L$, in the XUV regime. To illustrate the multi-dimensional effects on the wake emission, we performed 2D PIC simulations. The 2D simulation box size is $50~ \mu {\rm m}\times 50~ \mu {\rm m}$, with 2500 cells along the laser propagation axis in the $x$-direction and 1000 cells along the $y$-direction. The driving laser pulse has a Gaussian profile in the transverse direction with a $10\lambda _L$ waist radius at the focal plane, and it is polarized in the $z$-direction with peak amplitude $a=0.5$. The laser pulse enters the plasma with density $n_0=0.02$ at $t=20T_L$, and we choose the same plasma parameters as in the above 1D simulation in Figure 4. Figure 5 shows the wake radiation in the laser wakefields behind the driving laser and its transverse field on the laser axis, as well as the corresponding frequency spectrum. The simulations show that the waveform of the radiation is consistent with the results found in the 1D simulation. In addition, the amplitude and the frequency of the radiation here are both similar to the results of the 1D simulations (as shown in Figure 3). Generally, the wake radiation tends to decay more quickly in the 2D case. This is due to the transverse spatial spreading of the electromagnetic waves when the radiation source size is comparable to the radiation wavelength. In previous work, we proposed a simple model for XUV emission in the wakefields [13].
Here, we consider the case of a weakly relativistic intense laser pulse passing through underdense plasma. The key issue is to understand the mechanism of the transverse current formation, which is responsible for the wakefield emission. For the case of an ultrashort few-cycle laser pulse with a highly asymmetric waveform, it has been shown that a slowly varying transverse current can be generated behind the laser pulse [17], which can produce radiation at the electron plasma frequency through the vacuum–plasma boundary. In the present case, however, the radiation is produced inside the homogeneous plasma, and its frequency is higher than the electron plasma frequency. Therefore, other physical processes must be involved. Let us first examine a typical electron trajectory during and after the laser interaction. As illustrated in Figure 6(a), the electron is initially located at $x=84.54\lambda _L$. During the laser interaction, it moves forward until it reaches about $x=84.59\lambda _L$, when it starts to move backward due to the charge separation field and the decreasing laser amplitude in its trailing edge. Finally, when the laser pulse passes by, it simply oscillates along the longitudinal direction in the established laser wakefield between $x=84.42\lambda _L$ and $x=84.64\lambda _L$. Figure 6(b) displays the transverse momentum of the electron during and after the laser interaction. After the laser interaction, a small transverse momentum is left. Note that the transverse momentum decays with time according to Figure 6(b). Correspondingly, the emitted radiation is also strongest just behind the laser pulse, which agrees with the simulation results shown above. This transverse momentum oscillates much more slowly with time compared with the laser frequency. Such oscillations occur after the laser pulse interacts with the electrons in the plasma.
The net transverse momentum can be attributed to the evolution of the laser pulse into a slightly asymmetric form. Alternatively, the slowly varying transverse momentum of the electrons can also be attributed to self-modulation of the laser pulse. During this process, its carrier phase changes with time so that the electrons obtain a net transverse acceleration at a certain time. When the driving laser is at fully relativistic intensity, the wakefield becomes highly nonlinear and has high-density peaks. The resulting radiation then appears in the form of a pulse chain with subpulses separated by a plasma wavelength. This explains the results shown in Figure 3(c). On either increasing the propagation distance or increasing the plasma density, the simulations show that the transverse momentum of electrons tends to increase because stronger modulation of the laser pulse may occur. This usually results in enhanced radiation, as shown in the simulations. It is shown that, when an intense laser pulse propagates through homogeneous underdense plasma, tunable radiation over a wide frequency range can be produced in the wake of the laser pulse. Usually, it shows up with frequency higher than the electron plasma frequency and can transmit through the plasma into vacuum following the driving laser pulse. Numerical simulations indicate that this kind of electromagnetic emission occurs over a wide range of laser intensities and plasma densities. In the weakly relativistic regime, the wake emission turns out to be THz radiation in extremely tenuous plasmas. With increasing plasma density and laser intensity, the radiation appears in the form of a pulse chain corresponding to the wake structure. If the laser intensity is increased further, then wave-breaking occurs in the first plasma bucket and XUV radiation is produced. Qualitatively, the self-modulation of the laser pulse and the laser wakefield excitation are responsible for the emission.
To reach a quantitative understanding, further study is required. This mechanism may provide a new possibility to produce a tunable coherent radiation for various applications. This research was supported by the National Science Foundation of China (Grant No. 11121504, 11374209, 11374210, and 11375261). The authors also would like to acknowledge the OSIRIS Consortium, consisting of UCLA and IST (Lisbon, Portugal), for providing access to the OSIRIS 2.0 framework. The computational resources utilized in this research were provided partially by Shanghai Supercomputer Center. 1. Salieres, P. L'Huillier, A. and Lewenstein, M. Phys. Rev. Lett. 74, 3776 (1995). 2. Takahashi, E. J. Nabekawa, Y. and Midorikawa, K. Appl. Phys. Lett. 84, 4 (2004). 3. Dromey, B. Rykovanov, S. G. Adams, D. Hörlein, R. Nomura, Y. Carroll, D. C. Foster, P. S. Kar, S. Markey, K. McKenna, P. Neely, D. Geissler, M. Tsakiris, G. D. and Zepf, M. Phys. Rev. Lett. 102, 225002 (2009). 4. Kiselev, S. Pukhov, A. and Kostyukov, I. Phys. Rev. Lett. 93, 135004 (2004). 5. Kneip, S. McGuffey, C. Martins, J. L. Martins, S. F. Bellei, C. Chvykov, V. Dollar, F. Fonseca, R. Huntington, C. Kalintchenko, G. Maksimchuk, A. Mangles, S. P. D. Matsuoka, T. Nagel, S. R. Palmer, C. A. J. Schreiber, J. Phuoc, K. Ta. Thomas, A. G. R. Yanovsky, V. Silva, L. O. Krushelnick, K. and Najmudin, Z. Nat. Phys. 6, 980 (2010). 6. Meyer-ter Vehn, J. and Wu, H. C. Eur. Phys. J. D 55, 433 (2009). 7. D'Amico, C. Houard, A. Franco, M. Prade, B. Mysyrowicz, A. Couairon, A. and Tikhonchuk, V. T. Phys. Rev. Lett. 98, 235002 (2007). 8. Sheng, Z. M. Wu, H. C. Li, K. and Zhang, J. Phys. Rev. E 69, 025401 (2004). 9. Sheng, Z. M. Mima, K. Zhang, J. and Sanuki, H. Phys. Rev. Lett. 94, 095003 (2005). 10. Quere, F. Thaury, C. Monot, P. Dobosz, S. Martin, Ph. Geindre, J. P. and Audebert, P. Phys. Rev. Lett. 96, 125004 (2006). 11. Hu, Z. D. Sheng, Z. M. Ding, W. J. Wang, W. M. Dong, Q. L. and Zhang, J. J. Plasma Phys. 78, 421 (2012). 12. Hu, Z. D. 
Sheng, Z. M. Wang, W. M. Chen, L. M. Li, Y. T. and Zhang, J. Phys. Plasmas 20, 080702 (2013). 13. Liu, Y. Sheng, Z. M. Zheng, J. Li, F. Y. Xu, X. L. Lu, W. Mori, W. B. Liu, C. S. and Zhang, J. New J. Phys. 14, 083031 (2012). 14. Liu, Y. Li, F. Y. Zeng, M. Chen, M. and Sheng, Z. M. Laser Part. Beams 31, 233 (2013). 15. Chen, M. Sheng, Z. M. Zheng, J. Ma, Y. Y. and Zhang, J. Chin. J. Comput. Phys. 25, 50 (2008). 16. Fonseca, R. A. Silva, L. O. Tsung, F. Decyk, V. K. Lu, W. Ren, C. Mori, W. B. Deng, S. Lee, S. Katsouleas, T. and Adam, J. C. Lect. Notes Comput. Sci. 2331, 342 (2002). 17. Wang, W. M. Kawata, S. Sheng, Z. M. Li, Y.-T. and Zhang, J. Phys. Plasmas 18, 073108 (2011).
Selected articles from the 5th Translational Bioinformatics Conference (TBC 2015): medical informatics and decision making CLASH: Complementary Linkage with Anchoring and Scoring for Heterogeneous biomolecular and clinical data Yonghyun Nam1, Myungjun Kim1, Kyungwon Lee2 & Hyunjung Shin1 The study of disease-disease associations has increasingly been viewed and analyzed as a network, in which the connections between diseases are configured using source information on interactome maps of biomolecules such as genes, proteins, metabolites, etc. Although an abundance of source information leads to tighter connections between diseases in the network, for certain groups of diseases, such as metabolic diseases, few connections occur because of insufficient source information; a large proportion of their associated genes are still unknown. One way to circumvent the difficulties caused by the lack of source information is to integrate available external information by using one of the up-to-date integration or fusion methods. However, if one wants a disease network that places primary emphasis on the original source of data while utilizing external sources only to complement it, integration may not be pertinent. Interpretation of the integrated network would be ambiguous: the meanings conferred on edges would be vague due to fused information. In this study, we propose a network-based algorithm that complements the original network by utilizing external information while preserving the network's originality. The proposed algorithm links disconnected nodes to the disease network by using complementary information from an external data source through four steps: anchoring, connecting, scoring, and stopping.
When applied to the network of metabolic diseases that is sourced from protein-protein interaction data, the proposed algorithm recovered connections by 97% and improved the AUC performance up to 0.71 (lifted from 0.55) by using external information outsourced from text mining results on PubMed comorbidity literature. Experimental results also show that the proposed algorithm is robust to noisy external information. The novelty of this research is that the proposed algorithm preserves the network's originality while, at the same time, complementing it with external information. Furthermore, it can be utilized for original association recovery and novel association discovery in disease networks. The amount of information on disease-disease associations has been increasing over the last decade, and the sources of information have also diversified from multiple levels of genomic data to clinical data, such as copy number alteration at the genomic level, miRNA expression or DNA methylation at the epigenomic level, protein-protein interaction at the proteomic level, disease comorbidity at the clinical level, etc. [1–4]. One of the most effective ways to describe disease-disease associations is to construct a disease network, which consists of nodes and edges representing diseases and disease-disease relations, respectively [5, 6]. In a disease network, the concept of disease-disease association (i.e., edges) varies depending on the source of information that the network utilizes. Many studies have been conducted using various sources of data. In Goh et al. [7], the authors created a disease network based on gene-disease associations by connecting diseases that are associated with the same genes. This was further developed in Zhou et al. [8], which constructed a disease network using disease-gene information and disease-symptom information. Lee et al.
[9] constructed a network in which two diseases are linked if mutated enzymes associated with them catalyze adjacent metabolic reactions. While these studies are based on genomic data, other studies utilize clinical data on patient records to find associated diseases. In Hidalgo et al. [10], the authors constructed a disease network reflecting the co-occurrence of diseases by utilizing the clinical records of 13,039,018 patients. The authors used the prevalence of two diseases co-occurring in a patient to define edges. On the other hand, Žitnik et al. [11] uses both genomic and clinical data. In Žitnik et al. [11], the authors integrated data on disease-gene associations, disease ontology, drugs, and genes so that they could utilize such information to deduce disease-disease associations. So far, we see that most of these studies utilize only a single source of data to find disease-disease associations. On the other hand, if diverse and heterogeneous sources of data are available, there have also been network-wise approaches to integrate multiple disease networks for inferring associations between diseases [3, 12–15]. However, if one wants a disease network that places primary emphasis on a particular source of data while utilizing other sources only to complement the original source, which of the approaches above can be applied? For example, if we were to target drug discovery or repositioning using a disease network, one constructed with protein information would be preferred [16, 17]. On the other hand, if physicians were to treat a patient, they would prefer a disease network constructed with comorbidity information based on the prevalence of diseases. If, however, there are losses or deficiencies of information in each original source, what would we do? In such a case, disease-disease associations cannot be defined, resulting in a disconnected network. See Fig. 1(a).
If an external source of data is available, we could integrate the original network and the external network in a network-wise fashion using one of the up-to-date integration methods [3, 12, 14, 18]. But the interpretation of the results would be ambiguous: the meanings conferred on edges in the resulting disease network would be vague. Proposed method: a original network with disconnected nodes, and b complemented network that links the disconnected nodes to the connected network through newly found edges using external information This motivates the present research. In this paper, we propose an algorithm that preserves the network's originality while complementing it with external information. We call the proposed algorithm CLASH, an abbreviation of complementary linkage with anchoring and scoring for heterogeneous data. An original disease network is constructed from PPI information as in Goh et al. [7] and Zhou et al. [8]. CLASH is then applied to the network in order to link disconnected nodes to the network through newly found edges using external information. In the complementing process, clinical comorbidity information is used as the external source of information. The resulting network is called a complemented disease network. See Fig. 1(b). The remainder of the paper is organized as follows. Section 2 introduces CLASH at length. Section 3 provides experimental results on the validity and utility of CLASH by applying it to the metabolic disease group. Section 4 presents conclusions. Complementary linkage with anchoring and scoring for heterogeneous data A disease network is a graph, G = (V, W), that describes connections between diseases with nodes and edges. In a disease network, a node denotes a disease and an edge denotes a disease-disease association.
Here, a disease-disease association is a value obtained by calculating the similarity between two diseases based on their shared genes (or proteins) and on co-occurrence information from clinical studies. On the graph, the similarity between two diseases is assigned as a weight on the corresponding edge, and a higher value implies a stronger association between the two diseases. In our study, the disease network is constructed using shared proteins: each disease is represented by an n-dimensional protein vector, and the similarity between two diseases is calculated as the cosine similarity between their disease vectors. If every disease is connected by at least one edge, the disease network becomes a connected graph. On the other hand, if a disease is left disconnected from the network due to the lack of disease-disease associations with other diseases, it becomes impossible to deduce any inference about that disease from the network. To circumvent this difficulty, we propose CLASH, an algorithm for linking disconnected nodes to the disease network by using complementary information from an external data source. The method is composed of four steps: anchoring, scoring, connecting and stopping. Figure 2 presents each step, beginning with a graph of eight nodes of which five are connected and three are disconnected. Schematic description of the CLASH algorithm At the anchoring step, disconnected nodes are initially connected to the network (i.e., disconnected nodes drop their anchors onto the connected graph). During this process, disconnected nodes select nodes to anchor to by utilizing any available external data source. Here, the external data source is information that is unsuitable, or less preferred, for the purpose or usage of the proposed network. Thus, it is information that is not used to construct the network itself, but that can be utilized to supplement it. Figure 2(b) describes the anchoring step of a disconnected node, v6, to the connected graph of five nodes.
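The network-construction step described above (each disease as a protein vector, cosine similarity as the edge weight) can be sketched as follows. This is an illustrative reimplementation, not the authors' code, and the toy disease-protein matrix is hypothetical.

```python
import numpy as np

def cosine_disease_network(X):
    """Build a disease-network weight matrix from a disease-by-protein matrix X.

    Each row of X is a disease's protein vector; edge weights are the pairwise
    cosine similarities, with self-edges zeroed. Illustrative sketch only.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0            # isolated diseases keep zero similarity
    Xn = X / norms
    S = Xn @ Xn.T
    np.fill_diagonal(S, 0.0)
    return S

# Hypothetical toy data: 3 diseases over 4 proteins.
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
S = cosine_disease_network(X)
# Disease 3 shares no proteins with the others, so it is a disconnected node.
```

With this toy input, diseases 1 and 2 share one protein and get a positive edge weight, while disease 3 has zero similarity to both and would remain disconnected, exactly the situation CLASH is designed to repair.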
Based on the external data source, the fact that v6 is related to {v1, v2, v3} allows us to initially connect v6 to the associated nodes. These associated nodes are defined as candidate nodes. The scoring step allows disconnected nodes to select connectable nodes from the anchored nodes through scoring. In this paper, we utilize a Semi-Supervised Learning (SSL) algorithm: given a connected graph, SSL computes an f-score for each node (see Appendix). In the present study, the label of the disconnected node is set to '1', and '0' for the others. The f-score increases with stronger connectivity of the associated edges and with the number of edges [19–21]; a higher f-score implies a higher similarity to the labeled node. Figure 2(c) shows the result of the scoring step for a disconnected node, v6, on the candidate nodes v1, v2, v3. The f-scores for these nodes are {0.9, 0.8, 0.6}, respectively. At the connecting step, disconnected nodes connect to the graph based on the scoring results. The order of connection is determined by the f-scores of the candidate nodes: the higher the f-score, the higher the priority in the order of connection. (If the f-scores of candidate nodes are equal, they are connected to the graph with the same priority.) Newly formed edges can cause disturbances (sometimes severe disturbances) in the network. Because severe disturbances could cause the original network to lose its properties, a criterion is needed to decide whether a connection should be accepted. In this research, we provide such a criterion based on the principles of preserving network properties and utilizing the external data source. The preservation of network properties can be measured through the performance of the network whenever a new edge is formed between a disconnected node and a candidate node. Network performance is measured on validation nodes, which exclude the disconnected nodes and candidate nodes.
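The anchoring, scoring and connecting steps can be sketched as a simplified loop. This is not the authors' implementation: `score_nodes` stands in for the SSL f-score computation, `performance` for the network performance measured on validation nodes (both are caller-supplied placeholders here), new edges are given unit weight for simplicity, and the tolerance parameter `eps` is the ϵ criterion used below.

```python
import numpy as np

def clash_sketch(S, disconnected, candidates_of, score_nodes, performance, eps=0.2):
    """Simplified sketch of the CLASH loop: anchoring -> scoring -> connecting -> stopping."""
    S = S.copy()
    base = performance(S)
    for v in disconnected:
        cands = candidates_of.get(v, [])     # anchoring: candidates from external data
        if not cands:                        # stopping: no external information left
            continue
        scores = score_nodes(S, v, cands)    # scoring step (SSL f-scores in the paper)
        for idx in np.argsort(scores)[::-1]: # connect in order of decreasing f-score
            u = cands[idx]
            trial = S.copy()
            trial[v, u] = trial[u, v] = 1.0  # simplified unit edge weight
            if abs(performance(trial) - base) <= eps:  # preservation check
                S, base = trial, performance(trial)    # finalize the connection
    return S

# Toy usage with placeholder scoring (node degree) and performance (mean weight).
score_nodes = lambda S, v, cands: np.array([S[c].sum() for c in cands])
performance = lambda S: S.mean()
S0 = np.zeros((4, 4))
S0[0, 1] = S0[1, 0] = 1.0
S1 = clash_sketch(S0, disconnected=[3], candidates_of={3: [0, 2]},
                  score_nodes=score_nodes, performance=performance, eps=0.2)
```

With a tight tolerance (small `eps`) the sketch rejects every trial edge and returns the network unchanged, which mirrors the preservation principle: connections are accepted only when they do not perturb the original network too much.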
Under the condition that the network's performance stays within a certain range (denoted by ϵ in (2) in Fig. 3), we allow additional edges to be formed. If the change in network performance after a connection is trivial, the newly connected node does not incur unexpected perturbation in the original network, thus preserving the original properties of the network. Figure 2(d) shows a candidate node, v1, connecting to v6 due to its higher f-score compared to the other candidate nodes {v2, v3}. At this point, the validation nodes are {v4, v5}. The connection is finalized since the pre/post-connection change in network performance is within the allowed range. After the first connection, we proceed to another candidate node, v2, which has the second largest f-score. Figure 2(e) shows the disconnected node, v6, making final connections to two of the candidate nodes, {v1, v2}, out of the three candidate nodes that had been anchored. Pseudo-code of the CLASH algorithm The proposed algorithm stops when there are no more disconnected nodes, when there is no more external data, or when the performance of the network decreases. Figure 2(f) shows a network in which all the disconnected nodes, v6, v7, v8, are connected through the previous steps. The pseudo-code for the proposed algorithm is presented in Fig. 3. The proposed algorithm was applied to the metabolic disease group. Demographically, metabolic diseases are widespread and have shown increasing rates in recent years. In current genomic and molecular biology research, however, it is difficult to trace disease-protein associations for metabolic diseases. This means that in studies that construct disease networks based on genomic or protein information, it is also difficult to trace disease-disease associations for metabolic diseases. For example, Goh et al.
[7] show that there are almost no connections between metabolic disease nodes in the human disease network, in marked contrast to the cancer nodes, which have dense connections in the network. Thus, we chose metabolic diseases in order to construct a denser disease network by supplementing connections through CLASH. To construct a metabolic disease network, a list of diseases was obtained from the Medical Subject Headings (MeSH) of the National Library of Medicine [22]. When considering up to the second level of the taxonomy, there are 302 descriptors for metabolic diseases out of 27,149 listed diseases. For the nodes, we acquired 53,430 data points on disease-protein associations. From the obtained data, we selected 181 metabolic diseases and 15,281 proteins that were eligible for constructing the disease network. The edge weights were calculated as the cosine similarity between 15,281-dimensional disease vectors. We denote this network as the original disease network. As an external data source to complement the original disease network, we used comorbidity information reported in the clinical literature. Comorbidity refers to the concomitant occurrence of different medical conditions or diseases, usually complex and often chronic, in the same patient [23, 24]. To acquire the external data source, text mining was conducted on 1,000,254 medical articles from PubMed. From this point onward, we define the complemented disease network as the disease network complemented with comorbidity information through CLASH. Table 1 summarizes the sources and types of data used in our experiment. Table 1 Data sources for metabolic diseases, proteins, disease-protein associations, comorbidity Experimental settings First, we performed verification tests to see how the proposed algorithm, CLASH, complements the network.
To carry out the tests, we artificially damaged the original network and let CLASH recover the damaged network and construct the complemented disease network. More specifically, we randomly chose and deleted 20, 40, 60 and 80 % of the edges from the original disease network and refer to each resulting network as a '%-damaged original network'. (For convenience, the '0 %-damaged original network' is denoted as the reference network.) After constructing the complemented disease networks from each level of damage, we compared them to the corresponding %-damaged original networks. Second, the overall performance should increase when further information from extra data is added, but only if the extra data source is actually useful for complementing the original one. Therefore, to further establish the validity of CLASH, we performed additional experiments comparing the effects of using noise data to complement the %-damaged original networks. The resulting networks are denoted as noisy networks. To measure network performance, we used the SSL algorithm on the problem of predicting diseases that may co-occur with a given disease [19]. The leave-one-out validation method was used [25]. The f-scores for all diseases are calculated by (1), excluding one target disease. The ROC was then obtained by comparing the f-scores against PubMed literature: the presence ('1') or absence ('0') of PubMed literature is used as the ground truth for disease association. The ROC was calculated in the same way for each of the 181 diseases, and the whole experiment was repeated 10 times. Results and Discussion Results for the validity of CLASH Figure 4(a) presents the network density, depicting the proportion of edges recovered through CLASH. It shows that, regardless of the degree of damage, the proportion of edges recovered by utilizing external data sources reaches 18 % on average.
In the case of the 20 %-damaged network, 97 % of the edges were recovered compared with the reference network (97 % = (0.130/0.134) × 100 %). It is also notable that recovery is possible even for the severely damaged network in which 80 % of the edges had been deleted. Fig. 4(b) compares the AUC performances of the damaged networks and the complemented networks. From the bar chart for the 80 %-damaged network, we can see that CLASH improves the performance up to 0.71 (lifted from 0.55). Considering that the performance of the reference network was 0.69, it can be inferred that CLASH improved the AUC even for the most severely damaged network. The comparisons for the other damaged networks can be interpreted similarly. On the other hand, the noisy networks incurred insignificant degradation or no change in performance relative to the %-damaged networks. (The number of noisy edges corresponds to the number of complemented edges for the %-damaged networks.) This shows that CLASH is robust to noisy external source data and preserves the original information. Results for the complementing ability of CLASH: a shows that the proportion of edges recovered is 18 % on average. b shows that CLASH improves AUC performance up to 0.79. The p-values of the statistical tests for pairwise comparison between the %-damaged original networks and the complemented networks are 0.0002, 0.0001, 0.0002 and 0.000, respectively. On the other hand, CLASH is robust to noise: the noisy networks incurred insignificant degradation or no change in performance relative to the %-damaged networks, preserving the original information Results for the utility of CLASH In this section, we show the utility of CLASH by demonstrating its process and typical results for a case disease. Malabsorption syndrome was selected as the target disease out of the 181 metabolic diseases.
Malabsorption syndrome refers to a wide variety of common and uncommon disorders of the process of alimentation in which the intestine's ability to absorb certain nutrients, such as vitamin B12 and iron, into the bloodstream is negatively affected [26, 27]. Fig. 5 presents the step-by-step process of CLASH for the target disease. Figure 5(a) shows a reference network of 13 disease nodes, which simplifies the whole network of 181 diseases. In the figure, malabsorption syndrome (node 1) has four connections, with celiac disease, glucose intolerance, metabolic disease X and diabetes mellitus (nodes 2, 5, 9, 11, respectively). The four edges were purposely deleted to test whether CLASH successfully recovers the original edges and further complements the network with new edges from external knowledge found in the PubMed comorbidity literature. This is shown in Fig. 5(b), the original network. Figure 5(c) briefly describes anchoring, scoring and connecting: first, the node of malabsorption syndrome anchors at 10 nodes (see the anchored diseases [28–37]), which include the four originally associated nodes (nodes 2, 5, 9, 11) and six newly found nodes (nodes 3, 4, 6, 7, 8, 10). Among them, the eight nodes with the highest f-scores are finally connected, after dropping the two nodes with the lowest scores, nodes 6 and 7. Figure 5(d) presents the complemented network with four recovered edges and four newly found ones; solid single lines in the network refer to the former and double lines denote the latter. Consequently, we see that malabsorption syndrome extends its associations to more diseases, hyperhomocysteinemia, hypoglycemia, osteomalacia and insulin resistance (nodes 3, 4, 8, 10), apart from the originally connected four diseases shown in Fig. 5(a). Utility of CLASH demonstrated through the process for malabsorption syndrome: the CLASH algorithm complements the network with four recovered edges and four newly found ones.
Therefore, malabsorption syndrome extends its associations to more diseases, hyperhomocysteinemia, hypoglycemia, osteomalacia and insulin resistance, apart from the originally connected four diseases. Single solid lines refer to extended edges and double lines refer to original edges. The notations '†', '*' and '**' denote diseases associated via PPI, PubMed, and multiple paths involving more than one edge, respectively To validate the utility of the newly found edges, we performed disease scoring on the reference network in Fig. 5(a) and the complemented network in Fig. 5(d), and then compared the top 10 associated diseases from each network. Figure 6 presents a comparison of the disease lists obtained from the reference network and the complemented network. Figure 6(b) shows that celiac disease, glucose intolerance, metabolic disease X and diabetes mellitus are highly ranked. Comparing these diseases with those connected to malabsorption syndrome in Fig. 5(d) (nodes 2, 5, 9, 11), we obtain the interesting result that all of them are included in the disease list. It is also notable that the four diseases associated with the newly found edges in Fig. 5 (nodes 3, 4, 8, 10), hyperhomocysteinemia, hypoglycemia, osteomalacia and insulin resistance, are included in the list as well. From the results of Figs. 5 and 6, we see that CLASH is able to preserve the originality of a disease network built from PPI information while complementing it with the PubMed comorbidity literature. Top 10 associated diseases with malabsorption syndrome: the notations '†', '*' and '**' are identical to those in Fig. 5 In a similar manner, the experiment was carried out on all 181 diseases (supplemental materials: http://www.alphaminers.net). Table 2 illustrates the results for 10 diseases.
The first 5 diseases, like malabsorption syndrome, were artificially disconnected from the original network of 181 diseases, while the last 5 diseases are genuinely disconnected diseases for which no PPI information is available. Table 2 Top 10 associated diseases Through the results of the experiment, we verified the usefulness and effectiveness of CLASH, which uses both original and external data sources to find diseases that could co-occur with target diseases. This research proposes an algorithm, CLASH, which complements or strengthens the connections between diseases in a disease network. The proposed algorithm is useful when the original disease network is incomplete and supplementary information on disease associations is available. The verification of CLASH was carried out by applying the algorithm to metabolic diseases. The original disease network was constructed based on PPI information, and through CLASH, disconnected edges were complemented or strengthened using supplemental information obtained from the PubMed comorbidity literature. In the experiment on validity, CLASH not only successfully recovered purposely deleted edges but also improved performance: it showed full recovery of 20 %-damaged edges and an increase in AUC performance from 0.69 to 0.79. In the experiment on utility, we illustrated how to utilize CLASH through a small example: with malabsorption syndrome as the target disease, we delineated the process of finding a list of diseases that could co-occur with the target disease. Similar results were also obtained for the other metabolic diseases. The novelty of this research lies in the following aspects. CLASH is a methodology that preserves the network's originality while complementing it with external information. CLASH has a different utility than other methods that integrate multiple data sources in a network-wise fashion.
It puts more emphasis on one data source than on others: complementing disease-gene information (from biology) with comorbidity information (from medicine), or conversely, complementing comorbidity information with disease-gene information. Examples of the former usage can be found in drug discovery/repositioning in pharmacology, while an example of the latter is inferring disease co-occurrence in clinical practice. These usages are topics for further research. Piro RM. Network medicine: linking disorders. Hum Genet. 2012;131(12):1811–20. Li Y, Agarwal P. A pathway-based view of human diseases and disease relationships. PLoS One. 2009;4(2):e4346. Kim D, Joung J-G, Sohn K-A, Shin H, Park YR, Ritchie MD, Kim JH. Knowledge boosting: a graph-based integration approach with multi-omics data and genomic knowledge for cancer clinical outcome prediction. J Am Med Inform Assoc. 2015;22(1):109–20. Sun K, Buchan N, Larminie C, Pržulj N. The integrated disease network. Integr Biol. 2014;6(11):1069–79. Altaf-Ul-Amin M, Afendi FM, Kiboi SK, Kanaya S. Systems biology in the context of big data and networks. BioMed Res Int. 2014;2014:428570. Pavlopoulos GA, Secrier M, Moschopoulos CN, Soldatos TG, Kossida S, Aerts J, Schneider R, Bagos PG. Using graph theory to analyze biological networks. BioData Min. 2011;4(10):1–27. Goh K-I, Cusick ME, Valle D, Childs B, Vidal M, Barabasi A-L. The human disease network. Proc Natl Acad Sci. 2007;104(21):8685–90. Zhou X, Menche J, Barabási A-L, Sharma A. Human symptoms–disease network. Nat Commun. 2014;5:4212. Lee D-S, Park J, Kay K, Christakis N, Oltvai Z, Barabási A-L. The implications of human metabolic network topology for disease comorbidity. Proc Natl Acad Sci. 2008;105(29):9880–5. Hidalgo CA, Blumm N, Barabási A-L, Christakis NA. A dynamic network approach for the study of human phenotypes. PLoS Comput Biol. 2009;5(4):e1000353. Žitnik M, Janjić V, Larminie C, Zupan B, Pržulj N.
Discovering disease-disease associations by fusing systems-level molecular data. Sci Rep. 2013;13:3202. Shin H, Lisewski AM, Lichtarge O. Graph sharpening plus graph integration: a synergy that improves protein functional classification. Bioinformatics. 2007;23(23):3217–24. Kim D, Shin H, Sohn K-A, Verma A, Ritchie MD, Kim JH. Incorporating inter-relationships between different levels of genomic data into cancer clinical outcome prediction. Methods. 2014;67(3):344–53. Tsuda K, Shin H, Schölkopf B. Fast protein classification with multiple networks. Bioinformatics. 2005;21(2):ii59–65. Sun K, Gonçalves JP, Larminie C. Predicting disease associations via biological network analysis. BMC Bioinformatics. 2014;15(1):304. Yıldırım MA, Goh K-I, Cusick ME, Barabási A-L, Vidal M. Drug—target network. Nat Biotechnol. 2007;25(10):1119–26. Kim HU, Sohn SB, Lee SY. Metabolic network modeling and simulation for drug targeting and discovery. Biotechnol J. 2012;7(3):330–42. Kim D, Shin H, Song YS, Kim JH. Synergistic effect of different levels of genomic data for cancer clinical outcome prediction. J Biomed Inform. 2012;45(6):1191–8. Shin H, Nam Y, Lee D-g, Bang S. The Translational Disease Network—from Protein Interaction to Disease Co-occurrence. Proc of 4th Translational Bioinformatics Conference (TBC) 2014. Chapelle O, Schölkopf B, Zien A. Semi-supervised learning, MIT Press; 2006. Kim J, Shin H. Breast cancer survivability prediction using labeled, unlabeled, and pseudo-labeled patient data. J Am Med Inform Assoc. 2013;20(4):613–8. Medical Subject Headings (www.ncbi.nlm.nih.gov/mesh, Accessed 5 Jan 2014). Capobianco E. Comorbidity: a multidimensional approach. Trends Mol Med. 2013;19(9):515–21. Ambert KH, Cohen AM. A system for classifying disease comorbidity status from medical discharge summaries using automated hotspot and negated concept detection. J Am Med Inform Assoc. 2009;16(4):590–5. Fukunaga K, Hummels DM. Leave-one-out procedures for nonparametric error estimates. 
Pattern Anal Mach Intell IEEE Trans. 1989;11(4):421–3. Ghoshal UC, Mehrotra M, Kumar S, Ghoshal U, Krishnani N, Misra A, Aggarwal R, Choudhuri G. Spectrum of malabsorption syndrome among adults & factors differentiating celiac disease & tropical malabsorption. Indian J Med Res. 2012;136(3):451. Hayman SR, Lacy MQ, Kyle RA, Gertz MA. Primary systemic amyloidosis: a cause of malabsorption syndrome. Am J Med. 2001;111(7):535–40. Benson Jr J, Culver P, Ragland S, Jones C, Drummey G, Bougas E. The d-xylose absorption test in malabsorption syndromes. N Engl J Med. 1957;256(8):335–9. Casella G, Bassotti G, Villanacci V, Di Bella C, Pagni F, Corti GL, Sabatino G, Piatti M, Baldini V. Is hyperhomocysteinemia relevant in patients with celiac disease. World J Gastroenterol. 2011;17(24):2941–4. Jenkins D, Gassull M, Leeds A, Metz G, Dilawari J, Slavin B, Blendis L. Effect of dietary fiber on complications of gastric surgery: prevention of postprandial hypoglycemia by pectin. Gastroenterology. 1977;73(2):215–7. Penckofer S, Kouba J, Wallis DE, Emanuele MA. Vitamin D and diabetes let the sunshine in. Diabetes Educ. 2008;34(6):939–54. Förster H. Hypoglycemia. Part 4. General causes, physiological newborn hyperglycemia, hyperglycemia in various illnesses, metabolic deficiency, and metabolic error. Fortschr Med. 1976;94(16):332–8. Dedeoglu M, Garip Y, Bodur H. Osteomalacia in Crohn's disease. Arch Osteoporos. 2014;9(1):1–3. Traber MG, Frei B, Beckman JS. Vitamin E revisited: do new data validate benefits for chronic disease prevention? Curr Opin Lipidol. 2008;19(1):30–8. Viganò A, Cerini C, Pattarino G, Fasan S, Zuccotti GV. Metabolic complications associated with antiretroviral therapy in HIV-infected and HIV-exposed uninfected paediatric patients. Expert Opin Drug Saf. 2010;9(3):431–45. Tosiello L. Hypomagnesemia and diabetes mellitus: a review of clinical implications. Arch Intern Med. 1996;156(11):1143–8. van Thiel DH, Smith WI, Rabin BS, Fisher SE, Lester R. 
A syndrome of immunoglobulin A deficiency, diabetes mellitus, malabsorption, and a common HLA haplotype: immunologic and genetic studies of forty-three family members. Ann Intern Med. 1977;86(1):10–9. HJS gratefully acknowledges support from the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2015R1D1A1A01057178/2012-0000994). KWL acknowledges support from the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (No. 2015R1A5A7037630). Publication of this article has been funded by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2015R1D1A1A01057178/2012-0000994). This article has been published as part of BMC Medical Informatics and Decision Making Volume 16 Supplement 3, 2016. Selected articles from the 5th Translational Bioinformatics Conference (TBC 2015): medical genomics. The full contents of the supplement are available online at https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-16-supplement-3. The data can be found in PharmDB (http://pharmdb.org/). PharmDB is a tripartite pharmacological network database of human diseases, drugs and proteins, which compiles and integrates nine existing interaction databases. (Access date: 2014.01.05). HJS designed the idea, wrote the manuscript and supervised the study process. YHN and MJK analyzed the data and implemented the system. KWL and all the other authors read and approved the final manuscript. Department of Industrial Engineering, Ajou University, Wonchun-dong, Yeongtong-gu, Suwon, 443-749, South Korea: Yonghyun Nam, Myungjun Kim & Hyunjung Shin. Department of Digital Media, Ajou University, Wonchun-dong, Yeongtong-gu, Suwon, 443-749, South Korea: Kyungwon Lee. Correspondence to Hyunjung Shin.
Graph-based Semi-Supervised Learning A disease network is a graph, G = (V, W), that describes connections between diseases with nodes and edges. In a disease network, a node denotes a disease and an edge denotes a disease-disease association. Given a disease network, graph-based Semi-Supervised Learning (SSL) is employed to calculate the scores when a target disease is given. In the present study, the target disease is labeled as '1', and the other diseases are labeled as '0' (unlabeled). With this setting on a disease network, SSL provides scores for the diseases in terms of the f-score. The algorithm is summarized as follows; more details can be found in [19–21]. In graph-based SSL, a connected graph G = (V, W) is constructed where the nodes V represent the labeled and unlabeled data points while the edges W reflect the similarity between data points. In a binary classification problem, given \(n (= n_l + n_u)\) data points from the sets of labeled \( {S}_L=\left\{{\left({x}_i,{y}_i\right)}_{i=1}^{n_l}\right\} \) and unlabeled \( {S}_U=\left\{{\left({x}_j\right)}_{j = {n}_l+1}^n\right\} \) points, the labeled nodes are set to \(y_l \in \{-1, +1\}\), while the unlabeled nodes are set to zero (\(y_u = 0\)). For the scoring problem in the proposed algorithm, however, the \(n_l\) labeled nodes are set to the unary label \(y_l \in \{1\}\) while the \(n_u\) unlabeled nodes are set to zero (\(y_u = 0\)). The resulting learning process assigns scores \(f_u^T = (f_{n_l+1}, \dots, f_n)^T\) to the nodes \(V_U\). The edge weight between two nodes \(v_i\) and \(v_j\) is measured by the Gaussian function $$ w_{ij}=\begin{cases} \exp\left(-dist\left(v_i,\ v_j\right)/{\sigma}^2\right) & \text{if}\ i\sim j \\ 0 & \text{otherwise} \end{cases} $$ where i ~ j indicates that the two nodes are connected, and the values of the similarities are collected in a matrix \(W = \{w_{ij}\}\).
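The Gaussian edge-weighting above can be sketched as follows. Squared Euclidean distance is assumed for dist(·,·), a common choice that the text does not state explicitly; the node coordinates and adjacency are toy data.

```python
import numpy as np

def gaussian_edge_weights(points, adjacency, sigma=1.0):
    """Edge weights w_ij = exp(-dist(v_i, v_j) / sigma^2) for connected pairs.

    points    : (n, d) array of node feature vectors.
    adjacency : boolean (n, n) matrix encoding the relation i ~ j.
    """
    diff = points[:, None, :] - points[None, :, :]
    dist = (diff ** 2).sum(axis=-1)       # squared Euclidean distance (assumed)
    W = np.exp(-dist / sigma ** 2)
    W[~adjacency] = 0.0                   # w_ij = 0 when i and j are not connected
    return W

# Two connected nodes one unit apart, plus one disconnected node.
pts = np.array([[0.0], [1.0], [5.0]])
adj = np.array([[False, True, False],
                [True, False, False],
                [False, False, False]])
W = gaussian_edge_weights(pts, adj, sigma=1.0)
```

Connected nodes that are close in feature space get weights near 1, distant ones get weights near 0, and unconnected pairs are exactly 0, matching the case split in the formula.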
Label information can then propagate from a labeled node \(v_i\) to an unlabeled node \(v_j\): when the value of \(w_{ij}\) is large, their outputs are likely to be close. The algorithm outputs an n-dimensional real-valued vector \(f = \left[f_l^T\ f_u^T\right]^T = (f_1, \dots, f_{n_l}, f_{n_l+1}, \dots, f_n)^T\) with \(n = n_l + n_u\). There are two assumptions: a loss function (\(f_i\) should be close to the given label \(y_i\) for labeled nodes) and label smoothness (overall, \(f_i\) should not be too different from \(f_j\) for neighboring nodes). These assumptions are reflected in the value of f by minimizing the following quadratic function: $$ \underset{f}{ \min }\ {\left(f-y\right)}^T\left(f-y\right)+\mu {f}^T L f $$ where \( y={\left[{y}_1, \dots,\ {y}_{n_l},\ 0, \dots,\ 0\right]}^T \) and the matrix L, known as the graph Laplacian matrix, is defined as L = D − W where D = diag(d_i), \( {d}_i={\sum}_j {w}_{ij} \). The parameter μ trades off loss and smoothness. Thus, the solution of this problem becomes $$ f={\left(I+\mu L\right)}^{-1}y $$ Nam, Y., Kim, M., Lee, K. et al. CLASH: Complementary Linkage with Anchoring and Scoring for Heterogeneous biomolecular and clinical data. BMC Med Inform Decis Mak 16, 72 (2016) doi:10.1186/s12911-016-0315-2
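The closed-form solution f = (I + μL)^(-1) y can be computed in a few lines; the chain-graph example below is a hypothetical illustration of how scores decay with graph distance from the labeled (target) node.

```python
import numpy as np

def ssl_scores(W, y, mu=0.5):
    """Closed-form graph-SSL scores f = (I + mu*L)^(-1) y with L = D - W."""
    d = W.sum(axis=1)                       # degrees d_i = sum_j w_ij
    L = np.diag(d) - W                      # unnormalized graph Laplacian
    return np.linalg.solve(np.eye(len(y)) + mu * L, y)

# Chain graph 0 - 1 - 2 with the target disease at node 0 (label '1').
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
f = ssl_scores(W, y=np.array([1.0, 0.0, 0.0]), mu=0.5)
# Scores decay with distance from the labeled node: f[0] > f[1] > f[2] > 0.
```

Using `np.linalg.solve` rather than forming the explicit inverse is the standard numerically stable choice; for the 15,281-node networks in the paper a sparse solver would be used instead.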
Methodology Article Shrinkage Clustering: a fast and size-constrained clustering algorithm for biomedical applications Chenyue W. Hu, Hanyang Li & Amina A. Qutub BMC Bioinformatics volume 19, Article number: 19 (2018) Many common clustering algorithms require a two-step process that limits their efficiency. The algorithms need to be performed repetitively and implemented together with a model selection criterion. These two steps are needed in order to determine both the number of clusters present in the data and the corresponding cluster memberships. As biomedical datasets increase in size and prevalence, there is a growing need for new methods that are more convenient to implement and more computationally efficient. In addition, it is often essential to obtain clusters of sufficient sample size for the clustering result to be meaningful and interpretable in subsequent analysis. We introduce Shrinkage Clustering, a novel clustering algorithm based on matrix factorization that simultaneously finds the optimal number of clusters while partitioning the data. We report its performance across multiple simulated and real datasets, and demonstrate its strength in accuracy and speed when applied to subtyping cancer and brain tissues. In addition, the algorithm offers a straightforward solution to clustering with cluster size constraints. Given its ease of implementation, computational efficiency and extensible structure, Shrinkage Clustering can be applied broadly to solve biomedical clustering tasks, especially when dealing with large datasets. Cluster analysis is one of the most frequently used unsupervised machine learning methods in biomedicine. The task of clustering is to automatically uncover the natural groupings of a set of objects based on some known similarity relationships.
Often employed as a first step in a series of biomedical data analyses, cluster analysis helps to identify distinct patterns in data and suggest classification of objects (e.g. genes, cells, tissue samples, patients) that are functionally similar or related. Typical applications of clustering include subtyping cancer based on gene expression levels [1–3], classifying protein subfamilies based on sequence similarities [4–6], distinguishing cell phenotypes based on morphological imaging metrics [7, 8], and identifying disease phenotypes based on physiological and clinical information [9, 10]. Many algorithms have been developed over the years for cluster analysis [11, 12], including hierarchical approaches [13] (e.g., ward-linkage, single-linkage) and partitional approaches that are centroid-based (e.g., K-means [14, 15]), density-based (e.g., DBSCAN [16]), distribution-based (e.g., Gaussian mixture models [17]), or graph-based (e.g., Normalized Cut [18]). Notably, nonnegative matrix factorization (NMF) has received a lot of attention in application to cluster analysis, because of its ability to solve challenging pattern recognition problems and the flexibility of its framework [19]. NMF-based methods have been shown to be equivalent to a relaxed K-means clustering and Normalized Cut spectral clustering with particular cost functions [20], and NMF-based algorithms have been successfully applied to clustering biomedical data [21]. With few exceptions, most clustering algorithms group objects into a pre-determined number of clusters, and do not inherently look for the number of clusters in the data. Therefore, cluster evaluation measures are often employed and are coupled with clustering algorithms to select the optimal clustering solution from a series of solutions with varied cluster numbers. 
Commonly used model selection methods for clustering, which vary in cluster quality assessment criteria and sampling procedures, include Silhouette [22], X-means [23], Gap Statistic [24], Consensus Clustering [25], Stability Selection [26], and Progeny Clustering [27]. The drawbacks of coupling cluster evaluation with clustering algorithms include (i) computation burden, since the clustering needs to be performed with various cluster numbers and sometimes multiple times to assess the solution's stability; and (ii) implementation burden, since the integration can be laborious if algorithms are programmed in different languages or are available on different platforms. Here, we propose a novel clustering algorithm Shrinkage Clustering based on symmetric nonnegative matrix factorization notions [28]. Specifically, we utilize unique properties of a hard clustering assignment matrix to simplify the matrix factorization problem and to design a fast algorithm that accomplishes the two tasks of determining the optimal cluster number and performing clustering in one. The Shrinkage Clustering algorithm is mathematically straightforward, computationally efficient, and structurally flexible. In addition, the flexible framework of the algorithm allows us to extend it to clustering applications with minimum cluster size constraints. Let X={X1, …, X N } be a finite set of N objects. The task of cluster analysis is to group objects that are similar to each other and separate those that are dissimilar to each other. The completion of a clustering task can be broken down to two steps: (i) deriving similarity relationships among all objects (e.g., Euclidean distance); (ii) clustering objects based on these relationships. The first step is sometimes omitted when the similarity relationships are directly provided as raw data, for example in the case of clustering genes based on their sequence similarities. 
Here, we assume that the similarity relationships were already derived and are available in the form of a similarity matrix SN×N, where S ij ∈[0,1] and S ij =S ji . In the similarity matrix, a larger S ij represents more resemblance in pattern or closer proximity in space between X i and X j , and vice versa. Suppose AN×K is a clustering solution for objects with similarity relationships SN×N. Since we are only considering the case of hard clustering, we have A ik ∈{0,1} and \(\sum _{k=1}^{K} A_{ik} =1\). Specifically, K is the number of clusters obtained, and A ik takes the value of 1 if X i belongs to cluster k and takes the value of 0 if it does not. The product of A and its transpose AT represents a solution-based similarity relationship \(\hat {S}\) (i.e. \(\hat {S} = AA^{T}\)), in which \(\hat {S}_{ij}\) takes the value of 1 when X i and X j are in the same cluster and 0 otherwise. Unlike S ij which can take continuous values between 0 and 1, \(\hat {S}_{ij}\) is a binary representation of the similarity relationships indicated by the clustering solution. If a clustering solution is optimal, the solution-based similarity matrix \(\hat {S}\) should be similar to the original similarity matrix S if not equal. Based on this intuition, we formulate the clustering task mathematically as $$ \begin{aligned} & \underset{A}{\text{min}} \quad\quad\quad\quad ||{S-AA}^{T}||_{F}\\ & \text{subject~to} \quad\quad A_{ik} \in \{0,1\}, \quad {\sum\limits}_{k=1}^{K} {A}_{ik}=1, \quad {\sum\limits}_{i=1}^{N} {A}_{ik} \neq 0 \ . \end{aligned} $$ The goal of clustering is therefore to find an optimal cluster assignment matrix A, which represents similarity relationships that best approximate the similarity matrix S derived from the data. The task of clustering is transformed into a matrix factorization problem, which can be readily solved by existing algorithms. 
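The objective in Function 1 can be checked numerically. The sketch below is our own illustration (not code from the paper): it builds a hard assignment matrix A from a label vector and evaluates the Frobenius-norm objective \(\|S-AA^{T}\|_{F}\) in plain Python.

```python
import math

def assignment_matrix(labels, K):
    """Hard cluster-assignment matrix A (N x K): A[i][k] = 1 iff object i is in cluster k."""
    return [[1 if labels[i] == k else 0 for k in range(K)] for i in range(len(labels))]

def frobenius_objective(S, A):
    """||S - A A^T||_F, the objective of Function 1."""
    N, K = len(S), len(A[0])
    total = 0.0
    for i in range(N):
        for j in range(N):
            # (A A^T)_{ij} is 1 iff objects i and j share a cluster, 0 otherwise
            s_hat = sum(A[i][k] * A[j][k] for k in range(K))
            total += (S[i][j] - s_hat) ** 2
    return math.sqrt(total)

# Ideal block similarity for the partition {0,1} | {2}: the objective is exactly 0.
labels = [0, 0, 1]
A = assignment_matrix(labels, 2)
S = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]
print(frobenius_objective(S, A))  # 0.0
```

When S is exactly binary and block-structured, as in the simulations below, the optimal assignment attains objective zero; any deviation of S from \(AA^{T}\) contributes positively.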
However, most matrix factorization algorithms are generic (not tailored to solving special cases like Function 1), and are therefore computationally expensive. Properties and rationale In this section, we explore some special properties of the objective Function 1 that lay the ground for Shrinkage Clustering. Unlike traditional matrix factorization problems, the solution A we are trying to obtain has special properties, i.e. A ik ∈{0,1} and \(\sum _{k=1}^{K} A_{ik}=1\). This binary property of A greatly simplifies the objective Function 1 as below. $$ \begin{aligned} & \underset{A}{\text{min}} \|S-AA^{T}\|_{F}\\ &\quad = \underset{A}{\text{min}} \sum\limits_{i=1}^{N}\sum\limits_{j=1}^{N} (S_{ij}-A_{i} \bullet A_{j})^{2} \\ &\quad = \underset{A}{\text{min}} \sum\limits_{i=1}^{N} \left(\sum\limits_{j \in \{ j|A_{i}=A_{j} \}} (S_{ij} -1)^{2} + \sum\limits_{j \in \{ j|A_{i}\neq A_{j}\}} S_{ij}^{2}\right) \\ &\quad = \underset{A}{\text{min}} \left(\sum\limits_{i=1}^{N} \sum\limits_{j \in \{ j|A_{i}=A_{j} \}} (1-2S_{ij}) + \sum\limits_{i=1}^{N} \sum\limits_{j=1}^{N} S_{ij}^{2} \right) \end{aligned} $$ Here, A i represents the ith row of A, and the symbol ∙ denotes the inner product of two vectors. Note that A i ∙A j take binary values of either 0 or 1, because A ik ∈{0,1} and \(\sum _{k=1}^{K} A_{ik} =1\). In addition, \(\sum _{i=1}^{N} \sum _{j=1}^{N} S_{ij}^{2}\) is a constant that does not depend on the clustering solution A. Based on this simplification, we can reformulate the clustering problem as $$ \begin{aligned} &\underset{A}{\text{min}} f(A) = \sum\limits_{i=1}^{N}\sum\limits_{j \in \{ j|A_{i}=A_{j} \}} \left(1-2S_{ij}\right).\\ \end{aligned} $$ Let's now consider how the value of the objective Function 2 changes when we change the cluster membership of an object X i . Suppose we start with a clustering solution A, in which X i belongs to cluster k (A ik =1). 
When we change the cluster membership of X i from k to k′ with the rest remaining the same, we would obtain a new clustering solution A′, in which \(A^{\prime }_{ik'}=1\) and \(A^{\prime }_{ik}=0\). Since S is symmetric (i.e. S ij =S ji ), the value change of the objective Function 2 is $$ \begin{aligned} \bigtriangleup f_{i} &:= f(A')-f(A) \\ & =\sum\limits_{j \in k'} \left(1-2S_{ij}\right)-\sum\limits_{j \in k}\left(1-2S_{ij}\right) + \sum\limits_{j \in k'}\left(1-2S_{ji}\right)\\ &\quad - \sum\limits_{j \in k}\left(1-2S_{ji}\right) \\ &=2\left(\sum\limits_{j \in k'} \left(1-2S_{ij}\right)-\sum\limits_{j \in k}\left(1-2S_{ij}\right)\right) \ \ \ . \end{aligned} $$ Shrinkage clustering: Base algorithm Based on the simplified objective Function 2 and its properties with cluster changes (Function 3), we designed a greedy algorithm Shrinkage Clustering to rapidly look for a clustering solution A that factorizes a given similarity matrix S. As described in Algorithm 1, Shrinkage Clustering begins by randomly assigning objects to a sufficiently large number of initial clusters. During each iteration, the algorithm first removes any empty clusters generated from the previous iteration, a step that gradually shrinks the number of clusters; then it permutes the cluster membership of the object that most minimizes the objective function. The algorithm stops when the solution converges (i.e. no cluster membership permutation can further minimize the objective function), or when a pre-specified maximum number of iterations is reached. Shrinkage Clustering is guaranteed to converge to a local optimum (see Theorem 1 below). The main and advantageous feature of Shrinkage Clustering is that it shrinks the number of clusters while finding the clustering solution. 
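Algorithm 1 together with the update rule of Function 3 can be sketched as follows. This is a minimal illustrative implementation under our own design choices (seeded random initialization, an optional `init` argument for deterministic starts, first-found tie-breaking among equally good moves, and self-pairs excluded from the per-cluster sums, which shifts every candidate Δf by the same constant), not the authors' code.

```python
import random

def shrinkage_clustering(S, k0=10, max_iter=1000, seed=0, init=None):
    """Sketch of the base Shrinkage Clustering algorithm (Algorithm 1).

    S: symmetric similarity matrix with entries in [0, 1].
    Starts from k0 random clusters, then repeatedly (i) drops empty clusters and
    (ii) applies the single membership change with the most negative Delta f
    (Function 3), until no change lowers the objective or max_iter is reached.
    """
    rng = random.Random(seed)
    N = len(S)
    labels = list(init) if init is not None else [rng.randrange(k0) for _ in range(N)]
    for _ in range(max_iter):
        # Step (i): remove empty clusters by compacting the label set.
        used = sorted(set(labels))
        relabel = {c: r for r, c in enumerate(used)}
        labels = [relabel[c] for c in labels]
        K = len(used)
        # Step (ii): find the membership change with the largest decrease in f.
        best = (0.0, None, None)          # (delta_f, object, target cluster)
        for i in range(N):
            # T[c] = sum over j in cluster c (j != i) of (1 - 2*S_ij)
            T = [0.0] * K
            for j in range(N):
                if j != i:
                    T[labels[j]] += 1 - 2 * S[i][j]
            for k2 in range(K):
                if k2 != labels[i]:
                    delta = 2 * (T[k2] - T[labels[i]])   # Function 3
                    if delta < best[0]:
                        best = (delta, i, k2)
        if best[1] is None:               # converged: no move lowers the objective
            break
        labels[best[1]] = best[2]
    used = sorted(set(labels))
    relabel = {c: r for r, c in enumerate(used)}
    return [relabel[c] for c in labels]

# Ideal similarity for the partition {0,1} | {2,3}; a deliberately wrong start
# [0,1,0,1] is repaired in two moves and the two true clusters are recovered.
S = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
print(shrinkage_clustering(S, init=[0, 1, 0, 1]))  # [1, 1, 0, 0]
```

Note that the algorithm only shrinks: once two clusters have merged, they cannot split again, which is also why the size-constrained variant below cannot recover more clusters than it starts with.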
During the process of permuting cluster memberships to minimize the objective function, clusters automatically collapse and become empty until the optimization process is stabilized and the optimal cluster memberships are found. The number of clusters remaining in the end is the optimal number of clusters, since it stabilizes the final solution. Therefore, Shrinkage Clustering achieves both tasks of (i) finding the optimal number of clusters and (ii) finding the clustering memberships. Theorem 1 Shrinkage Clustering monotonically converges to a (local) optimum. We first demonstrate the monotonically decreasing property of the objective Function 2 in each iteration of the algorithm. There are two steps taken in each iteration: (i) removal of empty clusters; and (ii) permutation of cluster memberships. Step (i) does not change the value of the objective function, because the objective function only depends on non-empty clusters. On the other hand, step (ii) always lowers the objective function, since a cluster membership permutation is chosen based on its ability to achieve the greatest minimization of the objective function. Combining steps (i) and (ii), it is obvious that the value of the objective function monotonically decreases with each iteration. Since \(\|S-AA^{T}\|_{F} \geq 0\) and \(\big \|S-AA^{T}\big \|_{F}=\sum _{i=1}^{N} \sum _{j \in \{ j|A_{i}=A_{j} \}} \left (1-2S_{ij}\right) + \sum _{i=1}^{N} \sum _{j=1}^{N} S_{ij}^{2}\), the objective function has a lower bound of \(-\sum _{i=1}^{N} \sum _{j=1}^{N} S_{ij}^{2}\). Therefore, a convergence to a (local) optimum is guaranteed, because the algorithm is monotonically decreasing with a lower bound. □ Shrinkage clustering with cluster size constraints It is well-known that K-means can generate empty clusters when clustering high-dimensional data with over 20 clusters, and Hierarchical Clustering often generates tiny clusters with few samples.
In practice, clusters of too small a size can sometimes be full of outliers, and they are often not preferred in cluster interpretation since most statistical tests do not apply to small sample sizes. Though extensions to K-means were proposed to solve this issue [29], the attempt to control cluster sizes has not been easy. In contrast, the flexibility and the structure of Shrinkage Clustering offer a straightforward and rapid solution to enforcing constraints on cluster sizes. To generate a clustering solution with each cluster containing at least ω objects, we can simply modify Step 1 of the iteration loop in Algorithm 1. Instead of removing empty clusters in the beginning of each iteration, we now remove clusters of sizes smaller than a pre-specified size ω. The base algorithm (Algorithm 1) can be viewed as a special case of ω=0 in the size-constrained Shrinkage Clustering algorithm. Experiments on similarity data Testing with simulated similarity matrices We first use simulated similarity matrices to test the performance of Shrinkage Clustering and to examine its sensitivity to the initial parameters and noise. As a proof of concept, we generate a similarity matrix S directly from a known cluster assignment matrix A by S=AAT. Here, the cluster assignment matrix A100×5 is randomly generated to consist of 100 objects grouped into 5 clusters with unequal cluster sizes (i.e. 15, 17, 20, 24 and 24 respectively). The similarity matrix S100×100 generated from the product of A and AT therefore represents an ideal case, where there is no noise, since each entry of S only takes a binary value of either 0 or 1. We apply Shrinkage Clustering to this simulated similarity matrix S with 20 initial random clusters and repeat the algorithm 1000 times.
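The simulated similarity matrix just described can be constructed directly from known cluster sizes. The sketch below is our own reconstruction of the setup (random object order is an assumption; the paper only specifies the cluster sizes and S = AAT):

```python
import random

def simulate_similarity(cluster_sizes, seed=0):
    """Noise-free similarity matrix S = A A^T from known cluster sizes
    (sizes 15, 17, 20, 24 and 24 reproduce the paper's 100 x 100 setting)."""
    rng = random.Random(seed)
    labels = [k for k, size in enumerate(cluster_sizes) for _ in range(size)]
    rng.shuffle(labels)  # random object order, so clusters are not contiguous blocks
    N = len(labels)
    # S_ij = 1 iff objects i and j belong to the same true cluster
    S = [[1 if labels[i] == labels[j] else 0 for j in range(N)] for i in range(N)]
    return S, labels

S, true_labels = simulate_similarity([15, 17, 20, 24, 24])
print(len(S), len(S[0]))  # 100 100
```

Because every entry of this S is exactly 0 or 1, any algorithm recovering the true partition attains the global minimum of the objective, which is what makes this a clean proof-of-concept case.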
Each run, the algorithm accurately generates 5 clusters with cluster assignments \(\tilde {A}\) in perfect match with the true cluster assignments A (an example shown in Table 1 under ω=0), demonstrating the algorithm's ability to perfectly recover the cluster assignments in a non-noisy scenario. The shrinkage paths of the first 5 runs (Fig. 1a) illustrate that most runs start around a number of 20 clusters, and all of them shrink down gradually to a final number of 5 clusters when the solution reaches an optimum. Performances of the base algorithm on simulated similarity data. Shrinkage paths plot changes in cluster numbers through the entire iteration process. a The first five shrinkage paths from the 1000 runs (with 20 initial random clusters) are illustrated. b Example shrinkage paths are shown from initiating the algorithm with 5, 10, 20, 50 and 100 random clusters Table 1 Clustering results of simulated similarity matrices with varying size constraints (ω), where C is the cluster generated by Shrinkage Clustering To examine whether Shrinkage Clustering is able to accurately identify imbalanced cluster structures, we generate an alternative version of A100×5 with great differences in cluster sizes (i.e. 2, 3, 10, 35 and 50). We run the algorithm with the same parameters as before (20 initial random clusters repeated for 1000 times). The algorithm generates 5 clusters with the correct cluster assignment in every run, showing its ability to accurately find the true cluster number and true cluster assignments in data with imbalanced cluster sizes. We then test whether the algorithm is sensitive to the initial number of clusters (K0) by running it with K0 ranging from 5 (true number of clusters) to 100 (maximum number of clusters). In each case, the true cluster structure is recovered perfectly, demonstrating the robustness of the algorithm to different initial cluster numbers. The shrinkage paths in Fig. 
1b clearly show that in spite of starting with various initial numbers of clusters, all paths converge to the same number of clusters at the end. Next, we investigate the effects of size constraints on Shrinkage Clustering's performance by varying ω from 1 to 5, 10, 20 and 25. The algorithm is repeated 50 times in each case. We find that as long as ω is smaller than the true minimum cluster size (i.e. 15), the size constrained algorithm can perfectly recover the true cluster assignments A in the same way as the base algorithm. Once ω exceeds the true minimum cluster size, clusters are forced to merge and therefore result in a smaller number of clusters (example clustering solutions of ω=20 and ω=25 shown in Table 1). In these cases, it is impossible to find the true cluster structure because the algorithm starts off with fewer clusters than the true number of clusters and it works uni-directionally (i.e. only shrinks). Besides enabling supervision on the cluster sizes, size-constrained Shrinkage Clustering is also computationally advantageous. Figure 2a shows that a larger ω results in fewer iterations needed for the algorithm to converge, and the effect reaches a plateau once ω reaches certain sizes (e.g. ω=10 in this case). The shrinkage paths (Fig. 2b) show that it is the reduced number of iterations at the beginning of a run that speeds up the entire process of solution finding when ω is large. Performances of Shrinkage Clustering with cluster size constraints. a The average number of iterations spent is plotted with ω taking values of 1 to 5, 10, 15, 20 and 25. b Example shrinkage paths are shown for ω of 1 to 5, 10, 15, 20 and 25 (path of ω=10 is in overlap with ω=15) In reality, it is rare to find a perfectly binary similarity matrix similar to what we generated from a known cluster assignment matrix. There is always a certain degree of noise clouding our observations. 
To investigate how much noise the algorithm can tolerate in the data, we add a layer of Gaussian noise over the simulated similarity matrix. Since S ij ∈{0,1}, we create a new similarity matrix SN containing noise defined by $$S_{ij}^{N} = \left\{ \begin{array}{ll} |\varepsilon_{ij}| & \quad \text{if}\ S_{ij}=0 \\ 1-|\varepsilon_{ij}| & \quad \text{if}\ S_{ij}=1 \end{array}, \right. $$ where ε ij ∼N(0,σ2). The standard deviation σ is varied from 0 to 0.5, and SN is generated 1000 times by randomly sampling ε ij with each σ value. Figure 3a illustrates the changes of the similarity distribution density as σ increases. When σ=0 (i.e. no noise), SN is Bernoulli distributed. As σ grows, the bimodal shape is flattened by noise. When σ=0.5, approximately 32% of the similarity relationships are reversed, and hence observations have been perturbed too much to infer the underlying cluster structure. The performances of Shrinkage Clustering in these noisy conditions are shown in Fig. 3b. The algorithm proves to be quite robust against noise, as the true cluster structure is 100% recovered in all conditions except when σ>0.4. Robustness of Shrinkage Clustering against noise. a The distribution density of SN is shown with a varying degree of noise, as ε is sampled with σ from 0 to 0.5. b The probability of successfully recovering the underlying cluster structure is plotted against different noise levels. The true cluster recovery is defined as the frequency of generating the exact same cluster assignment as the true cluster assignment when clustering the data with noise generated 1000 times Case Study: TCGA Dataset To illustrate the performance of Shrinkage Clustering on real biological similarity data, we apply the algorithm to subtyping tumors from the Cancer Genome Atlas (TCGA) dataset [30].
Derived from the TCGA database, the dataset includes 293 samples from 3 types of cancers, which are Breast Invasive Carcinoma (BRCA, 207 samples), Glioblastoma Multiforme (GBM, 67 samples) and Lung Squamous Cell Carcinoma (LUSC, 19 samples). The data is presented in the form of a similarity matrix, which integrates information from the gene expression levels, DNA methylation and copy number aberration. Since the similarity scores from the TCGA dataset are in general skewed to 1, we first normalize the data by shifting its median around 0.5 and by bounding values that are greater than 1 and smaller than 0 to 1 and 0 respectively. We then perform Shrinkage Clustering to cluster the cancer samples, the result of which is shown in comparison to the true cancer types (Table 2). We can see that the algorithm generates three clusters, successfully predicting the true number of cancer types contained in the data. The clustering assignments also demonstrate high accuracy, as 98% of samples are correctly clustered with only 5 samples misclassified. In addition, we compare the performance of Shrinkage Clustering to that of five commonly used clustering algorithms that directly cluster similarity data: Spectral Clustering [31], Hierarchical Clustering [13] (Ward's method [32]), PAM [33], AGNES [34], and SymNMF [28]. Since these five methods do not determine the optimal cluster number, the mean Silhouette [22] width is used to pick the optimal cluster number from a range of 2 to 10 clusters. Notably, Shrinkage Clustering is one of the two algorithms that estimate a three-cluster structure (the other being AGNES), and its accuracy outperforms the rest (Table 5).
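The normalization described above (median shifted to 0.5, values clipped to [0, 1]) can be sketched as follows. The paper does not spell out how the median shift is performed, so a simple additive shift is assumed here; this is an illustration, not the authors' preprocessing code.

```python
def normalize_similarity(S):
    """Shift the matrix median to 0.5 (additive shift assumed), then clip to [0, 1]."""
    flat = sorted(v for row in S for v in row)
    n = len(flat)
    median = flat[n // 2] if n % 2 else (flat[n // 2 - 1] + flat[n // 2]) / 2
    shift = 0.5 - median
    # Bound values greater than 1 to 1 and smaller than 0 to 0, as in the text.
    return [[min(1.0, max(0.0, v + shift)) for v in row] for row in S]

# Toy similarity scores skewed toward 1, as described for the TCGA data.
S = [[1.0, 0.9, 0.8], [0.9, 1.0, 0.7], [0.8, 0.7, 1.0]]
T = normalize_similarity(S)
```

After the shift, half of the entries lie below 0.5 and half above, which undoes the skew toward 1 before clustering.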
Table 2 Clustering results of the TCGA dataset, where the clustering assignments from Shrinkage Clustering are compared against the three known tumor types Experiments on feature-based data Testing with simulated and standardized data Since similarity matrices are not always available in most clustering applications, we now test the performance of Shrinkage Clustering using feature-based data that does not directly provide the similarity information between objects. To run Shrinkage Clustering, we first convert the data to a similarity matrix using S= exp(−(D(X)/(βσ))2), where [D(X)] ij is the Euclidean distance between X i and X j , σ is the standard deviation of D(X), and β=E(D(X)2)/σ2. The same conversion method is used for all datasets in the rest of this paper. As a proof of concept, we first generate a simulated three-cluster two-dimensional data set by sampling 50 points for each cluster from bivariate normal distributions with a common identity covariance matrix around centers at (-2, 2), (-2, 2) and (0, 2) respectively. The clustering result from Shrinkage Clustering is shown in Table 3, where the algorithm successfully determines the existence of 3 clusters in the data and obtains a clustering solution with high accuracy. Table 3 Performances of Shrinkage Clustering on Simulated, Iris and Wine data, where the clustering assignments are compared against the three simulated centers, three Iris species and three wine types respectively Next, we test the performance of Shrinkage Clustering using two real data sets, the Iris [35] and the wine data [36], both of which are frequently used to test clustering algorithms; and they can be downloaded from the University of California Irvine (UCI) machine learning repository [37]. The clustering results from Shrinkage Clustering for both datasets are shown in Table 3, where the clustering assignments are compared to the true cluster memberships of the Iris and the wine samples respectively. 
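The feature-to-similarity conversion S = exp(−(D(X)/(βσ))2) used above can be sketched in a few lines. The text does not specify whether σ and E(·) are taken over all N2 distance entries or only the off-diagonal ones; the version below assumes all entries and a population standard deviation.

```python
import math

def similarity_from_features(X):
    """S_ij = exp(-(D_ij / (beta * sigma))^2), where D is the Euclidean distance
    matrix, sigma its standard deviation, and beta = E(D^2) / sigma^2."""
    N = len(X)
    D = [[math.dist(X[i], X[j]) for j in range(N)] for i in range(N)]
    flat = [d for row in D for d in row]
    mean = sum(flat) / len(flat)
    sigma = math.sqrt(sum((d - mean) ** 2 for d in flat) / len(flat))
    beta = (sum(d * d for d in flat) / len(flat)) / (sigma ** 2)
    return [[math.exp(-(D[i][j] / (beta * sigma)) ** 2) for j in range(N)] for i in range(N)]

# Three 2-D points: the two nearby points end up more similar than the far pair.
X = [[0.0, 0.0], [0.0, 1.0], [3.0, 4.0]]
S = similarity_from_features(X)
```

The resulting S is symmetric with ones on the diagonal and values decaying with distance, i.e. a valid input for the similarity-based algorithm above.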
In application to the wine data, Shrinkage Clustering successfully identifies the correct number of 3 wine types and produces highly accurate cluster memberships. For the Iris data, though the algorithm generates two instead of three clusters, the result is acceptable because the species versicolor and virginica are known to be hardly distinguishable given the features collected. Case study 1: Breast Cancer Wisconsin Diagnostic (BCWD) The BCWD dataset [38, 39] contains 569 breast cancer samples (357 benign and 212 malignant) with 30 characteristic features computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. The dataset is available on the UCI machine learning repository [37] and is one of the most frequently tested datasets for clustering and classification. Here, we apply Shrinkage Clustering to the data and compare its performance against nine commonly used clustering methods: Spectral Clustering [31], K-means [14], Hierarchical Clustering [13] (Ward's method [32]), PAM [33], DBSCAN [16], Affinity Propagation [40], AGNES [34], clusterdp [41], and SymNMF [28]. Since K-means, Spectral Clustering, Hierarchical Clustering, PAM, AGNES and SymNMF do not inherently determine the optimal cluster number and require the cluster number as an input, we first run these algorithms with cluster numbers from 2 to 10, and then use the mean Silhouette width as the criterion to select the optimal cluster number. For algorithms that internally select the optimal cluster number (i.e. DBSCAN, Affinity Propagation and clusterdp), we tune the parameters to generate clustering solutions with cluster numbers similar to the true cluster numbers so that the accuracy comparison is less biased. The parameter values for each algorithm are specified in Table 4. For DBSCAN, the clustering memberships of non-noise samples are used for assessing accuracy.
The accuracy of all clustering solutions is evaluated using four metrics: Normalized Mutual Information (NMI) [42], Rand Index [42], F1 score [42], and the optimal cluster number (K). Table 4 Parameter values of DBSCAN, Affinity Propagation and clusterdp Table 5 Performance comparison of ten algorithms on six biological data sets, i.e. TCGA, BCWD, Dyrskjot-2003, Nutt-2003-v1, Nutt-2003-v3 and AIBT The performance results (Table 5) show that Shrinkage Clustering correctly predicts a 2-cluster structure from the data and generates the clustering assignments with high accuracy. When comparing the cluster assignments against the true cluster memberships, we can see that Shrinkage Clustering is among the top three best performers across all accuracy metrics. Case study 2: Benchmarking gene expression data for cancer subtyping Next, we test the performance of Shrinkage Clustering as well as the nine commonly used algorithms in application to identifying cancer subtypes using three benchmarking datasets from de Souto et al. [43]: Dyrskjot-2003 [44], Nutt-2003-v1 [45] and Nutt-2003-v3 [45]. Dyrskjot-2003 contains the expression levels of 1203 genes in 40 well-characterized bladder tumor biopsy samples from three subclasses of bladder carcinoma: T2+ (9 samples), Ta (20 samples), and T1 (11 samples). Nutt-2003-v1 contains the expression levels of 1377 genes in 50 gliomas from four subclasses: classic glioblastomas (14 samples), classic anaplastic oligodendrogliomas (7 samples), nonclassic glioblastomas (14 samples), and nonclassic anaplastic oligodendrogliomas (15 samples). Nutt-2003-v3 is a subset of Nutt-2003-v1, containing 7 samples of classic anaplastic oligodendrogliomas and 15 samples of nonclassic anaplastic oligodendrogliomas with the expression of 1152 genes. All three data sets are small in sample sizes and high in dimensions, which is often the case in clinical research.
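Of the accuracy metrics used in these comparisons, the Rand Index is the simplest to compute by hand: it is the fraction of object pairs on which two clusterings agree. The sketch below is our own illustration of the standard (unadjusted) pair-counting definition, not code from [42]:

```python
from itertools import combinations

def rand_index(a, b):
    """Unadjusted Rand index: fraction of object pairs on which two clusterings
    agree (both same-cluster, or both different-cluster)."""
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0 -- identical partitions up to relabeling
```

Because only pair co-membership matters, the metric is invariant to cluster relabeling, which is exactly what is needed when the algorithm's cluster labels carry no intrinsic meaning.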
The performance of all ten algorithms is compared using the same metrics as in the previous case study, and the result is shown in Table 5. Though there is no clear winning algorithm across all data sets, Shrinkage Clustering is among the top three performers in all cases, along with other top performing algorithms such as SymNMF, K-means and DBSCAN. Since the clustering results from DBSCAN are compared to the true cluster assignments excluding the noise samples, the accuracy of DBSCAN may be slightly overestimated. Case Study 3: Allen Institute Brain Tissue (AIBT) The AIBT dataset [46] contains RNA sequencing data of 377 samples from four types of brain tissues, i.e. 99 samples of temporal cortex, 91 samples of parietal cortex, 93 samples of cortical white matter, and 94 samples of hippocampus, isolated by macro-dissection. For each sample, the expression levels of 50282 genes are included as features, and each feature is normalized to have a mean of 0 and a standard deviation of 1 prior to testing. In contrast to the previous case study, the AIBT data is much larger in size with significantly more features being measured. Therefore, this would be a great example to test both the accuracy and the speed of clustering algorithms in the face of greater data sizes and higher dimensions. Similar to the previous case studies, we apply Shrinkage Clustering and the nine commonly used clustering algorithms to the data, and use mean Silhouette width to select the optimal cluster number for algorithms that do not inherently determine the cluster number. The performances of all ten algorithms measured across the four accuracy metrics (i.e. NMI, Rand, F1, K) are shown in Table 5. We can see that Shrinkage Clustering is the second best performer among all ten algorithms in terms of clustering quality, with comparable accuracy to the top performer (K-means). Next, we record and compare the speed of the ten algorithms for clustering the data. The speed comparison results, shown in Fig.
4, demonstrate the unparalleled speed of Shrinkage Clustering compared to the rest of the algorithms. Compared to algorithms that automatically select the optimal number of clusters (DBSCAN, Affinity Propagation and clusterdp), Shrinkage Clustering is twice as fast; compared to algorithms that are coupled with external cluster validation algorithms for cluster number selection, Shrinkage Clustering is at least 14 times faster. In particular, the same data that takes Shrinkage Clustering only 73 s to cluster can take Spectral Clustering more than 20 h. Speed comparison using the AIBT data. The computation time of Shrinkage Clustering is recorded and compared against other commonly used clustering algorithms From the biological case studies, we showed that Shrinkage Clustering is computationally advantageous in speed, with clustering accuracy comparable to the top performing algorithms and higher than that of algorithms that internally select cluster numbers. The advantage in speed mainly comes from the fact that Shrinkage Clustering integrates the clustering of the data and the determination of the optimal cluster number into one seamless process, so the algorithm only needs to run once in order to complete the clustering task. In contrast, algorithms like K-means, PAM, Spectral Clustering, AGNES and SymNMF perform clustering on a single cluster number basis; therefore, they need to be run repeatedly for all cluster numbers of interest before a clustering evaluation method can be applied. Notably, the clustering evaluation method Silhouette that we used in this experiment does not perform any repetitive clustering validation and therefore is a much faster method compared to other commonly used methods that require repetitive validation [27].
This means that Shrinkage Clustering would have an even greater advantage in computation speed compared to the methods tested in this paper if we use a cluster evaluation method that has a repetitive nature (e.g. Consensus Clustering, Gap Statistic, Stability Selection). One prominent feature of Shrinkage Clustering is its flexibility to add the constraint of minimum cluster sizes. The size constraints can help prevent generating empty or tiny clusters (which are often observed in Hierarchical Clustering and sometimes in K-means applications), and can produce clusters of sufficiently large sample sizes as required by the user. This is particularly useful when we need to perform subsequent statistical analyses based on the clustering solution, since clusters of too small a size can make statistical testing infeasible. For example, one application of cluster analysis in clinical studies is identifying subpopulations of cancer patients based on their gene expression levels, which is usually followed by a survival analysis to determine the prognostic value of the gene expression patterns. In this case, clusters that contain too few patients can hardly generate any significant or meaningful patient outcome comparison. In addition, it is difficult to take actions based on tiny patient clusters (e.g. in the context of designing clinical trials), because these clusters are hard to validate. Since adding minimum size constraints is essentially merging tiny clusters into larger ones and might result in less homogeneous clusters, this approach is unfavorable if the researcher wishes to identify the outliers in the data or to obtain more homogeneous clusters. In these scenarios, we would recommend using the base algorithm without adding the minimum size constraint. Despite its superior speed and high accuracy, Shrinkage Clustering has a couple of limitations. First, the automatic convergence to an optimal cluster number is a double-edged sword.
This feature helps to determine the optimal cluster number and speeds up the clustering process dramatically; however, it can be unfavorable when the researcher has a desired cluster number in mind that is different from the cluster number identified by the algorithm. Second, the algorithm is based on the assumption of hard clustering; therefore, it currently does not provide probabilistic frameworks such as those offered by soft clustering. In addition, due to the similarity between SymNMF and K-means, the algorithm likely prefers spherical clusters if the similarity matrix is derived from Euclidean distances. Interesting future research directions include exploring and extending the capability of Shrinkage Clustering to identify oddly-shaped clusters, to deal with missing data or incomplete similarity matrices, as well as to handle semi-supervised clustering tasks with must-link and cannot-link constraints. In summary, we developed a new NMF-based clustering method, Shrinkage Clustering, which shrinks the number of clusters to an optimum while simultaneously optimizing the cluster memberships. The algorithm performed with high accuracy on both simulated and actual data, exhibited excellent robustness to noise, and demonstrated superior speeds compared to some of the commonly used algorithms. The base algorithm has also been extended to accommodate requirements on minimum cluster sizes, which can be particularly beneficial to clinical studies and the general biomedical community. Sørlie T, Tibshirani R, Parker J, Hastie T, Marron J, Nobel A, et al. Repeated observation of breast tumor subtypes in independent gene expression data sets. Proc Natl Acad Sci. 2003; 100(14):8418–23. Wirapati P, Sotiriou C, Kunkel S, Farmer P, Pradervand S, Haibe-Kains B, et al. Meta-analysis of gene expression profiles in breast cancer: toward a unified understanding of breast cancer subtyping and prognosis signatures. Breast Cancer Res. 2008; 10(4):R65.
This research was funded in part by NSF CAREER 1150645 and NIH R01 GM106027 grants to A.A.Q., and an HHMI Med-into-Grad fellowship to C.W. Hu. The datasets used in this study are publicly available (see references in the text where each dataset is first introduced).

Author information: Chenyue W. Hu, Hanyang Li & Amina A. Qutub — Department of Bioengineering, Rice University, Main Street, Houston, 77030, USA. Method conception and development: CWH; method testing and manuscript writing: CWH, HL, AAQ; study supervision: AAQ. All authors read and approved the final manuscript. Correspondence to Amina A. Qutub.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Citation: Hu, C., Li, H. & Qutub, A. Shrinkage Clustering: a fast and size-constrained clustering algorithm for biomedical applications. BMC Bioinformatics 19, 19 (2018). https://doi.org/10.1186/s12859-018-2022-8

Keywords: Matrix factorization, Cancer subtyping
Local well-posedness of the EPDiff equation: A survey

Boris Kolev, Aix Marseille Université, CNRS, Centrale Marseille, I2M, UMR 7373, 13453 Marseille, France

Received December 2015; Revised August 2016; Published May 2017

This article is a survey of the local well-posedness problem for the general EPDiff equation. The main contribution concerns recent results on the local existence of geodesics on $\operatorname{Diff}^{\infty}(\mathbb{T}^{d})$ and $\operatorname{Diff}^{\infty}(\mathbb{R}^{d})$ when the inertia operator is a non-local Fourier multiplier.

Keywords: EPDiff equation, diffeomorphism groups, Sobolev metrics of fractional order.

Mathematics Subject Classification: Primary: 58D05; Secondary: 35Q35.

Citation: Boris Kolev. Local well-posedness of the EPDiff equation: A survey. Journal of Geometric Mechanics, 2017, 9 (2): 167-189. doi: 10.3934/jgm.2017007
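For reference, the general EPDiff equation discussed in this survey is commonly written as follows, where $u(t,\cdot)$ is the time-dependent velocity field, $A$ is the inertia operator, and $m = Au$ is the momentum (this is the standard form found in the EPDiff literature, not a quotation from the survey itself):

$$
\partial_t m + u \cdot \nabla m + (\nabla u)^{T} m + (\operatorname{div} u)\, m = 0, \qquad m = A u .
$$

The choice of inertia operator $A$ (here, a non-local Fourier multiplier) determines which right-invariant metric on the diffeomorphism group the equation is the geodesic flow of.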
Is it possible to reach the Sun without expending any fuel/reaction mass?

Imagine that I'm designing a space probe that will initially be placed into an Earth-like orbit around the Sun. My goal is to have the probe fly/fall into the Sun; it's allowed to take as much time as it needs to do that, as long as it gets there eventually. Now the hard part: since the brute-force rocketry approach would take a lot of fuel, I'd like my probe to be able to accomplish its goal without using any kind of rockets (except the ones required to get it to its starting position, of course). Is there a way to do this, e.g. by using a solar sail and tacking against the solar wind? If so, how long might that take? (I know the Parker Solar Probe is using Venus' gravity to help it along, but presumably that technique requires an initial trajectory that will bring it near Venus, which my probe won't have.)

Tags: orbital-mechanics, solar-sail, the-sun — asked by Jeremy Friesner
Optimal sail angle This is minor addition (because it's constant at all distances), but to add in the derivation: The power you get from the sail is the total radiation incident on the sail times the component of the momentum change in the v-bar direction. $$ P = I \Delta p_v$$ The inbound intensity is proportional to the cosine of the sail angle, while the v-bar component of the reflected light is proportional to the sine of double the angle. $$ P = \cos(\theta) \sin(2\theta)$$ Maximum will be found at a root of the derivative $$\frac{dP}{d\theta} = 2\cos(\theta) \cos(2\theta) - \sin(\theta) \sin(2 \theta)$$ $$ 2\cos(\theta) \cos(2\theta) = \sin(\theta) \sin(2\theta)$$ Via Wolfram $$ \theta = 2 \pi - 2 \tan^{-1}\left( \sqrt {5 - 2\sqrt{6}} \right) = 35.26^\circ$$ BowlOfRedBowlOfRed $\begingroup$ "power is much higher there" But so is the energy difference. I haven't done the math - that could be another question. $\endgroup$ – Keith McClary Aug 25 '18 at 2:22 $\begingroup$ @KeithMcClary good idea! What is the functional form for r(t) for a solar-sail deorbit into the Sun? $\endgroup$ – uhoh Aug 25 '18 at 2:50 $\begingroup$ As a side note- you'd potentially be able to get to the Sun faster by increasing orbital energy at first then getting a Jupiter assist to kill most of your orbital speed and maybe a final assist on the way back in to target the Sun. This was what Parker Solar Probe's original mission plan was! $\endgroup$ – Jack Aug 25 '18 at 7:23 $\begingroup$ @Jack, true for a chemical burn, but probably not here. The solar power available gets pretty weak out by Jupiter. It would take a really long time to get to Jupiter by spiraling out with solar only. The Wiki page suggests Mars and Mercury are reachable in similar times. $\endgroup$ – BowlOfRed Aug 25 '18 at 7:46 $\begingroup$ Why do you assume it must reach Jupiter by spiraling out? You want a high speed encounter, not a rendezvous. 
"Burn" prograde while near the sun to raise your apoapsis, "burn" retrograde while far out to lower your periapsis. Most of your acceleration occurs much closer in than Jupiter. $\endgroup$ – Loren Pechtel Aug 26 '18 at 21:45 Not the answer you're looking for? Browse other questions tagged orbital-mechanics solar-sail the-sun or ask your own question. What is the functional form for r(t) for a solar-sail deorbit into the Sun? Could solar sails be used in station keeping? What is the optimal angle for a solar-sail deorbit towards the Sun when radial thrust is included? What is the most stable configuration for a centrifugal ship towed by a solar sail? Has a solar sail been used on an orbital mission? Can you reach the Sun by only thrusting horizontally? How would a Jupiter flyby have helped to get to the Sun? Why was it later ruled out? How much less delta-v would it take to reach the Sun using Venus and Earth flyby's compared to direct? Could a solar sail composed of smart glass stay near the L1 point of Venus?
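The optimal sail angle derived in the answer above is easy to confirm numerically by brute force (a quick sanity check, not part of the original post):

```python
import math

# Brute-force maximization of P(theta) = cos(theta) * sin(2*theta)
# on a 0.01-degree grid over (0, 90) degrees.
best = max(range(1, 9000),
           key=lambda i: math.cos(math.radians(i / 100))
                       * math.sin(2 * math.radians(i / 100)))
print(best / 100)                                  # -> 35.26
print(math.degrees(math.atan(1 / math.sqrt(2))))   # ~35.264, i.e. arctan(1/sqrt(2))
```

Both lines agree with the closed-form answer of about 35.26 degrees.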
Sending ICs to Space: An Isolated Error Amplifier from Analog Devices
July 19, 2018 by Mark Hughes

The ADuM3190S is an isolated error amplifier intended for use with linear feedback power supplies. The transfer function of this device is stable over a wide temperature range and over the lifetime of the device.

What is an error amplifier? How do we test components for use in space? In this article, we'll take a look at a recent isolated error amplifier from Analog Devices and try to answer these questions along the way.

The Analog Devices ADuM3190S is an isolated error amplifier that provides fast transient response for linear feedback power supplies. The datasheet states that it is designed to provide better transient response, power density, and stability than optocoupler or shunt-regulator solutions. Let's take a closer look.

What Is an Error Amplifier and Why Do I Need One?

There are many ICs that allow designers to convert from one DC voltage to another. Negative feedback is a fundamental technique in voltage regulation because it allows the regulator to establish a certain output voltage and maintain this voltage despite changes in load current. The error amplifier is the central component in this negative-feedback system: it compares the output voltage to a known reference and thereby generates an error signal that allows the circuit to make the necessary corrections.

Image of output ripple from TI

Switching regulators typically work by temporarily storing energy in the magnetic field of an inductor. Inductors oppose changes in magnetic flux and establish currents to maintain that flux. Charges rush into and out of these storage devices, and the changing potentials are filtered through a capacitor that ultimately provides the constant potential difference and current required at the output.
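As a toy illustration of the threshold-based switching described in this section, the sketch below simulates a bang-bang (hysteretic) regulator. All numbers are hypothetical and chosen only to make the ripple visible; they are not taken from any datasheet.

```python
# Toy bang-bang (hysteretic) regulator: the switch turns on when the output
# falls below V_LOW and off when it rises above V_HIGH, so the output
# "ripples" between the two comparator thresholds.
V_LOW, V_HIGH = 4.95, 5.05        # hypothetical comparator thresholds (V)
CHARGE, LOAD = 0.02, 0.012        # per-tick charge added / drawn (arbitrary)

v, switch_on, samples = 5.0, False, []
for _ in range(10_000):
    if v < V_LOW:
        switch_on = True          # inductor dumps charge into the output cap
    elif v > V_HIGH:
        switch_on = False         # stop charging, let the load discharge it
    v += (CHARGE if switch_on else 0.0) - LOAD
    samples.append(v)

steady = samples[1000:]           # skip the start-up transient
ripple = max(steady) - min(steady)
print(f"peak-to-peak ripple ≈ {ripple:.3f} V")  # slightly above V_HIGH - V_LOW
```

In this toy model the ripple is set by the threshold spacing plus one switching step of overshoot; a fast, accurate error amplifier effectively tightens that band, which is how the overshoot and undershoot get reduced by orders of magnitude, as described below.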
A switching regulator must decide when to turn on and off to control the inrush/outrush of charges and regulate the output. Thresholds are established above and below the desired output voltage. As the switching circuit oscillates between these two thresholds, a "ripple" is seen on the output. This ripple introduces noise downstream that can affect the accuracy of analog-to-digital converters and create electromagnetic interference.

Error amplifiers intercept the output voltage and compare it to a stable voltage reference. As the output of the switching regulator varies above or below the target voltage, the error amplifier produces a larger potential difference that very quickly turns the switching regulator on and off. The result is that the switching regulator overshoot and undershoot are strictly controlled and reduced by several orders of magnitude.

What is Inside the IC?

Let's take a look inside the ADuM3190 IC:

Error amplifier pinout from datasheet

Many error amplifiers use optocouplers or shunt regulators—this amplifier does not. Instead, it uses an isolation transformer to couple the circuits on either side of the IC. An op-amp on the right side of the device is meant to connect to a voltage divider on the supply output. The error signal is digitized and transmitted by the Tx block across the transformer to the Rx block. On the left side of the device, the Rx block decodes the PWM signal sent by the right side and uses that information to drive the error amplifier output pins on the left side of the block. For a more detailed description, please see the datasheet.

How Do You Prepare an IC for Space?

This IC started life as the ADuM3190 error amplifier. It wasn't until it was sent down to Texas A&M University for space certification that it became the proud ADuM3190S that stands before you today.
Space isn't very far away in aerospace terms, but it is a very harsh environment with different engineering rules. If your chip runs into a small meteor at orbital speeds, you are just plumb out of luck. Nothing will survive a 108,000 km/hr impact with a marble-sized meteor. Other problems are a bit more subtle. For example, cooling—an IC in orbit around the Earth cannot depend on convective cooling; it must rely on conduction or radiative cooling. Fortunately, thermal engineers have dealt with these problems for decades and they can provide advice.

Once you leave the atmosphere, there is a large amount of radiation that your device must contend with. This is the type of ionizing radiation that strips electrons from atoms and shortens the lifespans of astronauts. The only way to know how the device will perform is to subject it to the same type of radiation down here on the surface of the Earth through ion bombardment.

Ion bombardment test setup, modified for clarity.

The ADuM3190 was certified at the Cyclotron Institute at Texas A&M University. Six of the ICs were subjected to as much as $$10^7 \frac{\text{ions}}{\text{cm}^2}$$ at a temperature of 125 °C to attempt to force a latch-up condition. The results are listed on page 9 of the ADuM3190S datasheet. To my amazement, most of the ICs under test worked after the ion bombardment.

Evaluation Board Available

The CN0342 is a flyback power supply that uses the ADuM3190 error amplifier to provide a feedback signal from the secondary side to the primary side of the flyback transformer. The demonstration circuit can provide up to 1 A of output current in a 5 V output configuration and can tolerate input voltages that range from 5 V to 24 V.

CN0342 EVM from Analog Devices

If you are going to send a sensor to space, make sure that it has a stable power supply. Error amplifiers are a crucial part of a switching regulator's feedback loop and help the regulator provide a low-noise, highly accurate output.
Have you had a chance to use an error amplifier in one of your designs? Do you have experience with preparing components for the harsh conditions of space? We'd love to hear about it. Leave a comment below.
What's the difference between probability and statistics?

What's the difference between probability and statistics, and why are they studied together?

probability teaching mathematical-statistics

Dmitrij Celov, hslc

The short answer to this I've heard from Persi Diaconis is the following: the problems considered by probability and statistics are inverse to each other. In probability theory we consider some underlying process which has some randomness or uncertainty modeled by random variables, and we figure out what happens. In statistics we observe something that has happened, and try to figure out what underlying process would explain those observations.

Mark Meckes

$\begingroup$ So statistics observes what happens in the physical world, theorizes about the underlying process, and then having found the process, uses it in the sense of probability to predict what will happen next? $\endgroup$ – hslc Jul 26 '10 at 21:34
$\begingroup$ I'm not a statistician, but from my understanding I'd say, yes, that's part of what statistics does. $\endgroup$ – Mark Meckes Jul 27 '10 at 0:10
$\begingroup$ Induction vs Deduction? $\endgroup$ – Paolo Jul 27 '10 at 9:14
$\begingroup$ Like Paolo said, probability theory is mainly concerned with the deductive part, statistics with the inductive part of modeling processes with uncertainty. Perhaps it's interesting to mention that if one thinks that plausible inductive reasoning should be consistent, then the result is actually Bayesian statistics, and more interestingly this can be derived from probability theory. So Bayesian statistics is basically applied probability theory, so to speak. $\endgroup$ – Thies Heidecke May 27 '11 at 2:08
$\begingroup$ @Paolo Statistical inference is considered "inductive statistics". $\endgroup$ – kervin Nov 22 '16 at 18:17

I like the example of a jar of red and green jelly beans.
A probabilist starts by knowing the proportion of each and asks the probability of drawing a red jelly bean. A statistician infers the proportion of red jelly beans by sampling from the jar.

John D. Cook

$\begingroup$ But isn't that just formulation? A probabilist might ask "given I've drawn three red beans, what is the probability that the proportion is fifty-fifty?" $\endgroup$ – Thomas Ahle Oct 28 '16 at 15:06
$\begingroup$ @ThomasAhle: That's not a well-defined probability question unless you assume some underlying probabilistic model for the original distribution of colors. $\endgroup$ – Mark Meckes Nov 2 '16 at 13:55

It's misleading to say that statistics is simply the inverse of probability. Yes, statistical questions are questions of inverse probability, but they are ill-posed inverse problems, and this makes a big difference in terms of how they are addressed.

Probability is a branch of pure mathematics--probability questions can be posed and solved using axiomatic reasoning, and therefore there is one correct answer to any probability question. Statistical questions can be converted to probability questions by the use of probability models. Once we make certain assumptions about the mechanism generating the data, we can answer statistical questions using probability theory. HOWEVER, the proper formulation and checking of these probability models is just as important, or even more important, than the subsequent analysis of the problem using these models.

One could say that statistics comprises two parts. The first part is the question of how to formulate and evaluate probabilistic models for the problem; this endeavor lies within the domain of "philosophy of science". The second part is the question of obtaining answers after a certain model has been assumed. This part of statistics is indeed a matter of applied probability theory, and in practice also contains a fair deal of numerical analysis.
See: http://bactra.org/reviews/error/

Ben Cann, charles.y.zheng

$\begingroup$ I love you for this answer $\endgroup$ – badatmath Jun 10 '18 at 11:10

I like this from Steve Skiena's Calculated Bets (see the link for complete discussion):

In summary, probability theory enables us to find the consequences of a given ideal world, while statistical theory enables us to measure the extent to which our world is ideal.

Probability is a pure science (math), statistics is about data. They are connected since probability forms some kind of foundation for statistics, providing basic ideas.

$\begingroup$ So probability is pure mathematics and statistics is applied mathematics? $\endgroup$ – hslc Jul 26 '10 at 21:24
$\begingroup$ Statistics may be applied and may be not; still, the concept of data is always present. $\endgroup$ – user88 Jul 26 '10 at 21:42

Table 3.1 of Intuitive Biostatistics answers this question with the diagram shown below. Note that all the arrows point to the right for probability, and point to the left for statistics.

Probability:  General ---> Specific    Population ---> Sample    Model ---> Data
Statistics:   General <--- Specific    Population <--- Sample    Model <--- Data

Harvey Motulsky

$\begingroup$ So statistics is synonymous with data analysis? $\endgroup$ – hslc Jul 26 '10 at 21:25
$\begingroup$ I don't see any distinction. $\endgroup$ – Harvey Motulsky Jul 26 '10 at 21:39
$\begingroup$ Some data analysis does not rely on frequentist statistics. $\endgroup$ – Fr. Mar 21 '11 at 22:41

Probability answers questions about what will happen; statistics answers questions about what did happen.

Justin Bozonier

$\begingroup$ By this definition, though, a prediction interval is probability rather than statistics. $\endgroup$ – Glen_b -Reinstate Monica Aug 17 '15 at 10:13

Probability is about quantifying uncertainty, whereas statistics is about explaining the variation in some measure of interest (e.g., why do income levels vary?)
that we observe in the real world. We explain the variation by using some observable factors (e.g., gender, education level, age, etc. for the income example). However, since we cannot possibly take into account all possible factors that affect income, we leave any unexplained variation to random errors (which is where quantifying uncertainty comes in). Since we attribute "Variation = Effect of Observable Factors + Effect of Random Errors", we need the tools provided by probability to account for the effect of random errors on the variation that we observe.

Some examples follow:

Quantifying Uncertainty

Example 1: You roll a 6-sided die. What is the probability of obtaining a 1?

Example 2: What is the probability that the annual income of an adult person selected at random from the United States is less than $40,000?

Explaining Variation

Example 1: We observe that the annual income of a person varies. What factors explain the variation in a person's income? Clearly, we cannot account for all factors. Thus, we attribute a person's income to some observable factors (e.g., education level, gender, age, etc.) and leave any remaining variation to uncertainty (or in the language of statistics: to random errors).

Example 2: We observe that some consumers choose Tide most of the time they buy a detergent whereas some other consumers choose detergent brand xyz. What explains the variation in choice? We attribute the variation in choices to some observable factors such as price, brand name, etc. and leave any unexplained variation to random errors (or uncertainty).

$\begingroup$ What if the random errors become greater than the observable factors over time? $\endgroup$ – hslc Jul 27 '10 at 12:59
$\begingroup$ In that case you re-work your model as it is no longer consistent with reality. $\endgroup$ – user28 Jul 28 '10 at 1:06

Probability is the embrace of uncertainty, while statistics is an empirical, ravenous pursuit of the truth (damned liars excluded, of course).
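The forward/inverse distinction running through these answers — and the die-rolling example above — can be sketched in a few lines of Python. This is a toy illustration of my own; the seed and sample size are arbitrary choices, not from any answer.

```python
import random

random.seed(42)

# Probability: the model is known; deduce what the data will look like.
# For a fair 6-sided die, P(roll == 1) is exactly 1/6.
p_one = 1 / 6

# Statistics: the model is unknown; infer it from observed data.
# Sample 10,000 rolls from the (to the statistician, hidden) die...
rolls = [random.randint(1, 6) for _ in range(10_000)]
# ...and estimate P(roll == 1) by the observed relative frequency.
p_hat = sum(r == 1 for r in rolls) / len(rolls)

print(p_one)  # exact answer, from the model
print(p_hat)  # estimate, close to 1/6 but carrying sampling error
```

The probabilist's answer is a single exact number; the statistician's answer is an estimate whose sampling error itself has to be quantified with probability — which is exactly why the two subjects are studied together.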
$\begingroup$ Here I am thinking of all of frequentist/bayesian probability and all of descriptive/exploratory/inferential statistics. $\endgroup$ – user1108 Sep 14 '10 at 21:57

Similar to what Mark said, statistics was historically called Inverse Probability, since statistics tries to infer the causes of an event given the observations, while probability tends to be the other way around.

raegtin

The probability of an event is its long-run relative frequency. So it's basically telling you the chance of, for example, getting a 'head' on the next flip of a coin, or getting a '3' on the next roll of a die.

A statistic is any numerical measure computed from a sample of the population. For example, the sample mean. We use this as a statistic which estimates the population mean, which is a parameter. So basically it's giving you some kind of summary of a sample. You can only get a statistic from a sample; otherwise, if you compute a numerical measure on a population, it is called a population parameter.

Tony Breyal

Probability studies, well, how probable events are. You intuitively know what probability is.

Statistics is the study of data: showing it (using tools such as charts), summarizing it (using means and standard deviations etc.), reaching conclusions about the world from which that data was drawn (fitting lines to data etc.), and -- this is key -- quantifying how sure we can be about our conclusions.

In order to quantify how sure we can be about our conclusions we need to use Probability. Let's say you have last year's data about rainfall in the region where you live and where I live. Last year it rained an average of 1/4 inch per week where you live, and 3/8 inch where I live. So we can say that rainfall in my region is on average 50% greater than where you live, right? Not so fast, Sparky. It could be a coincidence: maybe it just happened to rain a lot last year where I live.
We can use Probability to estimate how confident we can be in our conclusion that my home is 50% soggier than yours. So basically you can say that Probability is the mathematical foundation for the Theory of Statistics.

Carlos Accioly

In probability theory, we are given random variables X1, X2, ... in some way, and then we study their properties, i.e. calculate the probability P{ X1 \in B1 }, study the convergence of X1, X2, ..., etc.

In mathematical statistics, we are given n realizations of some random variable X, and a set of distributions D; the problem is to find amongst the distributions from D the one which is most likely to have generated the data we observed.

zoran

$\begingroup$ So we can only find patterns that we were looking for in the first place? $\endgroup$ – hslc Jul 27 '10 at 12:49

In probability, the distribution is known and knowable in advance - you start with a known probability distribution function (or similar), and sample from it. In statistics, the distribution is unknown in advance. It may even be unknowable. Assumptions are hypothesised about the probability distribution behind observed data, in order to be able to apply probability theory to that data and to know whether a null hypothesis about that data can be rejected or not.

There is a philosophical discussion about whether there is such a thing as probability in the real world, or whether it is an ideal figment of our mathematical imaginations, and all our observations can only be statistical.

Statistics is the pursuit of truth in the face of uncertainty. Probability is the tool that allows us to quantify uncertainty. (I have provided another, longer, answer that assumed that what was being asked was something along the lines of "how would you explain it to your grandmother?")

Answer #1: Statistics is parametrized Probability.

Any book on measure-theoretic Probability will tell you about the Probability triplet: $(\Omega, \mathcal F, P)$.
But if you're doing Statistics, you have to add $\theta$ to the above: $(\Omega, \mathcal F, P_\theta)$, i.e. for different values of $\theta$, you get different probability measures (different distributions).

Answer #2: Probability is about going forward; Statistics is about going backward.

Probability is about the process of generating (simulating) data given a value of $\theta$. Statistics is about the process of taking data to draw conclusions about $\theta$.

Disclaimer: the above are mathematical answers. In reality, much of Statistics is also about designing/discovering appropriate models, questioning existing models, designing experiments, dealing with imperfect data, etc. "All models are wrong."

gusl

$\begingroup$ Analogously, if asked "what is chemistry?" we could reply that it's a set of differential equations. A description of the mathematical theory can give us a small idea of what a subject is about, but it is not the subject itself. $\endgroup$ – whuber♦ Feb 12 '13 at 17:29

Probability: Given known parameters, find the probability of observing a particular set of data.

Statistics: Given a particular set of observed data, make an inference about what the parameters might be.

Statistics is "more subjective" and "more art than science" (relative to probability).

$$\underline{\text{Example}}$$

We have a coin that can be flipped. Let $p$ be the proportion of coin-flips that are heads.

Probability: Suppose $p=\frac{1}{2}$. Then what's the probability of getting $HHH$ (three heads in a row)? Most probabilists would give the same, simple answer: "The probability is $\frac{1}{8}$."

Statistics: Suppose we get $HHH$. Then what's $p$? Different statisticians will give different, often long-winded answers.

Kenny LJ

The difference between probability and statistics is that in probability there is no mistake. We can be sure of the probability because we know exactly how many sides a coin has, or how many blue caramels are in the vase.
But in statistics we examine a piece of a population of whatever we examine, and from this we try to see the truth, but there is always an a% of wrong conclusions. The only thing in statistics that is exactly true is this a% of error, which is in fact a probability.

gung - Reinstate Monica♦, TheodoreM

Savage's text Foundations of Statistics has been cited over 12000 times on Google Scholar.[3] It says the following:

It is unanimously agreed that statistics depends somehow on probability. But, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the Tower of Babel. Doubtless, much of the disagreement is merely terminological and would disappear under sufficiently sharp analysis.

https://en.wikipedia.org/wiki/Foundations_of_statistics

So the point that Probability Theory is a foundation of Statistics is hardly disputed. Everything else is fair game. But in trying to be more helpful and practical with an answer...

However, probability theory contains much that is mostly of mathematical interest and not directly relevant to statistics. Moreover, many topics in statistics are independent of probability theory.

https://en.wikipedia.org/wiki/Probability_and_statistics

The above is not exhaustive or authoritative by any means, but I believe it's useful. Commonly it has helped me to see things such as...

Discrete Mathematics >> Probability Theory >> Statistics

...with each being heavily used, on average, in the foundations of the next. That is, there are large intersections in how we study the foundations of the next.

PS. There's inductive and deductive Statistics, so that's not where the difference lies.

kervin

Many people, including mathematicians, say that "STATISTICS is the inverse of PROBABILITY", but that's not entirely right. The ways of approaching or solving these two are completely different, but they are INTERCONNECTED. I would like to refer to my friend John D. Cook.....
"I like the example of a jar of red and green jelly beans. A probabilist starts by knowing the proportion of each and, let's say, finds the probability of drawing a red jelly bean. A statistician infers the proportion of red jelly beans by sampling from the jar."

Now the proportion of red jelly beans obtained by sampling from the jar can be used by the probabilist to find the probability of drawing a red bean from the jar.

Consider this example: In an examination, 30% of students failed in physics, 25% failed in maths, and 12% failed in both physics and maths. A student is selected at random; find the probability that the student has failed in physics, given that it is known that he failed in maths.

The above problem is a problem of probability, but if we look carefully we will find that it comes with some statistical data: 30% of students failed in physics, 25% in maths, and so on. These are basically frequencies, from which the percentages are calculated. Thus we are being provided with statistical data, which in turn helps us to find the probability. (Here, P(failed physics | failed maths) = 12% / 25% = 0.48.)

SO PROBABILITY AND STATISTICS ARE VERY MUCH INTERCONNECTED, OR RATHER WE CAN SAY THAT PROBABILITY DEPENDS A LOT ON STATISTICS.

Hirak Mondal

The term "statistics" is beautifully explained by J. C. Maxwell in the article Molecules (in Nature 8, 1873, pp. 437–441). Let me quote the relevant passage:

When the working members of Section F get hold of a Report of the Census, or any other document containing the numerical data of Economic and Social Science, they begin by distributing the whole population into groups, according to age, income-tax, education, religious belief, or criminal convictions. The number of individuals is far too great to allow of their tracing the history of each separately, so that, in order to reduce their labour within human limits, they concentrate their attention on small number of artificial groups.
The varying number of individuals in each group, and not the varying state of each individual, is the primary datum from which they work. This, of course, is not the only method of studying human nature. We may observe the conduct of individual men and compare it with that conduct which their previous character and their present circumstances, according to the best existing theory, would lead us to expect. Those who practise this method endeavour to improve their knowledge of the elements of human nature, in much the same way as an astronomer corrects the elements of a planet by comparing its actual position with that deduced from the received elements. The study of human nature by parents and schoolmasters, by historians and statesmen, is therefore to be distinguished from that carried on by registrars and tabulators, and by those statesmen who put their faith in figures. The one may be called the historical, and the other the statistical method. The equations of dynamics completely express the laws of the historical method as applied to matter, but the application of these equations implies a perfect knowledge of all the data. But the smallest portion of matter which we can subject to experiment consists of millions of molecules, not one of which ever becomes individually sensible to us. We cannot, therefore, ascertain the actual motion of any one of these molecules, so that we are obliged to abandon the strict historical method, and to adopt the statistical method of dealing with large groups of molecules. He gives this explanation of the statistical method in several other works. For example, "In the statistical method of investigation, we do not follow the system during its motion, but we fix our attention on a particular phase, and ascertain whether the system is in that phase or not, and also when it enters the phase and when it leaves it" (Trans. Cambridge Philos. Soc. 12, 1879, pp. 547–570). 
There's another beautiful passage by Maxwell about "probability" (from a letter to Campbell, 1850, reprinted in The Life of James Clerk Maxwell, p. 143):

the actual science of Logic is conversant at present only with things either certain, impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore the true Logic for this world is the Calculus of Probabilities, which takes account of the magnitude of the probability (which is, or which ought to be in a reasonable man's mind).

So we can say:

– In statistics we are "concentrating our attention on small number of artificial groups" or quantities; we're making a sort of cataloguing or census.

– In probability we are calculating our uncertainty about some events or quantities.

The two are distinct, and we can be doing the one without the other. For example, if we make a complete census of the entire population of a nation and count the exact number of people belonging to particular groups such as age, gender, and so on, we are doing statistics. There's no uncertainty – probability – involved, because the numbers we find are exact and known.

On the other hand, imagine someone passing in front of us on the street, and we wonder about their age. In this case we're uncertain and we use probability, but there is no statistics involved, since we aren't making some sort of census or catalogue.

But the two can also occur together. If we can't make a complete census of a population, we have to guess how many people are in specific age-gender groups. Hence we're using probability while doing statistics. Vice versa, we can consider exact statistical data about people's ages, and from such data try to make a better guess about the person passing in front of us. Hence we're using statistics while deciding upon a probability.

pglpm

$\begingroup$ Thank you for your contribution.
Although interesting, it does not comport with what statisticians believe statistics to be nor with what they actually do, as shown at stats.stackexchange.com/questions/140547/…. $\endgroup$ – whuber♦ Feb 16 '19 at 22:22
$\begingroup$ It's a moot point. I know professional statisticians who disagree with the ASA definition (which is terribly vague) and agree with Maxwell. $\endgroup$ – pglpm Feb 17 '19 at 0:00
OBD-II PIDs

OBD-II PIDs (On-board diagnostics Parameter IDs) are codes used to request data from a vehicle, used as a diagnostic tool. SAE standard J1979 defines many OBD-II PIDs. All on-road vehicles and trucks sold in North America are required to support a subset of these codes, primarily for state mandated emissions inspections. Manufacturers also define additional PIDs specific to their vehicles. Though not mandated, many motorcycles also support OBD-II PIDs.

In 1996, light duty vehicles (less than 8,500 lb [3,900 kg]) were the first to be mandated, followed by medium duty vehicles (between 8,500–14,000 lb [3,900–6,400 kg]) in 2005.[1] They are both required to be accessed through a standardized data link connector defined by SAE J1962.

Heavy duty vehicles (greater than 14,000 lb [6,400 kg]) made after 2010,[1] for sale in the US, are allowed to support OBD-II diagnostics through SAE standard J1939-13 (a round diagnostic connector) according to CARB in title 13 CCR 1971.1. Some heavy duty trucks in North America use the SAE J1962 OBD-II diagnostic connector that is common with passenger cars, notably Mack and Volvo Trucks; however, they use 29-bit CAN identifiers (unlike the 11-bit headers used by passenger cars).

Services / Modes

There are 10 diagnostic services described in the latest OBD-II standard SAE J1979. Before 2002, J1979 referred to these services as "modes".
They are as follows:

Service / Mode (hex) | Description
01 | Show current data
02 | Show freeze frame data
03 | Show stored Diagnostic Trouble Codes
04 | Clear Diagnostic Trouble Codes and stored values
05 | Test results, oxygen sensor monitoring (non-CAN only)
06 | Test results, other component/system monitoring (test results, oxygen sensor monitoring for CAN only)
07 | Show pending Diagnostic Trouble Codes (detected during current or last driving cycle)
08 | Control operation of on-board component/system
09 | Request vehicle information
0A | Permanent Diagnostic Trouble Codes (DTCs) (cleared DTCs)

Vehicle manufacturers are not required to support all services. Each manufacturer may define additional services above #9 (e.g.: service 22 as defined by SAE J2190 for Ford/GM, service 21 for Toyota) for other information, e.g. the voltage of the traction battery in a hybrid electric vehicle (HEV).[2] The non-OBD UDS services start at 0x10 to avoid overlap of the ID range.

Standard PIDs

The table below shows the standard OBD-II PIDs as defined by SAE J1979. The expected response for each PID is given, along with information on how to translate the response into meaningful data. Again, not all vehicles will support all PIDs, and there can be manufacturer-defined custom PIDs that are not defined in the OBD-II standard.

Note that services 01 and 02 are basically identical, except that service 01 provides current information, whereas service 02 provides a snapshot of the same data taken at the point when the last diagnostic trouble code was set. The exceptions are PID 01, which is only available in service 01, and PID 02, which is only available in service 02. If service 02 PID 02 returns zero, then there is no snapshot and all other service 02 data is meaningless.

When using bit-encoded notation, a quantity like C4 means bit 4 of data byte C. Each bit is numbered from 0 to 7, so 7 is the most significant bit and 0 is the least significant bit (see below).
A7 A6 A5 A4 A3 A2 A1 A0 B7 B6 B5 B4 B3 B2 B1 B0 C7 C6 C5 C4 C3 C2 C1 C0 D7 D6 D5 D4 D3 D2 D1 D0

Service 01

PID (hex) | Bytes | Description | Min | Max | Units | Formula
00 | 4 | PIDs supported [01–20] | | | | Bit encoded: [A7..D0] == [PID $01..PID $20]. See below.
01 | 4 | Monitor status since DTCs cleared (includes malfunction indicator lamp (MIL) status and number of DTCs) | | | | Bit encoded. See below.
02 | 2 | Freeze DTC | | | |
03 | 2 | Fuel system status | | | | Bit encoded. See below.
04 | 1 | Calculated engine load | 0 | 100 | % | (100/255)·A (or A/2.55)
05 | 1 | Engine coolant temperature | −40 | 215 | °C | A − 40
06 | 1 | Short term fuel trim—Bank 1 | −100 (reduce fuel: too rich) | 99.2 (add fuel: too lean) | % | (100/128)·A − 100 (or A/1.28 − 100)
07 | 1 | Long term fuel trim—Bank 1 | | | |
08 | 1 | Short term fuel trim—Bank 2 | | | |
0A | 1 | Fuel pressure (gauge pressure) | 0 | 765 | kPa | 3·A
0B | 1 | Intake manifold absolute pressure | 0 | 255 | kPa | A
0C | 2 | Engine speed | 0 | 16,383.75 | rpm | (256·A + B)/4
0D | 1 | Vehicle speed | 0 | 255 | km/h | A
0E | 1 | Timing advance | −64 | 63.5 | ° before TDC | A/2 − 64
0F | 1 | Intake air temperature | −40 | 215 | °C | A − 40
10 | 2 | Mass air flow sensor (MAF) air flow rate | 0 | 655.35 | grams/sec | (256·A + B)/100
11 | 1 | Throttle position | 0 | 100 | % | (100/255)·A
12 | 1 | Commanded secondary air status | | | | Bit encoded. See below.
13 | 1 | Oxygen sensors present (in 2 banks) | | | | [A0..A3] == Bank 1, Sensors 1–4; [A4..A7] == Bank 2...
14 20 2 Oxygen Sensor 1 A: Voltage B: Short term fuel trim 0 -100 1.275 99.2 volts A 200 {\displaystyle {\frac {A}{200}}} 100 128 B − 100 {\displaystyle {\frac {100}{128}}B-100} (if B==$FF, sensor is not used in trim calculation) B: Short term fuel trim 1A 26 2 Oxygen Sensor 7 1B 27 2 Oxygen Sensor 8 1C 28 1 OBD standards this vehicle conforms to 1 250 - enumerated. See below 1D 29 1 Oxygen sensors present (in 4 banks) Similar to PID 13, but [A0..A7] == [B1S1, B1S2, B2S1, B2S2, B3S1, B3S2, B4S1, B4S2] 1E 30 1 Auxiliary input status A0 == Power Take Off (PTO) status (1 == active) [A1..A7] not used 1F 31 2 Run time since engine start 0 65,535 seconds 256 A + B {\displaystyle 256A+B} 20 32 4 PIDs supported [21 - 40] Bit encoded [A7..D0] == [PID $21..PID $40] See below 21 33 2 Distance traveled with malfunction indicator lamp (MIL) on 0 65,535 km 256 A + B {\displaystyle 256A+B} 22 34 2 Fuel Rail Pressure (relative to manifold vacuum) 0 5177.265 kPa 0.079 ( 256 A + B ) {\displaystyle 0.079(256A+B)} 23 35 2 Fuel Rail Gauge Pressure (diesel, or gasoline direct injection) 0 655,350 kPa 10 ( 256 A + B ) {\displaystyle 10(256A+B)} AB: Air-Fuel Equivalence Ratio (lambda,λ) CD: Voltage 0 0 < 2 < 8 ratio 2 65536 ( 256 A + B ) {\displaystyle {\frac {2}{65536}}(256A+B)} 8 65536 ( 256 C + D ) {\displaystyle {\frac {8}{65536}}(256C+D)} CD: Voltage 2C 44 1 Commanded EGR 0 100 % 100 255 A {\displaystyle {\tfrac {100}{255}}A} 2D 45 1 EGR Error -100 99.2 % 100 128 A − 100 {\displaystyle {\tfrac {100}{128}}A-100} 2E 46 1 Commanded evaporative purge 0 100 % 100 255 A {\displaystyle {\tfrac {100}{255}}A} 2F 47 1 Fuel Tank Level Input 0 100 % 100 255 A {\displaystyle {\tfrac {100}{255}}A} 30 48 1 Warm-ups since codes cleared 0 255 count A {\displaystyle A} 31 49 2 Distance traveled since codes cleared 0 65,535 km 256 A + B {\displaystyle 256A+B} 32 50 2 Evap. 
System Vapor Pressure -8,192 8191.75 Pa 256 A + B 4 {\displaystyle {\frac {256A+B}{4}}} (AB is two's complement signed)[3] 33 51 1 Absolute Barometric Pressure 0 255 kPa A {\displaystyle A} CD: Current 0 -128 < 2 <128 ratio 256 C + D 256 − 128 {\displaystyle {\frac {256C+D}{256}}-128} CD: Current 3C 60 2 Catalyst Temperature: Bank 1, Sensor 1 -40 6,513.5 °C 256 A + B 10 − 40 {\displaystyle {\frac {256A+B}{10}}-40} 3D 61 2 Catalyst Temperature: Bank 2, Sensor 1 3E 62 2 Catalyst Temperature: Bank 1, Sensor 2 3F 63 2 Catalyst Temperature: Bank 2, Sensor 2 41 65 4 Monitor status this drive cycle Bit encoded. See below 42 66 2 Control module voltage 0 65.535 V 256 A + B 1000 {\displaystyle {\frac {256A+B}{1000}}} 43 67 2 Absolute load value 0 25,700 % 100 255 ( 256 A + B ) {\displaystyle {\tfrac {100}{255}}(256A+B)} 44 68 2 Commanded Air-Fuel Equivalence Ratio (lambda,λ) 0 < 2 ratio 2 65536 ( 256 A + B ) {\displaystyle {\tfrac {2}{65536}}(256A+B)} 45 69 1 Relative throttle position 0 100 % 100 255 A {\displaystyle {\tfrac {100}{255}}A} 46 70 1 Ambient air temperature -40 215 °C A − 40 {\displaystyle A-40} 47 71 1 Absolute throttle position B 0 100 % 100 255 A {\displaystyle {\frac {100}{255}}A} 48 72 1 Absolute throttle position C 49 73 1 Accelerator pedal position D 4A 74 1 Accelerator pedal position E 4B 75 1 Accelerator pedal position F 4C 76 1 Commanded throttle actuator 4D 77 2 Time run with MIL on 0 65,535 minutes 256 A + B {\displaystyle 256A+B} 4E 78 2 Time since trouble codes cleared 4F 79 4 Maximum value for Fuel–Air equivalence ratio, oxygen sensor voltage, oxygen sensor current, and intake manifold absolute pressure 0, 0, 0, 0 255, 255, 255, 2550 ratio, V, mA, kPa A {\displaystyle A} , B {\displaystyle B} , C {\displaystyle C} , D ∗ 10 {\displaystyle D*10} 50 80 4 Maximum value for air flow rate from mass air flow sensor 0 2550 g/s A ∗ 10 {\displaystyle A*10} , and D {\displaystyle D} are reserved for future use 51 81 1 Fuel Type From fuel type table see 
below 52 82 1 Ethanol fuel % 0 100 % 100 255 A {\displaystyle {\tfrac {100}{255}}A} 53 83 2 Absolute Evap system Vapor Pressure 0 327.675 kPa 256 A + B 200 {\displaystyle {\frac {256A+B}{200}}} 54 84 2 Evap system vapor pressure -32,768 32,767 Pa 256 A + B {\displaystyle 256A+B} 55 85 2 Short term secondary oxygen sensor trim, A: bank 1, B: bank 3 -100 99.2 % 100 128 A − 100 {\displaystyle {\frac {100}{128}}A-100} 56 86 2 Long term secondary oxygen sensor trim, A: bank 1, B: bank 3 57 87 2 Short term secondary oxygen sensor trim, A: bank 2, B: bank 4 59 89 2 Fuel rail absolute pressure 0 655,350 kPa 10 ( 256 A + B ) {\displaystyle 10(256A+B)} 5A 90 1 Relative accelerator pedal position 0 100 % 100 255 A {\displaystyle {\tfrac {100}{255}}A} 5B 91 1 Hybrid battery pack remaining life 0 100 % 100 255 A {\displaystyle {\tfrac {100}{255}}A} 5C 92 1 Engine oil temperature -40 210 °C A − 40 {\displaystyle A-40} 5D 93 2 Fuel injection timing -210.00 301.992 ° 256 A + B 128 − 210 {\displaystyle {\frac {256A+B}{128}}-210} 5E 94 2 Engine fuel rate 0 3212.75 L/h 256 A + B 20 {\displaystyle {\frac {256A+B}{20}}} 5F 95 1 Emission requirements to which vehicle is designed Bit Encoded 61 97 1 Driver's demand engine - percent torque -125 130 % A − 125 {\displaystyle A-125} 62 98 1 Actual engine - percent torque -125 130 % A − 125 {\displaystyle A-125} 63 99 2 Engine reference torque 0 65,535 Nm 256 A + B {\displaystyle 256A+B} 64 100 5 Engine percent torque data -125 130 % A − 125 {\displaystyle A-125} B − 125 {\displaystyle B-125} Engine point 1 C − 125 {\displaystyle C-125} D − 125 {\displaystyle D-125} E − 125 {\displaystyle E-125} 65 101 2 Auxiliary input / output supported Bit Encoded 66 102 5 Mass air flow sensor 0 2047.96875 grams/sec [A0]== Sensor A Supported [A1]== Sensor B Supported Sensor A: 256 B + C 32 {\displaystyle {\frac {256B+C}{32}}} Sensor B: 256 D + E 32 {\displaystyle {\frac {256D+E}{32}}} 67 103 3 Engine coolant temperature -40 215 °C [A0]== Sensor 1 Supported 
[A1]== Sensor 2 Supported Sensor 1: B − 40 {\displaystyle B-40} Sensor 2: C − 40 {\displaystyle C-40} 68 104 3 Intake air temperature sensor -40 215 °C [A0]== Sensor 1 Supported 69 105 7 Actual EGR, Commanded EGR, and EGR Error 6A 106 5 Commanded Diesel intake air flow control and relative intake air flow position 6B 107 5 Exhaust gas recirculation temperature 6C 108 5 Commanded throttle actuator control and relative throttle position 6D 109 11 Fuel pressure control system 6E 110 9 Injection pressure control system 6F 111 3 Turbocharger compressor inlet pressure 70 112 10 Boost pressure control 71 113 6 Variable Geometry turbo (VGT) control 72 114 5 Wastegate control 73 115 5 Exhaust pressure 74 116 5 Turbocharger RPM 75 117 7 Turbocharger temperature 77 119 5 Charge air cooler temperature (CACT) 78 120 9 Exhaust Gas temperature (EGT) Bank 1 Special PID. See below 7A 122 7 Diesel particulate filter (DPF) 7B 123 7 Diesel particulate filter (DPF) 7C 124 9 Diesel Particulate filter (DPF) temperature °C 256 A + B 10 − 40 {\displaystyle {\frac {256A+B}{10}}-40} 7D 125 1 NOx NTE (Not-To-Exceed) control area status 7E 126 1 PM NTE (Not-To-Exceed) control area status 7F 127 13 Engine run time [b] seconds 80 128 4 PIDs supported [81 - A0] Bit encoded [A7..D0] == [PID $81..PID $A0] See below 81 129 41 Engine run time for Auxiliary Emissions Control Device(AECD) 83 131 9 NOx sensor 84 132 1 Manifold surface temperature 85 133 10 NOx reagent system 86 134 5 Particulate matter (PM) sensor 87 135 5 Intake manifold absolute pressure 88 136 13 SCR Induce System 89 137 41 Run Time for AECD #11-#15 8A 138 41 Run Time for AECD #16-#20 8B 139 7 Diesel Aftertreatment 8C 140 17 O2 Sensor (Wide Range) 8D 141 1 Throttle Position G 0 100 % 8E 142 1 Engine Friction - Percent Torque -125 130 % A − 125 {\displaystyle A-125} 8F 143 7 PM Sensor Bank 1 & 2 90 144 3 WWH-OBD Vehicle OBD System Information hours 92 146 2 Fuel System Control 93 147 3 WWH-OBD Vehicle OBD Counters support hours 94 148 
12 NOx Warning And Inducement System
98 152 9 Exhaust Gas Temperature Sensor
9A 154 6 Hybrid/EV Vehicle System Data, Battery, Voltage
9B 155 4 Diesel Exhaust Fluid Sensor Data
9C 156 17 O2 Sensor Data
9D 157 4 Engine Fuel Rate (g/s)
9E 158 2 Engine Exhaust Flow Rate (kg/h)
9F 159 9 Fuel System Percentage Use
A0 160 4 PIDs supported [A1 - C0]. Bit encoded [A7..D0] == [PID $A1..PID $C0]. See below.
A1 161 9 NOx Sensor Corrected Data (ppm)
A2 162 2 Cylinder Fuel Rate, 0 to 2047.96875 mg/stroke, (256A + B)/32
A3 163 9 Evap System Vapor Pressure (Pa)
A4 164 4 Transmission Actual Gear, 0 to 65.535 ratio, [A1] == Supported, (256C + D)/1000
A5 165 4 Commanded Diesel Exhaust Fluid Dosing, 0 to 127.5 %, [A0] = 1: Supported, 0: Unsupported, B/2
A6 166 4 Odometer[c], 0 to 429,496,729.5 km, (A*2^24 + B*2^16 + C*2^8 + D)/10
A7 167 4 NOx Sensor Concentration Sensors 3 and 4
A8 168 4 NOx Sensor Corrected Concentration Sensors 3 and 4
A9 169 4 ABS Disable Switch State, [A0] = 1: Supported, 0: Unsupported; [B0] = 1: Yes, 0: No
C0 192 4 PIDs supported [C1 - E0], 0x0 to 0xffffffff. Bit encoded [A7..D0] == [PID $C1..PID $E0]. See below.
C3 195 ? Returns numerous data, including Drive Condition ID and Engine Speed*
C4 196 ? B5 is Engine Idle Request, B6 is Engine Stop Request*

Service 02

Service 02 accepts the same PIDs as service 01, with the same meaning,[5] but the information given is from when the freeze frame[6] was created. The frame number must be sent in the data section of the message. PID 02 (2 bytes) returns the DTC that caused the freeze frame to be stored, BCD encoded, decoded as in service 03.

Service 03

Request trouble codes (no PID required; n*6 data bytes returned, 3 codes per message frame). See below.

Service 04

Clear trouble codes / Malfunction Indicator Lamp (MIL) / check engine light (no PID required). Clears all stored trouble codes and turns the MIL off.
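Several of the service 01 formulas in the table above share the same shape: either a scaled single byte A or a scaled 16-bit value 256A + B. As a sketch (assuming the raw response bytes are already in hand; function names here are illustrative only), a few of the common conversions look like this:

```python
def coolant_temp_c(a):
    """PID 05: engine coolant temperature, A - 40, in degrees C."""
    return a - 40

def engine_rpm(a, b):
    """PID 0C: engine speed, (256A + B) / 4, in rpm."""
    return (256 * a + b) / 4

def maf_rate_gps(a, b):
    """PID 10: MAF air flow rate, (256A + B) / 100, in grams/sec."""
    return (256 * a + b) / 100

print(coolant_temp_c(0x7B))    # → 83 (degrees C)
print(engine_rpm(0x1A, 0xF8))  # → 1726.0 (rpm)
```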
Service 06 test IDs (each O2 sensor entry is a threshold voltage, 0.00 to 1.275 volts, scale 0.005):

0100: OBD Monitor IDs supported ($01 - $20), 4 bytes, bit encoded (0x0 to 0xffffffff)
0101: O2 Sensor Monitor Bank 1 Sensor 1, rich-to-lean sensor threshold voltage
0102: O2 Sensor Monitor Bank 1 Sensor 2, rich-to-lean sensor threshold voltage
010A: O2 Sensor Monitor Bank 3 Sensor 2, rich-to-lean sensor threshold voltage
010B: O2 Sensor Monitor Bank 3 Sensor 3, rich-to-lean sensor threshold voltage
010C: O2 Sensor Monitor Bank 3 Sensor 4, rich-to-lean sensor threshold voltage
010D: O2 Sensor Monitor Bank 4 Sensor 1, rich-to-lean sensor threshold voltage
010E: O2 Sensor Monitor Bank 4 Sensor 2, rich-to-lean sensor threshold voltage
010F: O2 Sensor Monitor Bank 4 Sensor 3, rich-to-lean sensor threshold voltage
0201: O2 Sensor Monitor Bank 1 Sensor 1, lean-to-rich sensor threshold voltage
020A: O2 Sensor Monitor Bank 3 Sensor 2, lean-to-rich sensor threshold voltage
020B: O2 Sensor Monitor Bank 3 Sensor 3, lean-to-rich sensor threshold voltage
020C: O2 Sensor Monitor Bank 3 Sensor 4, lean-to-rich sensor threshold voltage
020D: O2 Sensor Monitor Bank 4 Sensor 1, lean-to-rich sensor threshold voltage
020E: O2 Sensor Monitor Bank 4 Sensor 2, lean-to-rich sensor threshold voltage
020F: O2 Sensor Monitor Bank 4 Sensor 3, lean-to-rich sensor threshold voltage

Service 09 PIDs:

00: Service 09 supported PIDs (01 to 20), 4 bytes, bit encoded ([A7..D0] = [PID $01..PID $20]; see below)
01: VIN message count in PID 02, 1 byte. Only for ISO 9141-2, ISO 14230-4 and SAE J1850; usually the value is 5.
02: Vehicle Identification Number (VIN), 17 bytes. 17-character VIN, ASCII-encoded and left-padded with null chars (0x00) if needed.
03: Calibration ID message count for PID 04, 1 byte. Only for ISO 9141-2, ISO 14230-4 and SAE J1850.
It will be a multiple of 4 (4 messages are needed for each ID).
04: Calibration ID, 16/32/48/64... bytes. Up to 16 ASCII chars per ID; unused data bytes are reported as null bytes (0x00). Several CALIDs can be output (16 bytes each).
05: Calibration verification numbers (CVN) message count for PID 06, 1 byte. Only for ISO 9141-2, ISO 14230-4 and SAE J1850.
06: Calibration Verification Numbers (CVN), 4/8/12/16 bytes. Several CVNs can be output (4 bytes each); the number of CVNs and CALIDs must match. Raw data left-padded with null characters (0x00), usually displayed as a hex string.
07: In-use performance tracking message count for PIDs 08 and 0B, 1 byte. Only for ISO 9141-2, ISO 14230-4 and SAE J1850. The value is 8 if sixteen values are required to be reported, 9 if eighteen, and 10 if twenty (one message reports two values, each consisting of two bytes).
08: In-use performance tracking for spark ignition vehicles, 4 or 5 messages, each containing 4 bytes (two values). See below.
09: ECU name message count for PID 0A, 1 byte.
0A: ECU name, 20 bytes. ASCII-coded, right-padded with null chars (0x00).
0B: In-use performance tracking for compression ignition vehicles, 5 messages, each containing 4 bytes (two values). See below.

Notes:
a. In the formula column, letters A, B, C, etc. represent the first, second, third, etc. byte of the data. For example, for two data bytes 0F 19, A = 0F and B = 19. Where a (?) appears, contradictory or incomplete information was available.
b. Starting with MY 2010, the California Air Resources Board mandated that all diesel vehicles supply total engine hours.[4]
c. Starting with MY 2019, the California Air Resources Board mandated that all vehicles supply the odometer reading.[4]

Bitwise-encoded PIDs

Some of the PIDs in the above table cannot be explained with a simple formula.
A more elaborate explanation of these data is provided here:

Service 01 PID 00

A request for this PID returns 4 bytes of data (big-endian). Each bit, from MSB to LSB, represents one of the next 32 PIDs and specifies whether that PID is supported. For example, if the car's response is BE1FA813, it can be decoded like this:

Hex:    B    E    1    F    A    8    1    3
Binary: 1011 1110 0001 1111 1010 1000 0001 0011

Reading the 32 bits from MSB to LSB against PIDs 01 through 20 (hex), a set bit means the PID is supported. So the supported PIDs here are: 01, 03, 04, 05, 06, 07, 0C, 0D, 0E, 0F, 10, 11, 13, 15, 1C, 1F and 20.

Service 01 PID 01

A request for this PID returns 4 bytes of data, labeled A, B, C and D. The first byte (A) contains two pieces of information. Bit A7 (the MSB of byte A) indicates whether the MIL (check engine light) is illuminated. Bits A6 through A0 represent the number of diagnostic trouble codes currently flagged in the ECU. The second, third, and fourth bytes (B, C and D) give information about the availability and completeness of certain on-board tests. Note that test availability is indicated by a set (1) bit and completeness by a reset (0) bit.

A7: MIL off or on; indicates whether the CEL/MIL is on (or should be on)
A6-A0: DTC_CNT, the number of confirmed emissions-related DTCs available for display
B7: Reserved (should be 0)
B3: 0 = spark ignition monitors supported (e.g. Otto or Wankel engines); 1 = compression ignition monitors supported (e.g. Diesel engines)

The remaining B bits are test-based; for example, bit B2 indicates misfire test availability and bit B6 misfire test incompleteness. The third and fourth bytes are to be interpreted differently depending on whether the engine is spark ignition (e.g. Otto or Wankel engines) or compression ignition (e.g. Diesel engines).
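Both decodings above, the PID $00 support bitmask and the first two bytes of PID $01, can be reproduced in code. This is a sketch with illustrative names, not functions from any standard library:

```python
def supported_pids(data, base=0x00):
    """Decode a 4-byte supported-PIDs bitmask (PID $00, $20, $40, ...).

    Bit 31 (A7, the MSB) maps to PID base+1; bit 0 (D0) maps to PID base+32.
    """
    mask = int.from_bytes(bytes(data), "big")
    return [base + 32 - i for i in range(31, -1, -1) if (mask >> i) & 1]

def monitor_status(data):
    """Decode bytes A and B of service 01 PID 01 (C/D monitor bits omitted)."""
    a, b = data[0], data[1]
    return {
        "mil_on": bool(a & 0x80),                # A7: check-engine light
        "dtc_count": a & 0x7F,                   # A6-A0: confirmed DTC count
        "compression_ignition": bool(b & 0x08),  # B3: 0 = spark, 1 = Diesel
    }

pids = supported_pids([0xBE, 0x1F, 0xA8, 0x13])
print([f"{p:02X}" for p in pids])
# → ['01', '03', '04', '05', '06', '07', '0C', '0D', '0E', '0F',
#    '10', '11', '13', '15', '1C', '1F', '20']
print(monitor_status([0x83, 0x07, 0xFF, 0x00]))
# → {'mil_on': True, 'dtc_count': 3, 'compression_ignition': False}
```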
In the second (B) byte, bit 3 indicates how to interpret the C and D bytes: 0 means spark ignition (Otto or Wankel) and 1 (set) means compression ignition (Diesel). For spark ignition monitors, the C and D bits cover tests such as the oxygen sensor heater, A/C refrigerant, evaporative system and heated catalyst. For compression ignition monitors (Diesel engines), they cover tests such as the EGR and/or VVT system, PM filter monitoring, the exhaust gas sensor, boost pressure, the NOx/SCR monitor and the NMHC catalyst.[a]

^ NMHC may stand for Non-Methane HydroCarbons, but J1979 does not enlighten us. The likely meaning is the ammonia sensor in the SCR catalyst.

Service 01 PID 41

A request for this PID returns 4 bytes of data. The first byte is always zero. The second, third, and fourth bytes give information about the availability and completeness of certain on-board tests. As with PID 01, the third and fourth bytes are to be interpreted differently depending on the ignition type (bit B3), with 0 being spark and 1 (set) being compression. Note again that test availability is represented by a set (1) bit and completeness by a reset (0) bit.

Service 01 PID 78

A request for this PID returns 9 bytes of data. The first byte (A) is a bit-encoded field indicating which EGT sensors are supported; the remaining byte pairs hold the temperatures read by EGT11 (bytes B-C), EGT12 (D-E), EGT13 (F-G) and EGT14 (H-I). In the first byte, bits A7-A4 are reserved and bits A3-A0 indicate whether EGT bank 1 sensors 4 down to 1 are supported. The temperature pairs are 16-bit integers giving the temperature in degrees Celsius in the range -40 to 6513.5 (scale 0.1), using the usual (256A + B)/10 - 40 formula (where A is the MSB and B the LSB of each pair). Only values for which the corresponding sensor is supported are meaningful. The same structure applies to PID 79, but its values are for the sensors of bank 2.

Service 03 (no PID required)

A request for this service returns a list of the DTCs that have been set.
The list is encapsulated using the ISO 15765-2 protocol. If there are two or fewer DTCs (4 bytes) they are returned in an ISO-TP Single Frame (SF). Three or more DTCs are reported in multiple frames, with the exact count of frames dependent on the communication type and addressing details.

Each trouble code requires 2 bytes to describe, and its text form may be decoded as follows. The first character of the trouble code is determined by the first two bits of the first byte:

A7-A6: first DTC character
00: P (Powertrain)
01: C (Chassis)
10: B (Body)
11: U (Network)

The second character of the DTC is a number encoded in the next 2 bits (A5-A4), and the third character is a number encoded in the low nibble (A3-A0). The fourth and fifth characters are defined in the same way as the third, but using bits B7-B4 and B3-B0 respectively. The resulting five-character code should look something like "U0158" and can be looked up in a table of OBD-II DTCs. Hexadecimal characters (0-9, A-F), while relatively rare, are allowed in the last 3 positions of the code.

Service 09 PID 08

This PID provides in-use performance tracking data for catalyst banks, oxygen sensor banks, evaporative leak detection systems, EGR systems and the secondary air system. The numerator for each component or system tracks the number of times that all conditions necessary for a specific monitor to detect a malfunction have been encountered. The denominator tracks the number of times that the vehicle has been operated in the specified conditions. The count of data items is reported at the beginning (the first byte). All data items of the in-use performance tracking record consist of two bytes and are reported in this order (each message contains two items, hence the message length is 4):
OBDCOND: OBD Monitoring Conditions Encountered Counts
IGNCNTR: Ignition Counter
CATCOMP1: Catalyst Monitor Completion Counts Bank 1
CATCOND1: Catalyst Monitor Conditions Encountered Counts Bank 1
O2SCOMP1: O2 Sensor Monitor Completion Counts Bank 1
O2SCOND1: O2 Sensor Monitor Conditions Encountered Counts Bank 1
EGRCOMP: EGR Monitor Completion Condition Counts
EGRCOND: EGR Monitor Conditions Encountered Counts
AIRCOMP: AIR Monitor Completion Condition Counts (Secondary Air)
AIRCOND: AIR Monitor Conditions Encountered Counts (Secondary Air)
EVAPCOMP: EVAP Monitor Completion Condition Counts
EVAPCOND: EVAP Monitor Conditions Encountered Counts
SO2SCOMP1: Secondary O2 Sensor Monitor Completion Counts Bank 1
SO2SCOND1: Secondary O2 Sensor Monitor Conditions Encountered Counts Bank 1

Service 09 PID 0B

This PID provides in-use performance tracking data for the NMHC catalyst, NOx catalyst monitor, NOx adsorber monitor, PM filter monitor, exhaust gas sensor monitor, EGR/VVT monitor, boost pressure monitor and fuel system monitor.
All data items consist of two bytes and are reported in this order (each message contains two items, hence the message length is 4):

HCCATCOMP: NMHC Catalyst Monitor Completion Condition Counts
HCCATCOND: NMHC Catalyst Monitor Conditions Encountered Counts
NCATCOMP: NOx/SCR Catalyst Monitor Completion Condition Counts
NCATCOND: NOx/SCR Catalyst Monitor Conditions Encountered Counts
NADSCOMP: NOx Adsorber Monitor Completion Condition Counts
NADSCOND: NOx Adsorber Monitor Conditions Encountered Counts
PMCOMP: PM Filter Monitor Completion Condition Counts
PMCOND: PM Filter Monitor Conditions Encountered Counts
EGSCOMP: Exhaust Gas Sensor Monitor Completion Condition Counts
EGSCOND: Exhaust Gas Sensor Monitor Conditions Encountered Counts
EGRCOMP: EGR and/or VVT Monitor Completion Condition Counts
EGRCOND: EGR and/or VVT Monitor Conditions Encountered Counts
BPCOMP: Boost Pressure Monitor Completion Condition Counts
BPCOND: Boost Pressure Monitor Conditions Encountered Counts
FUELCOMP: Fuel Monitor Completion Condition Counts
FUELCOND: Fuel Monitor Conditions Encountered Counts

Enumerated PIDs

Some PIDs are to be interpreted specially, and aren't necessarily bitwise encoded or on any scale; their values are enumerated.

Service 01 PID 03

A request for this PID returns 2 bytes of data. The first byte describes fuel system #1:

0: The motor is off
1: Open loop due to insufficient engine temperature
2: Closed loop, using oxygen sensor feedback to determine fuel mix
4: Open loop due to engine load, or fuel cut due to deceleration
8: Open loop due to system failure
16: Closed loop, using at least one oxygen sensor, but there is a fault in the feedback system

Any other value is an invalid response. The second byte describes fuel system #2 (if it exists) and is encoded identically to the first byte.

Service 01 PID 12

A request for this PID returns a single byte of data which describes the secondary air status:
1: Upstream
2: Downstream of catalytic converter
4: From the outside atmosphere or off
8: Pump commanded on for diagnostics

Service 01 PID 1C

A request for this PID returns a single byte of data which describes which OBD standards this ECU was designed to comply with:

1: OBD-II as defined by the CARB
2: OBD as defined by the EPA
3: OBD and OBD-II
4: OBD-I
5: Not OBD compliant
6: EOBD (Europe)
7: EOBD and OBD-II
8: EOBD and OBD
9: EOBD, OBD and OBD-II
10: JOBD (Japan)
11: JOBD and OBD-II
12: JOBD and EOBD
13: JOBD, EOBD and OBD-II
14: Reserved
17: Engine Manufacturer Diagnostics (EMD)
18: Engine Manufacturer Diagnostics Enhanced (EMD+)
19: Heavy Duty On-Board Diagnostics (Child/Partial) (HD OBD-C)
20: Heavy Duty On-Board Diagnostics (HD OBD)
21: World Wide Harmonized OBD (WWH OBD)
23: Heavy Duty Euro OBD Stage I without NOx control (HD EOBD-I)
24: Heavy Duty Euro OBD Stage I with NOx control (HD EOBD-I N)
25: Heavy Duty Euro OBD Stage II without NOx control (HD EOBD-II)
26: Heavy Duty Euro OBD Stage II with NOx control (HD EOBD-II N)
28: Brazil OBD Phase 1 (OBDBr-1)
30: Korean OBD (KOBD)
31: India OBD I (IOBD I)
32: India OBD II (IOBD II)
33: Heavy Duty Euro OBD Stage VI (HD EOBD-IV)
34-250: Reserved
251-255: Not available for assignment (SAE J1939 special meaning)

Fuel Type Coding

Service 01 PID 51 returns a value from an enumerated list giving the fuel type of the vehicle.
The fuel type is returned as a single byte, with the value given by the following table:

1: Gasoline
2: Methanol
3: Ethanol
4: Diesel
5: LPG
6: CNG
7: Propane
8: Electric
9: Bifuel running Gasoline
10: Bifuel running Methanol
11: Bifuel running Ethanol
12: Bifuel running LPG
13: Bifuel running CNG
14: Bifuel running Propane
15: Bifuel running Electricity
16: Bifuel running electric and combustion engine
17: Hybrid gasoline
18: Hybrid Ethanol
19: Hybrid Diesel
20: Hybrid Electric
21: Hybrid running electric and combustion engine
22: Hybrid Regenerative
23: Bifuel running diesel

Any other value is reserved by ISO/SAE. There are currently no definitions for flexible-fuel vehicles.

Non-standard PIDs

The majority of all OBD-II PIDs in use are non-standard. For most modern vehicles there are many more functions supported on the OBD-II interface than are covered by the standard PIDs, and there is relatively little overlap between vehicle manufacturers for these non-standard PIDs.

There is very limited information available in the public domain for non-standard PIDs. The primary source of information on non-standard PIDs across different manufacturers is maintained by the US-based Equipment and Tool Institute (ETI) and is only available to members. The price of ETI membership for access to scan codes varies with company size, defined by annual sales of automotive tools and equipment in North America:

Under $10,000,000: $5,000
$10,000,000 - $50,000,000: $7,500
Greater than $50,000,000: $10,000

However, even ETI membership will not provide full documentation for non-standard PIDs. ETI states:[7][8]

Some OEMs refuse to use ETI as a one-stop source of scan tool information. They prefer to do business with each tool company separately. These companies also require that you enter into a contract with them. The charges vary, but here is a snapshot, as of April 13th, 2015, of the per-year charges:

GM: $50,000
Honda: $5,000
Suzuki: $1,000
BMW: $25,500, plus $2,000 per update.
Updates occur annually.

CAN (11-bit) bus format

The PID query and response occurs on the vehicle's CAN bus. Standard OBD requests and responses use functional addresses. The diagnostic reader initiates a query using CAN ID 7DFh, which acts as a broadcast address, and accepts responses from any ID in the range 7E8h to 7EFh. ECUs that can respond to OBD queries listen both to the functional broadcast ID 7DFh and to one assigned ID in the range 7E0h to 7E7h. Their response carries an ID equal to their assigned ID plus 8, i.e. 7E8h through 7EFh. This approach allows up to eight ECUs, each independently responding to OBD queries.

The diagnostic reader can use the ID in an ECU's response frame to continue communication with that specific ECU. In particular, multi-frame communication requires responses addressed to the specific ECU ID rather than to ID 7DFh. The CAN bus may also be used for communication beyond the standard OBD messages; physical addressing uses particular CAN IDs for specific modules (e.g. 720h for the instrument cluster on Fords) with proprietary frame payloads.

Query

The functional PID query is sent to the vehicle on the CAN bus at ID 7DFh, using 8 data bytes:

SAE standard query: byte 0 is the number of additional data bytes (2); byte 1 is the service (e.g. 01 = show current data, 02 = freeze frame); byte 2 is the PID code (e.g. 05 = engine coolant temperature); the remaining bytes are not used (ISO 15765-2 suggests CCh).
Vehicle-specific query: byte 0 is the number of additional data bytes (3); byte 1 is the custom service (e.g. 22 = enhanced data); bytes 2-3 are the PID code (e.g. 4980h); the remaining bytes are not used.

Response

The vehicle responds to the PID query on the CAN bus with message IDs that depend on which module responded. Typically the engine or main ECU responds at ID 7E8h. Other modules, like the hybrid controller or battery controller in a Prius, respond at 7E9h, 7EAh, 7EBh, etc. These are 8h higher than the physical address the module responds to.
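A minimal sketch of the frame layout just described, building a functional query and parsing a single-frame response; the helper names are illustrative, and real code would sit on top of a CAN driver:

```python
def build_query(service, pid):
    """Functional OBD query: broadcast ID 7DFh, 8 data bytes, CCh padding."""
    data = [0x02, service, pid] + [0xCC] * 5   # 0x02 = number of additional bytes
    return 0x7DF, data

def parse_response(can_id, data):
    """Parse a single-frame response, e.g. from 7E8h (typically the engine ECU)."""
    length, service, pid = data[0], data[1], data[2]
    assert 0x7E8 <= can_id <= 0x7EF and service >= 0x40
    payload = data[3:1 + length]               # 'length' bytes minus service + PID
    return service - 0x40, pid, payload

arb_id, frame = build_query(0x01, 0x05)        # service 01, PID 05 (coolant temp)
print(hex(arb_id), [hex(b) for b in frame])
svc, pid, payload = parse_response(
    0x7E8, [0x03, 0x41, 0x05, 0x7B, 0x55, 0x55, 0x55, 0x55])
print(payload[0] - 40)                         # → 83 (degrees C, per PID 05 formula)
```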
Even though the number of bytes in the returned value is variable, the message always uses 8 data bytes (the CAN frame format with 8 data bytes). The bytes are:

SAE standard response (CAN ID 7E8h, 7E9h, etc.): byte 0 is the number of additional data bytes (3 to 6); byte 1 is the service with 40h added (so 41h = show current data, 42h = freeze frame, etc.); byte 2 is the PID (e.g. 05 = engine coolant temperature); the following bytes hold the value of the specified parameter (byte 0 required, bytes 1 to 3 optional); unused bytes may be 00h or 55h.
Custom service response (CAN ID 7E8h, or 8h plus the physical ID of the module): byte 0 is the number of additional data bytes (4 to 7); byte 1 is the custom service with 40h added (e.g. 62h = response to service 22h request); bytes 2-3 are the PID code (e.g. 4980h); the following bytes hold the value of the specified parameter (byte 0 required, bytes 1 to 3 optional).
Negative response: byte 0 is 3; byte 1 is 7Fh, a general response usually indicating the module doesn't recognize the request; byte 2 is the custom service that was requested (e.g. 22h = enhanced diagnostic data by PID, 21h = enhanced data by offset); byte 3 is a response code (e.g. 31h); the remaining bytes are not used (may be 00h).

See also

ELM327, a very common microcontroller (silicon chip) used in OBD-II interfaces

References

1. "Basic Information | On-Board Diagnostics (OBD)". US EPA, 16 March 2015. Retrieved 24 June 2015.
2. "Escape PHEV TechInfo - PIDs". Electric Auto Association - Plug in Hybrid Electric Vehicle. Retrieved 11 December 2013.
3. "Extended PID's - Signed Variables". Torque-BHP. Retrieved 17 March 2016.
4. "Final Regulation Order" (PDF). US: California Air Resources Board, 2015. Retrieved 4 September 2021.
5. "OBD2 Codes and Meanings". Lithuania: Baltic Automotive Diagnostic Systems. Retrieved 11 June 2020.
6. "OBD2 Freeze Frame Data: What is It? How To Read It?". OBD Advisor, 2018-02-28. Retrieved 2020-03-14.
7. "ETI Full Membership FAQ". The Equipment and Tool Institute. Retrieved 29 November 2013. (Shows the cost of access to OBD-II PID documentation.)
8. "Special OEM License Requirements". The Equipment and Tool Institute.
Retrieved 13 April 2015.

Further reading

"E/E Diagnostic Test Modes". Vehicle E/E System Diagnostic Standards Committee, SAE J1979. SAE International, 2017-02-16. doi:10.4271/J1979_201702.
"Digital Annex of E/E Diagnostic Test Modes". Vehicle E/E System Diagnostic Standards Committee, SAE J1979-DA. SAE International, 2017-02-16. doi:10.4271/J1979DA_201702.
Wagner, Bernhard. "The Lifecycle of a Diagnostic Trouble Code (DTC)". KPIT, Germany. Retrieved 2020-08-29.
Suggestions for synthesis-golf targets

Synthesis golf started over summer, and a few nice targets have been used as problems. When we started, the academic year (in the UK at least) hadn't begun, meaning a few of us had quite a lot of time to spend around here. This has tailed off slightly now that term has begun, so I thought it might be a nice idea to open up the floor to suggestions for targets for future rounds of synthesis golf. Each month, we'll use one of the targets (that hopefully get posted) below.

Some ground rules to keep this sensible:

The molecule must be predominantly organic. Small molecules with biological activity have been chosen for all previous rounds of synthesis golf, but there is no strict requirement that the target be biologically active.
The molecule must be a published compound. It's fine if there are no published syntheses (for instance, newly isolated natural products), but the actual target, along with unambiguous characterisation data, must exist.
The molecule should be complex enough to require some thought, but not so complicated that it would take a team of 5 PhD students a week to come up with a route. This has been one of the biggest challenges so far in finding targets, but as a general rule of thumb, good targets would have 5 or fewer stereocentres that need controlling and somewhere under 15 carbon atoms.

Entries should include the following:

The name of the target
An image of the target
A journal reference along with DOI for the target
A brief summary of 1) why the molecule is interesting (i.e. biological activity) and 2) why you think it makes a good target

NotEvans.

Comments:
orthocresol: In the meantime, for November, do you want to come up with something? (Nov 11 '17 at 13:34)
NotEvans.: I have a target in mind that I was going to post as an example of 'this kind of thing' here. I'll do that in a minute!
Nov 11 '17 at 13:35 $\begingroup$ @NotEvans. I am no good at synthesis, let alone coming up with targets, but I could provide the material of the stereo-selective synthesis seminar ($\mathrm{S}^4$) from my undergrad. $\endgroup$ – TAR86 Nov 20 '17 at 20:05 Notice: Used as target for the November 2017 Synthesis Golf (+)-Clopidogrel (+)-Clopidogrel (sold as Plavix)[1] is a common pharmaceutical used to help combat the risk of heart disease. The World Health Organisation includes Plavix on its list of 'essential medicines' due to its broad efficacy, low toxicity and low cost (less than 1 USD/month in the developing world). The original industrial routes [1] relied on racemic synthesis and resolution; however, syntheses of a single isomer have been reported [2]. The molecule makes an interesting target as it requires the synthesis of an unusual heterocyclic core, as well as the challenging stereocentre, which is prone to racemisation. [1]: Original patents for preparation and resolution of the enantiomers by Sanofi. EP99802, EP281459 [2]: Tetrahedron Asymmetry 1997, 8, 85. DOI:10.1016/S0957-4166(96)00488-0 (−)-Indolmycin Indolmycin is an antibacterial agent, which acts by inhibiting bacterial aminoacyl-tRNA synthetase. Multiple chemical syntheses have been published,1,2,3 and enzymatic resolution of a key intermediate has been investigated by Kato et al.4 Its biosynthesis has also recently been elucidated by Du et al.5 Given that a racemic synthesis is extremely short (four or five steps, refs 1 and 2) and likely unchallenging, some extra conditions will likely have to be imposed. Firstly, if it isn't already obvious, the synthesis should be asymmetric (13 steps in ref 3). On top of that, my personal suggestion would be that the oxazolinone ring has to be made (since we have already had one round of indole synthesis, I think we can allow the indole to be bought). Dirlam, J. P.; Clark, D. A.; Hecker, S. J. New total synthesis of (±)-indolmycin. J. Org. Chem.
1986, 51 (25), 4920–4924. DOI: 10.1021/jo00375a030. Shue, Y.-K. Total synthesis of (±)-indolmycin. Tetrahedron Lett. 1996, 37 (36), 6447–6448. DOI: 10.1016/0040-4039(96)01434-7. Takeda, T.; Mukaiyama, T. Asymmetric total synthesis of indolmycin. Chem. Lett. 1980, 9 (2), 163–166. DOI: 10.1246/cl.1980.163. Kato, K.; Tanaka, S.; Gong, Y.-F.; Katayama, M.; Kimoto, H. Enzymatic resolution of 3-(3-indolyl)butyric acid: a key intermediate for indolmycin synthesis. World J. Microbiol. Biotechnol. 1999, 15 (5), 631–633. DOI: 10.1023/A:1008989800098. Du, Y.-L.; Alkhalaf, L. M.; Ryan, K. S. In vitro reconstitution of indolmycin biosynthesis reveals the molecular basis of oxazolinone assembly. Proc. Natl. Acad. Sci. U. S. A. 2015, 112 (9), 2717–2722. DOI: 10.1073/pnas.1419964112. orthocresol♦ Another possibility. TAK-457 is a drug (more accurately, a prodrug) that displays antifungal activity. As far as I can tell, it doesn't seem to have reached the market, but is still a fairly interesting molecule with some rings that we haven't featured yet: Some suggestions: Usual stipulation of at most one chiral centre being bought Syntheses must make either the triazole or tetrazole ring (or both) Counterion doesn't matter Bonus points for a synthesis that can be adapted to produce the other three stereoisomers of the drug (bonus bonus points for late-stage diversification as opposed to, say, using a different enantiomer of starting material). These stereoisomers were of interest (see ref 2). Ichikawa, T.; Kitazaki, T.; Matsushita, Y.; Yamada, M.; Hayashi, R.; Yamaguchi, M.; Kiyota, Y.; Okonogi, K.; Itoh, K. Optically Active Antifungal Azoles. XII. Synthesis and Antifungal Activity of the Water-Soluble Prodrugs of 1-[(1R,2R)-2-(2,4-Difluorophenyl)-2-hydroxy-1-methyl-3-(1H-1,2,4-triazol-1-yl)propyl]-3-[4-(1H-1-tetrazolyl)phenyl]-2-imidazolidinone. Chem. Pharm. Bull. 2001, 49 (9), 1102–1109. DOI: 10.1248/cpb.49.1102.
Ichikawa, T.; Yamada, M.; Yamaguchi, M.; Kitazaki, T.; Matsushita, Y.; Higashikawa, K.; Itoh, K. Optically Active Antifungal Azoles. XIII. Synthesis of Stereoisomers and Metabolites of 1-[(1R,2R)-2-(2,4-difluorophenyl)-2-hydroxy-1-methyl-3-(1H-1,2,4-triazol-1-yl)propyl]-3-[4-(1H-1-tetrazolyl)phenyl]-2-imidazolidinone (TAK-456). Chem. Pharm. Bull. 2001, 49 (9), 1110–1119. DOI: 10.1248/cpb.49.1110. Hayashi, R.; Kitamoto, N.; Iizawa, Y.; Ichikawa, T.; Itoh, K.; Kitazaki, T.; Okonogi, K. Efficacy of TAK-457, a novel intravenous triazole, against invasive pulmonary aspergillosis in neutropenic mice. Antimicrob. Agents Chemother. 2002, 46 (2), 283–287. DOI: 10.1128/AAC.46.2.283-287.2002. I propose mixing up the challenges and adding additional challenges such as this one: Organic Structure Challenge A.K.
Criticism of Math in Economics I've been reading and speaking to a number of educated economists and economics PhDs who are against the use of intense mathematics and mathematical proof in economic theory. Specifically, I've been speaking to those of Marxist and heterodox persuasion and reading their work in an attempt to become more open-minded. They emphasize that the study of work by classical economists (like Adam Smith, Karl Marx, and David Ricardo) is still relevant and that the practice of how mainstream economics uses mathematics is abusive and is an attempt to fool the masses regarding the "science" economists practice. I have difficulty understanding this argument. What is a reason to be against mathematics in economics? Note: I'm pretty mainstream and like how economics is taught and structured. I'm not anti math-in-economics, I just want to know why this is an argument. mathematical-economics soft-question EconJohn♦ $\begingroup$ How about a less sensational title? $\endgroup$ – Michael Greinecker Jun 28 '18 at 19:11 $\begingroup$ "Criticism of Math in Economics" or "Criticism of the Use of Math in Economics", maybe. $\endgroup$ – Michael Greinecker Jun 28 '18 at 19:16 $\begingroup$ How about something like Mathiness in Economic Theory? $\endgroup$ – Giskard Jun 28 '18 at 19:16 $\begingroup$ Are you talking about the critique of economists for using complex algebraic formulations that assume perfect rationality and don't resemble any real-world decisions being made; or is this the critique of overly convoluted and misused statistical tools that mask the uncertainty of empirical research and make economics look more like hard science than it actually is? $\endgroup$ – lazarusL Jun 28 '18 at 20:31 $\begingroup$ @lazarusL both, I guess. Honestly trying to get it because I'm too mainstream according to some of my peers.
$\endgroup$ – EconJohn♦ Jun 28 '18 at 20:41 I find that the essay "The New Astrology" by Alan Jay Levinovitz (an assistant professor of philosophy and religion, not an economist) makes some good points. ...the ubiquity of mathematical theory in economics also has serious downsides: it creates a high barrier to entry for those who want to participate in the professional dialogue, and makes checking someone's work excessively laborious. Worst of all, it imbues economic theory with unearned empirical authority. 'I've come to the position that there should be a stronger bias against the use of math,' Romer explained to me. 'If somebody came and said: "Look, I have this Earth-changing insight about economics, but the only way I can express it is by making use of the quirks of the Latin language", we'd say go to hell, unless they could convince us it was really essential. The burden of proof is on them.' The essay also makes a (more or less adequate—which I leave up to you) comparison with astrology in ancient China to show that excellent math can be used to prop up ridiculous science and grant status for its practitioners. Giskard $\begingroup$ The "unearned empirical authority" sounds really weird. I mean, math's just a precise language that's easy to perform logical operations with. Putting something into mathematical terms shouldn't be taken as endowing empirical authority any more than translating a statement to Latin should. Barba crescit caput nescit. $\endgroup$ – Nat Jun 28 '18 at 22:48 $\begingroup$ The Latin point doesn't seem much of an argument to me, bordering on straw-man. Latin clearly has nothing to do with economics whereas maths is clearly related. It's straw-man because the reader thinks "well yes, it is completely unreasonable to rely on the quirks of the Latin language to express an economic insight", but that has no relevance at all to whether or not it is reasonable to rely on maths.
"It creates a high barrier to entry for those who want to participate in the professional dialogue" on its own isn't really much justification either. Many fields have a high barrier to entry. $\endgroup$ – JBentley Jun 29 '18 at 1:50 $\begingroup$ Math and logical systems in general conform to "garbage in, garbage out"; so if someone uses mathematical logic on garbage assumptions, then they'll get garbage results. But isn't that obvious? (Not being rhetorical - I'm actually asking if this isn't obvious. Because if it's not, then I could understand why folks could feel misled by seeing garbage expressed in mathematical terms.) $\endgroup$ – Nat Jun 29 '18 at 2:11 $\begingroup$ @Nat It is obvious, but technical garbage is more difficult to identify. This comment could be the core of a nice answer IMO. $\endgroup$ – Giskard Jun 29 '18 at 2:23 $\begingroup$ That attitude makes me pity the math-illiterate. Properly written, a page of math can replace about 20 pages of rambling imprecise words, and is hence much easier to check. That is basically the whole point of mathematical language. This guy is like a blind person trying to ban diagrams. $\endgroup$ – knzhou Jul 1 '18 at 10:14 What is a reason to be against mathematics in economics? The danger that any tool creates: to impose itself on the tool-user, diluting and narrowing its view of the world. It is a matter of Human Psychology why this happens, but it certainly does, and the aphorism "to he who holds a hammer everything looks like a nail" expresses this phenomenon, which has nothing to do with economics specifically. Mathematics offer a great service to the Economics discipline by providing a crystal-clear path from premises to conclusions. I fear the next time a Keynes with a General Theory book appears -and then we would have to spend decades again deciphering "what the author really meant" by his verbal arguments -and not really agreeing. 
The "abuse of mathematics" certainly happens: producers and consumers of economic theory tend to not question/worry/have nightmares about "the premises", to the extend that they should. But once we leave the premises unchallenged, the conclusions become "undeniable truth", since they have been derived in the rigorous mathematical way. But the ability to challenge the conclusions is always there, if only we take the time to review critically the premises. Another, more sophisticated way that mathematics may be abused is the belief that the deviation from reality that the premises represent, transfers over to conclusions in a "smooth" manner (call it "the principle of non-accelerating propagation of error"): to consider the trivial example, sure, the assumptions describing a "perfectly competitive" market (the premises) do not hold "exactly" in reality. But, we argue, if they are "close enough" to the structure of a real-world market, then the conclusions we will reach through our model will be "close enough" to the actual outcomes in this market. This belief is not unreasonable, and it is borne by reality in many cases. But this principle of "smooth approximation" does not hold universally. So even if we check the premises and we are satisfied that they are not absurd, it still remains the possibility that the real-world phenomenon we examine is not so well-behaved, it has discontinuities that may make a small deviation lead to very different outcomes in reality. That's the abstract analysis of the matter. The sociological and historical view would ask "but if a tool, that theoretically can be used in the proper way, has been seen for decades to be used inappropriately and create undesirable consequences, shouldn't we conclude that we must abandon its use?" ...in which instant, we start arguing about the extent of these "undesirable consequences" and whether they overcome any benefits from the use of the tool. 
In other words, this matter, too, dreadfully comes down to a cost-benefit analysis. And we rarely agree on that either. Alecos Papadopoulos $\begingroup$ The trouble with this argument is that whatever other stuff we use for economics are also tools. It's not like math is a tool but the other stuff we use are fully legitimate truth-finders blessed with kisses from Jesus Christ. Our views will be inherently "diluting and narrowing", otherwise you are supposing that non-math approaches to economics allow us to see the entire reality as it is. $\endgroup$ – Billy Rubina Jul 1 '18 at 15:06 $\begingroup$ @BillyRubina I am not sure I follow you. Where in my answer is it implied that "other stuff we use" do not constrain us? And where do I imply that we would be better off without math? $\endgroup$ – Alecos Papadopoulos Jul 1 '18 at 15:44 $\begingroup$ Regarding "next time a Keynes with a General Theory book appears": Piketty tried to be that next writer. His book was also less mathematical, and the profession immediately poked holes into it, e.g. econ.yale.edu//smith/piketty1.pdf $\endgroup$ – FooBar Oct 16 '19 at 15:55 I would like to point out that the question is not whether we should have math in economics, but why some people attack mathematical economics. A lot of the recent answers seem to try to answer the first question. Now then, to cover all bases like a good incumbent in a differentiated product market, I will also post an answer with points that economists have already raised about this question. Hayek in his Nobel Lecture: The Pretence of Knowledge said It seems to me that this failure of the economists to guide policy more successfully is closely connected with their propensity to imitate as closely as possible the procedures of the brilliantly successful physical sciences - an attempt which in our field may lead to outright error.
It is an approach which has come to be described as the "scientistic" attitude - an attitude which, as I defined it some thirty years ago, "is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed." Paul Romer coined the term mathiness to describe the issue in his (unrefereed) paper Mathiness in the Theory of Economic Growth. He writes The market for mathematical theory can survive a few lemon articles filled with mathiness. Readers will put a small discount on any article with mathematical symbols, but will still find it worth their while to work through and verify that the formal arguments are correct, that the connection between the symbols and the words is tight, and that the theoretical concepts have implications for measurement and observation. But after readers have been disappointed too often by mathiness that wastes their time, they will stop taking seriously any paper that contains mathematical symbols. In response, authors will stop doing the hard work that it takes to supply real mathematical theory. If no one is putting in the work to distinguish between mathiness and mathematical theory, why not cut a few corners and take advantage of the slippage that mathiness allows? The market for mathematical theory will collapse. Only mathiness will be left. It will be worth little, but cheap to produce, so it might survive as entertainment. He goes on to give specific examples of 'mathiness', including works by high-profile economists like Lucas and Piketty. Tim Harford provides a layman's summary of Romer's paper in his blog post Down with mathiness! In this he writes As some academics hide nonsense amid the maths, others will conclude that there is little reward in taking any of the mathematics seriously. It is hard work, after all, to understand a formal economic model.
If the model turns out to be more of a party trick than a good-faith effort to clarify thought, then why bother? Romer focuses his criticism on a small corner of academic economics, and professional economists differ over whether his targets truly deserve such scorn. Regardless, I am convinced that the malaise Romer and Orwell describe is infecting the way we use statistics in politics and public life. There being more statistics around than ever, it has never been easier to make a statistical claim in service of a political argument. $\begingroup$ (+1) for the references, especially Romer's. Setting aside the gossipy issue related to his direct attack on household names like Lucas and Prescott, the most interesting thing here is this concept of "mathiness", which is subtle, because it is not about "garbage premises and then super mathematics" but about something much more subtle but equally critical: mapping verbal concepts to math symbols without proper justification. This is much more difficult to detect in a paper, if you are not really experienced. $\endgroup$ – Alecos Papadopoulos Jun 29 '18 at 22:07 $\begingroup$ I can't find a precise article, but I could have sworn modern economics violates key mathematical axioms that make the math work in the first place. Would love to have a theoretical mathematician address the problem. Probably, has something to do with the axiom of choice is my guess. $\endgroup$ – ZeroPhase Nov 15 '19 at 0:01 $\begingroup$ @ZeroPhase As long as you do not accept the axiom of choice a lot of math results cannot be proven.(I hear axiom of determinancy is a reasonable but imperfect substitute.) I don't economics "violates" any axioms. $\endgroup$ – Giskard Nov 15 '19 at 0:58 I think there are two important criticisms or limitations. Limit 1: The first, overlapping with what many others have said, is that all mathematic economics are reduced-order models of very highly complex relationships between monumentally complex actors. 
As Einstein is alleged to have said (approximately) "Insofar as the truths of mathematics relate to mathematics, they are certain. In so far as they relate to the world, they are not certain." 'Does this mathematics apply in this situation?' is always an open question. Similarly, 'Is there a better mathematics that we haven't yet discovered?' Limit 2: The other problem, and it is bigger for economics than any other field I can think of, is the extent to which the state of the art knowledge of economics changes the economy because it becomes 'common knowledge'. For example, when you convincingly show that investing in mortgage-backed securities is low-risk compared to the yield, and that home-ownership is a cornerstone of wealth-creation for ordinary people, the economy will pile into those things until the apparent excess value is consumed. This feedback and phase-changeability means that economies are non-ergodic - (apparently NN Taleb makes a lot of this point in Black Swan?) Even if economic knowledge were not encoded in policies of economic actors, the changing nature of society and technology will always cause problems under Limit 1. Neither of these limits argues for excluding mathematics from economics, but they do argue for not excluding non-mathematical considerations (e.g. the political side of political economy) from economics. In practice, this might mean a bit more authority for the judgment of older economists who are wary of, for example, the value of high-speed trading. AtcrankAtcrank I think that the opposition to mathematics in Economics mainly has to do with the obstacles it poses to indoctrination. A proposition expressed in terms of a mathematical/logic system is susceptible of objective verification, whence the inconsistencies of a proposition are more visible than where a rigid framework is missing. Moreover, mathematical propositions do not lend themselves to the hyperbole and passionate impetus that fuel a socio-political ideology. 
The excerpt cited by @denesp reflects Levinotiz's confusion between the rules of logic and the rules of grammar. Despite the definiteness inherent to Latin grammar and the complexity of expressions it allows, its lack of logical rules and relations of consistency render grammar useless as method of proof. Iñaki ViggersIñaki Viggers $\begingroup$ Reminds me of the words of Roger Beacon: "Neglect of mathematics work injury to all knowledge, since he who is ignorant of it cannot know the other sciences or things of this world. And what is worst, those who are thus ignorant are unable to perceive their own ignorance, and so do not seek a remedy." $\endgroup$ – EconJohn♦ Jun 28 '18 at 19:39 $\begingroup$ @EconJohn Exactly, and that leads to a clash of irreconcilable conclusions reached from subjective, unsystematic assessments. Marx's propositions such as "religion is the opiate of the masses" pertain to sociology rather than Economics. Adam Smith's idea of the invisible hand expresses an assumption from which causal arguments can be developed. But the social or subjective origin of an assumption or a perception is not a good reason to exclude a formal, verifiable system of logic for the development of a theory. $\endgroup$ – Iñaki Viggers Jun 28 '18 at 20:04 "All models are wrong; some are useful." The title is really all one needs, but to put a few more words behind it, mathematics is very good at deriving detailed results from very specific premises. It is very easy to make a mistake in the premises and obscure the consequences with language. A major issue in macroeconomics is that every policy decision must be self-referential. It's very easy to accidentally assume that some small actor won't change their decision making slightly in an unexpected way that makes the whole thing fall apart. It's also very easy to make the mathematics look airtight. In more microeconomic situations, you have assumptions about how the world will function. 
This is most easily seen by developing an AI which can make a killing when fed historical data, but which fails utterly in the real market. Cort AmmonCort Ammon $\begingroup$ For those who don't know, the header is quote by British statistician George Box. One of my all-time favorite quotes! $\endgroup$ – sam Jun 30 '18 at 23:18 $\begingroup$ @Sam Good point. I've put quotes on the header to make it more obvious that it's a quote. I'm a programmer by trade, so I live and die by that quote! $\endgroup$ – Cort Ammon Jun 30 '18 at 23:36 Clearly, mathematics could never cover the full richness of the human experience. …In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography. Jorge Luis Borges, On Exactitude in Science Michael GreineckerMichael Greinecker $\begingroup$ I like the image, but this seems to be against modeling in general, not mathematical modeling in economics. $\endgroup$ – Giskard Jun 29 '18 at 12:59 $\begingroup$ @debesp The first sentence is undeniably true and the Borges-quote gives the appropriate context. $\endgroup$ – Michael Greinecker Jun 29 '18 at 13:39 $\begingroup$ And why should we care about the "full richness of human experience"? It has already happened, let's do something else. 
$\endgroup$ – Alecos Papadopoulos Jun 29 '18 at 17:52 $\begingroup$ @AlecosPapadopoulos The story kinda answers your question. $\endgroup$ – Michael Greinecker Jun 29 '18 at 18:05 Jacob Theodore Schwartz (1962): The very fact that a theory appears in mathematical form, that, for instance, a theory has provided the occasion for the application of a fixed-point theorem, or of a result about difference equations, somehow makes us more ready to take it seriously. The above is probably the single most important criticism of the use (or misuse) of math in economics. As some have noted, Coase (1937, 1960, etc.) for example could not be published today, because his work — profound as it might be — would not be recognized as such, since the most advanced math it contained was elementary-school arithmetic. Conversely, useless gobbledygook filled with dozens of pages of intimidating-looking math earns you publications and tenure. Ariel Rubinstein (2012, Economic Fables): unlike philosophers and linguists, we economists behave as if we do not rely solely on our impressions of the world and introspection. Along the same lines as the previous point — math helps add the veneer or pretence of scientific "rigor". Math helps convince economists (and perhaps a few others) that their work is better and more important than that of political scientists, historians, and, of course, sociologists. Oskar Morgenstern (1950, On the Accuracy of Economic Observations): Qui numerare incipit errare incipit. [He who begins to count, beings to err.] There is the mistaken belief that whatever can be quantified, formalized, and "mathematicized" is necessarily better. Research in economics has thus been reduced to "theory" (by which is meant theorem-and-proof) and "empirical" (by which is meant regression analysis). Any other method of investigation is banished and branded "heterodox". To reuse our earlier example, Coase was an economic theorist of the highest caliber. 
Yet he would not count as one of today's "theorists" because he failed to dress his ideas up with enough math. edited Jan 1 '19 at 9:49 Kenny LJKenny LJ The problem with math as used in modern economics is that the math is often used to describe models of human behavior. Modeling human behavior, whether with math or otherwise, is incredibly difficult especially over long time scales, if our goal is to make the model match reality. So it's not really that there's a problem with using math per se, but mathematical models of human behavior are by their very nature bound to fail in multitudinous ways, so that the detailed economic models built by economists do not match reality and don't have clear practical utility. Economics must move away from modeling human behavior and move towards modeling institutions, governments, companies, etc. and the dynamics involving these agents. Mathematical models will be more useful here because the entities I described above have both fewer clearly defined parameters of existence, and their interactions with other human-composite entities are more range restricted than those involving human beings themselves. Moving away from behavioral economics will restore legitimacy to economic science because a focus on institutions will yield more accurate models and therefore greater predictive and explanatory power. credo56credo56 $\begingroup$ Do you have any reason to think modeling institutions will be any simpler than modeling human behavior? Especially over the longer time scales you note are troublesome? $\endgroup$ – ako Jun 28 '18 at 22:17 $\begingroup$ Of course I do, that's why I said it. The reasons are that the dimensions of institutional behavior and interaction are much less than that of human behavior, and more importantly, the behavior of actual institutions is much more visible to us than that of people. $\endgroup$ – credo56 Jun 28 '18 at 22:54 $\begingroup$ Who do you think runs institutions if not humans? 
$\endgroup$ – BB King Jun 29 '18 at 8:52 $\begingroup$ Hi: I just want to add that Nerlove started the attempt to model human behavior in the form of modelling expectations by coming up with adaptive expectations. later on, partial adjustment models were another attempt to do this. then, later on the whole rational expectations revolution went even further in the attempt. How well the RE models work is a different issue but there are definitely mathematical-econometric modelling efforts to model human behavior through the mechanism of modelling agent's expectations.. $\endgroup$ – mark leeds Jun 29 '18 at 8:55 $\begingroup$ @credo56 even though I upvoted your post, for showing that maths is ineffective with explaining behaviour, I disagree that economics must become more narrow. I think subjects need to be cross-curricular. Personally, I am interested in psychology, and I like the perspective economics has on behaviour. I agree that maths can't describe behaviour to a T, but I think it's fine if maths is left out of behavioural economics (instead, it can focus on understanding irrationality). $\endgroup$ – ahorn Jun 29 '18 at 11:21 Mathematics is just language that can be used to provide clear, accurate statements. It should not be seen as an obstacle—rather, it should flow naturally alongside the other language with which it is written (e.g. English). I don't believe that maths is inherently "rigorous" or "authoritative", as other answers have mentioned, because the reader should be critical enough to spot errors. However, I recognise the limitation here: either because of a limit in human cognition, because people don't put the effort in to study mathematics, or because of a fear of maths, some people aren't good at maths. I think that is where this problem stems from, but I don't believe that poor aptitude at maths is a good enough argument for why we shouldn't use it. 
Excluding maths from economics is akin to saying that that maths should be kept separate from other subjects. On the other hand, reading through the answers reminds me of Paul Romer's paper The Trouble With Macroeconomics. He criticizes (with a good example) that incorrect assumptions that are made for a mathematical deduction can easily be obfuscated. Section 5.3 reads: In practice, what math does is let macroeconomists locate the Facts With Unknown Truth Value farther away from the discussion of identification. The Keynesians tended to say "Assume P is true. Then the model is identified." Relying on a micro-foundation lets an author can say, "Assume A, assume B, ... blah blah blah .... And so we have proven that P is true. Then the model is identified." with the "blah blah blah" making it harder to detect incorrect assumptions. As Wildcard said, the average person may well end up skimming over the maths, in blind faith that it's correct, for lack of effort of checking it themselves. In closing, sure, economics needs a sociological, psychological or political setting, but mathematics helps to study ideal situations. We can't create complete models of humans or institutions, but economics would be very empty if we didn't study ideal situations. Maths belongs in economics—perhaps those who want it taken out have not satisfied their interest in social sciences enough by studying alternative social science subjects. ahornahorn $\begingroup$ Romer's Mathiness is indeed lurking in several of the answers. $\endgroup$ – Giskard Jun 29 '18 at 12:40 Economics is a social science, not an empirical or laboratory one. It is the study of human behavior in response to competing demands in an environment of scarcity. Human behavior cannot be predicted with mathematical precision — the only way to do it is to make large numbers of gratuitous and unsupportable assumptions about what people will do under a given set of circumstances. Mathematical economists do not study people. 
Instead, they study what Nobel laureate Richard Thaler calls "Econs"... perfectly-knowledgeable, perfectly-intelligent, perfectly-logical, perfectly-sophisticated, perfectly-intentioned, perfectly-identical automatons who live and work in an environment of perfect competition; as opposed to humans, who are none of those things and who live on Planet Earth. It's not that math is bad: it lets us easily communicate complex ideas clearly and precisely. But we need to remember that the predictions rendered by mathematical economics, very often, will not hold in real life. We need to understand (and promote that understanding in those who look to the economics community for guidance and advice) that math only gets you so far. To make good policy, you need to understand what flawed, fallible, semi-unique, stressed, busy, selfish, sometimes stupid, imperfect human people will do. And mathematics cannot tell you that. DaveDave $\begingroup$ But most of Thaler's models, which try to include some aspects of human psychology, are based in math. Is he then a fraud, or is this a misrepresentation of what he says? $\endgroup$ – Giskard Jun 28 '18 at 19:38 $\begingroup$ Most economists would not claim that that is what they are doing, so this does not seem to directly answer the question. These are models, often simplified to the extreme, to capture one aspect of behavior. $\endgroup$ – Giskard Jun 28 '18 at 19:41 $\begingroup$ Weather can't be predicted with mathematical precision either, but meteorologists must know quite a bit of mathematics to do their jobs. $\endgroup$ – Monty Harder Jun 28 '18 at 21:33 $\begingroup$ No, no, no, no. Literally nothing in the list of "perfectly-knowledgeable, perfectly-intelligent, perfectly-logical, perfectly-sophisticated, perfectly-intentioned, perfectly-identical automatons who live and work in an environment of perfect competition" describes the extent of mathematical economics.
$\endgroup$ – Michael Greinecker Jun 29 '18 at 12:07 $\begingroup$ @Dave Mathematical economists mostly study the consequences of different assumptions. As such, there are no assumptions made by all of them all the time. But every advanced undergraduate should have seen models of imperfect competition, models in which not all agents are the same, and models of imperfect information. To be blunt: You clearly have no idea what you are talking about. $\endgroup$ – Michael Greinecker Jun 29 '18 at 15:13 To start with, it can be noted that the rise in mathiness in economics is substantially related to increased data-processing power, whether in support of theoretical demonstration or empirical application. It is not itself an objective. Regarding the specific question of why heightened mathiness may be criticized: 1) Economics originates from moral philosophy. There are those who believe that debates involving who gets what, and on what terms, are related to moral philosophy. Mathematical tools can help to express moral concepts or present argumentation about which approach might better serve some moral end. 2) a) Complex math can enable theoretical presentation which is mathematically satisfactory for expressing a theory, but mathematical complexity should not be perceived as a demonstration of quality in and of itself, and b) mathematical complexity doesn't necessarily mean that empirical applications will be any better. The risk may be that, in order to impress other economists, unnecessarily and/or incorrectly complex math is used to express and/or develop a theory. I think being open-minded in this context would be supported by a belief that diverse economists question the value of heightened mathiness, or that diverse economists view heightened mathiness as a tool (which brings risks, in particular of false overconfidence in results) and not an objective in and of itself.
It can also be noted that one of the main contributions of Marx, aside from proto-macro theory, is the extensive development of the idea that technology affects production conditions. And, that production conditions affect the way that all of us live. You don't have to be a communist to think that this piece of knowledge is a) useful, and b) not necessarily well served by mathematical demonstration, even if some very mathy empirical applications may present results which are very relevant for practical policy considerations. For most cases, such views should not be perceived as 'anti-math', per se, but rather critical of excessive reliance on (or overconfidence in) mathematical demonstration and/or math-heavy empirical applications as a tool. These can be supplemented by socio-political and/or moral argumentation or reasoning, or, if outside of the scope of the work, it can at least be explicitly recognized that such considerations are relevant. nathanwwwnathanwww Most economics questions have three parts:

1. Why does a phenomenon occur? This lets the user understand the answer, understand whether the question is relevant, and understand what factors would change the answer to the next part.
2. How much of the phenomenon is likely to occur? This lets the user make decisions based on the answer, and compare the importance of various phenomena.
3. Under what conditions does a different phenomenon replace this phenomenon?

An answer that does not address all three sub-questions is incomplete. It is likely to either be misunderstood, or be misleading. Math is needed to get an approximate answer to the second sub-question: How much? A person with a good understanding of the math can simplify the math to provide insight into the first and third sub-questions: Why, and with what limits? For example, Cobb-Douglas production functions (and mathematically similar utility functions) use math that most non-economists do not understand.
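The Cobb-Douglas remark can be made concrete with a small sketch: such functions have constant elasticities, which is exactly what lets them be compressed into one plain-language number, like the oil-price example that follows. All numeric values here (A, alpha, beta, and the roughly -1/7 short-run demand elasticity implied by the "1 percent cut, 7 percent rise" figure) are illustrative assumptions, not numbers taken from data.

```python
# Hedged sketch (assumed parameter values): a Cobb-Douglas function has
# constant elasticities, so its behavior compresses into a single number.

def cobb_douglas(K, L, A=1.0, alpha=0.3, beta=0.7):
    """Output from capital K and labour L (A, alpha, beta are assumptions)."""
    return A * (K ** alpha) * (L ** beta)

def elasticity(f, x, dx=1e-6):
    """Numerical elasticity of f at x: percent change in f per percent change in x."""
    f0, f1 = f(x), f(x * (1 + dx))
    return (f1 - f0) / f0 / dx

# Elasticity of output with respect to capital: it equals alpha (0.3)
# regardless of where it is evaluated -- that is the "constant" part.
e_K = elasticity(lambda K: cobb_douglas(K, L=5.0), x=2.0)

# The OPEC-style simplification: with a short-run demand elasticity of
# about -1/7 (an assumed value), a 1% supply cut raises price about 7%.
price_change = -0.01 / (-1.0 / 7.0)
```

Evaluating `e_K` at any other capital level gives the same 0.3, which is why an elasticity summary like the "7 percent" figure is faithful shorthand for these functional forms.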
The essential features of these functions can be boiled down to "price elasticities" of supply and demand. These are terms that most non-economists do not understand, but they can be turned into examples that most people do understand. For example, such functions for global oil production and demand during the 1980s could be simplified to "In the short run, if OPEC cuts its production by 1 percent of the total world production, then the price of oil will go up by 7 percent." Unfortunately, many economists use math badly:

- Instead of using the math to generate (and verify) a simplified explanation, some economists go through the details of a complicated "mathematical demonstration". In the end, the reader has to trust that the economist made the right assumptions, and often only as an answer to "how much", not "why" nor "with what limits".
- Some economists are not careful to explain the uncertainties inherent in their math.
- Some economists use symbols ignorantly. I once had the displeasure of listening to a lecture by a well-paid, soon-to-be-famous economist. He had many charts about things like long-term trends of the price of power, which were on a log-log scale. The x-axis was labelled as log(dollars), and the y-axis was labelled as log(kW). But his units were actually ln(dollars) and ln(kW). When politely asked later about it, he did not understand that this was a problem! (If he had actually wanted to be understood, he would have labelled the y-axis as W, kW, MW, GW, et cetera, and used similar labels for the x-axis.)

JasperJasper $\begingroup$ I suspect (but don't actually know) that the convention whether log is short for log$_e$ or log$_{10}$ is language dependent. $\endgroup$ – Giskard Jul 2 '18 at 5:51 $\begingroup$ @denesp -- The lecture was in American English. Both the lecturer and I are Americans, and have been part of nearby universities.
$\endgroup$ – Jasper Jul 2 '18 at 5:58 In my experience the most important reason is that economics has political implications, and that creates a huge moral hazard: using complex, incomprehensible math to arrive at politically desirable conclusions. Unlike in the natural sciences, economic models can hardly be verified empirically and require tons of assumptions. Add a thick layer of math on top and you can support pretty much anything. In fact, anything beyond linear regression hardly improves predictive power in practice. Seasoned economists see through this. Some are in on it (hey, it is very profitable!) and some are pretty unhappy about all this abuse of math, which is unethical from a scientific point of view. But I guess many are both. At the end of the day, we all have bills to pay and families to feed. Nevertheless, we are still scientists. So there is a lot of cognitive dissonance and strong feelings going on. edited Dec 8 '18 at 20:59 Arthur TarasovArthur Tarasov $\begingroup$ I think most physics models also require a ton of assumptions. It is their empirical verification that is better. Perhaps the system they study can be more frequently decomposed into smaller independent parts. $\endgroup$ – Giskard Jul 1 '18 at 13:23 $\begingroup$ Economic models not only can be but constantly are empirically verified. Why do people make strong claims about subjects they clearly don't know? Just see what people are publishing in the frontier journals: academic.oup.com/qje/issue. Most, if not all, papers published in these good journals empirically verify a theoretical hypothesis or conclusion from a model. $\endgroup$ – Pedro Cavalcante Dec 10 '18 at 7:45 $\begingroup$ @PedroCavalcanteOliveira man, QJE is #1. There are thousands of economics journals below it that will publish things of much lower rigor, if any rigor at all, and politicians use those just as well to push the policies of their choice. Guess how many bother to replicate and test any of it?
That would require funding. From the same politicians, that is, or an NGO with its own agenda. That's why when I see things that are supercomplex for the sake of a bit higher accuracy but take tons of time and resources to test, I get a bit critical. $\endgroup$ – Arthur Tarasov Dec 10 '18 at 9:25 $\begingroup$ You can't look at the worst outlets of a field and claim that there's a problem with it because they're bad. If that's reasonable, then literally all sciences are in big trouble. And bringing this generic argument about politicians basing themselves on bad journals isn't any good. Who are these politicians? Where and when did this happen? Can we blame economics as a field for it? Your claim was that "economic models can hardly be verified empirically", which is clearly wrong. Most of the papers published in any respectable journal are empirical. That should be your standard. $\endgroup$ – Pedro Cavalcante Dec 10 '18 at 22:30 $\begingroup$ @PedroCavalcanteOliveira My point is that many people like it simple when there is moral hazard involved. A good standard for verifying something is an experiment with all the variables controlled. That is a very hard thing to do in the social sciences. Not saying we shouldn't push the math forward, just don't build skyscrapers on the sand. $\endgroup$ – Arthur Tarasov Dec 11 '18 at 6:23 It is not the math itself but that authors misuse the language of math. Check out this article (unrelated to the topic). Where are the definitions? What is the meaning of S, E, the arrow in between, and all these other symbols? Someone who has not studied this subject cannot know. Scientific texts have many quality standards, like citing others, but defining math symbols is not a standard. In my opinion that is not good, especially if such publications are read by the public. It should be a standard in science to define all symbols in public contexts.
I believe this to be the answer to why your colleagues and most other math haters don't like "math" (which, as I already said, is actually not the problem). The solution can only come from the scientific community. For websites there is, btw, a trivial solution: hover over the above link to see it. Nils LindemannNils Lindemann $\begingroup$ That's so true. I've been teaching myself RE for ~ two years and the RE literature is extremely difficult to understand. They define very little and often assume signs of coefficients, which can make things totally confusing. For example, it took me 2 weeks and help from a top professor in economics to understand a statement on page 2 of the paper at the link below. It turned out that it was because alpha was assumed negative, but this was not stated anywhere. We had to go back to an earlier paper to figure that out. jstor.org/stable/2526858?seq=1#page_scan_tab_contents $\endgroup$ – mark leeds Jun 30 '18 at 5:44 $\begingroup$ if anyone is interested, it's at the bottom of page 2 where he says "a one percent jump in anticipated inflation, blah, blah". Why not state that $\alpha$ is assumed negative? Not everyone would necessarily know that, I don't think. $\endgroup$ – mark leeds Jun 30 '18 at 5:56 This is not so much an answer as more of a note motivated predominantly by the softness of the question. It can be the case that the statement "[...] the study of work by classical economists (like Adam Smith, Karl Marx, and David Ricardo) is still relevant" (insert qualifications) is true irrespective of the truth value of the assertion "[...] the practice of how mainstream economics uses mathematics is abusive and is an attempt to fool the masses regarding the "science" economists practice.". My point is that the relevance of the classics is not necessarily related to the relevance (or lack thereof) of using mathematics in economics.
Obviously, private communications are opaque to anyone not present, and as I was not present in the private communications that instigated this question, it is not possible to comment on the specific arguments that lend (or detract) support to the math-relevance thesis. I think that there is some renewed interest in the history of economics as a discipline, and economic historians are trying to investigate the various paths that economic theory has followed in modern times; I won't use references as I am not an economic historian, but I think it is relatively easy for anyone to find material on such issues. My personal understanding of the subject is that the success of the war effort during WWII attributed (rightly or wrongly, that is debatable) a certain amount of credibility to tools and approaches used in operations research and related fields; obviously these fields were more mathematical in spirit. With the advent of the Cold War and the political and ideological issues that ensued, it was only natural to expect that tools that had proven themselves useful in the recent past (math, operations research) would be used again to stave off the red scare. Add to this mix the arms race of the Cold War and the subsequent major and minor breakthroughs in hard sciences related to the nuclear effort, etc. It is not difficult to imagine why the agony of the "free world" to emerge victorious from the Cold War painted the tools it had invested so much into with favorable colors. Now, there comes an inversion in this scheme, where the tools that had been proven useful once are subsequently used almost ceremonially to impart use value into the body of knowledge that accumulated around their use. That is not to say that the mathematics were 'wrong' or 'too abstract' or 'irrelevant'. But it is the case that at some point the tool-case became more important than the actual problems it could solve. And this is equivalent to hubris.
On a final note, damning or glorifying economics for its use of mathematics seems misplaced so long as the body of knowledge under the heading 'economics' fails to produce positive results for the society at large. Resources have competing uses and economists know that very well. This is an update about math and the classical econs (as it was too long for a comment). The classical econs couldn't have used calculus, as Leibniz and Newton invented it during the mid- and late-1600s and it was formalized by mathematicians 100-150 years later into something recognizable. I know Marx did some fondling with infinitesimal calculus but never used it as a proper tool; similarly, the use of linear algebra and systems of linear equations was predominantly popularized by the triumph of Dantzig's simplex algo. The point is that IMO the classic econs did not have that stock of knowledge available to them. Furthermore, political economy was to a large extent a discursive enterprise that was meant to convince the hegemon about the proper path to prosperity (whatever that meant to them at that time). Consider e.g. the Physiocrats. The Tableau of Quesnay (a contemporary of A. Smith) was to a large extent a description of flows that needed little effort to be translated into a linear system of inputs and outputs. It wasn't, because:

1. (a) his formal education was in medicine (he was trained as a physician), and (b) the tools to do so were invented by Leontief in the 60's;
2. he and his disciples had all the legitimacy they needed (Turgot, a disciple of Quesnay, eventually became finance Minister).

The point I'm trying to make is that the lack of mathematical rigor in the classical econs does not necessarily mean that they are irrelevant. jojobabajojobaba $\begingroup$ A major distinction between the "classical economists" and later economists is that the classical economists used neither calculus nor large systems of linear equations to derive their results.
The great classical economists did include a few simple mathematical examples. $\endgroup$ – Jasper Jul 2 '18 at 4:40 I think there are two legitimate sources of complaint. For the first, I will give you the anti-poem that I wrote in complaint against both economists and poets. A poem, of course, packs meaning and emotion into pregnant words and phrases. An anti-poem removes all feeling and sterilizes the words so that they are clear. The fact that most English-speaking humans cannot read this assures economists of continued employment. You cannot say that economists are not bright. Live Long and Prosper - An Anti-Poem May you be denoted as $k\in{I},I\in\mathbb{N}$, such that $I=1\dots{i}\dots{k}\dots{Z}$ where $Z$ denotes the most recently born human. $\exists$ a fuzzy set $Y=\{y^i:\text{Human Mortality Expectations}\mapsto{y^i},\forall{i\in{I}}\},$ may $y^k\in\Omega,\Omega\in{Y}$ and $\Omega$ is denoted as "long" and may $U(c)$, where $c$ is the matrix of goods and services across your lifetime, $U$ is a function of $c$, where preferences are well-defined and $U$ is qualitative satisfaction, be maximized $\forall{t}$, $t$ denoting time, subject to $w^k=f'_t(L_t),$ where $f$ is your production function across time and $L$ is the time vector of your amount of work, and further subject to $w^i_tL^i_t+s^i_{t-1}=P_t^{'}c_t^i+s^i_t,\forall{i}$ where $P$ is the vector of prices and $s$ is a measure of personal savings across time. May $\dot{f}\gg{0}.$ Let $W$ be the set $W=\{w^i_t:\forall{i,t}\text{ ranked ordinally}\}$ Let $Q$ be the fuzzy subset of $W$ such that $Q$ is denoted "high". Let $w_t^k\in{Q},\forall{t}$ The second source of complaint is mentioned above: the misuse of math and statistical methods. I would both agree and disagree with the critics on this. I believe that most economists are not aware of how fragile some statistical methods can be.
To provide an example, I did a seminar for the students in the math club on how your probability axioms can completely determine the interpretation of an experiment. I proved using real data that newborn babies will float out of their cribs unless nurses swaddle them. Indeed, using two different axiomatizations of probability, I had babies clearly floating away and obviously sleeping soundly and securely in their cribs. It wasn't the data that determined the result; it was the axioms in use. Now any statistician would clearly point out that I was abusing the method, except that I was abusing the method in a manner that is normal in the sciences. I didn't actually break any rules, I just followed a set of rules to their logical conclusion in a way that people do not consider because babies don't float. You can get significance under one set of rules and no effect at all under another. Economics is especially sensitive to this type of problem. I do believe that there is an error of thought in the Austrian school, and maybe the Marxist one, about the use of statistics in economics, which I believe is based on a statistical illusion. I am hoping to publish a paper on a serious math problem in econometrics that nobody has seemed to notice before, and I think it is related to the illusion. This image is the sampling distribution of Edgeworth's maximum likelihood estimator under Fisher's interpretation (blue) versus the sampling distribution of the Bayesian maximum a posteriori estimator (red) with a flat prior. It comes from a simulation of 1000 trials, each with 10,000 observations, so they should converge. The true value is approximately .99986. Since the MLE is also the OLS estimator in this case, it is also Pearson and Neyman's MVUE. Note how relatively inaccurate the frequency-based estimator is compared to the Bayesian. Indeed, the relative efficiency of $\hat{\beta}$ under the two methods is 20:1.
Although Leonard Jimmie Savage was certainly alive when the Austrian school left statistical methods behind, the computational ability to use them didn't exist. The first element of the illusion is inaccuracy. The second part can better be seen with a kernel density estimate of the same graph. In the region of the true value, there are almost no examples of the maximum likelihood estimator being observed, while the Bayesian maximum a posteriori estimator closely covers .999863. In fact, the average of the Bayesian estimators is .99987, whereas the frequency-based solution is .9990. Remember this is with 10,000,000 data points overall. Frequency-based estimators are averaged over the sample space. The missing implication is that the estimator is unbiased, on average, over the entire space, but possibly biased for any specific value of $\theta$. You also see this with the binomial distribution. The effect is even greater on the intercept. The red is the histogram of Frequentist estimates of the intercept, whose true value is zero, while the Bayesian is the spike in blue. The impact of these effects is worsened with small sample sizes because large samples pull the estimator to the true value. I think the Austrians were seeing results that were inaccurate and didn't always make logical sense. When you add data mining into the mix, I think that is why they were rejecting the practice. The reason I believe the Austrians are incorrect is that their most serious objections are solved by Leonard Jimmie Savage's personalistic statistics. Savage's Foundations of Statistics fully covers their objections, but I think the split had effectively already happened and so the two have never really met up. Bayesian methods are generative methods while Frequency methods are sampling-based methods.
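A scaled-down sketch of the kind of experiment described above (not the author's code; the AR(1) coefficient, sample size, and trial count are reduced, assumed values so it runs quickly) illustrates the qualitative point: the OLS/ML estimate of a near-unit-root autoregressive coefficient is biased downward on average, even though each sample is large.

```python
# Hedged sketch (assumed, scaled-down parameters): OLS/ML estimates of a
# near-unit-root AR(1) coefficient are biased downward in finite samples.
import random

random.seed(0)
RHO, T, TRIALS = 0.999, 2000, 200  # true coefficient, sample size, trials

def ols_ar1(y):
    # Regress y[t] on y[t-1] with no intercept: rho_hat = sum(xy) / sum(xx).
    num = den = 0.0
    for t in range(1, len(y)):
        num += y[t] * y[t - 1]
        den += y[t - 1] * y[t - 1]
    return num / den

estimates = []
for _ in range(TRIALS):
    y = [0.0]
    for _ in range(T):
        y.append(RHO * y[-1] + random.gauss(0.0, 1.0))  # normal shocks
    estimates.append(ols_ar1(y))

# The finite-sample bias pulls the average estimate below the true value.
mean_rho_hat = sum(estimates) / TRIALS
```

This only reproduces the direction of the effect, not the specific .9990 versus .99987 figures above, which come from the author's own larger simulation.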
While there are circumstances where it may be inefficient or less powerful, if a second moment exists in the data, then the t-test is always a valid test for hypotheses regarding the location of the population mean. You do not need to know how the data was created in the first place. You need not care. You only need to know that the central limit theorem holds. Conversely, Bayesian methods depend entirely on how the data came into existence in the first place. For example, imagine you were watching English-style auctions for a particular type of furniture. The high bids would follow a Gumbel distribution. The Bayesian solution for inference regarding the center of location would not use a t-test, but rather the joint posterior density of each of those observations with the Gumbel distribution as the likelihood function. The Bayesian idea of a parameter is broader than the Frequentist one and can accommodate completely subjective constructions. As an example, Ben Roethlisberger of the Pittsburgh Steelers could be considered a parameter. He would also have parameters associated with him, such as pass completion rates, but he could have a unique configuration, and he would be a parameter in a sense similar to Frequentist model comparison methods. He might be thought of as a model. The complexity rejection isn't valid under Savage's methodology and indeed cannot be. If there were no regularities in human behavior, it would be impossible to cross a street or take a test. Food would never be delivered. It may be the case, however, that "orthodox" statistical methods can give pathological results that have pushed some groups of economists away. Dave HarrisDave Harris $\begingroup$ This is interesting, but what was the data and what was being estimated? You say "Edgeworth's MLE", but MLE under what distributional assumption of what data? I may have missed a previous post. Thanks for clarification.
$\endgroup$ – mark leeds Jul 3 '18 at 19:23 $\begingroup$ The data is from a set of simulations from a time series that is stationary AR(1) with normal shocks. $\endgroup$ – Dave Harris Jul 3 '18 at 20:34 $\begingroup$ In that case, you've got a VERY, VERY, VERY, VERY near unit root process, which is going to cause the classical statistical assumptions to fail. So it sounds more like an assumption problem rather than a problem with classical statistics. As you're probably aware, unit root processes lead to Dickey-Fuller-type distributions rather than t-distributions. My best guess is that that's what's going on there. Still, an interesting example. Thanks. $\endgroup$ – mark leeds Jul 3 '18 at 20:43 $\begingroup$ That is what started the investigation. I am looking at nearly and just barely explosive roots. $\endgroup$ – Dave Harris Jul 3 '18 at 20:44 $\begingroup$ There is a Bayesian solution for both less-than and greater-than unit root processes. The elaborate Frequentist solutions are completely unnecessary. Being non-stationary is a headache, but only in the sense that the predictions are weaker, not in the calculation sense. $\endgroup$ – Dave Harris Jul 3 '18 at 20:48 I don't think there is a blanket reason to be against math any more than there is a blanket reason to be against case studies. It is almost a matter of epistemology. What are the knowledge claims made, with what methods, and with what evidence? Some sorts of questions are very well suited for a quantitative treatment: like, what is the effect of increased accessibility on housing prices? Or, given a number of variables on cost and household demographics, which mode of transport is a household likely to take to work? There are models that are well suited for finding patterns in that type of question where the domain is fairly specific, and they may work reasonably well even absent a strong theory underlying observed patterns.
Conversely, a number of questions are of a different nature entirely, related to bigger historical shifts. The rise and fall of the labor movement in the US, say, or why did some cities see a revival when others didn't? Such questions are probably better answered by a different approach than using models (this doesn't mean that there can't be useful quantitative components of asking those questions). Ultimately, I think it has more to do with the sorts of questions different researchers are interested in rather than a wholesale rejection of a practical approach. akoako At the end of the day, economics and its offshoots (i.e. business, management, marketing, etc.) are all social sciences. These areas of inquiry are concerned with specific facets of human behavior as individuals or groups. While quantitative methods are very useful in categorizing and generalizing these behaviors, the behavior itself is highly personal and individualistic. For example, you and I could go into the same supermarket, at the same time, buy the same items and leave. This behavior, when analyzed quantitatively, will be reduced to an average of our behavior and its root causes; it will, however, completely miss the individual behaviors. By defining a non-existent third behavior (the average), it will model our behaviors, but will not reflect the true nature of the behaviors it is trying to explain. When you have a large enough sample for a given population, the quantitative analysis just about becomes good enough to draw generalizations, but the generalization being drawn usually is not representative of a given individual. Sinan YilmazSinan Yilmaz Beyond the quantitative aspects, there are also qualitative factors which do not lend themselves to numeric treatment. My background is electrical engineering, which quite correctly employs quantitative methods extensively. Although investing isn't economics, there is a relationship.
As much as possible, I've been trying to read and implement the information and wisdom imparted by Benjamin Graham and his colleague David Dodd. Graham himself was the instructor, and later the employer, of Warren Buffett. Graham felt that when anything more than the 4 basic arithmetic operations was dragged into the model, description, or analysis, then somebody was trying to "sell you a bill of goods". Graham himself was very mathematically adept, and knew calculus and differential equations much better than most students and instructors. So, use of advanced mathematics in some ways acts to obscure, rather than elucidate, matters pertaining to "proper" investment practice. Buffett is still very much alive. Graham himself and most of his employees or students are all long gone, but they all seemed to have died rich. Look through his books "Security Analysis" and "The Intelligent Investor" and you won't find a derivative, integral, ODE, or PDE. IchabodKunkleberryIchabodKunkleberry $\begingroup$ You may enjoy reading about the company Long-Term Capital Management. $\endgroup$ – Giskard Jul 2 '18 at 17:02 $\begingroup$ @denesb: The LTCM disaster was based on a set of assumptions and confidence about the way people have. It had zero to do with math per se, but it is still an interesting read for those interested. OTOH, if you're making a statement about math not always being applicable in finance, I agree. $\endgroup$ – mark leeds Jul 3 '18 at 15:30 $\begingroup$ Actually, Graham was an economist and indeed presented an alternative monetary regime at the Bretton Woods conference. Just to be fair to Graham, he might, in fact, use those tools today. The Graham-Dodd method actually lends itself to both statistical and economic model constructions. $\endgroup$ – Dave Harris Jul 3 '18 at 16:55 $\begingroup$ should be "people behave" not people have. Not sure how to correct it.
$\endgroup$ – mark leeds Jul 3 '18 at 19:25 Much of the criticism comes from the recent financial crisis. Economists failed to predict the crisis, despite the super sophisticated models. Many then said that economics is wrong because these super complex models cannot capture essential elements of life and behavior and society. So part of the movement against mathematics is just a response to the evidence. For many, the science is often a failure. "What is a reason to be against mathematics in economics?" IMO, if you frame your whole economic thinking in mathematical terms (or too much of it), your thought process might become less flexible and innovative. Mathematically formalizing economic theories can be an arduous task: Some postulates might require extreme care when translating them into mathematical language. This has an opportunity cost in terms of time and intellectual energy which will not be spent on more "productive" tasks (e.g. exploring new, radical ideas for longstanding problems); Mathematics requires a rigor that is simply absent when a new idea is emerging: you might not be able to formulate mathematically something you are barely starting to understand. As a consequence, your economic thinking might end up "hijacked" by a set of assumptions that enable you to mathematically formalize your theory/model, but which constrain the range of novel economic ideas you can formulate. Daneel OlivawDaneel Olivaw $\begingroup$ To address both the points here: peer-reviewed journals deal with both of these issues. When ideas are presented, they have to go through a process where they are critically reviewed; if they can't stand up to scrutiny (or the author can't handle the criticism), then why publish it? $\endgroup$ – EconJohn♦ Jul 3 '18 at 16:15 $\begingroup$ @EconJohn "standing up to scrutiny" involves a fair degree of subjectivity: when L.
Bachelier presented his thesis applying Brownian motion to model stocks, the reception was mixed, as the jury deemed it wasn't fully rigorous. Nonetheless, his work has subsequently been tremendously influential in the theory of finance. Original work might deviate from a profession's prevailing standard (e.g. rigorous mathematical formalization), which does not necessarily invalidate its relevance. So some people might be against an excessive use of mathematics in economics because of that. – Daneel Olivaw Jul 3 '18 at 17:26

Why the down vote, by the way? – Daneel Olivaw Jul 3 '18 at 17:31
5th Cornell Conference on Analysis, Probability, and Mathematical Physics on Fractals Abstracts of Talks Eric Akkerman, Technion Recent results on fractals in physics Christoph Bandt, University of Greifswald A non-pcf fractal which makes analysis easy Analysis on fractals without the pcf property faces the problem that self-similarity can rarely be used to simplify the investigation. We present a non-pcf example for which the Dirichlet problem can be easily solved. The definition of the Laplacian will be discussed in the manner of Strichartz's 2001 paper. The example is a modification of the octagasket. David Croydon, Warwick University Modulus of continuity for local times of random walks on graphs In this talk, I present continuity results for the local times of random walks on graphs in terms of the resistance metric. I will also explain a particular application to the case when the graphs in question satisfy the property of uniform volume doubling in the resistance metric, as is the case for various self-similar fractals. Eva Curry, Acadia University Martin and Poisson Boundaries and Low-Pass Filters We give a characterization of low-pass filters for multivariable scaling functions (associated with multivariable multiresolution analyses and wavelet sets) via a Markov process on the tree of finite words over the digit set corresponding to the dilation matrix used. Uta Freiberg, Universitat Stuttgart Differential operators and generalized trigonometric functions on fractal subsets of the real line Spectral asymptotics of second order differential operators of the form $d/dm$ $d/dx$ on the real line are well known if $m$ is a self-similar measure with compact support. We extend the results to some more general cases such as random fractal measures and self-conformal measures. Moreover, we give a representation of the eigenfunctions as generalized trigonometric functions. The results were obtained in collaboration with Sabrina Kombrink and Peter Arzt.
Michael Hinz, Universitat Bielefeld Nakao's theorem and magnetic Laplacians on fractals Jiaxin Hu, Tsinghua University Heat kernel and Harnack inequality We show that the elliptic Harnack inequality follows from the volume doubling condition, Poincare inequality, and generalized capacity bound. Combining this with previous work, we obtain various equivalent conditions for heat kernel two-sided estimates. Marius Ionescu, Colgate University and University of Maryland Some spectral properties of pseudo-differential operators on the Sierpinski Gasket We describe some results on the asymptotic behaviour of pseudo-differential operators on the Sierpinski gasket. We present the asymptotics of clusters of eigenvalues of generalized Schrodinger operators, generalizing some work due to Okoudjou and Strichartz. We also describe the asymptotics of the trace of continuous functions of pseudo-differential operators. Our results generalize the classical limit theorem of Szego and its extension to pseudo-differential operators on manifolds, and extend some recent work of Okoudjou, Rogers, and Strichartz. This presentation is based on joint work with Kasso Okoudjou and Luke Rogers. Palle Jorgensen, University of Iowa Harmonic analysis on fractals While we have in mind a wider variety of fractals than those defined by affine similarity mappings, the affine case will be our main focus. We will discuss three candidates for a harmonic analysis: (1) orthonormal bases in $L^2$ of the fractal under consideration; (2) a version making use of representations of a certain non-abelian algebra (generators and relations); and finally (3) one making use of energy Laplacians and boundaries. Each one has its advantages and limitations; all three will be discussed. The talk will be based on joint research with several colleagues, especially Dorin Dutkay, Erin Pearse, Steen Pedersen, Myung-Sin Song.
Jun Kigami, Kyoto University Geometry of self-similar sets and time change of Brownian motion Takashi Kumagai, Kyoto University Simple random walk on the two-dimensional uniform spanning tree and the scaling limits In this talk, we will first summarize known results about anomalous asymptotic behavior of simple random walk on the two-dimensional uniform spanning tree. We then show the existence of subsequential scaling limits for the random walk, and describe the limits as diffusions on the limiting random real trees embedded into Euclidean space. Anomalous heat transfer on the random real trees will be observed by estimating heat kernels of the diffusions. This is an ongoing joint project with M.T. Barlow (UBC) and D. Croydon (Warwick). Ka-Sing Lau, Chinese University of Hong Kong Lipschitz equivalence of self-similar sets and hyperbolic graphs Recently there has been a lot of interest in the Lipschitz equivalence of self-similar sets. Our approach is to use the augmented tree structure (Kaimanovich) on the symbolic space, which is a hyperbolic graph in the sense of Gromov. We define a class of "simple" augmented trees on the IFS, and show that the associated self-similar sets are Lipschitz equivalent to Cantor sets. This is joint work with Guotai Deng and Jason Luo. Sze-Man Ngai, Georgia Southern University Eigenvalue estimates of Laplacians defined by fractal measures We study various lower and upper estimates for the first eigenvalue of Dirichlet Laplacians defined by positive Borel measures on bounded open subsets of Euclidean spaces. These Laplacians and the corresponding eigenvalue estimates differ from classical ones in that the defining measures can be singular. By using properties of self-similar measures, such as Strichartz's second-order self-similar identities, we improve some of the eigenvalue estimates.
Roberto Peirone, Università di Roma Tor Vergata Uniqueness of eigenform on fractals I present some recent results on the uniqueness of self-similar energies on finitely ramified fractals. In particular, some necessary and sufficient or sufficient conditions for nonuniqueness are given. These conditions amount to the existence of two disjoint subsets of the set of the initial vertices that are stable under some special transformations. As a consequence of such conditions, there exist several fractals where the self-similar energy is not unique and whose structures are very different from a tree. Conrad Plaut, University of Tennessee Discrete homotopies and the fundamental group Discrete homotopy methods were developed by Berestovskii, Plaut, and Wilkins over the last few years. They allow "stratification" of the fundamental group according to the "size of the holes" being measured. This allows study of spaces with bad local topology, such as fractals, but also allows additional understanding of the relationship between topology and geometry at a given scale. In this talk, I will discuss some applications: a "curvature-free" generalization of Cheeger's fundamental group finiteness theorem and a quantitative generalization of a theorem of Gromov on generators for the fundamental group of a compact Riemannian manifold. This is joint work with Jay Wilkins. If there is time, I will discuss a new topological invariant based on discrete homotopy methods that can distinguish between spaces that can't be distinguished by standard topological invariants. Hua Qiu, Nanjing University Exact spectrum of the Laplacian on a domain in the Sierpinski gasket For a certain domain $\Omega$ in the Sierpinski gasket $\mathcal{SG}$ whose boundary is a line segment, a complete description of the eigenvalues of the Laplacian, with an exact count of dimensions of eigenspaces, under the Dirichlet and Neumann boundary conditions is presented.
The method developed in this paper is a weak version of the spectral decimation method due to Fukushima and Shima, since for a lot of "bad" eigenvalues the spectral decimation method cannot be used directly. Let $\rho^0(x)$, $\rho^\Omega(x)$ be the eigenvalue counting functions of the Laplacian associated to $\mathcal{SG}$ and $\Omega$ respectively. We prove a comparison between $\rho^0(x)$ and $\rho^\Omega(x)$ which says that $0\leq \rho^{0}(x)-\rho^\Omega(x)\leq C x^{\log 2/\log 5}\log x$ for sufficiently large $x$, for some positive constant $C$. As a consequence, $\rho^\Omega(x)=g(\log x)x^{\log 3/\log 5}+O(x^{\log2/\log5}\log x)$ as $x\rightarrow\infty$, for some (right-continuous, discontinuous) $\log 5$-periodic function $g:\mathbb{R}\rightarrow \mathbb{R}$ with $0<\inf_{\mathbb{R}}g<\sup_\mathbb{R}g<\infty$. Moreover, we explain that the asymptotic expansion of $\rho^\Omega(x)$ should admit a second term of order $x^{\log 2/\log 5}$, which becomes apparent from the experimental data. This is very analogous to the conjectures of Weyl and Berry. Hans Martin Reuter, University of Mainz Quantum gravity, asymptotic safety, and fractals After a brief review of the asymptotic safety approach to quantum gravity, we discuss the emergence of effective spacetimes which possess fractal properties, present recent results on their scale-dependent spectral dimension, and use them for a detailed comparison of the statistical-mechanics-based and the continuum approach to asymptotically safe quantum gravity. Luke Rogers, University of Connecticut Sobolev spaces that are not algebras Huojun Ruan, Zhejiang University The "hot spots" conjecture on p.c.f. self-similar sets In this talk, we will discuss the recent progress on the "hot spots" conjecture for p.c.f. self-similar sets, including the standard Sierpinski gasket, the Sierpinski gasket with level 3, and Sierpinski gaskets in the higher dimensional case.
Nageswari Shanmugalingam, University of Cincinnati Geometry and $\infty$-Poincare inequality vs. $p$-Poincare inequality The focus of the talk will be on the geometric characterizations of doubling metric measure spaces supporting the $\infty$-Poincare inequality, and the distinctions between the $\infty$-Poincare inequality and the $p$-Poincare inequality. This talk is based on joint work with Estibalitz Durand-Cartagena, Jesus Jaramillo, and Alex Williams. Benjamin Steinhurst, McDaniel College Generalized Witt vectors and Lipschitz functions between Cantor sets Burnside-Witt vectors form a ring, much like the $p$-adic integers, but they have a more complicated algebraic structure. In earlier work a collection of prime ideals of this ring was found. However, it was also possible to show that this list of ideals was not complete. In this talk I will demonstrate where a second, very large, collection of prime ideals can be found in the ring of Burnside-Witt vectors and how they relate to continuous functions between two Cantor sets, which are given a $p$-adic algebraic structure. The result of the link to continuous functions is that it is now easy to find long chains of prime ideals and it is possible to show that the Krull dimension of the Burnside-Witt vectors is infinite. Robert Strichartz, Cornell University Two way conversations between fractal analysis and classical analysis Classical analysis speaks to fractal analysis in many ways, but it is not always a one-way conversation. I will describe a few of these interactions, including two new works in which interesting results in classical analysis have emerged by listening to some things that fractal analysis has to say. The papers, "Graph paper trace characterizations of functions of finite energy" and "Unexpected spectral asymptotics for wave equations on certain compact spacetimes" (with Jonathan Fox) have been posted on arXiv.
Alexander Teplyaev, University of Connecticut Waves, energy on fractals and related questions I will present some recent results dealing with the wave equation on one-dimensional fractals (joint projects with J. F.-C. Chan, Sze-Man Ngai, Ulysses Andrews, Grigory Bonik, Joe P. Chen, Richard W. Martin). If time permits, existence and uniqueness of local energy forms on fractals will be discussed, as well as applications to vector-valued PDEs on fractals. Martina Zaehle, University of Jena (S)PDE on fractals — Regularity of the solution Patricia Alonso-Ruiz, University of Ulm Existence of resistance forms in some (non-self-similar) fractal spaces We construct resistance forms in so-called fractal quantum graphs and discuss other fractal spaces where such a construction is also possible. We also present some examples of associated Dirichlet forms in the fractal quantum graph case. This talk is based on joint work with D. Kelleher and A. Teplyaev. Philippe Charmoy, University of Oxford Heat content asymptotics for some random Koch snowflakes We consider the short-time asymptotics of the heat content $E$ of a domain $D$ of $\mathbb{R}^d$. The novelty of this paper is that we consider the situation where $D$ is a domain whose boundary $\partial D$ is a random Koch-type curve. When $\partial D$ is spatially homogeneous, we show that we can recover the lower and upper Minkowski dimensions of $\partial D$ from the short-time behaviour of $E(s)$. Furthermore, in some situations where the Minkowski dimension exists, finer geometric fluctuations can be recovered and the heat content is controlled by $s^\alpha e^{f(\log(1/s))}$ for small $s$, for some $\alpha \in (0, \infty)$ and some regularly varying function $f$. The function $f$ is not constant in general and carries some geometric information. When $\partial D$ is statistically self-similar, the Minkowski dimension and content of $\partial D$ typically exist and can be recovered from $E(s)$.
Furthermore, the heat content has an almost sure expansion $E(s) = c s^{\alpha} N_\infty + o(s^\alpha)$ for small $s$, for some $c$ and $\alpha \in (0, \infty)$ and some positive random variable $N_\infty$ with unit expectation arising as the limit of some martingale. Joe Chen, University of Connecticut Asymptotics of cover times of random walks on fractal-like graphs Random walks on fractal-like graphs have been an intensely studied subject over the past three decades. Much is known about the convergence of the rescaled random walks to a "Brownian motion" on the limit object, which relies upon, among other things, resistance estimates on the underlying graphs. In this talk, we will address the "cover time" problem of random walks on certain classes of fractal graphs. Recall that the cover time of a finite graph is the first time that a random walk covers all vertices of the graph. A theorem of Aldous (1991) states that along a sequence of increasing graphs, if the hitting times grow at a slower rate than the cover times, then the cover times exhibit exponential concentration toward the mean (namely, the fluctuations about the mean cover times are of lower order). Using the aforementioned resistance estimates, we obtain the asymptotics of the expected cover times on high-dimensional fractal graphs, such as the 3D Menger sponge graphs. This involves a nontrivial connection between cover times and the maxima of Gaussian free fields established by Ding, Lee, and Peres, as well as extensions of ideas used to prove entropic repulsion of the free field by Ugurcan and the speaker. If time permits, we will also touch on the nature of the fluctuations. Thibaut Deheuvels, Ecole Normale Supérieure de Rennes Trace and extension results for a class of ramified domains with fractal self-similar boundary We study some questions of analysis in view of the modeling of tree-like structures, such as the human lungs.
More particularly, we focus on a class of planar ramified domains whose boundary contains a fractal self-similar part, denoted $\Gamma$. We first study the Sobolev regularity of the traces on the fractal part $\Gamma$ of the boundary of functions in some Sobolev spaces of the ramified domains. Then, we study the existence of Sobolev extension operators for the ramified domains we consider. In particular, we show that there exists $p^*\in(1,\infty)$ such that there is a $W^{1,p}$-extension operator for the ramified domains for every $1<p<p^*$, and the result is sharp. The construction we propose is based on a Haar wavelet decomposition on the fractal set $\Gamma$. Finally, we compare the notion of self-similar trace on the fractal part of the boundary with more classical definitions of trace. Robert Giza, Cal Poly Pomona Lattice approximation of attractors in the Hausdorff metric Given a self-similar system in some Euclidean space, the associated scaling ratio vector is used to categorize the system as lattice or nonlattice. Under certain conditions it has been shown that this distinction implies a great deal about the structure of the complex dimensions, which in turn determines whether a system is Minkowski measurable or not. The scope of this talk is concerned with presenting a geometric aspect of the lattice/nonlattice dichotomy. In particular, given some Euclidean space, it will be shown that for any nonlattice self-similar system with attractor $F$ there exists a sequence of lattice self-similar systems with attractors $F_m$ such that the sequence $F_m$ converges to $F$ in the Hausdorff metric. Furthermore, an additional result will be proven which implies that the sequence of box counting functions generated by each of the $F_m$ converges pointwise to the box counting function of $F$.
Sa'ar Hersonsky, University of Georgia Uniformization of planar Jordan domains In the 90's, Stephenson conjectured that the Riemann mapping can be approximated by a sequence of finite networks. We will describe work in progress towards a resolution of a generalized version of his conjecture. The main ingredients include harmonic analysis on graphs, singular level curves of piecewise linear functions and Sobolev spaces of fractional dimension. Lizaveta Ihnatsyeva, Aalto University Hardy inequalities in Triebel-Lizorkin spaces In this talk I consider inequalities of Hardy type for functions in Triebel-Lizorkin spaces $F^s_{pq}(G)$ on a domain $G\subset \mathbb{R}^n$ whose boundary has Aikawa dimension strictly less than $n-sp$. If $1<p<\infty$, in the class of domains whose boundary is compact and whose complement has zero Lebesgue measure, one can obtain a characterization of the validity of the Hardy inequality in terms of the Aikawa dimension of $\partial G$. The mentioned result applies in the case when the boundary $\partial G$ of a domain $G$ is 'thin'. I would also like to discuss some of the known results for 'fat' sets; in particular, the validity of Hardy inequalities on open sets under a combined fatness and visibility condition on the boundary. In addition, I give a short exposition of various fatness conditions related to the theory, and apply Hardy inequalities in connection with the boundedness of extension operators for Triebel-Lizorkin spaces. The talk is based on joint work with Antti Vähäkangas and joint work with Juha Lehrbäck, Heli Tuominen, and Antti Vähäkangas. Daniel Kelleher, University of Connecticut Intrinsic metrics and vector analysis for Dirichlet forms on fractals We will discuss the possibility of defining vector analysis for measurable Dirichlet forms (quadratic forms on scalar functions). This vector analysis has applications to the Dirac operator and the existence of intrinsic metrics.
This construction combines ideas from classical and non-commutative functional analysis, and if time permits we shall discuss how this leads to the definition of spectral triples on fractal spaces. Janna Lierl, Hausdorff Center for Mathematics, Bonn Boundary Harnack principle on fractals We prove a scale-invariant boundary Harnack principle (BHP) on inner uniform domains in metric measure spaces. We prove our result in the context of strictly local, regular, symmetric Dirichlet spaces, without assuming that the Dirichlet form induces a metric that generates the original topology on the metric space. Thus, we allow the underlying space to be fractal, e.g. the Sierpinski gasket. Jun Jie Miao, Michigan State University Generalised $q$-dimension of self-affine sets on the Heisenberg group We study the generalized $q$-dimension of measures supported by Heisenberg self-affine sets in the Heisenberg group, and we obtain bounds for it which are valid for 'almost all' classes of affine contractions. In particular, when $1<q\le 2$, exact values are established for Heisenberg self-affine measures. Gamal Mograby, Technical University of Berlin Anderson localisation on the Sierpinski triangle Filoche's and Mayboroda's theory demonstrates that both weak and Anderson localization originate from the same universal mechanism. This theory allows one to predict some localization properties, like the confining regions. We apply this theory to the case of the Sierpinski triangle and present our results. Nadia Ott, San Diego State University Using Peano curves to construct Laplacians on fractals We describe a new method to construct Laplacians on fractals using a Peano curve from the circle onto the fractal, extending an idea that has been used in the case of certain Julia sets. The Peano curve allows us to visualize eigenfunctions of the Laplacian by graphing the pullback to the circle. We study in detail three fractals: the pentagasket, the octagasket and the magic carpet.
We also use the method for two nonfractal self-similar sets, the torus and the equilateral triangle, obtaining appealing new visualizations of eigenfunctions on the triangle. In contrast to the many familiar pictures of approximations to standard Peano curves, which do not show self-intersections, our descriptions of approximations to the Peano curves have self-intersections that play a vital role in constructing graph approximations to the fractal with explicit graph Laplacians that give the fractal Laplacian in the limit. Yehonatan Sella, UCLA Differential equations on cubic Julia sets The fractals REU at Cornell has successfully defined a Laplacian for several quadratic and cubic Julia sets, by approximating them as the limits of finite graphs and studying the subdivision rules governing the evolution of the graphs. This technique relied on the graphs subdividing in a nice way. However, complications arise for less well-behaved Julia sets. We will address how to resolve some of these complications by the methods of "procrastination" and taking limits, focusing specifically on cubic Julia sets. We will give specific examples of such Julia sets and analyze the spectra of their Laplacians. Baris Ugurcan, Cornell University Extensions and their minimizations on the Sierpinski gasket In classical analysis one studies the finite variants of the Whitney extension theorem, in which certain data is prescribed on a finite set of points in Euclidean space and then extended by minimizing certain norms. In a similar vein, in this work, we study the extension problem on the Sierpinski gasket ($SG$). We consider minimizing the functionals $\mathcal{E}_{\lambda}(f) = \mathcal{E}(f,f) + \lambda \int f^2 d \mu$ and $\int_{SG} |\Delta f(x)|^2 d \mu (x)$ with prescribed values at a finite set of points, where $\mathcal{E}$ denotes the energy (the analog of $\int |\nabla f|^2$ in Euclidean space) and $\mu$ denotes the standard self-similar measure on $SG$.
In the first case, we explicitly construct the minimizer $f(x) = \sum_{i} c_i G_{\lambda}(x_i, x)$ for some constants $c_i$, where $G_{\lambda}$ is the resolvent for $\lambda \geq 0$. We minimize the energy over sets in $SG$ by calculating the explicit quadratic form $\mathcal{E}(f)$ of the minimizer $f$. We consider properties of this quadratic form for arbitrary sets and then analyze some specific sets. One such set is the bottom row of a graph approximation of $SG$. For the bottom row, we describe both the quadratic form and a discretized form in terms of Haar functions. In both cases, we show existence and uniqueness of solutions to the minimization problem and then study the fine properties of the unique minimizers. This is joint work with Pak-Hin Li, Nicholas Ryder and Robert S. Strichartz. Sean Watson, University of California at Riverside Fractal geometry and complex dimensions in metric measure spaces While classical analysis dealt primarily with smooth spaces, much research has been done in the last half century on extending the theory to the nonsmooth case. Metric measure (MM) spaces are the natural setting for such analysis, and it is thus important to understand the geometry of subsets of such spaces. This talk will be an introductory survey, first of MM spaces that arise naturally in various fields, and second of the current theory of complex dimensions, in both the one-dimensional case and the more recent higher-dimensional theory. This recent theory should naturally generalize to MM spaces (given an additional regularity condition), and we will show preliminary results in that direction.
One-pass MapReduce-based clustering method for mixed large scale data Mohamed Aymen Ben HajKacem, Chiheb-Eddine Ben N'cir & Nadia Essoussi Journal of Intelligent Information Systems, volume 52, pages 619–636 (2019) Big data is often characterized by a huge volume and mixed types of attributes, namely numeric and categorical. K-prototypes has been fitted into the MapReduce framework and hence has become a solution for clustering mixed large scale data. However, k-prototypes requires computing all distances between each of the cluster centers and the data points. Many of these distance computations are redundant, because data points usually stay in the same cluster after the first few iterations. Also, k-prototypes is not suitable for running within the MapReduce framework: the iterative nature of k-prototypes cannot be modeled through MapReduce, since at each iteration the whole data set must be read and written to disks, and this results in a high number of input/output (I/O) operations. To deal with these issues, we propose a new one-pass accelerated MapReduce-based k-prototypes clustering method for mixed large scale data. The proposed method reads and writes data only once, which largely reduces the I/O operations compared to the existing MapReduce implementation of k-prototypes. Furthermore, the proposed method is based on a pruning strategy that accelerates the clustering process by reducing the redundant distance computations between cluster centers and data points. Experiments performed on simulated and real data sets show that the proposed method is scalable and improves the efficiency of the existing k-prototypes methods. Large volumes of data are being collected from different sources, and there is a high demand for methods and tools that can efficiently analyse these large volumes of data, a task referred to as Big data analysis.
Big data usually refers to three main characteristics, also called the three Vs (Gorodetsky 2014), which are Volume, Variety and Velocity. Volume refers to the large scale of data, Variety indicates the mixed types of data such as numeric, categorical and text data, and Velocity refers to the speed at which data is generated and processed (Gandomi and Haider 2015). Several frameworks have been proposed for processing Big data. The most well known is the MapReduce framework (Dean and Ghemawat 2008). MapReduce was initially developed by Google and is designed for processing Big data by exploiting the parallelism among a cluster of machines. MapReduce has three major features: a simple programming framework, linear scalability and fault tolerance. These features make MapReduce a useful framework for Big data processing (Lee et al. 2012). Clustering is an important technique in machine learning, which is used to organize data into groups of similar data points, called clusters. Examples of clustering method categories are hierarchical methods, density-based methods, grid-based methods, model-based methods and partitional methods (Jain et al. 1999). These clustering methods have been widely used in several applications such as intrusion detection (Tsai et al. 2009; Wang et al. 2010), customer segmentation (Liu and Ong 2008), document clustering (Ben N'Cir and Essoussi 2015; Hussain et al. 2014) and image organization (Ayech and Ziou 2015; Du et al. 2015). In fact, conventional clustering methods are not suitable when dealing with large scale data. This is explained by the high computational cost of these methods, which require unrealistic time to build the groupings. Furthermore, some clustering methods, like hierarchical clustering, cannot be applied to Big data because of their quadratic time complexity. Hence, clustering methods with linear time complexity should be used to handle large scale data.
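To make the MapReduce programming model concrete, the following is a minimal in-memory simulation in plain Python. The function names and the word-count job are illustrative only and do not correspond to any actual Hadoop API; a real MapReduce job would distribute the map and reduce phases across machines.

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply the user-defined map function to every input record,
    yielding intermediate (key, value) pairs."""
    for record in records:
        yield from map_fn(record)

def shuffle(pairs):
    """Group intermediate (key, value) pairs by key, as the framework
    does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reduce_fn):
    """Apply the user-defined reduce function to each key group."""
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Word count: the canonical MapReduce example.
def wc_map(line):
    for word in line.split():
        yield word, 1

def wc_reduce(word, counts):
    return sum(counts)

lines = ["big data big volume", "big variety"]
result = reduce_phase(shuffle(map_phase(lines, wc_map)), wc_reduce)
print(result)  # {'big': 3, 'data': 1, 'volume': 1, 'variety': 1}
```

In the clustering setting discussed in this paper, the map function would emit (cluster id, data point) pairs for the assignment step and the reduce function would recompute each cluster center.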
On the other hand, Big data is often characterized by mixed types of data, including numeric and categorical. K-prototypes is one of the most well-known clustering methods for dealing with mixed data, because of its linear time complexity (Ji et al. 2013). It has been successfully fitted into the MapReduce framework in order to perform the clustering of mixed large scale data (Ben Haj Kacem et al. 2015, 2016). However, the proposed methods have some considerable shortcomings. The first shortcoming is inherited from the conventional k-prototypes method, which requires computing all distances between each of the cluster centers and the data points. Many of these distance computations are redundant, because data points usually stay in the same cluster after the first few iterations. The second shortcoming is the result of an inherent conflict between MapReduce and k-prototypes. K-prototypes is an iterative algorithm which requires performing several iterations to produce optimal results. In contrast, MapReduce has a significant problem with iterative algorithms (Ekanayake et al. 2010). As a consequence, the whole data set must be loaded from the file system into the main memory at each iteration. Then, after it is processed, the output must be written to the file system again. Therefore, many disk I/O operations occur during each iteration, and this decelerates the running time. In order to overcome the mentioned shortcomings, we propose in this paper a new one-pass Accelerated MapReduce-based K-Prototypes clustering method for mixed large scale data, referred to as AMRKP. The proposed method is based on a pruning strategy to accelerate the clustering process by reducing the redundant distance computations between cluster centers and data points. Furthermore, the proposed method reads the data set only once, in contrast to the existing MapReduce implementation of k-prototypes.
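For intuition about the assignment step whose distance computations the pruning strategy targets, here is a minimal sketch assuming Huang-style dissimilarity on mixed data: squared Euclidean distance on the numeric attributes plus a weight gamma times the number of categorical mismatches. The data, the gamma value, and the function names are made up for illustration and are not the paper's implementation.

```python
def mixed_distance(x_num, x_cat, c_num, c_cat, gamma=1.0):
    """Huang-style dissimilarity: squared Euclidean distance on the
    numeric part plus gamma times the number of categorical mismatches."""
    numeric = sum((a - b) ** 2 for a, b in zip(x_num, c_num))
    categorical = sum(1 for a, b in zip(x_cat, c_cat) if a != b)
    return numeric + gamma * categorical

def assign(point, centers, gamma=1.0):
    """Assignment step of k-prototypes: return the index of the nearest
    prototype, evaluating all k mixed distances for the point."""
    x_num, x_cat = point
    return min(range(len(centers)),
               key=lambda j: mixed_distance(x_num, x_cat, *centers[j],
                                            gamma=gamma))

# Two prototypes, each a (numeric tuple, categorical tuple) pair.
centers = [((0.0, 0.0), ("red",)), ((5.0, 5.0), ("blue",))]
point = ((4.5, 4.0), ("red",))
print(assign(point, centers))  # 1
```

Each call to assign evaluates k distances per point; it is exactly this cost, repeated over every iteration, that motivates pruning.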
Our solution decreases the time complexity of k-prototypes (Huang 1997) from O(n.k.l) to O((n.α%.k + k³).l) and the I/O complexity of the MapReduce-based k-prototypes method (Ben Haj Kacem et al. 2015) from O((n/p).l) to O(n/p), where n is the number of data points, k the number of clusters, α% the pruning heuristic, l the number of iterations and p the number of chunks. The rest of this paper is organized as follows: Section 2 provides related works which propose to deal with large scale and mixed data. Then, Section 3 presents the k-prototypes method and the MapReduce framework. After that, Section 4 describes the proposed AMRKP method, while Section 5 presents the experiments that we have performed to evaluate the efficiency of the proposed method. Finally, Section 6 presents conclusions and future works. Given that data are often described by mixed types of attributes, such as numeric and categorical, a pre-processing step is usually required to transform data into a single type, since most proposed clustering methods deal with only numeric or categorical attributes. However, transformation strategies are often time-consuming and produce information loss, leading to inaccurate clustering results (Ahmad and Dey 2007). Therefore, clustering methods have been proposed in the literature to perform the clustering of mixed data without a pre-processing step (Ahmad and Dey 2007; Ji et al. 2013; Huang 1997; Li and Biswas 2002). For instance, Li and Biswas (2002) introduced the Similarity-Based Agglomerative Clustering method, called SBAC, which is a hierarchical agglomerative algorithm for mixed data. Huang (1997) proposed the k-prototypes method, which integrates the k-means and k-modes methods to cluster numeric and categorical data. Ji et al. (2013) proposed an improved k-prototypes to deal with mixed types of data. This method introduced the concept of the distributed centroid for representing the prototype of categorical attributes in a cluster.
Among the methods discussed above, k-prototypes remains one of the most popular for clustering mixed data because of its efficiency (Ji et al. 2013). Nevertheless, it cannot scale to huge volumes of mixed data. To deal with large scale data, several clustering methods based on parallel frameworks have been designed in the literature (Bahmani et al. 2012; Hadian and Shahrivari 2014; Kim et al. 2014; Ludwig 2015; Shahrivari and Jalili 2016; Zhao et al. 2009). Most of these methods use the MapReduce framework. For instance, Zhao et al. (2009) implemented the k-means method through the MapReduce framework. Bahmani et al. (2012) proposed a scalable k-means that extends the k-means++ technique for initial seeding. Shahrivari and Jalili (2016) proposed a single-pass and linear-time MapReduce-based k-means method. Kim et al. (2014) parallelized density-based clustering with MapReduce. A parallel implementation of the fuzzy c-means algorithm within the MapReduce framework is presented in Ludwig (2015). On the other hand, several methods have used the triangle inequality property to improve the efficiency of clustering (He et al. 2010; Nanni 2005). The triangle inequality is used as an exact mathematical property to reduce the number of redundant, unnecessary distance computations. These methods rely on the observation that it is unnecessary to evaluate distances between a data point and cluster centers which are not closer than the previously assigned center. For example, He et al. (2010) proposed an accelerated Two-Threshold Sequential Algorithm Scheme (TTSAS), which avoids unnecessary distance calculations by applying the triangle inequality. Nanni (2005) exploited the triangle inequality property of metric spaces to accelerate hierarchical clustering methods (single-link and complete-link). 
Although the clusters obtained with these methods are exactly the same as the standard ones, they require evaluating the triangle inequality property for the whole set of cluster centers. Furthermore, these methods keep the entire data in main memory for processing, which reduces their performance on large data sets. Although the parallel methods discussed above offer users an efficient analysis of large scale data, they cannot support mixed types of data and are limited to numeric attributes only. In order to cluster mixed large scale data, Ben Haj Kacem et al. (2015) proposed a parallelization of the k-prototypes method through the MapReduce framework. This method iterates two main steps until convergence: assigning each data point to the nearest cluster center, and updating the cluster centers. These two steps are implemented in the map and reduce phases respectively. Although this method offers an effective solution for clustering mixed large scale data, it has some considerable shortcomings. First, k-prototypes requires computing all distances between each of the cluster centers and the data points, and many of these distance computations are redundant. Second, k-prototypes is not well suited to the MapReduce framework, since during each iteration the whole data set must be read from and written to disk, which requires many I/O operations. This section first presents the k-prototypes method, then the MapReduce framework. 
K-prototypes method Given a data set X = {x1…xn} containing n data points, described by mr numeric attributes and mt categorical attributes, the aim of k-prototypes (Huang 1997) is to find k clusters by minimizing the following objective function: $$ J=\sum\limits^{n}_{i=1}\sum\limits^{k}_{j=1}u_{ij}d(x_{i},c_{j}), $$ where uij ∈ {0,1} is an element of the partition matrix Un×k indicating the membership of data point i in cluster j, cj ∈ C = {c1…ck} is the center of cluster j and d(xi,cj) is the dissimilarity measure, defined as follows: $$ d(x_{i},c_{j})=\sum\limits^{m_{r}}_{r=1}\sqrt{(x_{ir}-c_{jr})^{2}}+\displaystyle{\sum\limits^{m_{t}}_{t=1}\delta(x_{it},c_{jt})}, $$ where xir represents the value of numeric attribute r and xit the value of categorical attribute t for data point i. cjr represents the mean of numeric attribute r in cluster j, which can be calculated as follows: $$ c_{jr}= \frac{\displaystyle{\sum\limits^{\left|c_{j}\right|}_{i=1}x_{ir}}}{\left|c_{j}\right|}, $$ where |cj| is the number of data points assigned to cluster j. cjt represents the most common value (mode) of categorical attribute t in cluster j, which can be calculated as follows: $$ c_{jt}={a^{h}_{t}}, $$ $$ f({a^{h}_{t}})\geq f({a^{z}_{t}}),\hspace{0.25cm} \forall z,\hspace{0.25cm} 1\leq z \leq m_{c}, $$ where \({a^{z}_{t}}\in \left \{{a^{1}_{t}}{\ldots } a^{m_{c}}_{t}\right \}\) is categorical value z and mc is the number of categories of categorical attribute t. \(f({a^{z}_{t}})\) = \(|\left \{x_{it}={a^{z}_{t}}|u_{ij}=1\right \}|\) is the frequency count of attribute value \({a^{z}_{t}}\). For categorical attributes, δ(p,q) = 0 when p = q and δ(p,q) = 1 when p≠q. It is easy to verify that the dissimilarity measure given in (2) is a metric distance, since it satisfies the non-negativity, symmetry, identity and triangle inequality properties as follows (Han et al. 2011; Ng et al. 
2007): d(xi,xj) ≥ 0 ∀ xi,xj ∈ X (Non-negativity) d(xi,xj) = d(xj,xi) ∀ xi,xj ∈ X (Symmetry) d(xi,xj) = 0 ⇔ xi = xj ∀ xi,xj ∈ X (Identity) d(xi,xz) + d(xz,xj) ≥ d(xi,xj) ∀ xi,xj,xz ∈ X (Triangle inequality) The main algorithm of the k-prototypes method is described by Algorithm 1. MapReduce framework MapReduce (Dean and Ghemawat 2008) is a parallel programming framework designed to process large scale data across a cluster of machines. It is characterized by its high transparency for programmers, which allows algorithms to be parallelized in an easy and comfortable way. The algorithm to be parallelized only needs to be specified by two phases, namely map and reduce. Each phase has < key/value > pairs as input and output. The map phase takes each < key/value > pair in parallel and generates one or more intermediate < key′/value′ > pairs. Then, the framework groups all intermediate values associated with the same intermediate key into a list (known as the shuffle phase). The reduce phase takes this list as input and generates the final values. The whole process can be summarized as follows: $$\begin{array}{@{}rcl@{}} &&\text{Map}(<key/value>) \hspace{0.25cm} \rightarrow \hspace{0.25cm} \text{list} (<key^{\prime}/value^{\prime}>)\\ &&\text{Reduce}(<key^{\prime}/\text{list}(value^{\prime})>) \hspace{0.25cm} \rightarrow \hspace{0.25cm} \text{list} (<key^{\prime}/value^{\prime\prime}>) \end{array} $$ Figure 1 illustrates the data flow of the MapReduce framework. Note that the inputs and outputs of MapReduce are stored in an associated distributed file system that is accessible from any machine of the cluster. As mentioned earlier, MapReduce has significant problems with iterative algorithms (Ekanayake et al. 2010): many I/O operations occur during each iteration, and this decelerates the running time. Several solutions have been proposed to extend the MapReduce framework to support iterations, such as Twister (Ekanayake et al. 2010), Spark (Zaharia et al. 
2010) and Phoenix (Talbot et al. 2011). These solutions excel when data can fit in memory, because memory access latency is lower. However, Hadoop MapReduce (White 2012) can be an economical option because of its Hadoop as a Service (HaaS) offering, availability and maturity. In fact, the motivation of our work is to propose a new disk-efficient implementation of a clustering method for mixed large volumes of data. As a solution, we propose a one-pass disk-based implementation of k-prototypes within Hadoop MapReduce. This implementation reads and writes the data only once, in order to largely reduce the I/O operations. Data flow of MapReduce framework One-pass accelerated MapReduce-based k-prototypes clustering method for mixed large scale data This section presents the proposed pruning strategy and the one-pass parallel implementation of our solution, followed by the parameter selection and the complexity analysis of the proposed method. Pruning strategy In order to reduce the number of unnecessary distance computations, we propose a pruning strategy which requires a pruning heuristic α% denoting the α% subset of cluster centers considered when evaluating the triangle inequality property. The proposed pruning strategy is based on the following assumptions: data points usually stay in the same cluster after the first few iterations, and if the assignment of an object changes from one cluster to another, then the new cluster center is close to the previously assigned center. This strategy is inspired by marketing, where consumers are aware of a host of products in a particular category but would actively consider only a few of them for purchase (Eliaz and Spiegler 2011). This is known as the consideration set. While the awareness set contains all of the products that a consumer will think of when it is time to make a purchasing decision, the consideration set includes only those products the consumer would consider a realistic option for purchase. 
In our case, we define the consideration set of centers as the set of the α% closest centers to a given previously assigned center. The selected assignment of a data point will be the closest center among this "consideration set" of cluster centers. The pruning strategy requires, at each iteration, computing the distances between centers and sorting them. Then, it evaluates the triangle inequality property until either the property is not satisfied or all centers in the subset of selected centers (the consideration set) have been evaluated. In other words, it evaluates the following theorem between a data point and the centers in increasing order of distance to the center assigned in the previous iteration. If the pruning strategy reaches a center that does not satisfy the triangle inequality property, it can skip all the remaining centers and continue on to the next data point. Theorem 1 Let xi be a data point, c1 its cluster center from the previous iteration and c2 another cluster center. If d(c1,c2) ≥ 2 ∗ d(xi,c1), then d(xi,c1) ≤ d(xi,c2), without having to calculate d(xi,c2). Proof By the triangle inequality, d(c1,c2) ≤ d(xi,c1) + d(xi,c2), hence d(c1,c2) − d(xi,c1) ≤ d(xi,c2). For the left-hand side, d(c1,c2) − d(xi,c1) ≥ 2 ∗ d(xi,c1) − d(xi,c1) = d(xi,c1), so d(xi,c1) ≤ d(xi,c2). □ Notably, setting the pruning heuristic α% too small may decrease the accuracy rate, whereas setting α% too large limits the number of distance computations that can be saved. The impact of the pruning heuristic on the performance of the proposed method is discussed in Section 5.5. Algorithm 2 describes the different steps of the pruning strategy. Algorithm 3 gives the main algorithm of k-prototypes with the pruning strategy, which we call the KP + PS algorithm in the rest of the paper. Initially, the KP + PS algorithm works exactly the same as k-prototypes. 
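To make the strategy concrete, the dissimilarity of (2) and the pruning test of Theorem 1 can be sketched as follows. This is a minimal illustration: the function names, the handling of α and the per-iteration sorting are our simplifications, not the authors' implementation.

```python
def mixed_dissimilarity(x_num, x_cat, c_num, c_cat):
    """Dissimilarity of eq. (2): per-attribute sqrt((a-b)^2) == |a-b| on
    numeric attributes, plus a mismatch count on categorical attributes."""
    return (sum(abs(a - b) for a, b in zip(x_num, c_num))
            + sum(1 for a, b in zip(x_cat, c_cat) if a != b))

def assign_with_pruning(x, centers, old_j, dist, alpha):
    """Pruning sketch: scan only the alpha-fraction of centers closest to
    the previously assigned center old_j, stopping as soon as Theorem 1
    guarantees the remaining centers cannot be closer."""
    d_old = dist(x, centers[old_j])
    # Centers in increasing distance to the old center (in the method this
    # ordering is computed and sorted once per iteration, not per point).
    order = sorted(range(len(centers)),
                   key=lambda j: dist(centers[old_j], centers[j]))
    limit = max(1, int(len(centers) * alpha))  # consideration-set size
    best_j, best_d = old_j, d_old
    for j in order[1:limit + 1]:
        # Theorem 1: d(c1,c2) >= 2*d(x,c1)  =>  d(x,c1) <= d(x,c2)
        if dist(centers[old_j], centers[j]) >= 2 * d_old:
            break  # all remaining centers are at least as far away
        d = dist(x, centers[j])
        if d < best_d:
            best_j, best_d = j, d
    return best_j
```

Here `dist` is any metric between two objects of the same shape; on mixed data it would wrap `mixed_dissimilarity` over the numeric and categorical parts of the compared objects.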
Then, it continues to check whether it is time to start the pruning strategy; once that time is reached, the pruning strategy is applied. Parallel implementation The proposed AMRKP method for handling mixed large scale data consists of the parallelization of the KP + PS algorithm based on the MapReduce framework. For this purpose, we first split the input data set into p chunks. Then, each chunk is processed independently, in parallel, by its assigned machine. The intermediate centers are then extracted from each chunk. After that, the set of intermediate centers is processed again in order to generate the final cluster centers. The chunks are processed in the map phase, while the intermediate centers are processed in the reduce phase. In the following, we first present the parallel implementation without considering MapReduce and then show how we have fitted the proposed solution into the MapReduce framework. To define the parallel implementation, it is necessary to define the algorithm applied to each chunk and the algorithm applied to the set of intermediate centers. For both phases, we use the KP + PS algorithm. For each chunk, the KP + PS algorithm is executed and k centers are extracted. Therefore, if we have p chunks, after applying the KP + PS algorithm to each chunk, there will be a set of k ∗ p centers as the intermediate set. In order to obtain good quality, we record the number of data points assigned to each extracted center. That is to say, we extract from each chunk k centers and the number of data points assigned to each center. The number of data points assigned to a cluster center represents the importance of that center. Hence, we must extend the KP + PS algorithm to take the weighted data points into account when clustering the set of intermediate centers. In order to consider the weighted data points, we must change the center updates (3) and (4). 
If we denote by wi the weight of data point xi, the center of a final cluster must be calculated for numeric and categorical attributes using the following equations: $$ c_{jr}= \frac{\displaystyle{\sum\limits^{\left|c_{j}\right|}_{i=1}x_{ir}*w_{i}}}{\displaystyle{\sum\limits^{\left|c_{j}\right|}_{i=1}w_{i}}}. $$ $$ c_{jt}={a^{h}_{t}}, $$ $$ f_{w}({a^{h}_{t}})\geq f_{w}({a^{z}_{t}}),\hspace{0.25cm} \forall z,\hspace{0.25cm} 1\leq z \leq m_{c}, $$ where \(f_{w}({a^{z}_{t}})\) is the weighted frequency count obtained by summing the weights wi of the data points taking value \({a^{z}_{t}}\). The parallel implementation of the KP + PS algorithm through the MapReduce framework is straightforward. Each map task picks a chunk of the data set, executes the KP + PS algorithm on that chunk and emits the extracted intermediate centers and their weights as output. Once the map phase is finished, the set of intermediate weighted centers collected as its output is emitted to a single reduce task. The reduce phase takes the set of intermediate centers and their weights, executes the KP + PS algorithm on them again and returns the final centers as output. Once the final cluster centers are generated, we assign each data point to the nearest cluster center. Let Xi be the chunk associated with map task i, p the number of chunks, \(C^{w}=\left \{{C^{w}_{1}}\ldots {C^{w}_{p}}\right \}\) the set of weighted intermediate centers, where \({C^{w}_{i}}\) is the set of weighted intermediate centers extracted from chunk i, and Cf the set of final centers. Algorithm 4 describes the main steps of the proposed method. Parameter selection The proposed method needs three input parameters in addition to the number of clusters k: the chunk size, the pruning heuristic α% and the time to start the pruning strategy. Tuning the chunk size In theory, the minimum chunk size is k and the maximum chunk size is n. However, both of these extremes are impractical, and a value between k and n must be selected. 
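The weighted center update used in the reduce phase can be sketched as follows, assuming a weighted mean (normalized by the total weight) for numeric attributes and weight-summed frequency counts for the categorical mode; the helper name is ours, not the authors' code.

```python
from collections import Counter

def weighted_center(points, weights, num_idx):
    """Weighted center update for clustering intermediate centers:
    weighted mean on numeric attributes (indices in num_idx),
    weighted mode on the remaining (categorical) attributes."""
    total_w = sum(weights)
    center = []
    for a in range(len(points[0])):
        if a in num_idx:
            # weighted mean: sum(x * w) / sum(w)
            center.append(sum(p[a] * w for p, w in zip(points, weights)) / total_w)
        else:
            # weighted mode: sum the weights of points sharing each category
            freq = Counter()
            for p, w in zip(points, weights):
                freq[p[a]] += w
            center.append(freq.most_common(1)[0][0])
    return center

# Three intermediate centers, with weights 2, 1, 1 (the numbers of data
# points they represent in their chunks):
c = weighted_center([[1.0, "a"], [3.0, "a"], [5.0, "b"]], [2, 1, 1], num_idx={0})
# c == [2.5, "a"]
```

With unit weights this reduces to the unweighted updates (3) and (4), which is why the same KP + PS code can serve both the map and reduce phases.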
According to Shahrivari and Jalili (2016), the most memory-efficient value for the chunk size is \(\sqrt {k.n}\), because this chunk size generates a set of intermediate centers of size \(\sqrt {k.n}\). That is to say, the chunk size should not be set to a value greater than \(\sqrt {k.n}\). A smaller chunk size can yield better quality, since it produces more intermediate centers representing the input data set. Hence, all experiments described in Section 5 assume that the chunk size is \(\sqrt {k.n}\). Tuning the pruning heuristic The pruning heuristic α% can be set between 1 and 100%. A small value of this parameter significantly reduces computational time, since most distance computations will be skipped, at the cost of a small loss of clustering quality. Alternatively, setting the pruning heuristic to a large value does not reduce the high computational time much, while leading to approximately the same partitioning as k-prototypes. The experimental results show that setting the pruning heuristic to 10%, among all of the pruning heuristics from 1 to 100% tested in this work, gives good results. Hence, we set the pruning heuristic to 10% to provide a good trade-off between efficiency and quality. Tuning the time to start As mentioned above, many distance computations in the k-prototypes method are redundant, because data points usually stay in the same clusters after the first iterations. When the pruning strategy starts from the first iteration, the proposed method may produce inaccurate results; conversely, when it starts from the last iteration, few distance computations are saved. For this reason, the time to start the pruning strategy is extremely important. Hence, we start the pruning strategy from iteration 2, because the convergence speed of k-prototypes slows down substantially after iteration 2, in the sense that many data points remain in the same cluster. 
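As a quick sanity check on these tuning rules, the derived quantities can be computed directly; the helper name is ours.

```python
import math

def tuning_parameters(n, k):
    """Chunk size s = sqrt(k*n) as suggested by Shahrivari and Jalili
    (2016), the resulting number of chunks p = ceil(n/s), and the size
    k*p of the intermediate-center set handed to the reduce phase."""
    s = int(math.sqrt(k * n))
    p = math.ceil(n / s)
    return s, p, k * p

# For n = 1,000,000 points and k = 100 clusters:
s, p, m = tuning_parameters(1_000_000, 100)
# s = 10000 points per chunk, p = 100 chunks, m = 10000 intermediate centers
```

Note that the reduce-phase input m = k·p equals \(\sqrt{k.n}\) here, which is far smaller than the original data set.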
Complexity analysis A typical clustering algorithm has three types of complexity: time complexity, I/O complexity and space complexity. To give the complexity analysis of the AMRKP method, we use the following notations: n the number of data points, k the number of clusters, l the number of iterations, α% the pruning heuristic, s the chunk size and p the number of chunks. The time complexity analysis The KP + PS algorithm requires, at each iteration, computing the distances between centers and sorting them, as shown in Algorithm 2. This step can be estimated by O(k^2 + k^3) ≅ O(k^3). Then, KP + PS selects the α% centers and evaluates the triangle inequality property until either the property is not satisfied or all centers in the subset of α% centers have been evaluated. Therefore, we can conclude that the KP + PS algorithm can theoretically reduce the time complexity of k-prototypes from O(n.k.l) to O((n.α%.k + k^3).l). As stated before, the time to start the pruning strategy is extremely important; hence, the time complexity of the KP + PS algorithm depends on the iteration at which the pruning strategy is started. In the best case, when the pruning strategy starts from the first iteration, the time complexity of KP + PS is O((n.α%.k + k^3).l). In the worst case, when KP + PS converges before the pruning strategy is started, KP + PS falls back to k-prototypes and the time complexity is O((n.k + k^3).l). Therefore, the time complexity of the KP + PS algorithm is bounded between O((n.α%.k + k^3).l) and O((n.k + k^3).l). The KP + PS algorithm is applied twice: in the map phase and in the reduce phase. In the map phase, each chunk involves running the KP + PS algorithm on that chunk. Therefore, the map phase takes O((n/p.α%.k + k^3).l) time. In the reduce phase, the KP + PS algorithm must be executed on the set of intermediate centers, which has k.n/s data points. Hence, the reduce phase needs O((k.n/s.α%.k + k^3).l) time. 
Given that k.n/s << n/p, the overall time complexity of the proposed method is O((n/p.α%.k + k^3).l + (k.n/s.α%.k + k^3).l) ≅ O((n/p.α%.k + k^3).l). The I/O complexity analysis The proposed method reads the input data set only once from the file system, in the map phase. Therefore, the I/O complexity of the map phase is O(n/p). The I/O complexity of the reduce phase is O(k.n/s). As a result, the overall I/O complexity of the proposed method is O(n/p + k.n/s). If s is fixed to \(\sqrt {k.n}\), the I/O complexity becomes \(O(n/p+\sqrt {k.n})\). The space complexity analysis The space complexity of AMRKP depends on the chunk size and the number of chunks processed in the map phase. The map phase is required to keep p chunks in memory; hence, its space complexity is O(p.s). The reduce phase needs to store k.n/s intermediate centers in memory; thus, its space complexity is O(k.n/s). As a result, the overall space complexity of the proposed method is O(p.s + k.n/s). If s is fixed to \(\sqrt {k.n}\), the space complexity becomes \(O(p.s+\sqrt {k.n})\). Experiments and results In order to evaluate the efficiency of the AMRKP method, we performed experiments on both simulated and real data sets. In this section we try to answer three questions. (i) How efficient is the AMRKP method when applied to mixed large scale data compared to existing methods? (ii) How much can the pruning strategy reduce the number of unnecessary distance computations? (iii) How does the MapReduce framework enhance the scalability of the proposed method when dealing with mixed large scale data? We split the experiments into three major subsections. First, we compare the performance of the proposed method against the following existing methods: the conventional k-prototypes, described in Algorithm 1, denoted KP, and the MapReduce-based k-prototypes (Ben Haj Kacem et al. 2015), denoted MRKP. 
Then, we study the ability of the pruning strategy to reduce the number of distance computations between cluster centers and data points. Finally, we evaluate the MapReduce performance by analyzing the scalability of the proposed method. Environment and data sets The experiments are performed on a Hadoop cluster running the latest stable version of Hadoop, 2.7.1. The Hadoop cluster consists of 6 machines. Each machine has a 1-core 2.30 GHz E5400 CPU and 1 GB of memory. The operating system of each machine is Ubuntu 14.10 server 64 bit. We conducted the experiments on the following data sets: Simulated data sets: four series of mixed large data sets are generated, ranging from 100 million to 400 million data points. Each data point is described by 5 numeric and 5 categorical attributes. The numeric values are generated with a Gaussian distribution with mean 350 and sigma 100. The categorical values are generated using the data generator developed in Footnote 1. In order to simplify the names of the simulated data sets, we use the notations Sim100M, Sim200M, Sim300M and Sim400M to denote simulated data sets containing 100, 200, 300 and 400 million data points respectively. KDD Cup data set (KDD): a real data set which consists of normal and attack connections simulated in a military network environment. The KDD data set contains about 5 million connections, each described by 33 numeric and 3 categorical attributes. The clustering process for this data set detects the type of attack among all connections. This data set was obtained from the UCI machine learning repository.Footnote 2 Poker data set (Poker): a real data set in which each example is a hand of five playing cards drawn from a standard deck of 52 cards. The Poker data set contains about 1 million data points, each described by 5 numeric and 5 categorical attributes. The clustering process for this data set detects the hand situations. 
This data set was obtained from the UCI machine learning repository.Footnote 3 Statistics of these data sets are summarized in Table 1. Table 1 Summary of the data sets Evaluation measures In order to evaluate the quality of the proposed method, we use the Sum Squared Error (SSE) (Xu and Wunsch 2010). It is one of the most common partitional clustering criteria, measuring the error between each data point and the center of the cluster to which the data point belongs. SSE is defined by: $$ SSE= \sum\limits^{n}_{i=1}\sum\limits^{k}_{j=1}d(c_{j},x_{i}), $$ where xi is a data point and cj its cluster center. In order to evaluate the ability of the proposed method to scale with large data sets, we use the Speedup measure (Xu et al. 2002) in our experiments. It measures the ability of the designed parallel method to scale well when the number of machines increases while the size of the data is fixed. This measure is calculated as follows: $$ Speedup=\frac{T_{1}}{T_{m}}, $$ where T1 is the running time of processing the data on 1 machine and Tm the running time of processing the data on m machines of the cluster. To simplify the discussion of the experimental results in Tables 2, 3, 4 and 5, we use the following conventions. Let ψ denote either KP or MRKP. Let T, D and S denote respectively the running time, the number of distance computations and the quality of the clustering result in terms of SSE. Let β denote T, D or S. The enhancement of βAMRKP (new algorithm) with respect to βψ (original algorithm), in percentage, is defined by: $$ {\Delta}_{\beta}=\frac{\beta_{AMRKP}-\beta_{\psi}}{\beta_{\psi}}*100\%. $$ For example, the enhancement of the running time of AMRKP with respect to the running time of k-prototypes is defined by: $$ {\Delta}_{T}=\frac{T_{AMRKP}-T_{KP}}{T_{KP}}*100\%, $$ where β = T and ψ = KP. It is important to note that, as defined in (11), a more negative value of Δβ implies a greater enhancement. 
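The three evaluation formulas, (9) to (11), translate directly into code; the following is a small sketch with our own helper names.

```python
def sse(points, centers, assign, dist):
    """Eq. (9): total dissimilarity of each point to its assigned center
    (the paper sums the dissimilarity d rather than its square)."""
    return sum(dist(centers[assign[i]], x) for i, x in enumerate(points))

def speedup(t1, tm):
    """Eq. (10): running time on 1 machine over running time on m machines."""
    return t1 / tm

def enhancement(beta_new, beta_old):
    """Eq. (11): relative enhancement in percent; more negative is better."""
    return (beta_new - beta_old) / beta_old * 100.0

speedup(100.0, 25.0)       # 4.0: four machines, ideally linear scaling
enhancement(50.0, 100.0)   # -50.0: the new method halved the measured quantity
```

Here `dist` is the mixed dissimilarity of (2), `assign[i]` the index of the cluster of point i, and `beta_new`/`beta_old` play the roles of βAMRKP and βψ.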
Table 2 Experimental results on simulated and real data sets Table 3 Comparison of the number of distance computations performed by AMRKP versus k-prototypes Table 4 The impact of the pruning heuristic α% on the performance of AMRKP Table 5 The impact of the time to start on the performance of AMRKP Comparison of the performance of AMRKP versus existing methods In this section we compare the performance of the proposed method against the existing methods. The results are reported in Table 2. As the results show, the AMRKP method always finishes several times faster than the existing methods. For example, Table 2 shows that the AMRKP method (k = 10 clusters) on the Poker data set reduces the running time by 88.62% and 59.32% compared to k-prototypes and MRKP respectively. On all of the data sets, more than 95% of the running time was spent in the map phase, which shows that the AMRKP method is truly one-pass. In addition, the obtained results show that the proposed method converges to nearly the same results as the conventional k-prototypes, thus maintaining a good quality of partitioning. Pruning strategy performance In this section we evaluate the ability of the pruning strategy to reduce the number of redundant distance computations. The results are reported in Table 3. As the results show, the proposed method reduces the number of distance computations over k-prototypes on both simulated and real data sets. We must also mention that this reduction becomes more significant as k increases. For example, the number of distance computations on the Poker data set is reduced by 35.11% when k = 10 and by 81.65% when k = 100. In another experiment we investigated the impact of the pruning heuristic α% on the performance of the proposed method compared to the conventional k-prototypes. For this purpose, we ran AMRKP with different pruning heuristics from 1 to 100% on the real data sets with k = 100. The results are reported in Table 4. 
As shown in Table 4, when the pruning heuristic is set to a small value, many distance computations are saved with a small loss of quality. For example, when the pruning heuristic is 10%, the pruning strategy reduces the number of distance computations of k-prototypes by 81.65% on the Poker data set. But when the pruning heuristic is set to a large value, only a small number of distance computations are saved, while the clustering quality is maintained. For example, when the pruning heuristic is 80%, the pruning strategy reduces the number of distance computations of k-prototypes by 59.20% on the Poker data set. Then, we investigated the impact of the time to start the pruning strategy on the performance of the proposed method compared to the conventional k-prototypes. For this purpose, we ran AMRKP with different pruning starting times, from iteration 1 to iteration 10, on the real data sets with k = 100. The results are reported in Table 5. As shown in Table 5, when the pruning strategy starts from the first iteration, AMRKP leads to a significant reduction of distance computations, but this may decrease the clustering quality. For example, when the pruning strategy starts from iteration 1, AMRKP reduces the number of distance computations of k-prototypes by 81.11% on the Poker data set. On the other hand, when the pruning strategy starts from the last iteration, AMRKP leads to a small reduction of distance computations with a small loss of quality. For example, when the pruning strategy starts from iteration 10, the proposed method reduces the number of distance computations of k-prototypes by 9.15% on the Poker data set. Scalability analysis In this section we first evaluate the scalability of the proposed method as the number of machines increases. To investigate the speedup, we used the Sim400M data set with k = 100. To compute the speedup values, we first executed the AMRKP method on a single machine and then added machines one by one. 
Figure 2 illustrates the speedup results on the Sim400M data set. The proposed method shows linear speedup as the number of machines increases, because the MapReduce framework has linear speedup and each chunk can be processed independently. Speedup of AMRKP as the number of machines increases Then, we evaluate the scalability of the proposed method as the size of the data set increases. To investigate the influence of the data set size, we used the Sim100M, Sim200M, Sim300M and Sim400M data sets with k = 100. The results are plotted in Fig. 3. As the results show, the running time scales linearly as the size of the data set increases. For example, the MRKP method takes more than one hour on the Sim400M data set, while the proposed method processes it in less than 40 minutes. Performance of AMRKP as the size of data set increases In order to deal with the issue of clustering mixed large scale data, we have proposed a new one-pass accelerated MapReduce-based k-prototypes clustering method. The proposed method reads and writes the data only once, which largely reduces the disk I/O operations. Furthermore, the proposed method is based on a pruning strategy that accelerates the clustering process by reducing the redundant distance computations between cluster centers and data points. Experiments on huge simulated and real data sets have shown the efficiency of AMRKP in dealing with mixed large scale data compared to existing methods. The proposed method performs several iterations to converge to a local optimal solution. The number of iterations increases the running time, since each iteration is time consuming. A good initialisation of the proposed method may improve both running time and clustering quality. An exciting direction for future work is to investigate the use of scalable initialisation techniques in order to reduce the number of iterations and thereby further improve the scalability of the AMRKP method. 
https://projets.pasteur.fr/projects/rap-r/wiki/SyntheticDataGeneration. https://archive.ics.uci.edu/ml/datasets/KDD+Cup+1999+Data. http://archive.ics.uci.edu/ml/datasets/Poker. Ahmad, A., & Dey, L. (2007). A k-mean clustering algorithm for mixed numeric and categorical data. Data & Knowledge Engineering, 63(2), 503–527. Ayech, M. W., & Ziou, D. (2015). Segmentation of Terahertz imaging using k-means clustering based on ranked set sampling. Expert Systems with Applications, 42(6), 2959–2974. Bahmani, B., Moseley, B., Vattani, A., Kumar, R., & Vassilvitskii, S. (2012). Scalable k-means++. Proceedings of the VLDB Endowment, 5(7), 622–633. Ben Haj Kacem, M. A., Ben N'cir, C. E., & Essoussi, N. (2015). MapReduce-based k-prototypes clustering method for big data. In Proceedings of data science and advanced analytics (pp. 1–7). Ben Haj Kacem, M. A., Ben N'cir, C. E., & Essoussi, N. (2016). An accelerated MapReduce-based k-prototypes for big data. In Proceedings of software technologies: applications and foundations (pp. 1–13). Ben N'cir, C. E., & Essoussi, N. (2015). Using sequences of words for non-disjoint grouping of documents. International Journal of Pattern Recognition and Artificial Intelligence, 29(03), 1550013. Dean, J., & Ghemawat, S. (2008). MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1), 107–113. Du, H., Wang, Y., & Dong, X. (2015). Texture image segmentation using affinity propagation and spectral clustering. International Journal of Pattern Recognition and Artificial Intelligence, 29(05), 1555009. Ekanayake, J., Li, H., Zhang, B., Gunarathne, T., Bae, S. H., Qiu, J., & Fox, G. (2010). Twister: a runtime for iterative MapReduce. In Proceedings of the 19th ACM international symposium on high performance distributed computing (pp. 810–818). ACM. Eliaz, K., & Spiegler, R. (2011). Consideration sets and competitive marketing. The Review of Economic Studies, 78(1), 235–262. Gandomi, A., & Haider, M. (2015). 
Beyond the hype: big data concepts, methods, and analytics. International Journal of Information Management, 35(2), 137–144. Gorodetsky, V. (2014). Big data: opportunities, challenges and solutions. In Information and communication technologies in education, research, and industrial applications (pp. 3–22). Ji, J., Bai, T., Zhou, C., Ma, C., & Wang, Z. (2013). An improved k-prototypes clustering algorithm for mixed numeric and categorical data. Neurocomputing, 120, 590–596. Hadian, A., & Shahrivari, S (2014). High performance parallel k-means clustering for disk-resident datasets on multi-core CPUs. The Journal of Supercomputing, 69(2), 845–863. Han, J., Pei, J., & Kamber, M. (2011). Data mining: concepts and techniques. Elsevier. He, C., Chang, J., & Chen, X. (2010). Using the triangle inequality to accelerate TTSAS cluster algorithm. In Electrical and control engineering (ICECE) (pp. 2507–2510). IEEE. Hussain, S. F., Mushtaq, M., & Halim, Z (2014). Multi-view document clustering via ensemble method. Journal of Intelligent Information Systems, 43(1), 81–99. Huang, Z. (1997). Clustering large data sets with mixed numeric and categorical values. In Proceedings of the 1st Pacific-Asia conference on knowledge discovery and data mining (pp. 21–34). Jain, A. K., Murty, M. N., & Flynn, PJ (1999). Data clustering: a review. ACM computing surveys (CSUR), 31(3), 264–323. Kim, Y., Shim, K., Kim, M. S., & Lee, J. S. (2014). DBCURE-MR: an efficient density-based clustering algorithm for large data using MapReduce. Information Systems, 42, 15–35. Lee, K. H., Lee, Y. J., Choi, H., Chung, Y. D., & Moon, B (2012). Parallel data processing with MapReduce: a survey. AcM sIGMoD Record, 40(4), 11–20. Li, C., & Biswas, G. (2002). Unsupervised learning with mixed numeric and nominal data. Knowledge and Data Engineering, 14(4), 673–690. Liu, H. H., & Ong, C. S. (2008). Variable selection in clustering for marketing segmentation using genetic algorithms. 
Expert Systems with Applications, 34(1), 502–510. Ludwig, S. A. (2015). Mapreduce-based fuzzy c-means clustering algorithm: implementation and scalability. In International journal of machine learning and cybernetics (pp. 1–12). Nanni, M. (2005). Speeding-up hierarchical agglomerative clustering in presence of expensive metrics, Pacific-Asia conference on knowledge discovery and data mining (pp. 378–387). Berlin Heidelberg: Springer. Ng, M. K., Li, M. J., Huang, J. Z., & He, Z (2007). On the impact of dissimilarity measure in k-modes clustering algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(3), 503–507. Shahrivari, S., & Jalili, S. (2016). Single-pass and linear-time k-means clustering based on MapReduce. Information Systems, 60, 1–12. Talbot, J., Yoo, R. M., & Kozyrakis, C. (2011). Phoenix++: modular MapReduce for shared-memory systems. In Proceedings of the second international workshop on MapReduce and its applications (pp. 9–16). ACM. Tsai, C. F., Hsu, Y. F., Lin, C. Y., & Lin, W. Y. (2009). Intrusion detection by machine learning: A review. Expert Systems with Applications, 36(10), 11994–12000. Xu, R., & Wunsch, D. C. (2010). Clustering algorithms in biomedical research: a review. Biomedical Engineering, IEEE Reviews, 3, 120–154. Xu, X., Jeger, J., & Kriegel, H. P. (2002). A fast parallel clustering algorithm for large spatial databases. In High performance data mining (pp. 263–290). Wang, G., Hao, J., Ma, J., & Huang, L (2010). A new approach to intrusion detection using artificial neural networks and fuzzy clustering. Expert Systems with Applications, 37(9), 6225–6232. White, T. (2012). Hadoop: the definitive guide. O'Reilly Media Inc. Zaharia, M., Chowdhury, M., Franklin, M. J., Shenker, S., & Stoica, I (2010). Spark: cluster computing with working sets. HotCloud, 10(10–10), 95. Zhao, W., Ma, H., & He, Q. (2009). Parallel k-means clustering based on mapreduce. In Proceedings of cloud computing (pp 674–679). 
Institut Supérieur de Gestion de Tunis, LARODEC, Université de Tunis, 41 Avenue de la liberté, cité Bouchoucha, 2000, Le Bardo, Tunisia. Mohamed Aymen Ben HajKacem, Chiheb-Eddine Ben N'cir & Nadia Essoussi. Correspondence to Mohamed Aymen Ben HajKacem. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Ben HajKacem, M.A., N'cir, C.B. & Essoussi, N. One-pass MapReduce-based clustering method for mixed large scale data. J Intell Inf Syst 52, 619–636 (2019). doi:10.1007/s10844-017-0472-5. Issue Date: 15 June 2019. Keywords: K-prototypes, One-pass, MapReduce, Large scale data, Mixed data.
number of revolutions formula physics

Starting with the four kinematic equations we developed in One-Dimensional Kinematics, we can derive the following four rotational kinematic equations (presented together with their translational counterparts). In these equations, the subscript 0 denotes initial values ([latex]\theta_0[/latex], [latex]x_0[/latex], and [latex]t_0[/latex] are initial values), and the average angular velocity [latex]\bar{\omega}[/latex] and average velocity [latex]\bar{v}[/latex] are defined as follows: [latex]\bar{\omega}=\frac{\omega_{0}+\omega}{2}\text{ and }\bar{v}=\frac{v_{0}+v}{2}[/latex].

To convert a rotation rate N in revolutions per minute to an angular velocity ω in radians per second, use ω = 2πN/60. For example, with N = 24 revolutions per minute: ω = 2π × 24 / 60 = 150.8 / 60 ≈ 2.51 rad/s.

In the process, a fly accidentally flies into the microwave and lands on the outer edge of the rotating plate and remains there. If an object travels once around the circumference, the number of revolutions is exactly 1. These equations mean that linear acceleration and angular acceleration are directly proportional. The number of revolutions is n = θ/2π; for θ = 637 rad this gives n = 637/(2π) ≈ 101 revolutions. The crankshaft in a race car goes from rest to 3500 rpm in 3.5 seconds. As in linear kinematics, we assume a is constant, which means that the angular acceleration α is also constant, because a = rα.
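The rpm-to-rad/s conversion used in the example above is easy to script. This is a small illustrative helper (the function names are my own), applying ω = 2πN/60 and its inverse:

```python
import math

def rpm_to_rad_per_s(n_rpm):
    """Convert a rotation rate N in revolutions per minute to rad/s:
    omega = 2*pi*N / 60."""
    return 2.0 * math.pi * n_rpm / 60.0

def rad_per_s_to_rpm(omega):
    """Inverse conversion: N = 60*omega / (2*pi)."""
    return 60.0 * omega / (2.0 * math.pi)

omega = rpm_to_rad_per_s(24)   # the 24 rpm example: ~2.51 rad/s
```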
A person decides to use a microwave oven to reheat some lunch. The more difficult problems are color-coded as blue problems. Large freight trains accelerate very slowly. The equation [latex]{\omega}^{2}={\omega_{0}}^{2}+2\alpha\theta[/latex] will work, because we know the values for all variables except ω. Taking the square root of this equation and entering the known values gives [latex]\omega=\left[{\omega_{0}}^{2}+2\alpha\theta\right]^{1/2}[/latex]. (For the fishing-reel braking example treated below, the stopping time works out to t ≈ 0.733 s.)

Simultaneously reset the display and start the stop watch. Note again that radians must always be used in any calculation relating linear and angular quantities. So, we simply need to find the number of revolutions. Time the revolutions for at least 2 cycle times, then simultaneously freeze the display and stop the stop watch.

[latex]\theta=\bar{\omega}t=\left(\text{6.0 rpm}\right)\left(\text{2.0 min}\right)=\text{12 rev}[/latex].

The standard measurement is in radians per second, although degrees per second and revolutions per minute (rpm) and other units are frequently used in practice. With kinematics, we can describe many things to great precision, but kinematics does not consider causes. Let's solve an example with an angular velocity of 40.

Wheel circumference in feet = diameter × π = (27 inches / 12 inches per foot) × 3.1416 = 7.068 feet. RPM = 5,280 feet per minute traveled divided by 7.068 feet per revolution ≈ 747 RPM. What is the tangential acceleration of a point on its edge?

Entering this value gives [latex]\text{emf}=200\frac{\left(7.85\times 10^{-3}{\text{ m}}^{2}\right)\left(1.25\text{ T}\right)}{15.0\times 10^{-3}\text{ s}}=131\text{ V}[/latex].
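The wheel-circumference arithmetic above (a 27-inch wheel covering 5,280 feet per minute, i.e. 60 mph) can be checked with a short sketch; the function name is my own:

```python
import math

def wheel_rpm(speed_ft_per_min, diameter_in):
    """RPM = linear distance traveled per minute / wheel circumference.
    The diameter is given in inches, the speed in feet per minute."""
    circumference_ft = (diameter_in / 12.0) * math.pi
    return speed_ft_per_min / circumference_ft

# 60 mph = 5,280 ft/min on a 27-inch wheel: about 747 RPM
rpm = wheel_rpm(5280.0, 27.0)
```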
Kinematics is the description of motion. To use this value in the angular acceleration formula, the value must be converted to radians per second. By the mean-effective-pressure formula, take the product of 2π, the number of revolutions per power stroke, and the torque, and divide it by the displacement volume to determine the mean effective pressure for measuring engine performance. According to the International System of Units (SI), RPM is not a unit. For example, a large angular acceleration describes a very rapid change in angular velocity without any consideration of its cause. As one revolution is equal to 2π radians, the conversion can be done with the following formulas: (1) ω (rad·s⁻¹) = (2π/60) · N (rpm), and vice versa: (2) N (rpm) = (60/2π) · ω (rad·s⁻¹). This is a practical average value, similar to the 120 V used in household power. Angular velocity is usually represented by the symbol omega (ω, sometimes Ω). Therefore 700 revolutions will be covered by the wheel. v = 2πr/T. Period: the time passing for one revolution is called the period. We assume you are converting between RPM and revolutions per second; the SI derived unit for frequency is the hertz. The example below calculates the total distance it travels. Answer: the number of cycles (revolutions) to consider is 2400. A PowerPoint Presentation by Paul E. Tippens, Professor of Physics, Southern Polytechnic State University.
How many revolutions of the drum are required to raise a bucket to a height of 20 m? (e) What was the car's initial velocity? Earth RPM = 1 revolution per day divided by the minutes per day: 24 hours × 60 minutes per hour = 1,440 minutes per day, so Earth RPM = 1/1,440 ≈ 0.00069 RPM. Problems range in difficulty from the very easy and straightforward to the very difficult and complex. A two-stroke engine is a type of internal combustion engine that completes a power cycle, with strokes of the pistons, in only one crankshaft revolution. rpm stands for revolutions per minute. Here, we are asked to find the number of revolutions. (No wonder reels sometimes make high-pitched sounds.) N = number of revolutions per minute. Observe the kinematics of rotational motion. A successful mathematical analysis of objects moving in circles is heavily dependent upon a conceptual understanding of the direction of the acceleration. Everyday application: suppose a yo-yo has a center shaft that has a 0.250 cm radius and that its string is being pulled. Rotational kinematics (just like linear kinematics) is descriptive and does not represent laws of nature. First, find the total number of revolutions θ, and then the linear distance x traveled. (c) The outside radius of the yo-yo is 3.50 cm. The equations given above in Table 1 can be used to solve any rotational or translational kinematics problem in which a and α are constant. After unwinding for two seconds, the reel is found to spin at 220 rad/s, which is 2100 rpm. Here N = number of revolutions per minute and ω = angular velocity. In technology, rotation is represented by the movement of shafts, gears, the wheels of a car or bicycle, and the blades of windmills. (d) How many meters of fishing line come off the reel in this time? It is called the radius of rotation.
Now we can substitute the known values into x = rθ to find the distance the train moved down the track. We cannot use any equation that incorporates t to find ω, because such an equation would have at least two unknown values. If the object makes one complete revolution, the distance traveled is 2πr, the circumference of the circle. The amount of fishing line played out is 9.90 m, about right for when the big fish bites. To convert a specific number of radians into degrees, multiply the number of radians by 180/π. The aim is to convert N (expressed in rpm) to ω (expressed in rad·s⁻¹). (a) What is the angular acceleration in rev/min²? (b) What are the final angular velocity of the wheels and the linear velocity of the train? The reel is given an angular acceleration of 110 rad/s² for 2.00 s, as seen in Figure 1. Figure 2 shows a fly on the edge of a rotating microwave oven plate. When measuring angular speed, the unit radians per second is used. During a very quick stop, a car decelerates at 700 m/s². We also see in this example how linear and rotational quantities are connected. Because of the physical quantity being measured, the formula sign has to be f for (rotational) frequency and ω for angular velocity. Now let us consider what happens if the fisherman applies a brake to the spinning reel, achieving an angular acceleration of −300 rad/s². We are given α and t, and we know ω0 is zero, so θ can be obtained using [latex]\theta=\omega_{0}t+\frac{1}{2}\alpha t^{2}[/latex].
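The fishing-reel numbers quoted in this passage (α = 110 rad/s² applied for 2.00 s from rest, 9.90 m of line played out, then braking at −300 rad/s²; the reel radius of 4.50 cm is quoted a little further on) are mutually consistent, as this short constant-α kinematics check shows:

```python
import math

alpha, t, r = 110.0, 2.00, 0.0450      # rad/s^2, s, reel radius in m

omega = alpha * t                      # omega = omega0 + alpha*t = 220 rad/s
theta = 0.5 * alpha * t**2             # theta = (1/2)*alpha*t^2 = 220 rad
revs = theta / (2.0 * math.pi)         # about 35 revolutions
line_out = r * theta                   # x = r*theta = 9.90 m of line

# Braking from 220 rad/s at -300 rad/s^2:
t_stop = omega / 300.0                 # about 0.733 s to come to rest
```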
θ = (0.785 × 7.85) + ½(−0.1)(7.85)² ≈ 3.08 radians. To convert a number of radians to a number of revolutions, recall that 1 full circle (or 1 revolution) is equal to 2π radians. This gives the distance each tire moves for 1 revolution in inches. For example, if a wheel completes 30 revolutions in 45 seconds (i.e. 30/45 ≈ 0.67 revolutions per second). Therefore, the angular velocity is 2.5136 rad/s. The distance x is very easily found from the relationship between distance and rotation angle. Before using this equation, we must convert the number of revolutions into radians, because we are dealing with a relationship between linear and rotational quantities: [latex]\theta=\left(200\text{ rev}\right)\frac{2\pi\text{ rad}}{1\text{ rev}}=1257\text{ rad}[/latex]. How long does it take the reel to come to a stop? Examples of this movement in nature are the rotation of the planets around the sun and around their own axes. The wavelength corresponding to the transition from n₂ to n₁ is given by [latex]\frac{1}{\lambda}=RZ^{2}\left(\frac{1}{n_{1}^{2}}-\frac{1}{n_{2}^{2}}\right)[/latex], where R is the Rydberg constant and Z is the atomic number of the ion. This equation gives us the angular position of a rotating rigid body at any time t given the initial conditions. (b) How many revolutions …? The number of revolutions per second (rps); the velocity of an object moving at constant speed is v = 2πR/T (or v = 2πRf). Acceleration points to the center of a circle. The kinematics of rotational motion describes the relationships among rotation angle, angular velocity, angular acceleration, and time. (Hint: the same question applies to linear kinematics.)
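The worked value θ = (0.785)(7.85) + ½(−0.1)(7.85)² ≈ 3.08 rad, and its conversion to revolutions and degrees, can be reproduced directly from the angular-position equation (the helper name is my own):

```python
import math

def angular_position(omega0, alpha, t):
    """theta = omega0*t + (1/2)*alpha*t**2, valid for constant
    angular acceleration starting from theta0 = 0."""
    return omega0 * t + 0.5 * alpha * t**2

theta = angular_position(0.785, -0.1, 7.85)   # ~3.08 rad
revolutions = theta / (2.0 * math.pi)         # ~0.49 rev (1 rev = 2*pi rad)
degrees = math.degrees(theta)                 # radians -> degrees: theta * 180/pi
```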
where the radius r of the reel is given to be 4.50 cm. Symbols used: v₀ – initial velocity, v_f – final velocity, v_x – initial horizontal velocity, a – acceleration, g – acceleration due to gravity (10 m/s²), Δy – height, F_net – net force, m – mass, F_G – weight, v_c – linear velocity, a_cp – centripetal acceleration, F_cp – centripetal force, T – period, rev – number of revolutions. You will find it in decimal. Because 1 rev = 2π rad, we can find the number of revolutions by finding θ in radians. Relationship between angular velocity and speed. The electric current is given by I = V / R; in corresponding units, ampere (A) = volt (V) / ohm (Ω). This formula is derived from Ohm's law. (b) How many revolutions does it make before stopping? A deep-sea fisherman hooks a big fish that swims away from the boat, pulling the fishing line from his fishing reel. What is the Earth's RPM? For converting a specific number of degrees to radians, multiply the number of degrees by π/180 (for example, 90 degrees = 90 × π/180 radians = π/2).
Formalized Mathematics (ISSN 0777-4028) Volume 1, Number 5 (1990): pdf, ps, dvi. Jozef Bialas. Properties of Fields, Formalized Mathematics 1(5), pages 807-812, 1990. MML Identifier: REALSET2 Summary: The second part of considerations concerning groups and fields. It includes a definition and properties of a commutative field $F$ as a structure defined by: the set, a support of $F$, containing two different elements, by two binary operations ${\bf +}_{F}$, ${\bf \cdot}_{F}$ on this set, called addition and multiplication, and by two elements from the support of $F$, ${\bf 0}_{F}$ being neutral for addition and ${\bf 1}_{F}$ being neutral for multiplication. This structure is named a field if $\langle$the support of $F$, ${\bf +}_{F}$, ${\bf 0}_{F} \rangle$ and $\langle$the support of $F$, ${\bf \cdot}_{F}$, ${\bf 1}_{F} \rangle$ are commutative groups and multiplication has the property of left-hand and right-hand distributivity with respect to addition. It is demonstrated that the field $F$ satisfies the definition of a field in the axiomatic approach. Grzegorz Bancerek. Filters -- Part I, Formalized Mathematics 1(5), pages 813-819, 1990. MML Identifier: FILTER_0 Summary: Filters of a lattice, maximal filters (ultrafilters), and the operation of creating a filter generated by an element or by a non-empty subset of the lattice are discussed. Besides, implicative lattices are introduced, in which for every two elements there is an element being their pseudo-complement. Some facts concerning these concepts are presented too, e.g., for any proper filter there exists an ultrafilter containing it. Wojciech A. Trybulec. Groups, Formalized Mathematics 1(5), pages 821-827, 1990. MML Identifier: GROUP_1 Summary: Notions of group and abelian group are introduced. The power of an element of a group, the order of a group and the order of an element of a group are defined. Basic theorems concerning those notions are presented. Rafal Kwiatek, Grzegorz Zwara.
The Divisibility of Integers and Integer Relative Primes, Formalized Mathematics 1(5), pages 829-832, 1990. MML Identifier: INT_2 Summary: We introduce the following notions: 1) Michal Muzalewski, Wojciech Skaba. From Loops to Abelian Multiplicative Groups with Zero, Formalized Mathematics 1(5), pages 833-840, 1990. MML Identifier: ALGSTR_1 Summary: Elementary axioms and theorems on the theory of algebraic structures, taken from the book \cite{SZMIELEW:1}. First a loop structure $\langle G, 0, +\rangle$ is defined and six axioms corresponding to it are given. Group is defined by extending the set of axioms with $(a+b)+c = a+(b+c)$. At the same time an alternate approach to the set of axioms is shown and both sets are proved to yield the same algebraic structure. A trivial example of a loop is used to ensure the existence of the modes being constructed. A multiplicative group is contemplated, which is quite similar to the previously defined additive group (called simply a group here), but is supposed to be of greater interest in future considerations of algebraic structures. The final section brings a slightly more sophisticated structure, i.e. a multiplicative loop/group with zero: $\langle G, \cdot, 1, 0\rangle$. Here the proofs are more challenging and the above trivial example is replaced by a more common (and comprehensive) structure built on the foundation of real numbers. Andrzej Kondracki. Basic Properties of Rational Numbers, Formalized Mathematics 1(5), pages 841-845, 1990. MML Identifier: RAT_1 Summary: A definition of rational numbers and some basic properties of them. Operations of addition, subtraction, and multiplication are redefined for rational numbers. Functors numerator (num $p$) and denominator (den $p$) ($p$ is rational) are defined and some properties of them are presented. Density of rational numbers is also given. Wojciech A. Trybulec. Basis of Real Linear Space, Formalized Mathematics 1(5), pages 847-850, 1990.
MML Identifier: RLVECT_3 Summary: Notions of linear independence and dependence of sets of vectors, the subspace generated by a set of vectors and the basis of a real linear space are introduced. Some theorems concerning those notions are proved. Wojciech A. Trybulec. Finite Sums of Vectors in Vector Space, Formalized Mathematics 1(5), pages 851-854, 1990. MML Identifier: VECTSP_3 Summary: We define the sum of finite sequences of vectors in vector space. Theorems concerning those sums are proved. Wojciech A. Trybulec. Subgroup and Cosets of Subgroups, Formalized Mathematics 1(5), pages 855-864, 1990. MML Identifier: GROUP_2 Summary: We introduce the notions of subgroup, coset of a subgroup, and sets of left and right cosets of a subgroup. We define the multiplication of two subsets of a group, the subset of inverse elements of a group, and the intersection of two subgroups. We define the notion of the index of a subgroup and prove the Lagrange theorem, which states that in a finite group the order of the group equals the order of a subgroup multiplied by the index of the subgroup. Some theorems that belong rather to \cite{CARD_1.ABS} are proved. Wojciech A. Trybulec. Subspaces and Cosets of Subspaces in Vector Space, Formalized Mathematics 1(5), pages 865-870, 1990. MML Identifier: VECTSP_4 Summary: We introduce the notions of subspace of a vector space and coset of a subspace. We prove a number of theorems concerning those notions. Some theorems that belong rather to \cite{VECTSP_1.ABS} are proved. Wojciech A. Trybulec. Operations on Subspaces in Vector Space, Formalized Mathematics 1(5), pages 871-876, 1990. MML Identifier: VECTSP_5 Summary: Sum, direct sum and intersection of subspaces are introduced. We prove some theorems concerning those notions and the decomposition of a vector onto two subspaces. The linear complement of a subspace is also defined. We prove theorems that belong rather to \cite{VECTSP_1.ABS}. Wojciech A. Trybulec.
Linear Combinations in Vector Space, Formalized Mathematics 1(5), pages 877-882, 1990. MML Identifier: VECTSP_6 Summary: The notion of linear combination of vectors is introduced as a function from the carrier of a vector space to the carrier of the field. A definition of linear combination of a set of vectors is also presented. We define addition and subtraction of combinations and multiplication of a combination by an element of the field. The sum of a finite set of vectors and the sum of a linear combination are defined. We prove theorems that belong rather to \cite{VECTSP_1.ABS}. Wojciech A. Trybulec. Basis of Vector Space, Formalized Mathematics 1(5), pages 883-885, 1990. MML Identifier: VECTSP_7 Summary: We prove the existence of a basis of a vector space, i.e., a set of vectors that generates the vector space and is linearly independent. We also introduce the notion of a subspace generated by a set of vectors and linear independence of a set of vectors. Rafal Kwiatek. Factorial and Newton coefficients, Formalized Mathematics 1(5), pages 887-890, 1990. MML Identifier: NEWTON Summary: We define the following functions: the exponential function (for natural exponents), the factorial function, and Newton coefficients. We prove some basic properties of the notions introduced. There is also a proof of the binomial formula. We also prove that $\sum_{k=0}^n {n \choose k}=2^n$. Henryk Oryszczyszyn, Krzysztof Prazmowski. Analytical Metric Affine Spaces and Planes, Formalized Mathematics 1(5), pages 891-899, 1990. MML Identifier: ANALMETR Summary: We introduce relations of orthogonality of vectors and of orthogonality of segments (considered as pairs of vectors) in a real linear space of dimension two. This enables us to show an example of a metric affine space (and plane as well) that is in fact anisotropic and satisfies the theorem on three perpendiculars. These two types of objects are defined formally as "Mizar" modes.
They are to be understood as structures consisting of a point universe and two binary relations on segments --- a parallelity relation and orthogonality relation, satisfying appropriate axioms. With every such structure we correlate a structure obtained as a reduct of the given one to the parallelity relation only. Some relationships between metric affine spaces and their affine parts are proved; they enable us to use "affine" facts and constructions in investigating metric affine geometry. We define the notions of line, parallelity of lines and two derived relations of orthogonality: between segments and lines, and between lines. Some basic properties of the introduced notions are proved. Wojciech Leonczuk, Krzysztof Prazmowski. Projective Spaces -- part II, Formalized Mathematics 1(5), pages 901-907, 1990. MML Identifier: ANPROJ_3 Wojciech Leonczuk, Krzysztof Prazmowski. Projective Spaces -- part III, Formalized Mathematics 1(5), pages 909-918, 1990. MML Identifier: ANPROJ_4 Wojciech Leonczuk, Krzysztof Prazmowski. Projective Spaces -- part IV, Formalized Mathematics 1(5), pages 919-927, 1990. MML Identifier: ANPROJ_5 Wojciech Leonczuk, Krzysztof Prazmowski. Projective Spaces -- part V, Formalized Mathematics 1(5), pages 929-938, 1990. MML Identifier: ANPROJ_6 Wojciech Leonczuk, Krzysztof Prazmowski. Projective Spaces -- part VI, Formalized Mathematics 1(5), pages 939-947, 1990. MML Identifier: ANPROJ_7 Waldemar Korczynski. Some Elementary Notions of the Theory of Petri Nets, Formalized Mathematics 1(5), pages 949-953, 1990. MML Identifier: NET_1 Summary: Some fundamental notions of the theory of Petri nets are described in Mizar formalism. A Petri net is defined as a triple of the form $\langle {\rm places},\,{\rm transitions},\,{\rm flow} \rangle$ with places and transitions being disjoint sets and flow being a relation included in ${\rm places} \times {\rm transitions}$. Wojciech A. Trybulec. Classes of Conjugation. 
Normal Subgroups, Formalized Mathematics 1(5), pages 955-962, 1990. MML Identifier: GROUP_3 Summary: Theorems that were not proved in \cite{GROUP_1.ABS} and in \cite{GROUP_2.ABS} are discussed. In the article we define the notion of conjugation for elements, subsets and subgroups of a group. We define the classes of conjugation. Normal subgroups of a group and the normalizer of a subset of a group or of a subgroup are introduced. We also define the set of all subgroups of a group. An auxiliary theorem that belongs rather to \cite{CARD_2.ABS} is proved. Grzegorz Bancerek. Replacing of Variables in Formulas of ZF Theory, Formalized Mathematics 1(5), pages 963-972, 1990. MML Identifier: ZF_LANG1 Summary: Part one is a supplement to papers \cite{ZF_LANG.ABS}, \cite{ZF_MODEL.ABS}, and \cite{ZFMODEL1.ABS}. It deals with the concepts of selector functions, atomic, negative, conjunctive formulas, etc., subformulas, free variables, satisfiability and models (it is shown that axioms of the predicate and the quantifier calculus are satisfied in an arbitrary set). In part two the notions of variables occurring in a formula and replacing of variables in a formula are introduced. Grzegorz Bancerek. The Reflection Theorem, Formalized Mathematics 1(5), pages 973-977, 1990. MML Identifier: ZF_REFLE Summary: The goal is to show that the reflection theorem holds. The theorem is as usual in the Morse-Kelley theory of classes (MK). That theory works with a universal class which consists of all sets, and every class is a subclass of it. In this paper (and in other Mizar articles) we work in Tarski-Grothendieck (TG) theory (see \cite{TARSKI.ABS}) which ensures the existence of sets that have properties like the universal class (i.e. this theory is stronger than MK). The sets are introduced in \cite{CLASSES2.ABS} and some concepts of MK are modeled. The concepts are: the class $On$ of all ordinal numbers belonging to the universe, subclasses, transfinite sequences of non-empty elements of universe, etc.
The reflection theorem states that if $A_\xi$ is an increasing and continuous transfinite sequence of non-empty sets and class $A = \bigcup_{\xi \in On} A_\xi$, then for every formula $H$ there is a strictly increasing continuous mapping $F: On \to On$ such that if $\varkappa$ is a critical number of $F$ (i.e. $F(\varkappa) = \varkappa > 0$) and $f \in A_\varkappa^{\bf VAR}$, then $A,f\models H \equiv\ {A_\varkappa},f\models H$. The proof is based on \cite{MOST:1}. Besides, in the article it is shown that every universal class is a model of ZF set theory if $\omega$ (the first infinite ordinal number) belongs to it. Some propositions concerning ordinal numbers and sequences of them are also present. Wojciech A. Trybulec. Binary Operations on Finite Sequences, Formalized Mathematics 1(5), pages 979-981, 1990. MML Identifier: FINSOP_1 Summary: We generalize the semigroup operation on finite sequences introduced in \cite{SETWOP_2.ABS} for binary operations that have a unity or for non-empty finite sequences. Andrzej Trybulec. Finite Join and Finite Meet and Dual Lattices, Formalized Mathematics 1(5), pages 983-988, 1990. MML Identifier: LATTICE2 Summary: The concepts of finite join and finite meet in a lattice are introduced. Some properties of the finite join are proved. After introducing the concept of dual lattice in view of dualism we obtain analogous properties of the meet. We prove these properties of binary operations in a lattice, which are usually included in axioms of the lattice theory. We also introduce the concept of Heyting lattice (a bounded lattice with relative pseudo-complements). Grzegorz Bancerek. Consequences of the Reflection Theorem, Formalized Mathematics 1(5), pages 989-993, 1990. MML Identifier: ZFREFLE1 Summary: Some consequences of the reflection theorem are discussed. To formulate them the notions of elementary equivalence and subsystems, and of models for a set of formulae are introduced. 
Besides, the concept of cofinality of an ordinal number with a second one is used. The consequences of the reflection theorem (it is sometimes called the Scott-Scarpellini lemma) are: (i) If $A_\xi$ is a transfinite sequence as in the reflection theorem (see \cite{ZF_REFLE.ABS}) and $A = \bigcup_{\xi \in On} A_\xi$, then there is an increasing and continuous mapping $\phi$ from $On$ into $On$ such that for every critical number $\kappa$ the set $A_\kappa$ is an elementary subsystem of $A$ ($A_\kappa \prec A$). (ii) There is an increasing continuous mapping $\phi: On \to On$ such that ${\bf R}_\kappa \prec V$ for each of its critical numbers $\kappa$ ($V$ is the universal class and $On$ is the class of all ordinals belonging to $V$). (iii) There are ordinal numbers $\alpha$ cofinal with $\omega$ for which ${\bf R}_\alpha$ are models of ZF set theory. (iv) For each set $X$ from the universe $V$ there is a model of ZF $M$ which belongs to $V$ and has $X$ as an element.
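The NEWTON abstract above ends with the identity $\sum_{k=0}^n {n \choose k} = 2^n$. As a quick informal aside (separate from the Mizar formalization, which proves it for all $n$), the identity can be spot-checked numerically for small $n$:

```python
from math import comb

# Check sum_{k=0}^{n} C(n, k) == 2^n for n = 0..9.
for n in range(10):
    assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
print("identity verified for n = 0..9")
```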
Indices in Special Relativity I am studying special relativity and I can't figure out what the difference is between the matrix index notations: $$ Λ_{α}{}^{β}, \quad Λ^{α}{}_{β}, \quad Λ^{αβ}, \quad Λ_{αβ} $$ Why do we introduce this kind of notation in SR? In linear algebra, we denote matrices simply by $Λ_{ij}$, where $i,j$ are indices that run over some common (or not) set. I know that the answer is somehow related to covariance and contravariance (which are important in the mathematical formalism of SR), but I don't know how exactly. special-relativity metric-tensor tensor-calculus lorentz-symmetry Unstoppable Tachyon For the sake of concreteness and without going into details of covariant and contravariant indices, let's just show the difference with the help of an example: $\Lambda^\alpha_{\,\,\beta}$ is usually used to denote a Lorentz transformation. Let's take a transformation in the $x$-direction: $$\Lambda=\Lambda^\alpha_{\,\,\beta}=\left(\begin{array}{c c c c}\gamma & 0 & 0 &-\gamma v \\ 0 & 1 & 0 & 0\\0& 0 & 1 & 0\\-\gamma\frac{v}{c^2} & 0 & 0 &\gamma\end{array}\right)$$ When this transformation is applied to a four-vector $x^\beta=\left(\begin{array}{c}x\\y\\z\\t\end{array}\right)$, i.e. $\Lambda\cdot x=\Lambda^\alpha_{\,\,\beta} x^\beta $, one obtains the standard Lorentz transformation $x^\prime = \gamma(x-vt)$, $y^\prime = y$, $z^\prime = z$, and $t^\prime = \gamma(t-\frac{v}{c^2}x)$.
Now, it is easy to show the difference between the different index notations: $$ \Lambda_{\alpha\beta}= \eta_{\alpha\gamma}\Lambda^\gamma_{\,\,\beta}=\left(\begin{array}{c c c c}1 & 0 & 0 &0 \\ 0 & 1 & 0 & 0\\0& 0 & 1 & 0\\0& 0 & 0 &-c^2\end{array}\right)\cdot \left(\begin{array}{c c c c}\gamma & 0 & 0 &-\gamma v \\ 0 & 1 & 0 & 0\\0& 0 & 1 & 0\\-\gamma\frac{v}{c^2} & 0 & 0 &\gamma\end{array}\right)=\left(\begin{array}{c c c c}\gamma & 0 & 0 &-\gamma v \\ 0 & 1 & 0 & 0\\0& 0 & 1 & 0\\\gamma v & 0 & 0 &-\gamma c^2\end{array}\right)$$ and similarly $$ \Lambda^{\alpha\beta}= \Lambda^\alpha_{\,\,\gamma}\eta^{\gamma\beta}=\left(\begin{array}{c c c c}\gamma & 0 & 0 &-\gamma v \\ 0 & 1 & 0 & 0\\0& 0 & 1 & 0\\-\gamma\frac{v}{c^2} & 0 & 0 &\gamma\end{array}\right)\cdot \left(\begin{array}{c c c c}1 & 0 & 0 &0 \\ 0 & 1 & 0 & 0\\0& 0 & 1 & 0\\0& 0 & 0 &-\frac{1}{c^2}\end{array}\right)=\left(\begin{array}{c c c c}\gamma & 0 & 0 &\gamma \frac{v}{c^2} \\ 0 & 1 & 0 & 0\\0& 0 & 1 & 0\\-\gamma\frac{v}{c^2} & 0 & 0 &-\gamma\frac{1}{c^2}\end{array}\right)$$ p6majo
$Λ_{α}^{\ \ β}$ is just $(Λ^{-1})^{α}_{\ \ β}$, right? – Unstoppable Tachyon Apr 19 at 10:13
I think I get it. $Λ_{αβ}$ and $Λ^{αβ}$ are defined as the way you wrote them down, which is the metric of the space times the transformation matrix (in the correct order). Also, what's the difference between those four matrices as tensors? – Unstoppable Tachyon Apr 19 at 10:25
If you want to write it as an equation, the indices on the left-hand side and on the right-hand side have to agree with each other. But in principle you are right: raising the first and lowering the second index results in a change of sign for the velocity $v\rightarrow -v$ (which is the inverse transformation, maybe up to transposition). – p6majo Apr 19 at 10:33
$\Lambda^\alpha_{\,\,\beta}$ is used to transform components of vectors, i.e.
$x^{\alpha\prime}= \Lambda^\alpha_{\,\,\beta}x^\beta$. $\Lambda_{\beta}^{\,\,\alpha}$ is used to transform components of forms, i.e. $e_{\beta}^\prime=\Lambda_{\beta}^{\,\,\alpha}e_{\alpha}$. $\Lambda_{\alpha\beta}$ is not used at all. None of them are tensors; they are just coordinate transformations. They are used to transform tensors like $\eta_{\alpha\beta}$. – p6majo Apr 19 at 10:39
The OP asks a "why" question, which this answer hasn't addressed. – Ben Crowell Apr 19 at 12:11
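As a numerical cross-check of the products worked out in the answer above (an aside, not part of the original thread): the sketch below uses the same (x, y, z, t) coordinate ordering and metric $\eta = \mathrm{diag}(1,1,1,-c^2)$ that the answer adopts; the values of $v$ and $c$ are arbitrary illustrative choices.

```python
import numpy as np

# Arbitrary illustrative values (units where c need not be 1).
c, v = 2.0, 1.0
g = 1.0 / np.sqrt(1.0 - v**2 / c**2)       # Lorentz factor gamma

# Boost in the x-direction, coordinate ordering (x, y, z, t) as in the answer.
L = np.array([[g,             0.0, 0.0, -g * v],
              [0.0,           1.0, 0.0,  0.0],
              [0.0,           0.0, 1.0,  0.0],
              [-g * v / c**2, 0.0, 0.0,  g]])

eta     = np.diag([1.0, 1.0, 1.0, -c**2])        # eta_{alpha beta}
eta_inv = np.diag([1.0, 1.0, 1.0, -1.0 / c**2])  # eta^{alpha beta}

L_low = eta @ L       # Lambda_{alpha beta} = eta_{alpha gamma} Lambda^gamma_beta
L_up  = L @ eta_inv   # Lambda^{alpha beta} = Lambda^alpha_gamma eta^{gamma beta}

# Entries match the matrices computed in the answer:
assert np.isclose(L_low[3, 0],  g * v)
assert np.isclose(L_low[3, 3], -g * c**2)
assert np.isclose(L_up[0, 3],   g * v / c**2)
assert np.isclose(L_up[3, 3],  -g / c**2)

# The boost itself reproduces x' = g(x - v t) and t' = g(t - v x / c^2).
x = np.array([3.0, 0.0, 0.0, 5.0])   # (x, y, z, t)
xp = L @ x
assert np.isclose(xp[0], g * (x[0] - v * x[3]))
assert np.isclose(xp[3], g * (x[3] - v * x[0] / c**2))
print("all index placements consistent")
```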
June 2017, Volume 7, Issue 3, pp 1479–1485 Heavy metal contamination and its indexing approach for groundwater of Goa mining region, India Gurdeep Singh, Rakesh Kant Kamal The objective of the study is to reveal the seasonal variations in groundwater quality with respect to heavy metal contamination. To determine the extent of heavy metal contamination, groundwater samples were collected from 45 different locations in and around the Goa mining area during the monsoon and post-monsoon seasons. The concentrations of heavy metals, such as lead, copper, manganese, zinc, cadmium, iron, and chromium, were determined using an atomic absorption spectrophotometer. Most of the samples were found within limits, except for the Fe content at two sampling locations during the monsoon season, which was above the desirable limit of 300 µg/L as per the Indian drinking water standard. The data generated were used to calculate the heavy metal pollution index (HPI) for groundwater. The mean values of the HPI were 1.5 in the monsoon season and 2.1 in the post-monsoon season; these values are well below the critical index limit of 100. Keywords: Groundwater; Heavy metal; Pollution index; Seasonal variation; Goa Groundwater is a valuable renewable resource and occurs in permeable geologic formations known as aquifers. Groundwater is an important resource for agriculture and industry and is widely used as potable water in India (Singh et al. 2014; Chandra et al. 2015). Water pollution not only affects water quality, but also threatens human health, economic development, and social prosperity (Milovanovic 2007). Scarcity of clean and potable drinking water has emerged in recent years as one of the most serious developmental issues in many parts of West Bengal, Jharkhand, Orissa, Western Uttar Pradesh, Andhra Pradesh, Rajasthan and Punjab (Tiwari and Singh 2014).
Groundwater contamination is one of the most important environmental problems in the present world, and metal contamination is of major concern due to its high toxicity even at low concentrations. Heavy metal is a general collective term that applies to the group of metals and metalloids with atomic density greater than 4000 kg/m³, or five times more than that of water (Garbarino et al. 1995). Heavy metals enter groundwater from a variety of sources, either natural or anthropogenic (Adaikpoh et al. 2005). Mining activities are well known for their deleterious effects on water resources (Dudka and Adriano 1997; Goyal et al. 2008; Nouri et al. 2009; Verma and Singh 2013; Tiwari et al. 2016b, c, d). In general, mine tailings and other mining-related operations are the major source of contaminants, mainly of heavy metals, in water (Younger 2001; Vanek et al. 2005; Vanderlinden et al. 2006; Conesa et al. 2007; Mahato et al. 2014; Tiwari et al. 2015a, 2016a). Water quality indices are one of the most effective tools to communicate information on the quality of any water body (Singh et al. 2013a). The heavy metal pollution index (HPI) is a method that rates the aggregate influence of each individual heavy metal on the overall quality of water and is useful in obtaining the composite influence of all the metals on overall pollution (Mahato et al. 2014). Recently, several researchers have shown interest in assessing water quality and its suitability for drinking purposes using water quality index methods (Giri et al. 2010; Ravikumar et al. 2013; Singh et al. 2013b; Kumar et al. 2014; Tiwari et al. 2014, 2015a, b; Prasad et al. 2014; Logeshkumaran et al. 2014; Bhutiani et al. 2014; Panigrahy et al. 2015). The present study aimed to investigate the groundwater quality status with respect to heavy metal concentrations in the mining areas of Goa.
Heavy metal pollution index was used to assess the influence of overall pollution and to illustrate the spatial distribution of the heavy metal concentrations in the groundwater of the study area. Goa is located between latitudes 15°48′00″ to 14°53′54″N and longitudes 74°20′13″ to 73°40′33″E, on the western coast of the Indian Peninsula, separated from Maharashtra by the Terekhol River in the north, with Karnataka in the south, the Western Ghats in the east, and the Arabian Sea in the west, and a coastline stretching about 105 km. Goa covers an area of 3702 km².
Geology of Goa
The occurrence of iron ore is restricted to the Bicholim formation, Archaean metamorphic rocks belonging to the Goa Group of the Dharwar Super Group. The Bicholim formation is represented by quartz-chlorite/amphibolite schist with lenses of metabasalt, sills of metagabbro, carbonaceous and manganiferous chert, quartzite, phyllite with banded iron formation, quartz-sericite schist, and magnesium limestone. A schematic section of the geology of Goa is shown in Fig. 1.
Fig. 1 Schematic section of the geology of Goa
Groundwater aquifers in Goa
The mining belt in Goa has two known aquifers, viz., the top laterite layer and the powdery iron ore formation at depth. The top layer with laterite cover is quite extensive in the area, and even though mining activities have denuded some of these areas, some areas are still left with sufficient laterite cover. Herein, the water is under a perched water table condition. The friable powdery iron ore at depth is porous, permeable, and completely saturated with water. The ore bodies (aquifers) are exposed, and water seeps from them into the mine pits under pressure during mining operations, particularly due to the large amount of monsoon rainfall in Goa. The depth to the water level ranged from 1.69 to 26.09 mbgl during the monsoon and from 2.17 to 19.23 mbgl in the post-monsoon season.
Field sampling and experimental procedure
Samples were collected from 45 different locations during the monsoon and post-monsoon seasons (Fig. 2). Criteria for the selection of sampling stations were based on the locations of different industrial (mining) units and the land use pattern, so as to quantify heavy metal concentrations. The depth of the open wells was between 25 and 30 m. Sampling was done in July 2013 (monsoon) and October 2013 (post-monsoon). The pH values were measured in the field using a portable pH meter (multiparameter PCS Tester series 35). The total dissolved solids (TDS) value was measured using a TDS meter (model no. 651E). For the analysis of the heavy metals, samples were preserved in 100 mL polypropylene bottles, acidified to pH < 2 with ultra-pure nitric acid. All samples were digested, concentrated, and prepared for analysis by atomic absorption spectrophotometry (AAS; GBC-Avanta).
Fig. 2 Map showing water sampling points in the study area
Indexing approach
Water quality and its suitability for drinking purposes can be examined by determining a quality index (Mohan et al. 1996; Prasad and Kumari 2008; Prasad and Mondal 2008; Tiwari et al. 2015a), here the heavy metal pollution index method. The HPI represents the total quality of water with respect to heavy metals. The HPI is based on the weighted arithmetic quality mean method and is developed in two steps: first, by establishing a rating scale for each selected parameter, giving a weightage, and second, by selecting the pollution parameters on which the index is to be based. The rating is an arbitrary value between 0 and 1, and its selection depends upon the importance of the individual quality parameters in a comparative way, or it can be assessed by making the values inversely proportional to the recommended standard for the corresponding parameter (Horton 1965; Mohan et al. 1996).
In the present formula, the unit weightage ($W_i$) is taken as a value inversely proportional to the recommended standard ($S_i$) of the corresponding parameter. Iron, manganese, lead, copper, cadmium, chromium, and zinc were monitored for the model index application. The HPI model proposed by Mohan et al. (1996) is given by
$$\mathrm{HPI} = \frac{\sum_{i=1}^{n} W_i Q_i}{\sum_{i=1}^{n} W_i}$$
where $Q_i$ is the sub-index of the $i$th parameter, $W_i$ is the unit weightage of the $i$th parameter, and $n$ is the number of parameters considered. The sub-index ($Q_i$) of the parameter is calculated by
$$Q_i = \sum_{i=1}^{n} \frac{|M_i - I_i|}{S_i - I_i} \times 100$$
where $M_i$ is the monitored value of the heavy metal for the $i$th parameter, $I_i$ is the ideal value (maximum desirable value for drinking water) of the $i$th parameter, and $S_i$ is the standard value (highest permissible value for drinking water) of the $i$th parameter. The bars indicate the numerical difference of the two values, ignoring the algebraic sign. The critical HPI value for drinking water, given by Prasad and Bose (2001), is 100. However, a modified scale using three classes has been used in the present study after Edet and Offiong (2002). The classes have been demarcated as low, medium, and high for HPI values <15, 15–30, and >30, respectively. The proposed index is intended for drinking water. The results are presented in two parts: (1) the HPI calculation for groundwater during the monsoon and post-monsoon seasons (Table 1) and (2) the statistical variation (range, mean, and standard deviation) among the various heavy metals (Table 2).
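The two formulas above can be sketched in code as follows. This is a minimal illustration, not the authors' implementation: the metal concentrations and the ideal/standard values below are placeholders rather than the exact BIS (2003) figures used in the study, and the unit weightage is taken as $W_i = 1/S_i$ (inversely proportional to the standard), with the low/medium/high classes after Edet and Offiong (2002).

```python
# Sketch of the heavy metal pollution index (HPI) of Mohan et al. (1996).
# The M/I/S numbers below are illustrative placeholders only, not the
# exact BIS (2003) figures used in the study. Concentrations in ug/L.

def hpi(metals):
    """metals: dict name -> (M, I, S) = (monitored, ideal, standard)."""
    num = den = 0.0
    for M, I, S in metals.values():
        W = 1.0 / S                        # unit weightage, proportional to 1/S_i
        Q = abs(M - I) / (S - I) * 100.0   # sub-index Q_i
        num += W * Q
        den += W
    return num / den

def hpi_class(value):
    """Low/medium/high scale after Edet and Offiong (2002)."""
    if value < 15:
        return "low"
    return "medium" if value <= 30 else "high"

sample = {
    "Pb": (20.0, 10.0, 50.0),      # placeholder concentrations
    "Fe": (400.0, 300.0, 1000.0),
}
value = hpi(sample)
print(round(value, 2), hpi_class(value))   # -> 24.49 medium
```

For this hypothetical sample, the HPI of about 24.5 sits between the medium-class bounds (15–30) but well below the critical value of 100, mirroring the kind of classification reported in the paper.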
Table 1 Heavy metal pollution calculation for groundwater during the monsoon and post-monsoon seasons: mean concentration ($V_i$, µg/L), highest permitted value for water ($S_i$, µg/L), unit weightage ($W_i$), sub-index ($Q_i$), and $W_i \times Q_i$
Table 2 Statistical variation of the groundwater parameters compared with the Indian Standard (IS: 10500) for domestic purposes (maximum desirable and highest permissible limits; all units in µg/L, except TDS in mg/L and pH)
The pH of the groundwater samples ranged between 4.5 and 7.1 with a mean of 5.9 in the monsoon season, while the post-monsoon samples varied from 5.5 to 8.2 with a mean of 6.0, clearly indicating the acidic to slightly alkaline nature of the groundwater in both seasons. In the monsoon and post-monsoon seasons, about 84–87 % of the groundwater samples had a value lower than the desirable limit of 6.5, as per the Indian standard for drinking water (BIS 2003). Such values usually indicate the presence of carbonates of calcium and magnesium in water (Begum et al. 2009). A high pH of the groundwater may result in a reduction of heavy metal toxicity (Aktar et al. 2010). Turning to total dissolved solids (TDS), a considerable amount of dissolved ions was present at all the sampling locations: TDS was in the range of 452–768 and 542–652 mg/L in the monsoon and post-monsoon seasons, respectively. Concentrations of Pb, Cu, Mn, Zn, Fe, Cd, and Cr were found within limits, except for the Fe content at two sampling locations during the monsoon season, which was above the desirable limit of 300 µg/L as per the Indian drinking water standard (BIS 2003). Excess Fe is an endemic water quality problem in many parts of India (Singh et al. 2013c). Iron and manganese are common metallic elements found in the earth's crust (Kumar et al. 2010).
The Fe concentration can be attributed to the earth's crust and the geological formation of the area (Dang et al. 2002; Senapaty and Behera 2012). Mine tailings and other mining-related operations are a major source of contaminants, mainly of heavy metals, in water (Younger 2001; Vanek et al. 2005; Vanderlinden et al. 2006; Conesa et al. 2007; Tiwari et al. 2015a, 2016a). The observed high values of Fe in the monsoon season might be associated with leaching from the overburden dumps and tailing ponds due to heavy precipitation. Previous studies (Ratha et al. 1994; Yellishetty et al. 2009; Tiwari et al. 2016a) indicate that mine waste and tailings contain several heavy metals, such as iron and manganese.
Heavy metal pollution index
The HPI is a very useful tool in evaluating the overall pollution of water bodies with respect to heavy metals (Prasad and Kumari 2008). Details of the HPI calculations, with the unit weightage ($W_i$) and standard permissible value ($S_i$) obtained in the present study, are shown in Table 1. To calculate the HPI of the water, the mean concentration values of the selected metals (Pb, Cu, Mn, Zn, Fe, Cd, and Cr) were taken into account (Prasad and Mondal 2008). The mean heavy metal pollution index values are 1.5 and 2.1 in the monsoon and post-monsoon seasons, respectively. The critical pollution index value, above which the overall pollution level should be considered unacceptable, is 100 (Prasad and Kumari 2008). The HPI values were below the critical pollution index value of 100 in both seasons. However, considering the classes of the HPI, all of the locations fall under the low class (HPI < 15) in the monsoon season, while only one sample falls under the medium class (HPI 15–30) in the post-monsoon season. The present study reveals that most of the groundwater samples during the monsoon and post-monsoon seasons were found to be only slightly polluted with respect to heavy metal contamination.
The concentrations of Pb, Cu, Mn, Zn, Fe, Cd, and Cr were found within limits, except for the Fe content at a few locations during the monsoon season, which was above the desirable limit recommended for drinking water by the Bureau of Indian Standards (BIS 2003). This is attributed to the concentration of various mines and associated industries near the sampled wells. The HPI values of the Goa mining region groundwater fall under the low-to-medium class; they are well below the maximum threshold value of 100. This indicates that the groundwater is not critically polluted with respect to heavy metals in the Goa mining region. The authors are thankful to the MoEF (Ministry of Environment and Forests), Government of India, for sponsoring this study. The authors are also grateful to Professor D.C. Panigrahi, Director, Indian School of Mines, Dhanbad, India, for providing research facilities. We thank the Editor-in-Chief and the anonymous reviewer for their valuable suggestions to improve the quality of the paper.
References
Adaikpoh EO, Nwajei GE, Ogala JE (2005) Heavy metals concentrations in coal and sediments from river Ekulu in Enugu, Coal City of Nigeria. J Appl Sci Environ Manage 9(3):5–8
Aktar MW, Paramasivam M, Ganguly M, Purkait S, Sengupta D (2010) Assessment and occurrence of various heavy metals in surface water of Ganga river around Kolkata: a study for toxicity and ecological impact. Environ Monit Assess 160(1–4):207–213
Begum A, Ramaiah M, Khan HI, Veena K (2009) Heavy metal pollution and chemical profile of Cauvery River water. J Chem 6(1):47–52
Bhutiani R, Khanna DR, Kulkarni DB, Ruhela M (2014) Assessment of Ganga river ecosystem at Haridwar, Uttarakhand, India with reference to water quality indices. Appl Water Sci 1–7. doi: 10.1007/s13201-014-0206-6
BIS (2003) Indian standard drinking water specifications IS 10500:1991, edition 2.2 (2003–2009).
Bureau of Indian Standards, New Delhi
Chandra S, Singh PK, Tiwari AK, Panigrahy B, Kumar A (2015) Evaluation of hydrogeological factor and their relationship with seasonal water table fluctuation in Dhanbad district, Jharkhand, India. ISH J Hydraul Eng 1–14. doi: 10.1080/09715010.2014.1002542
Conesa HM, Faz A, Arnalsos R (2007) Initial studies for the phytostabilization of a mine tailing from the Cartagena—La Union Mining District (SE Spain). Chemosphere 66:38–44
Dang Z, Liu C, Haigh M (2002) Mobility of heavy metals associated with the natural weathering of coal mine soils. Environ Pollut 118:419–426
Dudka S, Adriano DC (1997) Environmental impacts of metal ore mining and processing: a review. J Environ Qual 26(3):590–602
Edet AE, Offiong OE (2002) Evaluation of water quality pollution indices for heavy metal contamination monitoring. A study case from Akpabuyo–Odukpani area, Lower Cross River Basin (southeastern Nigeria). GeoJournal 57:295–304
Garbarino JR, Hayes H, Roth D, Antweider R, Brinton TI, Taylor H (1995) Contaminants in the Mississippi river. US Geological Survey Circular 1133, Virginia
Giri S, Singh G, Gupta SK, Jha VN, Tripathi RM (2010) An evaluation of metal contamination in surface and groundwater around a proposed uranium mining site, Jharkhand, India. Mine Water Environ 29(3):225–234
Goyal P, Sharma P, Srivastava S, Srivastava MM (2008) Saraca indica leaf powder for decontamination of Pb: removal, recovery, adsorbent characterization and equilibrium modeling. Int J Environ Sci Technol 5(1):27–34
Horton RK (1965) An index number system for rating water quality. J Water Pollut Control Fed 37(3):300–306
Kumar S, Bharti VK, Singh KB, Singh TN (2010) Quality assessment of potable water in the town of Kolasib, Mizoram (India).
Environ Earth Sci 61(1):115–121
Kumar SK, Bharani R, Magesh NS, Godson PS, Chandrasekar N (2014) Hydrogeochemistry and groundwater quality appraisal of part of south Chennai coastal aquifers, Tamil Nadu, India using WQI and fuzzy logic method. Appl Water Sci 4(4):341–350
Logeshkumaran A, Magesh NS, Godson PS, Chandrasekar N (2014) Hydro-geochemistry and application of water quality index (WQI) for groundwater quality assessment, Anna Nagar, part of Chennai City, Tamil Nadu, India. Appl Water Sci 1–9. doi: 10.1007/s13201-014-0196-4
Mahato MK, Singh PK, Tiwari AK (2014) Evaluation of metals in mine water and assessment of heavy metal pollution index of East Bokaro Coalfield area, Jharkhand, India. Int J Earth Sci Eng 7(04):1611–1618
Milovanovic M (2007) Water quality assessment and determination of pollution sources along the Axios/Vardar River, Southeastern Europe. Desalination 213:159–173
Mohan SV, Nithila P, Reddy SJ (1996) Estimation of heavy metal in drinking water and development of heavy metal pollution index. J Environ Sci Health 31(2):283–289
Nouri J, Khorasani N, Lorestani B, Yousefi N, Hassani AH, Karami M (2009) Accumulation of heavy metals in soil and uptake by plant species with phytoremediation potential. Environ Earth Sci 59(2):315–323
Panigrahy BP, Singh PK, Tiwari AK, Kumar B, Kumar A (2015) Assessment of heavy metal pollution index for groundwater around Jharia Coalfield region, India. J Biodivers Environ Sci 6(3):33–39
Prasad B, Bose JM (2001) Evaluation of heavy metal pollution index for surface and spring water near a limestone mining area of the lower Himalayas. Environ Geol 41:183–188
Prasad B, Kumari S (2008) Heavy metal pollution index of groundwater of an abandoned opencast mine filled with fly ash.
Mine Water Environ 27(4):265–267CrossRefGoogle Scholar Prasad B, Mondal KK (2008) The impact of filling an abandoned opencast mine with fly ash on ground water quality: a case study. Mine Water Environ 27(1):40–45CrossRefGoogle Scholar Prasad B, Kumari P, Bano S, Kumari S (2014) Ground water quality evaluation near mining area and development of heavy metal pollution index. Appl Water Sci 4(1):11–17CrossRefGoogle Scholar Ratha DS, Venkataraman G, Kumar SP (1994) Soil contamination due to opencast mining in Goa: a statistical approach. Environ Technol 15:853–862CrossRefGoogle Scholar Ravikumar P, Mehmood MA, Somashekar RK (2013) Water quality index to determine the surface water quality of Sankey tank and Mallathahalli lake, Bangalore urban district, Karnataka, India. Appl Water Sci 3(1):247–261CrossRefGoogle Scholar Senapaty A, Behera P (2012) Concentration and distribution of trace elements in different coal seams of the Talcher Coalfield, Odisha. Int J Earth Sci Eng 5(01):80–87Google Scholar Singh PK, Tiwari AK, Panigarhy BP, Mahato MK (2013a) Water quality indices used for water resources vulnerability assessment using GIS technique: a review. Int J Earth Sci Eng 6(6–1):1594–1600Google Scholar Singh PK, Tiwari AK, Mahato MK (2013b) Qualitative assessment of surface water of West Bokaro Coalfield, Jharkhand by using water quality index method. Int J ChemTech Res 5(5):2351–2356Google Scholar Singh AK, Raj B, Tiwari AK, Mahato MK (2013c) Evaluation of hydrogeochemical processes and groundwater quality in the Jhansi district of Bundelkhand region, India. Environ Earth Sci 70(3):1225–1247CrossRefGoogle Scholar Singh P, Tiwari AK, Singh PK (2014) Hydrochemical characteristic and quality assessment of groundwater of Ranchi township area, Jharkhand, India. Curr World Environ 9(3):804–813CrossRefGoogle Scholar Tiwari AK, Singh AK (2014) Hydrogeochemical investigation and groundwater quality assessment of Pratapgarh district, Uttar Pradesh. 
J Geol Soc India 83(3):329–343CrossRefGoogle Scholar Tiwari AK, Singh PK, Mahato MK (2014) GIS-based evaluation of water quality index of groundwater resources in West Bokaro Coalfield, India. Curr World Environ 9(3):843–850CrossRefGoogle Scholar Tiwari AK, De Maio M, Singh PK, Mahato MK (2015a) Evaluation of surface water quality by using GIS and a heavy metal pollution index (HPI) model in a coal mining area, India. Bull Environ Contam Toxicol 95:304–310CrossRefGoogle Scholar Tiwari AK, Singh AK, Singh AK, Singh MP (2015b) Hydrogeochemical analysis and evaluation of surface water quality of Pratapgarh district, Uttar Pradesh, India. Appl Water Sci. doi: 10.1007/s13201-015-0313-z Google Scholar Tiwari AK, Singh PK, Singh AK, De Maio M (2016a) Estimation of heavy metal contamination in groundwater and development of a heavy metal pollution index by using GIS technique. Bull Environ Contam Toxicol 96:508–515. doi: 10.1007/s00128-016-1750-6 CrossRefGoogle Scholar Tiwari AK, De Maio M, Singh PK, Singh AK (2016b) Hydrogeochemical characterization and groundwater quality assessment in a coal mining area, India. Arabian J Geosci 9(3):1–17. doi: 10.1007/s12517-015-2209-5 Google Scholar Tiwari AK, Singh PK, Mahato MK (2016c) Environmental geochemistry and a quality assessment of mine water of the West Bokaro coalfield, India. Mine Water Environ. doi: 10.1007/s10230-015-0382-0 Google Scholar Tiwari AK, Singh PK, De Maio M (2016d) Evaluation of aquifer vulnerability in a coal mining of India by using GIS-based DRASTIC model. Arabian J Geosci 9:438. doi: 10.1007/s12517-016-2456-0 CrossRefGoogle Scholar Vanderlinden K, Ordonez R, Polo MJ, Giraldez JV (2006) Mapping residual pyrite after a mine spill using non co-located spatiotemporal observations. J Environ Qual 35:21CrossRefGoogle Scholar Vanek A, Boruvka L, Drabek O, Mihaljevic M, Komarek M (2005) Mobility of lead, zinc and cadmium in alluvial soils heavily polluted by smelting industry. 
Plant Soil Environ 51:316–321Google Scholar Verma AK, Singh TN (2013) Prediction of water quality from simple field parameters. Environ Earth Sci 69(3):821–829CrossRefGoogle Scholar Yellishetty M, Ranjith PG, Kumar DL (2009) Metal concentrations and metal mobility in unsaturated mine wastes in mining areas of Goa, India. Resour Conserv Recycl 53(7):379–385CrossRefGoogle Scholar Younger PL (2001) Mine water pollution in Scotland: nature, extent and preventative strategies. Sci Total Environ 265(1–3):309–326CrossRefGoogle Scholar 1.Vinoba Bhave UniversityHazaribaghIndia 2.Department of Environmental Science and EngineeringIndian School of MinesDhanbadIndia Singh, G. & Kamal, R.K. Appl Water Sci (2017) 7: 1479. https://doi.org/10.1007/s13201-016-0430-3
CommonCrawl
Areas Between Curves 259 Practice Problems Calculus for Scientists and Engineers: Early Transcendental Determine the area of the shaded region bounded by the curve $x^{2}=y^{4}\left(1-y^{3}\right)$ (see figure). (FIGURE CANNOT COPY) Regions Between Curves Either method Use the most efficient strategy for computing the area of the following regions. The region in the first quadrant bounded by $y=x^{-1}$, $y=4x$, and $y=x/4$. Use any method (including geometry) to find the area of the following regions. In each case, sketch the bounding curves and the region in question. The region between the line $y=x$ and the curve $y=2x\sqrt{1-x^{2}}$ in the first quadrant Chemistry: Introducing Inorganic, Organic and Physical Chemistry 3rd A vessel of volume $50.0 \mathrm{dm}^{3}$ contains $2.50 \mathrm{mol}$ of argon and $1.20 \mathrm{mol}$ of nitrogen at $273.15 \mathrm{K}$. (i) Calculate the partial pressure in bar of each gas. (ii) Calculate the total pressure in bar. (iii) How many additional moles of nitrogen must be pumped into the vessel in order to raise the pressure to 5 bar? (Sections 8.2 and 8.3) A sample of gas has a volume of $346 \mathrm{cm}^{3}$ at $25^{\circ}\mathrm{C}$ when the pressure is 1.00 atm. What volume will it occupy if the conditions are changed to $35^{\circ}\mathrm{C}$ and 1.25 atm? (Section 8.2) A sealed flask holds $10 \mathrm{dm}^{3}$ of gas. What is this volume in (a) $\mathrm{cm}^{3}$, (b) $\mathrm{m}^{3}$, (c) litres? (Section 1.2) Volumes by Cylindrical Shells Change of variables Suppose $f(x)>0$ for all $x$ and $\int_{0}^{4} f(x)\, dx=10$. Let $R$ be the region in the first quadrant bounded by the coordinate axes, $y=f\left(x^{2}\right)$, and $x=2$. Find the volume of the solid generated by revolving $R$ around the $y$-axis. Volume by Shells Water in a bowl A hemispherical bowl of radius 8 inches is filled to a depth of $h$ inches, where $0 \leq h \leq 8$ ($h=0$ corresponds to an empty bowl).
Use the shell method to find the volume of water in the bowl as a function of $h$. (Check the special cases $h=0$ and $h=8$.) Find the volume of the following solids using the method of your choice. The solid formed when the region bounded by $y=\sqrt{x}$, the $x$-axis, and $x=4$ is revolved about the $x$-axis Physical Chemistry 4th How much work is required to bring two protons from an infinite distance of separation to $0.1 \mathrm{nm}$? Calculate the answer in joules using the protonic charge $1.602 \times 10^{-19} \mathrm{C}$. What is the work in $\mathrm{kJ}\,\mathrm{mol}^{-1}$ for a mole of proton pairs? Electrochemical Equilibrium How high can a person (assume a weight of $70 \mathrm{kg}$) climb on one ounce of chocolate, if the heat of combustion ($628 \mathrm{kJ}$) can be converted completely into work of vertical displacement? Fundamentals of Thermodynamics 6th (physics, engineering) A steam turbine has an inlet of $2 \mathrm{kg}/\mathrm{s}$ water at $1000 \mathrm{kPa}$ and $350^{\circ}\mathrm{C}$ with velocity of $15 \mathrm{m}/\mathrm{s}$. The exit is at $100 \mathrm{kPa}$, $x=1$, and very low velocity. Find the specific work and the power produced. First Law Analysis for a Control Volume Average Value of a Function 86 Practice Problems Calculus: Early Transcendental Functions Find the average value of the function on the given interval. $f(x)=x^{2}-1,\ [1,3]$ The Fundamental Theorem of Calculus Compute the average value of the function on the given interval. $f(x)=2x+1,\ [0,4]$ The Definite Integral University Calculus: Early Transcendentals 4th Average value If $f$ is continuous, the average value of the polar coordinate $r$ over the curve $r=f(\theta)$, $\alpha \leq \theta \leq \beta$, with respect to $\theta$ is given by the formula $$r_{\mathrm{av}}=\frac{1}{\beta-\alpha} \int_{\alpha}^{\beta} f(\theta)\, d\theta$$ Use this formula to find the average value of $r$ with respect to $\theta$ over the following curves $(a>0)$. a.
The cardioid $r=a(1-\cos \theta)$ b. The circle $r=a$ c. The circle $r=a \cos \theta, \quad-\pi / 2 \leq \theta \leq \pi / 2$ Parametric Equations and Polar Coordinates Areas and Lengths in Polar Coordinates Arc Length Consider a particle that moves in a plane according to the equations $x=\sin t^{2}$ and $y=\cos t^{2}$ with a starting position (0,1) at $t=0$ a. Describe the path of the particle, including the time required to return to the starting position. b. What is the length of the path in part (a)? c. Describe how the motion of this particle differs from the motion described by the equations $x=\sin t$ and $y=\cos t$ d. Now consider the motion described by $x=\sin t^{n}$ and $y=\cos t^{n}$ where $n$ is a positive integer. Describe the path of the particle, including the time required to return to the starting position. e. What is the length of the path in part (d) for any positive integer $n ?$ f. If you were watching a race on a circular path between two runners, one moving according to $x=\sin t$ and $y=\cos t$ and one according to $x=\sin t^{2}$ and $y=\cos t^{2},$ who would win and when would one runner pass the other? Vectors and Vector-Valued Functions Length of Curves Consider the spiral $r=4 \theta,$ for $\theta \geq 0$ a. Use a trigonometric substitution to find the length of the spiral, for $0 \leq \theta \leq \sqrt{8}$ b. Find $L(\theta),$ the length of the spiral on the interval $[0, \theta],$ for any $\theta \geq 0$ c. Show that $L^{\prime}(\theta)>0 .$ Is $L^{\prime \prime}(\theta)$ positive or negative? Interpret your answers. Determine whether the following curves use arc length as a parameter. If not, find a description that uses arc length as a parameter. 
$$\mathbf{r}(t)=\left\langle t^{2}, 2 t^{2}, 4 t^{2}\right\rangle, \text { for } 1 \leq t \leq 4$$ Area of a Surface of Revolution Engineering Mechanics: Statics and Dynamics 14th (physics) Determine the radii of gyration $k_{x}$ and $k_{y}$ for the solid formed by revolving the shaded area about the $y$ axis. The density of the material is $\rho$. Three-Dimensional Kinetics of a Rigid Body Determine the moment of inertia $I_{y}$ of the object formed by revolving the shaded area about the line $x=5 \mathrm{ft}$. Express the result in terms of the density of the material, $\rho$. Determine the product of inertia $I_{xy}$ of the object formed by revolving the shaded area about the line $x=5 \mathrm{ft}$. Express the result in terms of the density of the material, $\rho$. 21st Century Astronomy 5th (physics) Neutral hydrogen emits radiation at a radio wavelength of $21 \mathrm{cm}$ when an atom drops from a higher-energy spin state to a lower-energy spin state. On average, each atom remains in the higher-energy state for 11 million years ($3.5 \times 10^{14}$ seconds). a. What is the probability that any given atom will make the transition in 1 second? b. If there are $6 \times 10^{59}$ atoms of neutral hydrogen in a $500\,M_{\text{sun}}$ cloud, how many photons of 21-cm radiation will the cloud emit each second? c. How does this number compare with the $1.8 \times 10^{45}$ photons emitted each second by a solar-type star? The Interstellar Medium and Star Formation University Physics Volume 3 Describe the ground state of hydrogen in terms of wave function, probability density, and atomic orbitals. Understandable Statistics, Concepts and Methods 12th Expand Your Knowledge: Multinomial Probability Distribution Consider a multinomial experiment. This means the following: 1. The trials are independent and repeated under identical conditions. 2. The outcome of each trial falls into exactly one of $k \geq 2$ categories. 3.
The probability that the outcome of a single trial will fall into the $i$th category is $p_{i}$ (where $i=1,2,\ldots,k$) and remains the same for each trial. Furthermore, $p_{1}+p_{2}+\cdots+p_{k}=1$. 4. Let $r_{i}$ be a random variable that represents the number of trials in which the outcome falls into category $i$. If you have $n$ trials, then $r_{1}+r_{2}+\cdots+r_{k}=n$. The multinomial probability distribution is then $$P\left(r_{1}, r_{2}, \ldots, r_{k}\right)=\frac{n !}{r_{1} ! r_{2} ! \cdots r_{k} !} p_{1}^{r_{1}} p_{2}^{r_{2}} \cdots p_{k}^{r_{k}}$$ How are the multinomial distribution and the binomial distribution related? For the special case $k=2$, we use the notation $r_{1}=r$, $r_{2}=n-r$, $p_{1}=p$, and $p_{2}=q$. In this special case, the multinomial distribution becomes the binomial distribution. The city of Boulder, Colorado is having an election to determine the establishment of a new municipal electrical power plant. The new plant would emphasize renewable energy (e.g., wind, solar, geothermal). A recent large survey of Boulder voters showed $50\%$ favor the new plant, $30\%$ oppose it, and $20\%$ are undecided. Let $p_{1}=0.5$, $p_{2}=0.3$, and $p_{3}=0.2$. Suppose a random sample of $n=6$ Boulder voters is taken. What is the probability that (a) $r_{1}=3$ favor, $r_{2}=2$ oppose, and $r_{3}=1$ are undecided regarding the new power plant? (b) $r_{1}=4$ favor, $r_{2}=2$ oppose, and $r_{3}=0$ are undecided regarding the new power plant? The Binomial Probability Distribution and Related Topics Binomial Probabilities Find Volume by: Method of Cross Sections The base of a solid $V$ is the region bounded by $y=x^{2}$ and $y=2-x^{2}$. Find the volume if $V$ has (a) square cross sections, (b) semicircular cross sections, and (c) equilateral triangle cross sections perpendicular to the $x$-axis.
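The cross-sections problem just stated (the solid whose base is the region between $y=x^{2}$ and $y=2-x^{2}$) lends itself to a quick numerical sanity check. The sketch below is an illustration of my own, not the textbook's solution; the function and variable names are invented for this example. The curves meet at $x=\pm 1$, so each cross section is built on a segment of length $s(x)=(2-x^{2})-x^{2}=2-2x^{2}$, and the volume is the integral of the cross-sectional area $A(x)$ over $[-1,1]$, here approximated with a hand-rolled composite Simpson's rule.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Side length of the cross section at position x.
s = lambda x: 2 - 2 * x**2

# (a) squares: A = s^2; (b) semicircles of diameter s: A = (pi/8) s^2;
# (c) equilateral triangles of side s: A = (sqrt(3)/4) s^2.
vol_square      = simpson(lambda x: s(x)**2, -1, 1)
vol_semicircle  = simpson(lambda x: math.pi / 8 * s(x)**2, -1, 1)
vol_equilateral = simpson(lambda x: math.sqrt(3) / 4 * s(x)**2, -1, 1)

print(vol_square)       # exact value is 64/15
print(vol_semicircle)   # exact value is 8*pi/15
print(vol_equilateral)  # exact value is 16*sqrt(3)/15
```

Since $\int_{-1}^{1}(2-2x^{2})^{2}\,dx = 64/15$, the three volumes should come out near $64/15 \approx 4.267$, $8\pi/15 \approx 1.676$, and $16\sqrt{3}/15 \approx 1.848$ respectively.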
Applications of the Definite Integral volume: Slicing, Disks and Washers A pottery jar has circular cross sections of radius $4+\sin \frac{x}{2}$ inches for $0 \leq x \leq 2 \pi .$ Sketch a picture of the jar and compute its volume. Calculus Early Transcendentals 3rd Edition Solids from integrals Sketch a solid of revolution whose volume by the disk method is given by the following integrals. Indicate the function that generates the solid. Solutions are not unique. $$\text { a. } \int_{0}^{\pi} \pi \sin ^{2} x d x$$ $$\text { b. } \int_{0}^{2} \pi\left(x^{2}+2 x+1\right) d x$$ Volume by Slicing Find Volume by: Method of Disks (rings) Suppose that the square consisting of all points $(x, y)$ with $-1 \leq x \leq 1$ and $-1 \leq y \leq 1$ is revolved about the $y$ -axis. Show that the volume of the resulting solid is $2 \pi$ Calculus of a Single Variable The integral represents the volume of a solid. Describe the solid. $$\pi \int_{0}^{\pi / 2} \sin ^{2} x d x$$ Volume: The Disk Method Find the volume generated by rotating the given region about the specified line. $$R_{1} \text { about } x=0$$ Find Volume by: Method of Cylinders Sketch the region, draw in a typical shell, identify the radius and height of each shell and compute the volume. The region bounded by $y=x^{2}$ and the $x$ -axis, $-1 \leq x \leq 1$ revolved about $x=2$. Thomas Calculus Use the shell method to find the volumes of the solids generated by revolving the regions bounded by the curves and lines in Exercises $x=2 y-y^{2}, \quad x=0$ Applications Of Definite Integrals Calculus Volume 2 Consider the function $y=f(x),$ which decreases from $f(0)=b$ to $f(1)=0 .$ Set up the integrals for determining the volume, using both the shell method and the disk method, of the solid generated when this region, with $x=0$ and $y=0, \quad$ is rotated around the $y$ -axis. Prove that both methods approximate the same volume. Which method is easier to apply? 
(Hint: since $f(x)$ is one-to-one, there exists an inverse $f^{-1}(y)$.) Volumes of Revolution: Cylindrical Shells Center of Mass (Centroids) Locate the centroid $\bar{y}$ of the shaded area. Center of Gravity and Centroid Locate the centroid $\bar{x}$ of the shaded area. Natural Log Defined as an Integral Evaluate the integral. $$\int_{0}^{1} \frac{x^{2}}{x^{3}-4}\, d x$$ The Natural Logarithm as an Integral Solve the equation. Check your answer. $\ln x=-2$ Exponential Logarithmic Functions Natural Logarithms Solve each equation. $\log 5x+3=3.7$
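The two logarithm equations that close this set can be checked mechanically. The short Python sketch below assumes the usual algebra-text conventions: $\ln$ is the natural logarithm, $\log$ is base 10, and the second equation is read as $\log(5x)+3=3.7$ rather than $\log(5x+3)=3.7$.

```python
import math

# ln x = -2  =>  x = e^(-2)
x1 = math.exp(-2)
assert math.isclose(math.log(x1), -2)

# log(5x) + 3 = 3.7  =>  log(5x) = 0.7  =>  5x = 10^0.7  =>  x = 10^0.7 / 5
x2 = 10 ** 0.7 / 5
assert math.isclose(math.log10(5 * x2) + 3, 3.7)

print(round(x1, 4))  # 0.1353
print(round(x2, 4))  # 1.0024
```

Substituting back, as the problem asks: $\ln(0.1353) \approx -2$ and $\log(5 \cdot 1.0024) + 3 \approx 3.7$, so both values check.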
CommonCrawl